What happened?
Town: 6e883f67-6e8c-40f4-9de8-065c8a2363de
Rig: abc14699-6206-4887-85b1-95d5312b5c28
A triage polecat assigned to a triage batch bead cannot resolve its triage request — the gt_triage_resolve RPC is returning HTTP 500. This has produced a chain of escalations: each failed resolve creates a new "Triage agent cannot resolve request" escalation, which itself cannot be triaged, which auto-creates another escalation, and so on.
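To make the loop concrete, here is a minimal, illustrative Go sketch of that feedback cycle, assuming a resolver that always fails the way gt_triage_resolve currently does. Every name in it is hypothetical; this is not the actual control-plane API.

```go
// Hypothetical model of the escalation feedback loop, not real gt_* code.
package main

import "fmt"

// triageRequest stands in for a triage-request bead.
type triageRequest struct{ id int }

// resolve mimics the failing gt_triage_resolve RPC: it always errors,
// mirroring the persistent HTTP 500 seen in this session.
func resolve(r triageRequest) error {
	return fmt.Errorf("gt_triage_resolve: HTTP 500 for request %d", r.id)
}

func main() {
	req := triageRequest{id: 1}
	// Each failure files a new escalation, which spawns a new triage
	// request, which also fails. Without a circuit breaker the chain
	// grows without bound; the demo caps it at four iterations.
	for i := 0; i < 4; i++ {
		if err := resolve(req); err != nil {
			fmt.Printf("resolve failed (%v); filing escalation, spawning request %d\n",
				err, req.id+1)
			req = triageRequest{id: req.id + 1}
		}
	}
}
```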
Escalation chain observed in this session:
- (medium) "Feature branch is identical to main — nothing to merge" (convoy 53586680, closed by mayor)
- (high) "GitHub token rejected — cannot push merge to main" (user rotated token)
- (high) "Token rotation did not reach agent session — still 401" (mayor reset refinery agent)
- (high) "Triage agent cannot resolve request — gt_triage_resolve returning 500" ← THIS BUG
Recent triage beads that failed this way:
- dcf13a4d-f849-4e41-8ab2-fc06f10cf06b (assigned to Maple aed76ca2)
- d26ecf85-f8c4-419a-957e-3587a77fcac9 (assigned to Toast 9e5e06f5)
Triage-request beads left open because the resolver is failing:
- b18b79eb-e488-4a4b-92ed-fa160a428aaf (for escalation 766bf336)
Impact: the triage system is effectively offline for this rig. Escalations accumulate, and the Mayor has to close triage requests manually via gt_bead_update (sketched below) to keep the control plane from looping. Polecat work (convoy 7e700d7e Phase 0 Discovery Restart) continues to produce branches but cannot land because the triage/merge flow is blocked.
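For reference, the manual workaround looks roughly like the sketch below, assuming gt_bead_update is reachable as a JSON-over-HTTP RPC taking a bead id and a status. The endpoint path, base URL, payload shape, and status value are all assumptions, not the confirmed gt_bead_update contract.

```go
// Hedged sketch of the manual close; every wire-level detail is assumed.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Close the open triage-request bead listed above.
	payload, _ := json.Marshal(map[string]string{
		"bead_id": "b18b79eb-e488-4a4b-92ed-fa160a428aaf",
		"status":  "closed", // assumed status value
	})
	// Placeholder base URL; substitute the rig's real control-plane address.
	resp, err := http.Post("http://localhost:8080/rpc/gt_bead_update",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("gt_bead_update status:", resp.Status)
}
```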
Ask: please look at the server-side logs for the gt_triage_resolve endpoint around 2026-04-23T12:20Z for this rig/town and identify the cause of the 500. Likely candidates: a DB constraint violation when closing a triage_request with certain metadata shapes, a null-deref on escalations lacking convoy_id / source_bead_id, or a regression in the escalation-resolution path.
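If the null-deref candidate is the cause, the fix would likely be a guard of roughly the shape below. This is a sketch under assumed types, not the actual gt_triage_resolve implementation; the escalation struct and its nullable columns are hypotheses derived from the symptom above.

```go
// Hypothetical guard for escalations with NULL convoy_id / source_bead_id.
package main

import "fmt"

// escalation models the suspected row shape: both references nullable.
type escalation struct {
	ID           string
	ConvoyID     *string
	SourceBeadID *string
}

// resolveEscalation returns a typed error instead of dereferencing nil
// pointers, which in the resolver would surface as an opaque HTTP 500.
func resolveEscalation(e escalation) error {
	if e.ConvoyID == nil || e.SourceBeadID == nil {
		return fmt.Errorf("escalation %s has no convoy_id/source_bead_id; skipping convoy bookkeeping", e.ID)
	}
	fmt.Printf("resolving escalation %s (convoy %s, source bead %s)\n",
		e.ID, *e.ConvoyID, *e.SourceBeadID)
	return nil
}

func main() {
	// An escalation filed by the control plane itself may carry no convoy.
	if err := resolveEscalation(escalation{ID: "766bf336"}); err != nil {
		fmt.Println("handled as a client-visible error, not a 500:", err)
	}
}
```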
Area
Agent Dispatch / Scheduling
Context
- Town ID: 6e883f67-6e8c-40f4-9de8-065c8a2363de
- Agent: Mayor (89596b86-efa1-483b-a22e-01fc1bac3f52)
- Rig ID: abc14699-6206-4887-85b1-95d5312b5c28
Filed automatically by the Mayor via gt_report_bug.