Problem
With Opus 4.7 at `xhigh` or `max` effort, assistant turns routinely include long extended-thinking phases between tool calls. The UI shows a tool ticker but gives no sign that the model is actively reasoning while it's inside a thinking block.
Real-world impact in our setup (3 team members using CloudCLI daily via Claude Max):
- The turn goes silent for 45-120 s while the model reasons in a thinking block
- The user sees no streaming text and believes the session is stuck
- The user types "are you still there?" or similar, which interrupts the ongoing stream
- This creates a synthetic stop in the JSONL (`model: <synthetic>`, `stop_reason: stop_sequence`, `text: "No response requested."`)
- The original work is lost and the context is contaminated with the synthetic stop
Measured on today's sessions (2026-04-21): 63% of user turns ended in synthetic-stop due to this pattern. Turns naturally lasted 40-80s for routine cockpit/TODO work on Opus 4.7 xhigh.
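For reproducibility, the 63% figure can be derived mechanically from the session JSONL by counting assistant turns that are the synthetic stop. This is a minimal sketch: the `model` and `stop_reason` fields match the snippet above, but the `role` field and overall file layout are assumptions about CloudCLI's JSONL format.

```javascript
// Sketch: fraction of assistant turns in a session JSONL that ended in the
// synthetic stop (model: "<synthetic>"). Field names beyond model/stop_reason
// are assumptions about the log format, not confirmed CloudCLI internals.
function syntheticStopRate(jsonlText) {
  const entries = jsonlText
    .split("\n")
    .filter(Boolean)
    .map((line) => JSON.parse(line));
  const turns = entries.filter((e) => e.role === "assistant");
  const synthetic = turns.filter((e) => e.model === "<synthetic>");
  return turns.length ? synthetic.length / turns.length : 0;
}
```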
Current State
The server-side adapter (`server/providers/claude/adapter.js`) already emits `thinking` events from the SDK (lines 103 and 169). But the frontend doesn't appear to expose a visible "Model is thinking..." indicator during these phases; only tool-use badges are prominent.
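Since the adapter already emits the events, the frontend change could be as small as a reducer mapping streamed event types to an indicator label. The event shapes below (`thinking`, `tool_use`, `text`, with a `name` on tool events) are assumptions based on what the adapter appears to emit, not confirmed payloads:

```javascript
// Sketch: map a streamed event to an indicator label. Event type names and
// the `name` field on tool events are assumed, not verified against the SDK.
function indicatorLabel(event, state = { label: null }) {
  switch (event.type) {
    case "thinking":
      return { label: "Thinking…" };
    case "tool_use":
      return { label: `Using tool: ${event.name}` };
    case "text":
      // Visible text is streaming, so the indicator can be hidden.
      return { label: null };
    default:
      return state; // unknown events leave the indicator unchanged
  }
}
```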
Related:
Requested Feature
A persistent, minimally styled status indicator in the assistant message area that shows:
- "Thinking…" when a `thinking` content block is active (rendered live during the stream)
- "Using tool: X" when a `tool_use` block is active
- Optional: elapsed seconds since the last visible text was produced
This would give users confidence that the model is working, reducing the interrupt pattern.
Bonus: if the indicator updates at least every 15-20s with something fresh (even "Still thinking, 45s…"), it functions as a proof-of-life signal.
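The proof-of-life refresh could be a pure helper that formats the label from the current indicator state and the seconds since the last visible text; the UI would call it on a timer. The 15 s threshold is the refresh interval suggested in this request, not an existing CloudCLI setting, and the helper name is hypothetical:

```javascript
// Sketch: escalate the indicator label into a proof-of-life message once
// enough silent time has passed. Threshold (15 s) is the interval proposed
// in this issue, not a real CloudCLI option.
function proofOfLifeLabel(baseLabel, elapsedSeconds) {
  if (baseLabel === null) return null;       // visible text is streaming
  if (elapsedSeconds < 15) return baseLabel; // fresh enough, no counter yet
  return `Still thinking, ${Math.floor(elapsedSeconds)}s…`;
}
```

Keeping the formatter pure keeps the timer logic (e.g. a 1 s interval in the indicator component) trivially testable.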
Workaround we deployed
Added behavioral rules in `CLAUDE.md` asking the model to emit short status lines between tool calls (e.g. "Memory read, now creating Kanban cards"). This is a partial fix but relies on model compliance mid-chain. A native UI indicator would be more reliable.
Environment
- `@cloudcli-ai/cloudcli@1.29.5`
- `@anthropic-ai/claude-agent-sdk@0.2.114` (bundled)
- Claude Max OAuth → Opus 4.7 with `xhigh` effort
- Typical turn: 5-15 tool calls, 1-3 thinking blocks, 30-120s total