feat(report): session outlier detection and per-model one-shot rates #81
lfl1337 wants to merge 8 commits into getagentseal:main
Conversation
…ory costs are zero
…anel. The single Top Sessions panel shows the 5 highest-cost sessions with an activity column and outlier highlighting via red cost color (outlier = >2x project average). Removes the redundant second panel that showed the same session list.
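The outlier rule stated above is a plain threshold against the project-wide mean. A minimal sketch of that rule, assuming a simple cost-per-session shape (`SessionCost` and `flagOutliers` are illustrative names, not this PR's actual API):

```typescript
// Sketch of the outlier rule: a session is an outlier when its cost is
// strictly greater than 2x the average cost across all sessions, matching
// "outlier = >2x project average". Names here are illustrative only.
interface SessionCost {
  id: string;
  costUsd: number;
}

function flagOutliers(sessions: SessionCost[]): Set<string> {
  if (sessions.length === 0) return new Set();
  const avg = sessions.reduce((sum, s) => sum + s.costUsd, 0) / sessions.length;
  const threshold = 2 * avg;
  return new Set(
    sessions.filter((s) => s.costUsd > threshold).map((s) => s.id),
  );
}
```

Note that with a strict `>` comparison, a project where every session costs the same flags nothing, since no session exceeds twice the mean.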
Hi, computeModelOneShotRates() attributes edit/one-shot counts only to turn.assistantCalls[0].model, so any turn containing multiple assistant calls (potentially across different models) will miscount per-model one-shot rates. Severity: action required | Category: correctness. How to fix: attribute turns to the correct model across multiple assistant calls.
Found by Qodo code review
…n turn. The first call in a multi-step turn is often a tool request; the final call is the one that produces the edit. Using assistantCalls[last] correctly attributes the outcome to the model that generated it.
Addressed in 44f5f6f. Changed attribution from the first assistant call to the last assistant call in the turn. Note on the broader concern: in Claude Code sessions, all API calls within a single turn use the same model (no mid-turn model switching). Other providers (Cursor, Codex, Copilot) produce exactly one assistant call per turn.
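To make the last-call attribution concrete, here is a hedged sketch of the fixed rule. Only `assistantCalls` and `model` appear in the thread; the `Turn` shape and the `oneShot` flag are assumptions for illustration:

```typescript
// Sketch of per-model one-shot attribution after the fix: a turn's outcome
// is credited to the model of its LAST assistant call, since earlier calls
// in a multi-step turn are often tool requests. Types are illustrative.
interface AssistantCall {
  model: string;
}

interface Turn {
  assistantCalls: AssistantCall[];
  oneShot: boolean; // did the edit land without a follow-up correction?
}

function computeModelOneShotRates(turns: Turn[]): Record<string, number> {
  const totals: Record<string, { oneShot: number; all: number }> = {};
  for (const turn of turns) {
    if (turn.assistantCalls.length === 0) continue;
    // Attribute to the final call, not assistantCalls[0].
    const model =
      turn.assistantCalls[turn.assistantCalls.length - 1].model;
    totals[model] ??= { oneShot: 0, all: 0 };
    totals[model].all += 1;
    if (turn.oneShot) totals[model].oneShot += 1;
  }
  return Object.fromEntries(
    Object.entries(totals).map(([m, t]) => [m, t.oneShot / t.all]),
  );
}
```

Under this rule a turn whose calls span two models counts only toward the final model, which matches the reviewer's reasoning that the final call produces the edit.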

Summary
Closes additional points from #12 (power-user proposals: outlier detection + per-model efficiency).
Architecture
Pure computation split into a new `src/analytics.ts` module: no React/Ink imports, directly unit-testable with plain `SessionSummary` fixtures.
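As an illustration of why that split helps testing: a pure helper plus a plain object fixture is a complete unit test, with no rendering layer involved. The `averageSessionCost` helper and the `SessionSummary` fields shown here are hypothetical, since the real module's exports aren't listed in this thread:

```typescript
// Sketch: testing a pure analytics helper with plain fixtures. Because the
// module has no React/Ink imports, a literal object stands in for real data.
// averageSessionCost and these SessionSummary fields are assumptions.
interface SessionSummary {
  id: string;
  costUsd: number;
  turns: number;
}

function averageSessionCost(sessions: SessionSummary[]): number {
  if (sessions.length === 0) return 0;
  return sessions.reduce((sum, s) => sum + s.costUsd, 0) / sessions.length;
}

// A unit test is just: build fixtures, call the function, assert.
const fixtures: SessionSummary[] = [
  { id: "s1", costUsd: 2, turns: 10 },
  { id: "s2", costUsd: 4, turns: 3 },
];
const avg = averageSessionCost(fixtures);
```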
Design decisions
Depends on
If PR #77 lands first, I will rebase - its `getShortModelName` fix makes the per-model one-shot rates more accurate (no more `gpt-4o-mini` showing up as `GPT-4o`).
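The `gpt-4o-mini` / `GPT-4o` mislabeling mentioned above is the kind of bug naive prefix matching produces: a shorter pattern checked first swallows the longer model ID. A hedged sketch of that failure mode and the fix (this mapping and function body are guesses for illustration, not PR #77's actual code):

```typescript
// Sketch of why "gpt-4o-mini" could display as "GPT-4o": if "gpt-4o" is
// checked before "gpt-4o-mini", the shorter prefix matches first. Ordering
// rules most-specific-first fixes it. The mapping is illustrative only.
function getShortModelName(modelId: string): string {
  // Most specific patterns first so "gpt-4o-mini" is not swallowed by "gpt-4o".
  const rules: Array<[prefix: string, label: string]> = [
    ["gpt-4o-mini", "GPT-4o mini"],
    ["gpt-4o", "GPT-4o"],
  ];
  for (const [prefix, label] of rules) {
    if (modelId.startsWith(prefix)) return label;
  }
  return modelId; // unknown models fall through unchanged
}
```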
Test plan