# Feature Map

## Dashboard (`/`)

**Team overview — cost, activity, members at a glance.**

### Controls

- Search by name/email
- Filter by billing group
- Time range: 24h / 3d / 7d / 14d / 30d

### Stat Cards

- **Spend** — total team spend, $/day average
- **Billing Cycle** — day X of Y, days left, reset date
- **Anomalies** — open count, red border when active
- **Active** — active member count and % of team
- **Requests** — total agent requests, /day average
- **Lines** — total lines added, /day average

### Charts

- **Daily Spend Trend** — area chart with avg line, provisional zone (last 2d), spike detection
- **Model Cost Comparison** — table with $/request, relative multiplier (1x–8x+), color-coded
- **Top Spenders** — horizontal bar chart, top 8
- **Daily Spend by User** — stacked bar, top 6 + Others, clickable legend

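The spike detection in the Daily Spend Trend can be sketched as a trailing-average comparison. This is an assumption about the mechanism, not the app's actual code: the function name, window size, and multiplier are all hypothetical, and the real detector (configurable under Settings) may differ.

```typescript
// Hypothetical sketch: flag a day as a spike when its spend exceeds a
// multiplier of the trailing-window average. `multiplier` and `window`
// are illustrative defaults, not the app's real parameters.
function detectSpikes(
  dailySpend: number[],
  multiplier = 2,
  window = 7,
): boolean[] {
  return dailySpend.map((spend, i) => {
    const prior = dailySpend.slice(Math.max(0, i - window), i);
    if (prior.length === 0) return false; // no history yet
    const avg = prior.reduce((a, b) => a + b, 0) / prior.length;
    return avg > 0 && spend > multiplier * avg;
  });
}
```

Under this sketch, a day spending $50 after a $10/day run would be flagged, while steady spend never is.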
### Members Table

- Sortable by: spend, activity, requests, lines, $/req, context, name
- Filterable by badge type
- Columns: rank, name, email, spend, requests, lines, $/req, model, profile badges, ranks

### Badges (per user, max 2)

| Category | Badges |
| -------- | ------ |
| Usage    | Power User, Deep Thinker, Low Usage |
| Spend    | Cost Efficient, Premium Model, Over Budget |
| Context  | Long Sessions, Short Sessions |
| Adoption | AI-Native (80%+), High Adoption (55%+), Moderate (30%+), Low Adoption (10%+), Manual Coder (<10%) — based on composite score (accept rate + engagement + consistency) |

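The Adoption row's thresholds map straight to a tier lookup. A minimal sketch, using the cutoffs from the table; the function and type names are hypothetical:

```typescript
// Map a composite adoption score (0-100) to the Adoption badge tiers.
// Thresholds come from the badges table: 80+, 55+, 30+, 10+, <10.
type AdoptionTier =
  | "AI-Native"
  | "High Adoption"
  | "Moderate"
  | "Low Adoption"
  | "Manual Coder";

function adoptionTier(score: number): AdoptionTier {
  if (score >= 80) return "AI-Native";
  if (score >= 55) return "High Adoption";
  if (score >= 30) return "Moderate";
  if (score >= 10) return "Low Adoption";
  return "Manual Coder";
}
```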
### Ranks

- Spend rank ($N) — blue
- Activity rank (AN) — green

---

## User Detail (`/users/[email]`)

**Per-user deep dive — KPIs, trends, tools, models, anomalies.**

### Header

- Name, email, role, billing group link
- Profile badges (same as dashboard)

### KPI Cards

- **Cycle Spend** — total $ in billing cycle, $/day
- **$/Req** — cost per agent request
- **Agent Reqs** — total requests in time range
- **Diffs Accepted** — % of agent diffs accepted (accepts/applies), raw counts
- **Team Rank** — spend and activity rank out of N

### Charts

- **Spend Trend** — same area chart as dashboard, per-user
- **AI Adoption** — tier label (AI-Native/High/Moderate/Low/Manual) with one-line description. Score bar (0–100). Three stat pills: diffs accepted %, requests/day, active days. Composite score from Accept Rate (40%), Engagement Intensity (40%), Consistency (20%). Tooltips on hover with raw numbers.

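The 40/40/20 weighting above can be sketched as a weighted sum. The weights are from this document; everything else is an assumption: each component is presumed normalized to 0–1 before weighting, and how engagement and consistency are actually normalized is not specified here.

```typescript
// Hypothetical sketch of the composite adoption score: weighted sum of
// accept rate (40%), engagement intensity (40%), consistency (20%).
// Inputs are assumed pre-normalized to 0-1; the real normalization is
// not described in this document.
function compositeScore(
  acceptRate: number,  // 0-1, e.g. diffs accepted / diffs applied
  engagement: number,  // 0-1, normalized requests/day
  consistency: number, // 0-1, e.g. active days / days in range
): number {
  const score = 0.4 * acceptRate + 0.4 * engagement + 0.2 * consistency;
  return Math.round(score * 100); // score bar displays 0-100
}
```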
### Sections

- **Cost Breakdown** — per-model table: requests, $/req, total $, included vs overage bar, errors
- **Tools & Features** — top 10 MCP tools + top 10 commands used
- **Context Efficiency** — avg cache read/req, org median, vs org ratio, rank, color-coded band
- **Model Preferences** — model, days used, requests
- **Daily Activity Table** — date, model, requests, spend, lines +/-, accepts, tabs, version (spike rows highlighted)
- **Anomaly History** — detected date, type, severity, message, status

---

## Insights (`/insights`)

**Team analytics — adoption, efficiency, trends.**

### Stat Cards

- Avg DAU, Commands total, Agent lines accepted, Tab lines accepted, MCP tools count

### Sections

- **Plan Exhaustion** — users who exceeded plan, days to exhaust, buckets (1-3d, 4-7d, 8-14d, 15+d)
- **Model Rankings** — biggest spenders, most/least cost efficient, full scorecard
- **DAU Chart** — daily active users by type (DAU, Cloud Agent, CLI)
- **Model Adoption Share** — stacked area, top 5 models over time
- **Model Usage Breakdown** — table: model, messages, users, % of total
- **Top File Extensions** — horizontal bar by AI lines accepted
- **Commands Adoption** — top 20 commands, usage counts
- **MCP Tool Adoption** — top 20 tools by server, call counts
- **Client Versions** — pie chart + table with "latest" / "needs update" badges

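The Plan Exhaustion buckets map to a simple range lookup. A sketch using the bucket boundaries listed above; the function name is hypothetical and the real grouping code may differ:

```typescript
// Bucket days-to-exhaust into the ranges shown on the Plan Exhaustion
// panel: 1-3d, 4-7d, 8-14d, 15+d.
function exhaustionBucket(days: number): string {
  if (days <= 3) return "1-3d";
  if (days <= 7) return "4-7d";
  if (days <= 14) return "8-14d";
  return "15+d";
}
```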
---

## Anomalies (`/anomalies`)

**Incident monitoring and response tracking.**

### Stat Cards

- Open Anomalies, Resolved, Open Incidents, Avg MTTD, Avg MTTI, Avg MTTR

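The averaged response-time cards can be sketched as means over incident timestamps. The definitions here are assumptions, since this document does not spell them out: MTTD is taken as start to detection, MTTI as detection to acknowledgement, MTTR as detection to resolution. All names are hypothetical.

```typescript
// Hypothetical sketch of the Avg MTTD / MTTI / MTTR stat cards.
// Assumed definitions: MTTD = startedAt -> detectedAt,
// MTTI = detectedAt -> acknowledgedAt, MTTR = detectedAt -> resolvedAt.
interface Incident {
  startedAt: number; // epoch ms
  detectedAt: number;
  acknowledgedAt?: number;
  resolvedAt?: number;
}

function avg(xs: number[]): number {
  return xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;
}

function responseStats(incidents: Incident[]) {
  return {
    mttd: avg(incidents.map((i) => i.detectedAt - i.startedAt)),
    mtti: avg(
      incidents
        .filter((i) => i.acknowledgedAt !== undefined)
        .map((i) => i.acknowledgedAt! - i.detectedAt),
    ),
    mttr: avg(
      incidents
        .filter((i) => i.resolvedAt !== undefined)
        .map((i) => i.resolvedAt! - i.detectedAt),
    ),
  };
}
```

Unacknowledged or unresolved incidents are simply excluded from the corresponding average in this sketch.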
### Sections

- **Open Incidents** — table with acknowledge/resolve actions
- **All Anomalies** — table: date, user, type, severity, metric, message, status

---

## Settings (`/settings`)

**Detection thresholds, budget, billing groups.**

### Detection Config

- Static thresholds (max spend/cycle, max requests/day)
- Z-score detection (std dev multiplier, lookback window)
- Spend trend detection (spike multiplier, lookback, cycle outlier multiplier)
- Collection schedule (cron interval)
- Team budget alert threshold

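The z-score setting above pairs a std dev multiplier with a lookback window. A minimal sketch of that check, assuming population standard deviation over the window; the function name and default multiplier are illustrative only:

```typescript
// Z-score detection sketch: flag today's value when it deviates from
// the lookback window's mean by more than `stdDevMultiplier` standard
// deviations. Parameter names mirror the settings above; the default
// of 3 is an assumption.
function isZScoreAnomaly(
  history: number[], // values inside the lookback window
  today: number,
  stdDevMultiplier = 3,
): boolean {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const std = Math.sqrt(variance);
  if (std === 0) return today !== mean; // flat history: any change flags
  return Math.abs(today - mean) > stdDevMultiplier * std;
}
```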
### Billing Groups

- Group management: create, rename, assign members
- HiBob CSV import with change preview
- Backup export/import