Commit 37d2bca

ofershap and cursoragent committed
feat: AI adoption scoring, redesigned adoption card, diffs accepted KPI
- Add composite AI adoption score (accept rate 40%, engagement 40%, consistency 20%) with tier labels (AI-Native/High/Moderate/Low/Manual)
- Collect AI code tracking data from /analytics/ai-code/commits endpoint
- Redesign user detail adoption card: tier headline, description, score bar, stat pills with tooltips
- Replace misleading Accept Rate KPI (accepts/(accepts+rejects) ~100%) with Diffs Accepted (accepts/applies) showing actual acceptance rate
- Fix consistency calculation: use elapsed days in window instead of full 30
- Add adoption badges to dashboard members table
- Update FEATURES.md and cursor rules with new metrics documentation

Co-authored-by: Cursor <cursoragent@cursor.com>
1 parent fc12553 commit 37d2bca

11 files changed

Lines changed: 989 additions & 223 deletions

.cursor/rules/cursor-api-data-guide.mdc

Lines changed: 32 additions & 1 deletion
@@ -120,16 +120,31 @@ Legacy field from the old fixed-pricing model. It's NOT the current billing mech
 - `acceptedLinesAdded / totalLinesAdded` as "AI %" - WRONG for agent users, only valid for tab/composer-heavy users
 - `totalApplies` - may undercount in YOLO/auto-apply mode
 - `usageBasedReqs` vs `agentRequests` - the gap between these is unclear; likely `usageBasedReqs` counts only requests that consumed usage-based billing
+- `nonAiLinesAdded` from ai-code/commits - NOT "manually typed code". It's "unattributed lines" — lines where Cursor's server couldn't trace back to an AI generation event. Breaks on squash merges (attribution from individual commits is lost), initial commits to new repos (no prior AI tracking context), and code that went through review cycles. Verified against real data: a 100% agent-driven user showed 43% "nonAI" due to squash merges and initial commits. Do NOT use this as "% manual work".
+
+### AI Code Tracking API (`/analytics/ai-code/commits`) Limitations
+
+Investigated Feb 2026 by comparing real user data against git history and studying git-ai's approach:
+
+1. **Squash merges lose attribution**: When a PR is squash-merged, the final commit has no record of the individual AI edits from development. All lines appear as `nonAiLinesAdded`. This is the #1 source of false "manual" attribution for teams using squash merge workflows.
+
+2. **Initial commits to new repos**: The first commit to a new repo has no prior AI tracking context. Even if the agent wrote 100% of the code, it all shows as `nonAiLinesAdded`.
+
+3. **Only Cursor Source Control panel commits are tracked**: Terminal `git commit`, VS Code source control, and external git tools produce no tracking data at all.
+
+4. **git-ai (open source alternative) solves this differently**: Uses real-time IDE checkpoints (pre/post edit hooks) stored in `.git/ai/`, intersected with the actual diff at commit time. Survives rebases, squash merges, cherry-picks by rewriting authorship logs. Requires installing their tool — not replicable from API data.
+
+5. **Industry consensus (DX Framework, Cursor's own research)**: Lines-of-code metrics are unreliable for measuring adoption. The recommended approach combines accept rate + engagement intensity + consistency. See "AI Adoption Score" in Derived Metrics.
 
 ### Now Collected
 - Per-request token/cost data from `/teams/filtered-usage-events` - gives per-model cost breakdown per user. Stored in `usage_events` table. Collected incrementally (since last timestamp).
 - Command adoption from `/analytics/team/commands` - team-level command usage. Stored in `analytics_commands` table.
 - Plan mode adoption from `/analytics/team/plans` - plan mode usage by model. Stored in `analytics_plans` table.
 - Per-user MCP tool usage from `/analytics/by-user/mcp` - which MCP tools each user uses. Stored in `analytics_user_mcp` table.
 - Per-user command usage from `/analytics/by-user/commands` - which commands each user uses. Stored in `analytics_user_commands` table.
+- AI Code Tracking from `/analytics/ai-code/commits` - per-commit AI vs manual line attribution. Stored in `ai_code_commits` table (aggregated per user per day). Provides `tabLinesAdded`, `composerLinesAdded`, `nonAiLinesAdded` per commit. Used for AI Adoption % on user detail page. Only tracks commits made through Cursor's Source Control panel — terminal git commits are not captured.
 
 ### Not Currently Collected (but available)
-- AI Code Tracking data from `/analytics/ai-code/commits` - would give us accurate AI vs human line attribution
 - Per-user breakdowns from `/analytics/by-user/*` endpoints for: agent-edits, tabs, models, plans, ask-mode, client-versions, top-file-extensions (we collect mcp and commands per-user, but not these others)
 - Leaderboard from `/analytics/team/leaderboard` - ranks users by tab accepts and agent edits. We chose NOT to collect this because it introduces a third ranking system that conflicts with our own spend_rank and activity_rank, confusing stakeholders.
 - `cmdkUsages`, `subscriptionIncludedReqs`, `apiKeyReqs`, `bugbotUsages` - available in daily usage but not stored
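The attribution caveats above can be illustrated with a minimal aggregation sketch. This is not the repo's actual code — the interface mirrors the three per-commit fields the endpoint provides, and `aiCodePercent` is a hypothetical helper name:

```typescript
// Aggregate per-commit AI code tracking rows into a single percentage.
// Caveat from the guide above: nonAiLinesAdded means "unattributed",
// NOT "manually typed" — squash merges and initial commits dump fully
// AI-written lines into it, so treat the result as a lower bound on AI
// involvement, never as "% manual work".
interface AiCodeCommit {
  tabLinesAdded: number;
  composerLinesAdded: number;
  nonAiLinesAdded: number; // unattributed lines
}

function aiCodePercent(commits: AiCodeCommit[]): number {
  let ai = 0;
  let total = 0;
  for (const c of commits) {
    ai += c.tabLinesAdded + c.composerLinesAdded;
    total += c.tabLinesAdded + c.composerLinesAdded + c.nonAiLinesAdded;
  }
  return total > 0 ? (100 * ai) / total : 0;
}
```

A squash-merged PR would show up here as one commit with a large `nonAiLinesAdded`, dragging the percentage down even for a fully agent-driven user — which is exactly why the guide demotes this to a secondary metric.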
@@ -158,6 +173,22 @@ Low accept rate could mean: picky reviewer (good), bad prompting (fixable), or w
 
 Highly task-dependent. A debugging session produces 0 lines. A scaffolding task produces 500. Not a quality metric.
 
+### AI Adoption Score (0-100%)
+Composite score from three signals in `daily_usage`, weighted:
+- **Accept Rate** (40%) — `total_accepts / total_applies` — trust in AI output
+- **Engagement Intensity** (40%) — `agent_requests / active_days`, normalized against team p90 — how heavily they lean on AI
+- **Consistency** (20%) — `active_days / period_days` — regular usage vs sporadic
+
+Tiers: AI-Native (80%+), High (55%+), Moderate (30%+), Low (10%+), Manual (<10%).
+
+Why this works better than commit-based AI%:
+- Available for ALL users (not just the ~18% who commit through Cursor's Source Control)
+- Not affected by squash merges, initial commits, or external git tools
+- Validated by Cursor's own productivity research (accept rate correlates with developer proficiency) and the DX AI Measurement Framework (combine frequency + acceptance + consistency)
+- Actually differentiating: accept rate ranges 0-100% across real team data, engagement intensity ranges 2-130 reqs/day
+
+The commit-based AI% from `ai_code_commits` is shown as a secondary "AI Code %" metric with a caveat about attribution limitations.
+
 ## Daily Spend Data Sources
 
 `usage_events` (from `/teams/filtered-usage-events`) is the most reliable source for daily spend data. It has per-request cost (`total_cents`) with full billing cycle history and no retention window. `daily_spend` (from `/teams/groups` billing groups API) has only ~2 days retention and systematically underreports compared to `usage_events`.
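The weighted composite described in the guide can be sketched as follows. Names and the p90 normalization detail are assumptions drawn from the documented formula, not the repo's actual implementation:

```typescript
// Composite AI adoption score: accept rate 40%, engagement 40%,
// consistency 20%, each clamped to [0, 1] before weighting.
interface UsageWindow {
  totalAccepts: number;  // accepted agent diffs
  totalApplies: number;  // applied agent diffs
  agentRequests: number;
  activeDays: number;
  periodDays: number;    // elapsed days in the window, not a fixed 30
}

type Tier = "AI-Native" | "High" | "Moderate" | "Low" | "Manual";

const clamp01 = (x: number): number => Math.max(0, Math.min(1, x));

function adoptionScore(u: UsageWindow, teamP90ReqsPerDay: number): number {
  // Accept Rate (40%): trust in AI output
  const acceptRate = u.totalApplies > 0 ? clamp01(u.totalAccepts / u.totalApplies) : 0;
  // Engagement Intensity (40%): requests/day normalized against team p90
  const reqsPerDay = u.activeDays > 0 ? u.agentRequests / u.activeDays : 0;
  const engagement = teamP90ReqsPerDay > 0 ? clamp01(reqsPerDay / teamP90ReqsPerDay) : 0;
  // Consistency (20%): active days over elapsed days in the window
  const consistency = u.periodDays > 0 ? clamp01(u.activeDays / u.periodDays) : 0;
  return 100 * (0.4 * acceptRate + 0.4 * engagement + 0.2 * consistency);
}

function adoptionTier(score: number): Tier {
  if (score >= 80) return "AI-Native";
  if (score >= 55) return "High";
  if (score >= 30) return "Moderate";
  if (score >= 10) return "Low";
  return "Manual";
}
```

Using `periodDays` as elapsed days in the window (per the commit's consistency fix) keeps new-to-the-window users from being penalized for days that haven't happened yet.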

.cursor/rules/features-file.mdc

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+---
+description: Keep FEATURES.md up to date when adding new features
+globs: "src/app/**/*.{ts,tsx}"
+alwaysApply: false
+---
+
+# Feature Map Maintenance
+
+When adding a new user-facing feature (page, section, chart, badge, metric, setting, or API endpoint), update `FEATURES.md` to reflect the change. Keep entries short — one line per item, grouped under the correct page heading.

.cursor/rules/project-context.mdc

Lines changed: 2 additions & 2 deletions
@@ -67,7 +67,7 @@ Single cron endpoint `POST /api/cron` does both: collect → detect → alert in
 
 - `/` — Team overview: stat cards, model cost comparison table ($/request relative multipliers), daily spend trend (sourced from `usage_events` with `daily_spend` fallback, last 2 days marked provisional), spend breakdown by user, members table with search/sort, **group filter dropdown**, time range picker (24h/3d/7d/14d/30d), billing cycle progress
 - `/insights` — Analytics: DAU chart, model adoption, model efficiency rankings, MCP tool usage, file extensions, client versions
-- `/users/[email]` — Per-user detail: KPI cards (cycle spend, $/req, agent reqs, accept rate, team rank), spend trend chart, usage profile radar (activity, intensity, tab usage, precision, on plan, power user), cost breakdown by model, tools & features (MCP tools + commands per user), model preferences, daily activity table, anomaly history
+- `/users/[email]` — Per-user detail: KPI cards (cycle spend, $/req, agent reqs, diffs accepted, team rank), spend trend chart, AI adoption card (tier, score bar, stat pills with tooltips), cost breakdown by model, tools & features (MCP tools + commands per user), model preferences, daily activity table, anomaly history
 - `/anomalies` — MTTD/MTTI/MTTR metrics, open incidents (acknowledge/resolve), anomaly table
 - `/settings` — Detection thresholds, **billing group management** (rename, assign, create), **HiBob CSV import** with change preview
 
@@ -90,7 +90,7 @@ Single cron endpoint `POST /api/cron` does both: collect → detect → alert in
 
 ## Database Tables
 
-members, daily_usage, spending, usage_events, anomalies, incidents, config, collection_log, metadata, daily_spend, billing_groups, group_members, billing_group_members, analytics_dau, analytics_model_usage, analytics_agent_edits, analytics_tabs, analytics_mcp, analytics_file_extensions, analytics_client_versions, analytics_commands, analytics_plans, analytics_user_mcp, analytics_user_commands
+members, daily_usage, spending, usage_events, anomalies, incidents, config, collection_log, metadata, daily_spend, billing_groups, group_members, billing_group_members, analytics_dau, analytics_model_usage, analytics_agent_edits, analytics_tabs, analytics_mcp, analytics_file_extensions, analytics_client_versions, analytics_commands, analytics_plans, analytics_user_mcp, analytics_user_commands, ai_code_commits
 
 ## Important Caveats
 
FEATURES.md

Lines changed: 137 additions & 0 deletions
@@ -0,0 +1,137 @@
+# Feature Map
+
+## Dashboard (`/`)
+
+**Team overview — cost, activity, members at a glance.**
+
+### Controls
+
+- Search by name/email
+- Filter by billing group
+- Time range: 24h / 3d / 7d / 14d / 30d
+
+### Stat Cards
+
+- **Spend** — total team spend, $/day average
+- **Billing Cycle** — day X of Y, days left, reset date
+- **Anomalies** — open count, red border when active
+- **Active** — active members count and % of team
+- **Requests** — total agent requests, /day average
+- **Lines** — total lines added, /day average
+
+### Charts
+
+- **Daily Spend Trend** — area chart with avg line, provisional zone (last 2d), spike detection
+- **Model Cost Comparison** — table with $/request, relative multiplier (1x–8x+), color-coded
+- **Top Spenders** — horizontal bar chart, top 8
+- **Daily Spend by User** — stacked bar, top 6 + Others, clickable legend
+
+### Members Table
+
+- Sortable by: spend, activity, requests, lines, $/req, context, name
+- Filterable by badge type
+- Columns: rank, name, email, spend, requests, lines, $/req, model, profile badges, ranks
+
+### Badges (per user, max 2)
+
+| Category | Badges |
+| -------- | ------ |
+| Usage | Power User, Deep Thinker, Low Usage |
+| Spend | Cost Efficient, Premium Model, Over Budget |
+| Context | Long Sessions, Short Sessions |
+| Adoption | AI-Native (80%+), High Adoption (55%+), Moderate (30%+), Low Adoption (10%+), Manual Coder (<10%) — based on composite score (accept rate + engagement + consistency) |
+
+### Ranks
+
+- Spend rank ($N) — blue
+- Activity rank (AN) — green
+
+---
+
+## User Detail (`/users/[email]`)
+
+**Per-user deep dive — KPIs, trends, tools, models, anomalies.**
+
+### Header
+
+- Name, email, role, billing group link
+- Profile badges (same as dashboard)
+
+### KPI Cards
+
+- **Cycle Spend** — total $ in billing cycle, $/day
+- **$/Req** — cost per agent request
+- **Agent Reqs** — total requests in time range
+- **Diffs Accepted** — % of agent diffs accepted (accepts/applies), raw counts
+- **Team Rank** — spend and activity rank of N
+
+### Charts
+
+- **Spend Trend** — same area chart as dashboard, per-user
+- **AI Adoption** — tier label (AI-Native/High/Moderate/Low/Manual) with one-line description. Score bar (0-100). Three stat pills: diffs accepted %, requests/day, active days. Composite score from Accept Rate (40%), Engagement Intensity (40%), Consistency (20%). Tooltips on hover with raw numbers.
+
+### Sections
+
+- **Cost Breakdown** — per-model table: requests, $/req, total $, included vs overage bar, errors
+- **Tools & Features** — top 10 MCP tools + top 10 commands used
+- **Context Efficiency** — avg cache read/req, org median, vs org ratio, rank, color-coded band
+- **Model Preferences** — model, days used, requests
+- **Daily Activity Table** — date, model, requests, spend, lines +/-, accepts, tabs, version (spike rows highlighted)
+- **Anomaly History** — detected date, type, severity, message, status
+
+---
+
+## Insights (`/insights`)
+
+**Team analytics — adoption, efficiency, trends.**
+
+### Stat Cards
+
+- Avg DAU, Commands total, Agent lines accepted, Tab lines accepted, MCP tools count
+
+### Sections
+
+- **Plan Exhaustion** — users who exceeded plan, days to exhaust, buckets (1-3d, 4-7d, 8-14d, 15+d)
+- **Model Rankings** — biggest spenders, most/least cost efficient, full scorecard
+- **DAU Chart** — daily active users by type (DAU, Cloud Agent, CLI)
+- **Model Adoption Share** — stacked area, top 5 models over time
+- **Model Usage Breakdown** — table: model, messages, users, % of total
+- **Top File Extensions** — horizontal bar by AI lines accepted
+- **Commands Adoption** — top 20 commands, usage counts
+- **MCP Tool Adoption** — top 20 tools by server, call counts
+- **Client Versions** — pie chart + table with "latest" / "needs update" badges
+
+---
+
+## Anomalies (`/anomalies`)
+
+**Incident monitoring and response tracking.**
+
+### Stat Cards
+
+- Open Anomalies, Resolved, Open Incidents, Avg MTTD, Avg MTTI, Avg MTTR
+
+### Sections
+
+- **Open Incidents** — table with acknowledge/resolve actions
+- **All Anomalies** — table: date, user, type, severity, metric, message, status
+
+---
+
+## Settings (`/settings`)
+
+**Detection thresholds, budget, billing groups.**
+
+### Detection Config
+
+- Static thresholds (max spend/cycle, max requests/day)
+- Z-score detection (std dev multiplier, lookback window)
+- Spend trend detection (spike multiplier, lookback, cycle outlier multiplier)
+- Collection schedule (cron interval)
+- Team budget alert threshold
+
+### Billing Groups
+
+- Group management: create, rename, assign members
+- HiBob CSV import with change preview
+- Backup export/import
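The commit's KPI swap — "Diffs Accepted" replacing the misleading Accept Rate — can be shown with a minimal sketch. Names here are illustrative, not the repo's actual code:

```typescript
// Why the old Accept Rate KPI was misleading: rejects are rarely
// recorded, so accepts/(accepts+rejects) saturates near 100% for
// almost everyone. Diffs Accepted divides by applies instead — the
// share of applied agent diffs the user actually kept.
interface DiffStats {
  accepts: number;
  rejects: number; // rarely recorded, typically near zero
  applies: number; // every diff the agent applied
}

// Old KPI: accepts / (accepts + rejects)
function oldAcceptRate(s: DiffStats): number {
  const denom = s.accepts + s.rejects;
  return denom > 0 ? (100 * s.accepts) / denom : 0;
}

// New KPI: accepts / applies
function diffsAccepted(s: DiffStats): number {
  return s.applies > 0 ? (100 * s.accepts) / s.applies : 0;
}
```

For a user who kept 60 of 120 applied diffs but explicitly rejected only 1, the old KPI reads ~98% while Diffs Accepted reads 50% — the latter is the number the redesigned card surfaces.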

src/app/dashboard-client.tsx

Lines changed: 7 additions & 3 deletions
@@ -139,10 +139,13 @@ export function DashboardClient({ initialData }: DashboardClientProps) {
     "premium-model": "cpr",
     "cost-efficient": "cpr",
     "power-user": "reqs",
-    "tab-completer": "reqs",
     "deep-thinker": "spend",
     "light-user": "reqs",
-    balanced: "reqs",
+    "manual-coder": "lines",
+    "low-adoption": "lines",
+    "moderate-adoption": "lines",
+    "high-adoption": "lines",
+    "ai-native": "lines",
   };
 
   const handleBadgeFilter = useCallback(
@@ -178,7 +181,8 @@ export function DashboardClient({ initialData }: DashboardClientProps) {
         (u) =>
           u.usage_badge === badgeFilter ||
           u.spend_badge === badgeFilter ||
-          u.context_badge === badgeFilter,
+          u.context_badge === badgeFilter ||
+          u.adoption_badge === badgeFilter,
       );
     }
     const sorted = [...users].sort((a, b) => {
