Commit ac62fe6

feat: user dashboard (#1354)

* feat(settings): add usage dashboard
* feat: change layout for dashboard
* feat: add input/output in dashboard
* feat: change style for dashboard
* feat(settings): add dashboard charts
* feat(settings): polish usage dashboard
* fix(usage): harden stats and dashboard

1 parent 5035c95 commit ac62fe6


58 files changed: +5270 −33 lines changed
Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
# Plan

## Data Model

- Add `deepchat_usage_stats` keyed by `message_id`.
- Store final per-message usage snapshots:
  - session, provider, model
  - input/output/total tokens
  - cached input tokens
  - estimated USD cost
  - local usage date
  - source (`backfill` or `live`)
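The fields above imply a table roughly like the following. This is a sketch only; the column names, types, and constraints of the shipped `deepchat_usage_stats` schema are assumptions, not the actual migration:

```typescript
// Hypothetical DDL for `deepchat_usage_stats`, derived from the plan's field
// list. Column names are illustrative; the real schema may differ.
const CREATE_USAGE_STATS_SQL = `
CREATE TABLE IF NOT EXISTS deepchat_usage_stats (
  message_id          TEXT PRIMARY KEY,
  session_id          TEXT NOT NULL,
  provider_id         TEXT NOT NULL,
  model_id            TEXT NOT NULL,
  input_tokens        INTEGER NOT NULL DEFAULT 0,
  output_tokens       INTEGER NOT NULL DEFAULT 0,
  total_tokens        INTEGER NOT NULL DEFAULT 0,
  cached_input_tokens INTEGER NOT NULL DEFAULT 0,
  estimated_cost_usd  REAL NOT NULL DEFAULT 0,
  usage_date          TEXT NOT NULL,  -- local calendar date, e.g. '2026-02-27'
  source              TEXT NOT NULL CHECK (source IN ('backfill', 'live'))
)
`

// Matching record shape for the snapshot written per assistant message.
interface UsageStatsRecord {
  messageId: string
  sessionId: string
  providerId: string
  modelId: string
  inputTokens: number
  outputTokens: number
  totalTokens: number
  cachedInputTokens: number
  estimatedCostUsd: number
  usageDate: string
  source: 'backfill' | 'live'
}
```

Keying the table by `message_id` is what makes both the live path and the backfill safe to repeat: writing the same snapshot twice overwrites rather than accumulates.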
## Backfill

- Trigger in `AFTER_START` with a non-blocking hook.
- Scan only `deepchat_messages` joined with `deepchat_sessions`.
- Use message metadata provider/model first, then session fallback.
- Persist backfill status in config under `dashboardStatsBackfillV1`.
- Re-running is safe because stats rows are upserted by `message_id`.
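The metadata-first, session-fallback resolution can be sketched as follows. The real helpers (`resolveUsageProviderId`, `resolveUsageModelId`) live in the `usageStats` module; only the stated precedence is mirrored here, and the metadata shape is an assumption:

```typescript
// Per-message metadata may carry the provider/model used for that specific
// message; the session carries the session-level defaults.
interface UsageMetadataLike {
  providerId?: string
  modelId?: string
}

// Metadata wins when present; otherwise fall back to the session's value.
function resolveUsageProviderId(
  metadata: UsageMetadataLike,
  sessionProviderId?: string
): string | undefined {
  return metadata.providerId ?? sessionProviderId
}

function resolveUsageModelId(
  metadata: UsageMetadataLike,
  sessionModelId?: string
): string | undefined {
  return metadata.modelId ?? sessionModelId
}
```

If neither source yields a value, the backfill skips the row rather than recording a stat with an unknown provider or model.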

## Live Recording

- Extend stream usage metadata with optional `cached_tokens`.
- Persist cached input tokens into assistant message metadata.
- Upsert stats from `DeepChatMessageStore.finalizeAssistantMessage` and `setMessageError`.

## Dashboard Query

- Expose `newAgentPresenter.getUsageDashboard()`.
- Aggregate summary, 365-day calendar, provider breakdown, and model breakdown from `deepchat_usage_stats`.
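The breakdown aggregation can be illustrated in memory. The real query presumably groups in SQL over `deepchat_usage_stats`; the row shape and function name here are assumptions:

```typescript
// Minimal in-memory stand-in for the SQL GROUP BY that powers the provider
// and model breakdowns.
interface StatsRow {
  providerId: string
  modelId: string
  totalTokens: number
  usageDate: string
}

// Sum total tokens per provider or per model, mirroring
// `SELECT <key>, SUM(total_tokens) FROM deepchat_usage_stats GROUP BY <key>`.
function breakdownBy(
  rows: StatsRow[],
  key: 'providerId' | 'modelId'
): Map<string, number> {
  const totals = new Map<string, number>()
  for (const row of rows) {
    totals.set(row[key], (totals.get(row[key]) ?? 0) + row.totalTokens)
  }
  return totals
}
```

The same rows feed the 365-day calendar by grouping on `usageDate` instead.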

## UI

- Add `DashboardSettings.vue` as a scrollable settings page.
- Keep the visual language aligned with the current project theme.
- Show loading, empty, running backfill, and failed backfill states.
- Render four summary cards only; remove the cache hit rate card from the dashboard overview.
- Adopt the official `shadcn-vue chart` component with `Unovis` for dashboard chart rendering.
- Rebuild the overview layout as `1 large + 3 small`, with total tokens as the hero chart.
- Replace the total-token number card with a donut-based hero chart that visualizes input/output ratio.
- Visualize cached input tokens with a compact horizontal stacked bar for cached versus uncached input.
- Visualize estimated cost with a 30-day area chart while keeping the total cost as the primary value.
- Reuse `recordingStartedAt` to render a locale-specific, number-first "days with DeepChat" summary card in the renderer.
- Keep provider/model ranking queries unchanged, but render them as horizontal token bar charts with internal scrolling.
- Translate changed dashboard copy per locale instead of falling back to English sentence structure.
Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
# Settings Dashboard

## Goal

Add a dedicated dashboard page under settings to show token usage, cached token usage, estimated cost, and a GitHub-like contribution calendar.

## User Stories

- As a user, I want to see my total token usage, cached token usage, and estimated cost in one place.
- As an existing user, I want the dashboard to initialize from the current `deepchat_messages` table once, without scanning legacy tables.
- As a user, I want the dashboard to keep growing from newly recorded usage without repeatedly recomputing from old chat tables.

## Acceptance Criteria

- A new settings route named `settings-dashboard` is available after provider settings.
- The dashboard reads from a dedicated `deepchat_usage_stats` table only.
- Existing users get a one-time background backfill from current `deepchat_messages`.
- Historical backfill sets cached input tokens to `0`.
- New assistant message finalization and error finalization upsert usage rows into `deepchat_usage_stats`.
- Price estimation uses current provider pricing first and falls back to `aihubmix` for the same model id when needed.
- The page contains four overview cards arranged as `1 large + 3 small`: a total-token hero chart, a cached-token ratio card, an estimated-cost trend card, and a days-with-DeepChat card.
- The total-token hero chart uses the shared `shadcn-vue chart + Unovis` visual language, keeps the donut semantics for input/output composition, and shows exact values plus percentages.
- The cached-token card uses the same chart system to visualize cached versus uncached input tokens and shows exact values plus percentages.
- The estimated-cost card uses the same chart system to show the total estimated cost plus a lightweight 30-day area trend.
- The "days with DeepChat" card is derived from the earliest recorded usage date and rendered in a number-first, locale-specific layout.
- The page contains a 365-day contribution calendar and provider/model breakdowns.
- Provider and model breakdown cards render horizontal token bar charts with internal scrolling without growing the full page indefinitely.
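The pricing-fallback criterion can be sketched as follows. Only the "current provider first, then `aihubmix` for the same model id" precedence comes from the criteria above; the pricing-table shape and the function name are assumptions for illustration:

```typescript
// Assumed per-provider, per-model pricing table (USD per million tokens).
type PriceTable = Record<
  string,
  Record<string, { inputPerMTok: number; outputPerMTok: number }>
>

// Prefer the current provider's price for the model; otherwise fall back to
// the aihubmix entry for the same model id. Undefined means "no estimate".
function estimateCostUsd(
  prices: PriceTable,
  providerId: string,
  modelId: string,
  inputTokens: number,
  outputTokens: number
): number | undefined {
  const price = prices[providerId]?.[modelId] ?? prices['aihubmix']?.[modelId]
  if (!price) return undefined
  return (
    (inputTokens / 1e6) * price.inputPerMTok +
    (outputTokens / 1e6) * price.outputPerMTok
  )
}
```

Returning `undefined` rather than `0` lets the dashboard distinguish "free" from "price unknown".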

## Non-Goals

- No backfill from legacy `messages` or `conversations` tables.
- No delete-triggered rollback of accumulated usage stats.
- No additional day-level rollup table in v1.
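The "days with DeepChat" derivation from the earliest recorded usage date can be sketched as below. The helper name and the inclusive-day rounding rule are assumptions; the renderer then formats the number with a locale-aware formatter such as `Intl.NumberFormat`:

```typescript
// Count days since the earliest recorded usage, inclusive of the first day,
// so a brand-new install shows "1" rather than "0".
function daysWithDeepChat(earliestUsageIso: string, now: Date = new Date()): number {
  const first = new Date(earliestUsageIso)
  const msPerDay = 24 * 60 * 60 * 1000
  return Math.max(1, Math.floor((now.getTime() - first.getTime()) / msPerDay) + 1)
}
```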
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
# Tasks

1. Add shared dashboard types and cached token usage plumbing.
2. Add `deepchat_usage_stats` table and wire it into `SQLitePresenter`.
3. Record live usage stats on assistant finalize and error finalize.
4. Implement one-time historical backfill and dashboard query methods in `NewAgentPresenter`.
5. Add settings route, dashboard page, and i18n strings.
6. Add focused main/renderer tests and run format, i18n, and lint.

package.json

Lines changed: 2 additions & 0 deletions
@@ -115,6 +115,8 @@
     "@lingual/i18n-check": "0.8.12",
     "@mcp-ui/client": "^5.13.3",
     "@pinia/colada": "^0.20.0",
+    "@unovis/ts": "1.6.4",
+    "@unovis/vue": "1.6.4",
     "@tailwindcss/typography": "^0.5.19",
     "@tailwindcss/vite": "^4.1.18",
     "@tiptap/core": "^2.11.7",

src/main/presenter/deepchatAgentPresenter/accumulator.ts

Lines changed: 1 addition & 0 deletions
@@ -123,6 +123,7 @@ export function accumulate(state: StreamState, event: LLMCoreStreamEvent): void
       state.metadata.inputTokens = event.usage.prompt_tokens
       state.metadata.outputTokens = event.usage.completion_tokens
       state.metadata.totalTokens = event.usage.total_tokens
+      state.metadata.cachedInputTokens = event.usage.cached_tokens
       break
     }
     case 'stop': {

src/main/presenter/deepchatAgentPresenter/messageStore.ts

Lines changed: 60 additions & 0 deletions
@@ -8,7 +8,14 @@ import type {
   MessageMetadata
 } from '@shared/types/agent-interface'
 import type { SearchResult } from '@shared/types/core/search'
+import logger from '@shared/logger'
 import type { DeepChatMessageRow } from '../sqlitePresenter/tables/deepchatMessages'
+import {
+  buildUsageStatsRecord,
+  parseMessageMetadata,
+  resolveUsageModelId,
+  resolveUsageProviderId
+} from '../usageStats'

 export class DeepChatMessageStore {
   private sqlitePresenter: SQLitePresenter

@@ -81,6 +88,7 @@
       'sent',
       metadata
     )
+    this.persistUsageStats(messageId, metadata, 'live')
   }

   updateCompactionMessage(

@@ -112,6 +120,7 @@
       'error',
       metadata
     )
+    this.persistUsageStats(messageId, metadata, 'live')
   }

   getMessages(sessionId: string): ChatMessageRecord[] {

@@ -376,4 +385,55 @@
       summaryUpdatedAt
     }
   }
+
+  private persistUsageStats(
+    messageId: string,
+    metadataRaw: string,
+    source: 'backfill' | 'live'
+  ): void {
+    const usageStatsTable = this.sqlitePresenter.deepchatUsageStatsTable
+    if (!usageStatsTable) {
+      return
+    }
+
+    const messageRow = this.sqlitePresenter.deepchatMessagesTable.get(messageId)
+    if (!messageRow || messageRow.role !== 'assistant') {
+      return
+    }
+
+    try {
+      const metadata = parseMessageMetadata(metadataRaw)
+      if (metadata.messageType === 'compaction') {
+        return
+      }
+
+      const sessionRow = this.sqlitePresenter.deepchatSessionsTable.get(messageRow.session_id)
+      const providerId = resolveUsageProviderId(metadata, sessionRow?.provider_id)
+      const modelId = resolveUsageModelId(metadata, sessionRow?.model_id)
+
+      if (!providerId || !modelId) {
+        return
+      }
+
+      const usageRecord = buildUsageStatsRecord({
+        messageId: messageRow.id,
+        sessionId: messageRow.session_id,
+        createdAt: messageRow.created_at,
+        updatedAt: messageRow.updated_at,
+        providerId,
+        modelId,
+        metadata,
+        source
+      })
+
+      if (!usageRecord) {
+        return
+      }
+
+      usageStatsTable.upsert(usageRecord)
+    } catch (error) {
+      logger.error('Failed to persist deepchat usage stats', { messageId, source }, error)
+      return
+    }
+  }
 }

src/main/presenter/deepchatAgentPresenter/process.ts

Lines changed: 3 additions & 0 deletions
@@ -225,5 +225,8 @@ function buildUsageSnapshot(state: StreamState): Record<string, number> {
   if (typeof state.metadata.outputTokens === 'number') {
     usage.outputTokens = state.metadata.outputTokens
   }
+  if (typeof state.metadata.cachedInputTokens === 'number') {
+    usage.cachedInputTokens = state.metadata.cachedInputTokens
+  }
   return usage
 }
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
import { LifecycleHook, LifecycleContext } from '@shared/presenter'
import { LifecyclePhase } from '@shared/lifecycle'
import { presenter } from '@/presenter'

export const usageStatsBackfillHook: LifecycleHook = {
  name: 'usage-stats-backfill',
  phase: LifecyclePhase.AFTER_START,
  priority: 21,
  critical: false,
  execute: async (_context: LifecycleContext) => {
    if (!presenter) {
      throw new Error('usageStatsBackfillHook: Presenter not initialized')
    }

    const newAgentPresenter = presenter.newAgentPresenter as unknown as {
      startUsageStatsBackfill?: () => Promise<void>
    }
    if (!newAgentPresenter.startUsageStatsBackfill) {
      return
    }

    void newAgentPresenter.startUsageStatsBackfill().catch((error) => {
      console.error('usageStatsBackfillHook: failed to start usage stats backfill:', error)
    })
  }
}

src/main/presenter/lifecyclePresenter/hooks/index.ts

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ export { eventListenerSetupHook } from './ready/eventListenerSetupHook'
 export { traySetupHook } from './after-start/traySetupHook'
 export { windowCreationHook } from './after-start/windowCreationHook'
 export { legacyImportHook } from './after-start/legacyImportHook'
+export { usageStatsBackfillHook } from './after-start/usageStatsBackfillHook'
 export { trayDestroyHook } from './beforeQuit/trayDestroyHook'
 export { floatingDestroyHook } from './beforeQuit/floatingDestroyHook'
 export { presenterDestroyHook } from './beforeQuit/presenterDestroyHook'

src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts

Lines changed: 29 additions & 2 deletions
@@ -66,6 +66,28 @@ const SUPPORTED_IMAGE_SIZES = {
 // Add list of models with configurable sizes
 const SIZE_CONFIGURABLE_MODELS = ['gpt-image-1', 'gpt-4o-image', 'gpt-4o-all']

+function getOpenAIChatCachedTokens(usage: unknown): number | undefined {
+  if (!usage || typeof usage !== 'object') {
+    return undefined
+  }
+
+  const promptTokensDetails = (usage as { prompt_tokens_details?: unknown }).prompt_tokens_details
+  const inputTokensDetails = (usage as { input_tokens_details?: unknown }).input_tokens_details
+  const promptCachedTokens =
+    promptTokensDetails && typeof promptTokensDetails === 'object'
+      ? (promptTokensDetails as { cached_tokens?: unknown }).cached_tokens
+      : undefined
+  const inputCachedTokens =
+    inputTokensDetails && typeof inputTokensDetails === 'object'
+      ? (inputTokensDetails as { cached_tokens?: unknown }).cached_tokens
+      : undefined
+  const cachedTokens =
+    typeof promptCachedTokens === 'number' ? promptCachedTokens : inputCachedTokens
+  return typeof cachedTokens === 'number' && Number.isFinite(cachedTokens)
+    ? cachedTokens
+    : undefined
+}
+
 export class OpenAICompatibleProvider extends BaseLLMProvider {
   protected openai!: OpenAI
   protected isNoModelsApi: boolean = false

@@ -976,7 +998,8 @@
       yield createStreamEvent.usage({
         prompt_tokens: result.usage.input_tokens || 0,
         completion_tokens: result.usage.output_tokens || 0,
-        total_tokens: result.usage.total_tokens || 0
+        total_tokens: result.usage.total_tokens || 0,
+        cached_tokens: getOpenAIChatCachedTokens(result.usage)
       })
     }

@@ -1142,6 +1165,7 @@
       prompt_tokens: number
       completion_tokens: number
       total_tokens: number
+      cached_tokens?: number
     }
   | undefined = undefined

@@ -1156,7 +1180,10 @@

   // 1. Handle non-content events (e.g. usage, reasoning, tool_calls)
   if (chunk.usage) {
-    usage = chunk.usage
+    usage = {
+      ...chunk.usage,
+      cached_tokens: getOpenAIChatCachedTokens(chunk.usage)
+    }
   }

   // Native reasoning content (emitted directly)
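For a self-contained check, the helper from the diff can be exercised directly against the two usage shapes it supports (the chat-completions `prompt_tokens_details` shape and the responses-style `input_tokens_details` shape). The function body below is copied from the diff; only the sample inputs are illustrative:

```typescript
// Extract cached token counts from either usage shape; non-numbers and
// non-finite values fall through to undefined.
function getOpenAIChatCachedTokens(usage: unknown): number | undefined {
  if (!usage || typeof usage !== 'object') {
    return undefined
  }

  const promptTokensDetails = (usage as { prompt_tokens_details?: unknown }).prompt_tokens_details
  const inputTokensDetails = (usage as { input_tokens_details?: unknown }).input_tokens_details
  const promptCachedTokens =
    promptTokensDetails && typeof promptTokensDetails === 'object'
      ? (promptTokensDetails as { cached_tokens?: unknown }).cached_tokens
      : undefined
  const inputCachedTokens =
    inputTokensDetails && typeof inputTokensDetails === 'object'
      ? (inputTokensDetails as { cached_tokens?: unknown }).cached_tokens
      : undefined
  const cachedTokens =
    typeof promptCachedTokens === 'number' ? promptCachedTokens : inputCachedTokens
  return typeof cachedTokens === 'number' && Number.isFinite(cachedTokens)
    ? cachedTokens
    : undefined
}

// Chat-completions shape → 128; responses shape → 64; anything else → undefined.
const fromChat = getOpenAIChatCachedTokens({ prompt_tokens_details: { cached_tokens: 128 } })
const fromResponses = getOpenAIChatCachedTokens({ input_tokens_details: { cached_tokens: 64 } })
```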
