Merged
46 changes: 46 additions & 0 deletions docs/specs/settings-dashboard/plan.md
@@ -0,0 +1,46 @@
# Plan

## Data Model

- Add `deepchat_usage_stats` keyed by `message_id`.
- Store final per-message usage snapshots:
- session, provider, model
- input/output/total tokens
- cached input tokens
- estimated USD cost
- local usage date
- source (`backfill` or `live`)
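The fields above can be sketched as a row type. A minimal sketch under assumptions from this plan only; the field names and the `total = input + output` relation are illustrative, not the final schema:

```typescript
// Hypothetical shape of one deepchat_usage_stats row (names assumed from the plan).
interface UsageStatsRow {
  message_id: string // primary key: one final snapshot per message
  session_id: string
  provider_id: string
  model_id: string
  input_tokens: number
  output_tokens: number
  total_tokens: number
  cached_input_tokens: number // 0 for historical backfill rows
  estimated_cost_usd: number
  usage_date: string // local date, e.g. '2024-05-01'
  source: 'backfill' | 'live'
}

// Illustration-only constructor; assumes total = input + output,
// which real provider usage payloads may not guarantee.
function makeUsageRow(partial: Omit<UsageStatsRow, 'total_tokens'>): UsageStatsRow {
  return { ...partial, total_tokens: partial.input_tokens + partial.output_tokens }
}
```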

## Backfill

- Trigger in `AFTER_START` with a non-blocking hook.
- Scan only `deepchat_messages` joined with `deepchat_sessions`.
- Use message metadata provider/model first, then session fallback.
- Persist backfill status in config under `dashboardStatsBackfillV1`.
- Re-running is safe because stats rows are upserted by `message_id`.
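The re-run safety claim above comes down to keying on `message_id`. A sketch with an in-memory stand-in for the SQLite table (the real statement would be something like `INSERT ... ON CONFLICT(message_id) DO UPDATE`):

```typescript
interface StatsRow {
  message_id: string
  total_tokens: number
  source: 'backfill' | 'live'
}

// In-memory stand-in for deepchat_usage_stats, illustrating idempotent upserts.
class UsageStatsTableSketch {
  private rows = new Map<string, StatsRow>()

  upsert(row: StatsRow): void {
    // Same key overwrites rather than duplicates, so a second backfill pass is a no-op.
    this.rows.set(row.message_id, row)
  }

  count(): number {
    return this.rows.size
  }
}
```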

## Live Recording

- Extend stream usage metadata with optional `cached_tokens`.
- Persist cached input tokens into assistant message metadata.
- Upsert stats from `DeepChatMessageStore.finalizeAssistantMessage` and `setMessageError`.

## Dashboard Query

- Expose `newAgentPresenter.getUsageDashboard()`.
- Aggregate summary, 365-day calendar, provider breakdown, and model breakdown from `deepchat_usage_stats`.
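The aggregation above can be sketched as a single pass over the stats rows. Shapes and names are assumptions from this plan, not the actual `getUsageDashboard()` implementation:

```typescript
// Assumed row shape; see the data model section of this plan.
interface UsageRow {
  provider_id: string
  usage_date: string
  total_tokens: number
  estimated_cost_usd: number
}

function aggregateDashboard(rows: UsageRow[]) {
  const summary = { totalTokens: 0, totalCostUsd: 0 }
  const byProvider = new Map<string, number>() // provider breakdown
  const byDay = new Map<string, number>() // feeds the 365-day calendar

  for (const r of rows) {
    summary.totalTokens += r.total_tokens
    summary.totalCostUsd += r.estimated_cost_usd
    byProvider.set(r.provider_id, (byProvider.get(r.provider_id) ?? 0) + r.total_tokens)
    byDay.set(r.usage_date, (byDay.get(r.usage_date) ?? 0) + r.total_tokens)
  }
  return { summary, byProvider, byDay }
}
```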

## UI

- Add `DashboardSettings.vue` as a scrollable settings page.
- Keep the visual language aligned with the current project theme.
- Show loading, empty, running backfill, and failed backfill states.
- Render four summary cards only; remove the cache hit rate card from the dashboard overview.
- Adopt the official `shadcn-vue chart` component with `Unovis` for dashboard chart rendering.
- Rebuild the overview layout as `1 large + 3 small`, with total tokens as the hero chart.
- Replace the total-token number card with a donut-based hero chart that visualizes input/output ratio.
- Visualize cached input tokens with a compact horizontal stacked bar for cached versus uncached input.
- Visualize estimated cost with a 30-day area chart while keeping the total cost as the primary value.
- Reuse `recordingStartedAt` to render a locale-specific, number-first "days with DeepChat" summary card in the renderer.
- Keep provider/model ranking queries unchanged, but render them as horizontal token bar charts with internal scrolling.
- Translate changed dashboard copy per locale instead of falling back to English sentence structure.
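The donut, stacked-bar, and ranking cards above all rely on the same ratio math (input vs. output, cached vs. uncached). A pure helper sketch, not the actual component code:

```typescript
// Percentage of `part` within `whole`, rounded to one decimal place.
// Guards the empty-data state so the dashboard can render 0% instead of NaN.
function ratioPercent(part: number, whole: number): number {
  if (whole <= 0) return 0
  return Math.round((part / whole) * 1000) / 10
}
```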
33 changes: 33 additions & 0 deletions docs/specs/settings-dashboard/spec.md
@@ -0,0 +1,33 @@
# Settings Dashboard

## Goal

Add a dedicated dashboard page under settings to show token usage, cached token usage, estimated cost, and a GitHub-like contribution calendar.

## User Stories

- As a user, I want to see my total token usage, cached token usage, and estimated cost in one place.
- As an existing user, I want the dashboard to initialize from the current `deepchat_messages` table once, without scanning legacy tables.
- As a user, I want the dashboard to keep growing from newly recorded usage without repeatedly recomputing from old chat tables.

## Acceptance Criteria

- A new settings route named `settings-dashboard` is available after provider settings.
- The dashboard reads from a dedicated `deepchat_usage_stats` table only.
- Existing users get a one-time background backfill from current `deepchat_messages`.
- Historical backfill sets cached input tokens to `0`.
- New assistant message finalization and error finalization upsert usage rows into `deepchat_usage_stats`.
- Price estimation uses current provider pricing first and falls back to `aihubmix` for the same model id when needed.
- The page contains four overview cards arranged as `1 large + 3 small`: a total-token hero chart, a cached-token ratio card, an estimated-cost trend card, and a days-with-DeepChat card.
- The total-token hero chart uses the shared `shadcn-vue chart + Unovis` visual language, keeps the donut semantics for input/output composition, and shows exact values plus percentages.
- The cached-token card uses the same chart system to visualize cached versus uncached input tokens and shows exact values plus percentages.
- The estimated-cost card uses the same chart system to show the total estimated cost plus a lightweight 30-day area trend.
- The "days with DeepChat" card is derived from the earliest recorded usage date and rendered in a number-first, locale-specific layout.
- The page contains a 365-day contribution calendar and provider/model breakdowns.
- Provider and model breakdown cards render horizontal token bar charts with internal scrolling without growing the full page indefinitely.
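The pricing fallback criterion above can be sketched as a two-step lookup. Table shapes are assumptions; the criterion only fixes the order (current provider first, then `aihubmix` for the same model id):

```typescript
interface ModelPricing {
  inputUsdPerMTok: number
  outputUsdPerMTok: number
}
type PricingTable = Record<string, ModelPricing | undefined>

// Current provider pricing wins; aihubmix is consulted only on a miss.
function resolvePricing(
  providerPricing: PricingTable,
  aihubmixPricing: PricingTable,
  modelId: string
): ModelPricing | undefined {
  return providerPricing[modelId] ?? aihubmixPricing[modelId]
}
```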

## Non-Goals

- No backfill from legacy `messages` or `conversations` tables.
- No delete-triggered rollback of accumulated usage stats.
- No additional day-level rollup table in v1.
8 changes: 8 additions & 0 deletions docs/specs/settings-dashboard/tasks.md
@@ -0,0 +1,8 @@
# Tasks

1. Add shared dashboard types and cached token usage plumbing.
2. Add `deepchat_usage_stats` table and wire it into `SQLitePresenter`.
3. Record live usage stats on assistant finalize and error finalize.
4. Implement one-time historical backfill and dashboard query methods in `NewAgentPresenter`.
5. Add settings route, dashboard page, and i18n strings.
6. Add focused main/renderer tests and run format, i18n, and lint.
2 changes: 2 additions & 0 deletions package.json
@@ -115,6 +115,8 @@
"@lingual/i18n-check": "0.8.12",
"@mcp-ui/client": "^5.13.3",
"@pinia/colada": "^0.20.0",
"@unovis/ts": "1.6.4",
"@unovis/vue": "1.6.4",
"@tailwindcss/typography": "^0.5.19",
"@tailwindcss/vite": "^4.1.18",
"@tiptap/core": "^2.11.7",
1 change: 1 addition & 0 deletions src/main/presenter/deepchatAgentPresenter/accumulator.ts
@@ -123,6 +123,7 @@ export function accumulate(state: StreamState, event: LLMCoreStreamEvent): void
state.metadata.inputTokens = event.usage.prompt_tokens
state.metadata.outputTokens = event.usage.completion_tokens
state.metadata.totalTokens = event.usage.total_tokens
state.metadata.cachedInputTokens = event.usage.cached_tokens
break
}
case 'stop': {
60 changes: 60 additions & 0 deletions src/main/presenter/deepchatAgentPresenter/messageStore.ts
@@ -8,7 +8,14 @@ import type {
MessageMetadata
} from '@shared/types/agent-interface'
import type { SearchResult } from '@shared/types/core/search'
import logger from '@shared/logger'
import type { DeepChatMessageRow } from '../sqlitePresenter/tables/deepchatMessages'
import {
buildUsageStatsRecord,
parseMessageMetadata,
resolveUsageModelId,
resolveUsageProviderId
} from '../usageStats'

export class DeepChatMessageStore {
private sqlitePresenter: SQLitePresenter
@@ -81,6 +88,7 @@ export class DeepChatMessageStore {
'sent',
metadata
)
this.persistUsageStats(messageId, metadata, 'live')
}

updateCompactionMessage(
@@ -112,6 +120,7 @@ export class DeepChatMessageStore {
'error',
metadata
)
this.persistUsageStats(messageId, metadata, 'live')
}

getMessages(sessionId: string): ChatMessageRecord[] {
@@ -376,4 +385,55 @@ export class DeepChatMessageStore {
summaryUpdatedAt
}
}

private persistUsageStats(
messageId: string,
metadataRaw: string,
source: 'backfill' | 'live'
): void {
const usageStatsTable = this.sqlitePresenter.deepchatUsageStatsTable
if (!usageStatsTable) {
return
}

const messageRow = this.sqlitePresenter.deepchatMessagesTable.get(messageId)
if (!messageRow || messageRow.role !== 'assistant') {
return
}

try {
const metadata = parseMessageMetadata(metadataRaw)
if (metadata.messageType === 'compaction') {
return
}

const sessionRow = this.sqlitePresenter.deepchatSessionsTable.get(messageRow.session_id)
const providerId = resolveUsageProviderId(metadata, sessionRow?.provider_id)
const modelId = resolveUsageModelId(metadata, sessionRow?.model_id)

if (!providerId || !modelId) {
return
}

const usageRecord = buildUsageStatsRecord({
messageId: messageRow.id,
sessionId: messageRow.session_id,
createdAt: messageRow.created_at,
updatedAt: messageRow.updated_at,
providerId,
modelId,
metadata,
source
})

if (!usageRecord) {
return
}

usageStatsTable.upsert(usageRecord)
} catch (error) {
logger.error('Failed to persist deepchat usage stats', { messageId, source }, error)
return
}
}
}
3 changes: 3 additions & 0 deletions src/main/presenter/deepchatAgentPresenter/process.ts
@@ -225,5 +225,8 @@ function buildUsageSnapshot(state: StreamState): Record<string, number> {
if (typeof state.metadata.outputTokens === 'number') {
usage.outputTokens = state.metadata.outputTokens
}
if (typeof state.metadata.cachedInputTokens === 'number') {
usage.cachedInputTokens = state.metadata.cachedInputTokens
}
return usage
}
@@ -0,0 +1,26 @@
import { LifecycleHook, LifecycleContext } from '@shared/presenter'
import { LifecyclePhase } from '@shared/lifecycle'
import { presenter } from '@/presenter'

export const usageStatsBackfillHook: LifecycleHook = {
name: 'usage-stats-backfill',
phase: LifecyclePhase.AFTER_START,
priority: 21,
critical: false,
execute: async (_context: LifecycleContext) => {
if (!presenter) {
throw new Error('usageStatsBackfillHook: Presenter not initialized')
}

const newAgentPresenter = presenter.newAgentPresenter as unknown as {
startUsageStatsBackfill?: () => Promise<void>
}
if (!newAgentPresenter.startUsageStatsBackfill) {
return
}

void newAgentPresenter.startUsageStatsBackfill().catch((error) => {
P1: Serialize usage backfill after legacy import completion

In the AFTER_START phase, this launches startUsageStatsBackfill() as fire-and-forget, but the existing legacyImportHook (priority 20) also starts import asynchronously and returns immediately; this means both jobs run concurrently on first upgrade. Since runUsageStatsBackfill() takes a one-time snapshot with deepchatMessagesTable.listAssistantUsageCandidates() and then marks backfill completed, any legacy messages imported after that snapshot never get rows in deepchat_usage_stats, so migrated users can see permanently undercounted dashboard totals.
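One way to address the ordering problem described above is to gate the backfill on an import-completion signal. A hypothetical sketch: `markLegacyImportDone()` and `runAfterImport()` do not exist in this diff; this only illustrates the ordering guarantee, not the actual lifecycle API:

```typescript
type Job = () => void

// Hypothetical gate: the legacy import hook would call markLegacyImportDone()
// when its async import finishes; the usage backfill hook would schedule
// itself via runAfterImport() instead of firing immediately.
class StartupGate {
  private importDone = false
  private pending: Job[] = []

  markLegacyImportDone(): void {
    this.importDone = true
    // Flush jobs that were queued while the import was still running.
    for (const job of this.pending.splice(0)) {
      job()
    }
  }

  runAfterImport(job: Job): void {
    if (this.importDone) {
      job()
    } else {
      this.pending.push(job)
    }
  }
}
```

With this shape, a backfill scheduled before the import finishes is deferred, so its snapshot of `deepchat_messages` always includes migrated rows.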


console.error('usageStatsBackfillHook: failed to start usage stats backfill:', error)
})
}
}
1 change: 1 addition & 0 deletions src/main/presenter/lifecyclePresenter/hooks/index.ts
@@ -11,6 +11,7 @@ export { eventListenerSetupHook } from './ready/eventListenerSetupHook'
export { traySetupHook } from './after-start/traySetupHook'
export { windowCreationHook } from './after-start/windowCreationHook'
export { legacyImportHook } from './after-start/legacyImportHook'
export { usageStatsBackfillHook } from './after-start/usageStatsBackfillHook'
export { trayDestroyHook } from './beforeQuit/trayDestroyHook'
export { floatingDestroyHook } from './beforeQuit/floatingDestroyHook'
export { presenterDestroyHook } from './beforeQuit/presenterDestroyHook'
@@ -66,6 +66,28 @@ const SUPPORTED_IMAGE_SIZES = {
// Add list of models with configurable sizes
const SIZE_CONFIGURABLE_MODELS = ['gpt-image-1', 'gpt-4o-image', 'gpt-4o-all']

function getOpenAIChatCachedTokens(usage: unknown): number | undefined {
if (!usage || typeof usage !== 'object') {
return undefined
}

const promptTokensDetails = (usage as { prompt_tokens_details?: unknown }).prompt_tokens_details
const inputTokensDetails = (usage as { input_tokens_details?: unknown }).input_tokens_details
const promptCachedTokens =
promptTokensDetails && typeof promptTokensDetails === 'object'
? (promptTokensDetails as { cached_tokens?: unknown }).cached_tokens
: undefined
const inputCachedTokens =
inputTokensDetails && typeof inputTokensDetails === 'object'
? (inputTokensDetails as { cached_tokens?: unknown }).cached_tokens
: undefined
const cachedTokens =
typeof promptCachedTokens === 'number' ? promptCachedTokens : inputCachedTokens
return typeof cachedTokens === 'number' && Number.isFinite(cachedTokens)
? cachedTokens
: undefined
}

export class OpenAICompatibleProvider extends BaseLLMProvider {
protected openai!: OpenAI
protected isNoModelsApi: boolean = false
@@ -976,7 +998,8 @@ export class OpenAICompatibleProvider extends BaseLLMProvider {
yield createStreamEvent.usage({
prompt_tokens: result.usage.input_tokens || 0,
completion_tokens: result.usage.output_tokens || 0,
total_tokens: result.usage.total_tokens || 0
total_tokens: result.usage.total_tokens || 0,
cached_tokens: getOpenAIChatCachedTokens(result.usage)
})
}

@@ -1142,6 +1165,7 @@
prompt_tokens: number
completion_tokens: number
total_tokens: number
cached_tokens?: number
}
| undefined = undefined

@@ -1156,7 +1180,10 @@ export class OpenAICompatibleProvider extends BaseLLMProvider {

// 1. Handle non-content events (e.g. usage, reasoning, tool_calls)
if (chunk.usage) {
usage = chunk.usage
usage = {
...chunk.usage,
cached_tokens: getOpenAIChatCachedTokens(chunk.usage)
}
}

// Native reasoning content handling (emitted directly)
@@ -57,6 +57,22 @@ const SUPPORTED_IMAGE_SIZES = {
// List of models with configurable sizes
const SIZE_CONFIGURABLE_MODELS = ['gpt-image-1', 'gpt-4o-image', 'gpt-4o-all']

function getOpenAIResponseCachedTokens(
usage:
| {
input_tokens_details?: {
cached_tokens?: number
}
}
| null
| undefined
): number | undefined {
const cachedTokens = usage?.input_tokens_details?.cached_tokens
return typeof cachedTokens === 'number' && Number.isFinite(cachedTokens)
? cachedTokens
: undefined
}

export class OpenAIResponsesProvider extends BaseLLMProvider {
protected openai!: OpenAI
private isNoModelsApi: boolean = false
@@ -521,7 +537,8 @@ export class OpenAIResponsesProvider extends BaseLLMProvider {
yield createStreamEvent.usage({
prompt_tokens: result.usage.input_tokens || 0,
completion_tokens: result.usage.output_tokens || 0,
total_tokens: result.usage.total_tokens || 0
total_tokens: result.usage.total_tokens || 0,
cached_tokens: getOpenAIResponseCachedTokens(result.usage)
})
}

@@ -645,6 +662,7 @@
prompt_tokens: number
completion_tokens: number
total_tokens: number
cached_tokens?: number
}
| undefined = undefined

@@ -954,7 +972,8 @@
usage = {
prompt_tokens: response.usage.input_tokens || 0,
completion_tokens: response.usage.output_tokens || 0,
total_tokens: response.usage.total_tokens || 0
total_tokens: response.usage.total_tokens || 0,
cached_tokens: getOpenAIResponseCachedTokens(response.usage)
}
yield createStreamEvent.usage(usage)
}