feat: add per-user/per-token usage-log statistics and export (#4498)
Conversation
- New backend statistics API (GET /api/log/statistics) that aggregates logs by user + token + time range
- New backend Excel export API (GET /api/log/statistics/export) that uses excelize to generate a report with a totals row
- New frontend statistics Drawer panel with VChart charts (request-count distribution, quota-consumption distribution, request trend)
- Statistics summary table showing quota consumed and token counts in million-token units
- "Statistics" button visible to admins, with export to an Excel file
Walkthrough
Adds backend endpoints and model queries to compute, trend, and export per-model usage statistics as XLSX; updates Go module deps. Frontend adds a statistics drawer, hook, UI actions, chart specs, and an export flow to fetch and download those statistics.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as React Frontend
    participant API as Gin Controller
    participant DB as Database Model
    participant Excel as Excel Library
    rect rgba(100,150,200,0.5)
        Note over Client,DB: Statistics Retrieval Flow
        Client->>Client: Submit stats form (username, token_name, model_name, date range)
        Client->>API: GET /api/log/statistics?...
        API->>DB: GetLogStatistics(...) and GetLogStatisticsTrend(...)
        DB-->>API: ModelStatistics[], TrendPoint[]
        API-->>Client: JSON { data: { models, trend } }
        Client->>Client: Render charts and summary table
    end
    rect rgba(150,100,200,0.5)
        Note over Client,Excel: Excel Export Flow
        Client->>Client: Click "导出 Excel" (Export Excel)
        Client->>API: GET /api/log/statistics/export?...
        API->>DB: GetLogStatistics(...)
        DB-->>API: ModelStatistics[]
        API->>Excel: Build workbook, write headers/rows, append totals
        Excel-->>API: XLSX bytes
        API-->>Client: Response (application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, Content-Disposition)
        Client->>Client: Trigger file download
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
web/src/hooks/usage-logs/useUsageLogsData.jsx (1)
837-905: ⚠️ Potential issue | 🔴 Critical
The `...logStatistics` spread at the end clobbers main-page state (`loading`, `formInitValues`, `setFormApi`).
`useLogStatistics()` (per `web/src/hooks/usage-logs/useLogStatistics.jsx`) exposes its own `loading`, `formInitValues`, and `setFormApi` for the drawer. By spreading `...logStatistics` after the existing return-object literal:
- `loading` (returned at line 842 for the main logs table) is overwritten by the drawer's `loading`. The skeleton/loading UI of the table will now reflect the statistics drawer's fetch state instead of the table's own.
- `formInitValues` (line 854, used by the main `LogsFilters` form) is overwritten by the drawer's minimal init values. The page's main filters lose the `channel`, `group`, `request_id`, and `dateRange` defaults and `logType: '0'`.
- `setFormApi` (line 853) is overwritten by the drawer's setter, so when the main filter form mounts and calls `setFormApi`, it stores its API instance into the drawer's state — breaking the main page's `getFormValues()` and the `useEffect([formApi])` initial stats load, while leaving the drawer's form unset.

Scope the drawer's state under a non-conflicting prefix instead of spreading.
Proposed fix
```diff
 return {
   // ... existing fields ...
   // Translation
   t,
-
-  // Statistics drawer
-  ...logStatistics,
-  setShowStatisticsDrawer: logStatistics.setVisible,
+
+  // Statistics drawer (namespaced to avoid clobbering main-page state)
+  statisticsDrawer: logStatistics,
+  setShowStatisticsDrawer: logStatistics.setVisible,
 };
```

…and then in `web/src/components/table/usage-logs/index.jsx` pass `<StatisticsDrawer {...logsData.statisticsDrawer} t={logsData.t} />` rather than `{...logsData}`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/hooks/usage-logs/useUsageLogsData.jsx` around lines 837 - 905, The spread of ...logStatistics at the end of the return value clobbers main-page state (overwriting loading, formInitValues, setFormApi); fix by namespacing the drawer state instead of spreading it — return a new key (e.g., statisticsDrawer) whose value contains the logStatistics object (and map setVisible to setShowStatisticsDrawer if needed) so main hooks keep their own loading/form state; update consumers (e.g., the StatisticsDrawer usage in web/src/components/table/usage-logs/index.jsx) to receive props from logsData.statisticsDrawer (and pass t explicitly) rather than receiving the whole logsData.
🧹 Nitpick comments (7)
model/log.go (1)
541-543: `model_name LIKE ?` passes user input as the pattern unchanged.

Unlike `GetUserLogs` (line 397), which sanitizes wildcards via `sanitizeLikePattern(...) ... LIKE ? ESCAPE '!'`, this helper feeds `modelName` straight into `LIKE`. A caller passing `%` or `_` will silently match more rows than intended, and the behavior diverges from the rest of the file. Prefer the same `sanitizeLikePattern` + `ESCAPE` pattern, or use exact equality if wildcard matching isn't required by the UI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@model/log.go` around lines 541 - 543, The query uses modelName directly in tx.Where("model_name LIKE ?", modelName) which allows raw wildcards; change it to use the same sanitization as GetUserLogs by passing sanitizeLikePattern(modelName) and adding the ESCAPE clause (e.g., "model_name LIKE ? ESCAPE '!'") so user-supplied %/_ are treated literally, or switch to exact match ("model_name = ?") if wildcard matching is not required; update the code around the modelName handling to call sanitizeLikePattern and include the ESCAPE '!' in the WHERE clause to match the file's existing behavior.

web/src/components/table/usage-logs/StatisticsDrawer.jsx (1)
46-48: Chart specs and summary totals are recomputed on every render.
`buildBarSpec()`, `buildTrendSpec()`, `buildQuotaBarSpec()` (lines 46-48) and the four `statistics.reduce(...)` calls (lines 91-94) all run on every render of the drawer, even when only `visible` toggles. Wrap them in `useMemo` keyed on `statistics`/`trend` so chart spec generation and summing don't fire on unrelated parent re-renders (and so VChart doesn't see a fresh spec object identity unnecessarily).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/components/table/usage-logs/StatisticsDrawer.jsx` around lines 46 - 48, The chart spec builders (buildBarSpec, buildTrendSpec, buildQuotaBarSpec) and the four statistics.reduce(...) totals are being recomputed on every render; wrap each spec creation and each reduce-based summary in React.useMemo and memoize them by their true inputs (e.g. memoize buildBarSpec and buildQuotaBarSpec on statistics, memoize buildTrendSpec on trend/statistics as appropriate, and memoize each reduce result on statistics) so they only recompute when those inputs change—this prevents unnecessary work and keeps spec object identities stable for VChart.

controller/log.go (1)
127-159: Trend bucketing key collision when calling both endpoints.
`GetLogStatistics` calls both `model.GetLogStatistics` and `model.GetLogStatisticsTrend` for every JSON request. Given the performance characteristics flagged in `model/log.go` (the trend query effectively pulls every matching row), every drawer open / form submit will run two heavy queries back-to-back. Worth fronting the `/api/log/statistics` endpoint with a tighter rate limit (e.g. `middleware.SearchRateLimit()` or `CriticalRateLimit()`), similarly to `logRoute.GET("/self/search", ...)`, to avoid an admin accidentally DOS'ing the DB.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controller/log.go` around lines 127 - 159, The endpoint handler GetLogStatistics currently triggers two heavy DB queries (model.GetLogStatistics and model.GetLogStatisticsTrend) on every request, so add a stricter rate-limit middleware to the route that registers this handler (e.g., wrap the GET route that points to GetLogStatistics with middleware.SearchRateLimit() or middleware.CriticalRateLimit()) so repeated drawer opens/forms cannot DOS the DB; update the route definition that binds GetLogStatistics to include the chosen middleware in the route chain.

web/src/hooks/usage-logs/useLogStatistics.jsx (4)
153-153: Avoid shadowing the `t` translation function with the `.map` callback parameter.
`useTranslation()`'s `t` is in scope here, and `.map((t) => …)` (also at line 160 and inside the trend block) shadows it. Today it works because the inner body doesn't call `t(...)`, but it's a footgun for future edits. Rename to `item`/`row`.

🛠️ Suggested fix
```diff
-  const models = [...new Set(trend.map((t) => t.model_name))];
+  const models = [...new Set(trend.map((row) => row.model_name))];
 ...
-  data: [{ id: 'trendData', values: trend.map((t) => ({
-    Time: t.time,
-    Model: t.model_name,
-    Count: t.request_count,
-  }))}],
+  data: [{ id: 'trendData', values: trend.map((row) => ({
+    Time: row.time,
+    Model: row.model_name,
+    Count: row.request_count,
+  }))}],
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/hooks/usage-logs/useLogStatistics.jsx` at line 153, The array mapping callbacks shadow the translation function t from useTranslation(): rename the map parameter(s) in the expressions that use trend.map (e.g., the line creating const models = [...new Set(trend.map((t) => t.model_name))] and other trend.map callbacks) to a non-conflicting identifier like item or row so they no longer shadow the outer t; update all occurrences inside those callback bodies to use the new name (e.g., item.model_name) and leave the useTranslation() t usage unchanged.
115-117: Surface backend error messages on export failure.

When `/api/log/statistics/export` returns a JSON error (e.g. `{success:false, message:"..."}`), the response is still received as a blob due to `responseType: 'blob'`, and the user only sees the generic `导出失败` ("export failed"). Consider detecting a non-Excel `Content-Type` and reading the blob as text to surface the real message.

🛠️ Suggested fix
```diff
   const res = await API.get(url, { responseType: 'blob' });
+  const ct = res.headers['content-type'] || '';
+  if (ct.includes('application/json')) {
+    const text = await res.data.text();
+    try {
+      const { message } = JSON.parse(text);
+      showError(message || t('导出失败'));
+    } catch {
+      showError(t('导出失败'));
+    }
+    return;
+  }
   const blob = new Blob([res.data], {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/hooks/usage-logs/useLogStatistics.jsx` around lines 115 - 117, The export catch hides backend JSON errors because the request uses responseType: 'blob'; update the export logic in useLogStatistics (the function that calls /api/log/statistics/export) to check response.headers['content-type'] (or Blob.type) after receiving the blob: if the content-type is not an Excel MIME, read the blob as text (await blob.text()), JSON.parse it, and call showError(parsed.message || parsed.msg || t('导出失败')); otherwise proceed with the existing file-download flow; keep the existing catch to handle network errors but surface parsed server messages instead of always showing t('导出失败').
104-109: Filename regex misses RFC 5987 (`filename*=UTF-8''…`).

If the backend returns a Unicode filename via `filename*=UTF-8''...` (common when the username/period contains non-ASCII), the current regex falls back to `usage_statistics_${username}.xlsx` instead of using the server-provided name.

🛠️ Suggested fix
```diff
-  if (contentDisposition) {
-    const match = contentDisposition.match(/filename="?([^"]+)"?/);
-    if (match) filename = match[1];
-  }
+  if (contentDisposition) {
+    const star = contentDisposition.match(/filename\*\s*=\s*[^']*''([^;]+)/i);
+    if (star) {
+      try { filename = decodeURIComponent(star[1]); } catch { /* ignore */ }
+    } else {
+      const plain = contentDisposition.match(/filename\s*=\s*"?([^";]+)"?/i);
+      if (plain) filename = plain[1];
+    }
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/hooks/usage-logs/useLogStatistics.jsx` around lines 104 - 109, The current filename extraction in useLogStatistics.jsx only matches basic filename="..." and ignores RFC 5987 encoded names (filename*=UTF-8''...), so Unicode server-provided names are dropped; update the extraction logic around contentDisposition/filename to first check for an RFC5987 pattern (filename*=UTF-8''... decode percent-encoding to UTF-8), fall back to the existing filename="..." regex if not present, and finally default to the generated usage_statistics_${params.username}.xlsx; locate the block referencing contentDisposition, filename and the match variable and implement the additional filename* parsing and decoding step before assigning filename.
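As background for the fix above: the `filename*=UTF-8''…` syntax is the RFC 2231/5987 extended-parameter form, and the Go backend's standard library can already decode it. A minimal, self-contained sketch (the header value below is a made-up example, not output from this PR):

```go
package main

import (
	"fmt"
	"mime"
)

// decodeFilename round-trips a Content-Disposition header value back to the
// filename. mime.ParseMediaType decodes RFC 2231/5987 extended parameters
// (filename*=UTF-8''...) and exposes the result under the plain "filename" key.
func decodeFilename(cd string) (string, error) {
	_, params, err := mime.ParseMediaType(cd)
	if err != nil {
		return "", err
	}
	return params["filename"], nil
}

func main() {
	// %E7%BB%9F%E8%AE%A1 is the percent-encoded UTF-8 for "统计".
	cd := "attachment; filename*=UTF-8''usage_%E7%BB%9F%E8%AE%A1.xlsx"
	name, err := decodeFilename(cd)
	if err != nil {
		panic(err)
	}
	fmt.Println(name)
}
```

The frontend has no stdlib equivalent, which is why the regex in the suggested fix has to reimplement this decoding by hand.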
36-46: `formInitValues` is recreated on every render with a fresh `now`.

`new Date()` runs each render, so `formInitValues.dateRange[1]` keeps drifting. Most form libs only consume `initValues` on mount, but downstream code that compares identity (e.g. memo deps, reset behavior) can misbehave. Wrap in `useMemo`:

🛠️ Suggested fix
```diff
-  const now = new Date();
-  const formInitValues = {
-    username: '',
-    token_name: '',
-    model_name: '',
-    dateRange: [
-      timestamp2string(getTodayStartTimestamp()),
-      timestamp2string(now.getTime() / 1000 + 3600),
-    ],
-  };
+  const formInitValues = useMemo(() => ({
+    username: '',
+    token_name: '',
+    model_name: '',
+    dateRange: [
+      timestamp2string(getTodayStartTimestamp()),
+      timestamp2string(Date.now() / 1000 + 3600),
+    ],
+  }), []);
```

(Don't forget to add `useMemo` to the React import.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/hooks/usage-logs/useLogStatistics.jsx` around lines 36 - 46, The formInitValues object is being recreated every render because now = new Date() is evaluated inline; wrap the creation of formInitValues (and now) in a useMemo so its value is stable across renders — import useMemo from React and create a memoized value (e.g., const formInitValues = useMemo(() => { const now = new Date(); return { username: '', token_name: '', model_name: '', dateRange: [timestamp2string(getTodayStartTimestamp()), timestamp2string(now.getTime() / 1000 + 3600)], }; }, [])) so components and memo deps that rely on identity (formInitValues) won’t drift each render.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@controller/log.go`:
- Around line 240-245: The current Content-Disposition emits raw UTF-8 in
filename (variable filename) which is not allowed by RFC 6266; create an
ASCII-safe fallback (e.g. fallbackFilename) by stripping/transliterating or
replacing non-ASCII chars (or falling back to a safe name like "export.xlsx"),
keep encodedFilename for filename* (the UTF-8 percent-encoded value), and set
the header via c.Header("Content-Disposition", fmt.Sprintf("attachment;
filename=\"%s\"; filename*=UTF-8''%s", fallbackFilename, encodedFilename));
ensure fallbackFilename is strictly ASCII and properly quoted.
- Around line 181-238: The excelize file created with excelize.NewFile() is
never closed, leaking resources; after creating the file (excelize.NewFile())
add a defer f.Close() immediately to ensure resources are released (you can
ignore or log the Close() error), leaving the rest of the code (writing cells
and calling f.WriteToBuffer()) unchanged so the deferred close runs even on
early returns or errors.
In `@model/log.go`:
- Around line 560-620: GetLogStatisticsTrend currently pulls every row and
buckets in Go causing huge memory use; change the SQL to compute a bucketed
timestamp per row (hourly: (created_at/3600)*3600, daily:
(created_at/86400)*86400) and SELECT that bucket (e.g. bucket_ts), model_name,
COALESCE(SUM(quota),0) AS quota_sum, COUNT(*) AS request_count, then GROUP BY
bucket_ts, model_name so the DB collapses rows; update the rawRow to include
BucketTs int64, iterate rows to create TrendPoint by formatting
time.Unix(BucketTs,0) (remove the brittle bucket[:14]+"00" logic and bucketMap
minute truncation), and finally sort the resulting []TrendPoint by Time to
return a deterministic, time-ordered slice.
- Around line 547-558: GetLogStatistics currently calls buildStatisticsQuery
without enforcing a time range which can cause full-table scans; update the call
path so a sane time window is enforced: either validate in
controller.GetLogStatistics/controller.ExportLogStatistics to require non-zero
startTimestamp or endTimestamp and return a 400, or (recommended) default
missing timestamps inside GetLogStatistics/GetLogStatisticsTrend to a bounded
window (e.g., endTimestamp = now, startTimestamp = now - 30*24h) and clamp any
client-supplied range to a max span (e.g., 90 days), so buildStatisticsQuery
always receives a bounded range; apply the same defaulting/clamping logic to
GetLogStatisticsTrend to avoid heavy scans.
In `@web/src/components/table/usage-logs/StatisticsDrawer.jsx`:
- Around line 89-95: The summary aggregation currently builds summaryData with
model_name: t('合计') and sums prompt_tokens/completion_tokens but doesn't mark
the synthesized row, which risks colliding with a real model named the same and
can silently break total_tokens if fields change; update the summaryData object
created in StatisticsDrawer (symbol: summaryData) to include a sentinel flag
(e.g., __isSummary: true) and add a short comment explaining total_tokens is
derived from prompt_tokens + completion_tokens (symbol: total_tokens column).
Then change the rowKey logic (the current record.model_name === t('合计') check)
to detect summary rows using the sentinel (record.__isSummary) instead of
comparing localized text.
In `@web/src/hooks/usage-logs/useLogStatistics.jsx`:
- Around line 77-91: Move the loading flag resets into finally blocks so they
always run even if a synchronous error escapes the try/catch; specifically, wrap
the API call block in useLogStatistics.jsx (the section that calls API.get and
calls setStatistics/setTrend) with try { ... } catch { ... } finally {
setLoading(false); } and likewise adjust the exportExcel function (the block
that uses setExportLoading) to ensure setExportLoading(false) is called in a
finally block; reference the setLoading, setExportLoading, setStatistics,
setTrend and exportExcel symbols when making the edits.
- Around line 110-114: The download anchor is revoked immediately after
link.click(), which can cancel downloads in some browsers; modify the code in
useLogStatistics.jsx so you append the created anchor element (link) to
document.body, call link.click(), then schedule URL.revokeObjectURL(link.href)
with a short timeout (e.g. setTimeout(..., 0 or 1000)) or revoke in a click/load
handler, and finally remove the anchor from the DOM; keep using the same blob
and filename variables and ensure link is cleaned up after revocation.
- Around line 13-14: Remove the unused imports initVChartSemiTheme and VChart
from the top of useLogStatistics.jsx since this hook only builds chart specs and
rendering/theme init happen elsewhere; locate the import statements referencing
initVChartSemiTheme and VChart and delete them so the file no longer imports
these unused symbols.
- Around line 56-67: The dateRange elements can be either strings or Date
objects, so normalize them explicitly before parsing: in the logic that sets
start_timestamp and end_timestamp (refer to values.dateRange, start_timestamp,
end_timestamp), check the element type and, if it's a Date, convert to an
ISO/timestamp string via toISOString() or getTime()/Math.floor(getTime()/1000)
as appropriate; if it's a string (from timestamp2string()), parse it using a
known format/Date constructor or create a Date and then derive seconds
consistently so both branches produce the same epoch seconds instead of relying
on Date.parse(dateObj). Also remove the unused import initVChartSemiTheme from
this file since it’s only used in useDashboardCharts.jsx.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1e96e30e-b959-4850-b716-f9a65f740541
⛔ Files ignored due to path filters (2)
`go.sum` is excluded by `!**/*.sum`; `web/bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (9)
`controller/log.go`, `go.mod`, `model/log.go`, `router/api-router.go`, `web/src/components/table/usage-logs/StatisticsDrawer.jsx`, `web/src/components/table/usage-logs/UsageLogsActions.jsx`, `web/src/components/table/usage-logs/index.jsx`, `web/src/hooks/usage-logs/useLogStatistics.jsx`, `web/src/hooks/usage-logs/useUsageLogsData.jsx`
```go
filename := title + ".xlsx"
// URL-encode filename for non-ASCII characters
encodedFilename := url.PathEscape(filename)
c.Header("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"; filename*=UTF-8''%s", filename, encodedFilename))
c.Data(http.StatusOK, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", buf.Bytes())
```
Content-Disposition filename= should be ASCII; only filename*= should hold UTF-8.
title can contain non‑ASCII characters (e.g. 用户名-令牌-2026-04-27 ~ 2026-04-27), so you’re emitting a filename="..." value with raw UTF‑8 bytes. RFC 6266 requires the unquoted/quoted filename parameter to be ASCII; only filename* carries the encoded UTF-8 form. Some browsers/proxies will mishandle the raw value (garbled names, header parsing errors).
Use an ASCII-safe fallback for filename= and keep the encoded one for filename*:
Proposed fix
```diff
 filename := title + ".xlsx"
-// URL-encode filename for non-ASCII characters
+asciiFallback := "log_statistics.xlsx"
 encodedFilename := url.PathEscape(filename)
 c.Header("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
-c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"; filename*=UTF-8''%s", filename, encodedFilename))
+c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"; filename*=UTF-8''%s", asciiFallback, encodedFilename))
 c.Data(http.StatusOK, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", buf.Bytes())
```
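As an aside, Go's standard library can build this header instead of hand-formatting it: `mime.FormatMediaType` emits an RFC 2231 `filename*` parameter automatically when the value contains non-ASCII bytes, and a plain `filename=` token otherwise. A sketch of the alternative approach (not the code in this PR):

```go
package main

import (
	"fmt"
	"mime"
	"strings"
)

// contentDisposition builds the header value with the stdlib. Non-ASCII
// filenames come out percent-encoded in the filename* form; ASCII names
// stay as a plain filename parameter.
func contentDisposition(filename string) string {
	return mime.FormatMediaType("attachment", map[string]string{"filename": filename})
}

// hasExtendedForm reports whether the RFC 2231 encoded form was used.
func hasExtendedForm(cd string) bool {
	return strings.Contains(cd, "filename*=utf-8''")
}

func main() {
	fmt.Println(contentDisposition("用户-令牌-2026-04-27.xlsx"))
	fmt.Println(contentDisposition("log_statistics.xlsx"))
}
```

Note this emits only one of the two forms per name; the review's proposed fix sends both `filename=` (ASCII fallback) and `filename*=` together, which degrades better on old clients.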
```go
func GetLogStatistics(username string, tokenName string, startTimestamp int64, endTimestamp int64, modelName string) ([]ModelStatistics, error) {
	tx := buildStatisticsQuery(username, tokenName, startTimestamp, endTimestamp, modelName)
	var results []ModelStatistics
	err := tx.Select("model_name, COALESCE(SUM(quota),0) as quota, COALESCE(SUM(prompt_tokens),0) as prompt_tokens, COALESCE(SUM(completion_tokens),0) as completion_tokens, COUNT(*) as request_count").
		Group("model_name").
		Order("quota DESC").
		Find(&results).Error
	if err != nil {
		return nil, err
	}
	return results, nil
}
```
Statistics query lacks an enforced time-range bound and may scan the entire logs table.
When neither startTimestamp nor endTimestamp is provided, buildStatisticsQuery produces a query that filters only by type = LogTypeConsume and aggregates over the entire log history. For a busy deployment this can be an extremely heavy scan/aggregation, easily blocking the request goroutine and stressing the DB.
Consider one of:
- Requiring a non-zero `startTimestamp`/`endTimestamp` at the controller layer (`controller.GetLogStatistics`/`ExportLogStatistics`).
- Defaulting to a sensible window (e.g. last 24h / 30d) when not supplied.
- Adding a reasonable `LIMIT` on rows returned and clamping the max range server-side.
Same concern applies to GetLogStatisticsTrend below.
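The defaulting/clamping option can be sketched as a small pure function. The window sizes and the `clampRange` name below are illustrative choices, not from the PR:

```go
package main

import (
	"fmt"
	"time"
)

const (
	defaultWindowSec = 30 * 24 * 3600 // fallback window when no range is given
	maxWindowSec     = 90 * 24 * 3600 // hard cap on the queryable span
)

// clampRange fills in missing bounds and caps the span so the statistics
// queries always run over a bounded window, regardless of client input.
func clampRange(start, end, now int64) (int64, int64) {
	if end <= 0 || end > now {
		end = now
	}
	if start <= 0 {
		start = end - defaultWindowSec
	}
	if end-start > maxWindowSec {
		start = end - maxWindowSec
	}
	return start, end
}

func main() {
	now := time.Now().Unix()
	s, e := clampRange(0, 0, now) // no range supplied: default window ending now
	fmt.Println(s, e)
}
```

Applying this inside `GetLogStatistics`/`GetLogStatisticsTrend` (rather than per controller) would cover both endpoints with one change.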
```jsx
const summaryData = hasData ? [{
  model_name: t('合计'),
  request_count: statistics.reduce((s, m) => s + (m.request_count || 0), 0),
  quota: statistics.reduce((s, m) => s + (m.quota || 0), 0),
  prompt_tokens: statistics.reduce((s, m) => s + (m.prompt_tokens || 0), 0),
  completion_tokens: statistics.reduce((s, m) => s + (m.completion_tokens || 0), 0),
}] : [];
```
Summary row drops `total_tokens` differentiation; `rowKey` can collide with a real model named `合计`.
Two small issues in the summary aggregation:

- The `total_tokens` column (line 84) is rendered from `prompt_tokens + completion_tokens` per row, so the summary works, but only because `summaryData` includes both fields. Worth a comment so a future change doesn't break it.
- `rowKey` at line 211 compares `record.model_name === t('合计')` to detect the summary row. If a model is ever literally named `合计` (or whatever the current locale's translation resolves to), its row will collide with the summary row's key. Use a sentinel field on the synthesized summary object instead:
Proposed fix

```diff
-  const summaryData = hasData ? [{
-    model_name: t('合计'),
+  const summaryData = hasData ? [{
+    __isSummary: true,
+    model_name: t('合计'),
     request_count: statistics.reduce((s, m) => s + (m.request_count || 0), 0),
     quota: statistics.reduce((s, m) => s + (m.quota || 0), 0),
     prompt_tokens: statistics.reduce((s, m) => s + (m.prompt_tokens || 0), 0),
     completion_tokens: statistics.reduce((s, m) => s + (m.completion_tokens || 0), 0),
   }] : [];
@@
-  rowKey={(record, index) => record.model_name === t('合计') ? '__summary__' : record.model_name}
+  rowKey={(record) => (record.__isSummary ? '__summary__' : record.model_name)}
```

Also applies to: 209-211
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web/src/components/table/usage-logs/StatisticsDrawer.jsx` around lines 89 -
95, The summary aggregation currently builds summaryData with model_name:
t('合计') and sums prompt_tokens/completion_tokens but doesn't mark the
synthesized row, which risks colliding with a real model named the same and can
silently break total_tokens if fields change; update the summaryData object
created in StatisticsDrawer (symbol: summaryData) to include a sentinel flag
(e.g., __isSummary: true) and add a short comment explaining total_tokens is
derived from prompt_tokens + completion_tokens (symbol: total_tokens column).
Then change the rowKey logic (the current record.model_name === t('合计') check)
to detect summary rows using the sentinel (record.__isSummary) instead of
comparing localized text.
```jsx
const link = document.createElement('a');
link.href = URL.createObjectURL(blob);
link.download = filename;
link.click();
URL.revokeObjectURL(link.href);
```
Revoke the object URL after the download has a chance to start.
URL.revokeObjectURL(link.href) is called synchronously right after link.click(). Some browsers (notably Safari and some mobile browsers) need the URL to remain valid until the download stream is initiated; revoking immediately can cancel the download or produce an empty file. Also, while modern Chromium/Firefox tolerate a detached anchor, appending it to the DOM is the safe pattern.
🛠️ Suggested fix

```diff
-const link = document.createElement('a');
-link.href = URL.createObjectURL(blob);
-link.download = filename;
-link.click();
-URL.revokeObjectURL(link.href);
+const objectUrl = URL.createObjectURL(blob);
+const link = document.createElement('a');
+link.href = objectUrl;
+link.download = filename;
+document.body.appendChild(link);
+link.click();
+document.body.removeChild(link);
+setTimeout(() => URL.revokeObjectURL(objectUrl), 0);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web/src/hooks/usage-logs/useLogStatistics.jsx` around lines 110 - 114, The
download anchor is revoked immediately after link.click(), which can cancel
downloads in some browsers; modify the code in useLogStatistics.jsx so you
append the created anchor element (link) to document.body, call link.click(),
then schedule URL.revokeObjectURL(link.href) with a short timeout (e.g.
setTimeout(..., 0 or 1000)) or revoke in a click/load handler, and finally
remove the anchor from the DOM; keep using the same blob and filename variables
and ensure link is cleaned up after revocation.
- Add a `defer` close for the excelize file to prevent resource leaks
- Fix Content-Disposition filename compatibility for non-ASCII characters
- Move the trend query's time bucketing from in-memory Go processing to SQL-level aggregation to improve performance
- Sort trend query results in ascending time order so chart data is ordered
- Remove unused VChart and initVChartSemiTheme imports from the frontend
- Unify date-parsing logic to handle both Date objects and strings
- Move setLoading/setExportLoading into `finally` blocks so the spinner doesn't get stuck on exceptions
- Defer URL.revokeObjectURL execution to fix Safari download compatibility
- Fixes issues introduced by the previous commit: the 9 improvements flagged by the CodeRabbit code review
Actionable comments posted: 1
🧹 Nitpick comments (1)
web/src/hooks/usage-logs/useLogStatistics.jsx (1)
56-64: Floor seconds when converting from `Date` to avoid sending a non-integer to the backend.
`v.getTime() / 1000` may produce a fractional value (e.g., `1761563456.789`) if the picker returns a `Date` with non-zero milliseconds. The Go side parses with `strconv.ParseInt`, which silently returns `0` on failure, so the time-range filter would be dropped without any visible error. A defensive `Math.floor` here makes the contract explicit.
🛠️ Suggested fix

```diff
 const toTimestamp = (v) => {
-  if (v instanceof Date) return v.getTime() / 1000;
-  if (typeof v === 'string' && v) return Date.parse(v) / 1000;
+  if (v instanceof Date) return Math.floor(v.getTime() / 1000);
+  if (typeof v === 'string' && v) {
+    const ms = Date.parse(v);
+    return Number.isFinite(ms) ? Math.floor(ms / 1000) : 0;
+  }
   return 0;
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/hooks/usage-logs/useLogStatistics.jsx` around lines 56 - 64, The toTimestamp helper used inside the values.dateRange handling can produce fractional seconds (v.getTime()/1000 or Date.parse(v)/1000); update toTimestamp so it returns integer seconds by applying Math.floor to the computed seconds for both Date and string branches, leaving the fallback return 0 unchanged, and ensure start_timestamp and end_timestamp are assigned from the now-integer toTimestamp(values.dateRange[0]) / toTimestamp(values.dateRange[1]).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@web/src/hooks/usage-logs/useLogStatistics.jsx`:
- Around line 108-118: The download filename is being set directly from the
Content-Disposition filename value which may be percent-encoded; update the
logic in useLogStatistics.jsx (the block using contentDisposition, match,
filename and link.download) to prefer parsing the RFC5987 filename* parameter
first, decode it with decodeURIComponent (or URL-decoding) and set that as
filename, and only if filename* is absent fall back to decoding the legacy
filename= capture before assigning to link.download and revoking the object URL.
---
Nitpick comments:
In `@web/src/hooks/usage-logs/useLogStatistics.jsx`:
- Around line 56-64: The toTimestamp helper used inside the values.dateRange
handling can produce fractional seconds (v.getTime()/1000 or
Date.parse(v)/1000); update toTimestamp so it returns integer seconds by
applying Math.floor to the computed seconds for both Date and string branches,
leaving the fallback return 0 unchanged, and ensure start_timestamp and
end_timestamp are assigned from the now-integer toTimestamp(values.dateRange[0])
/ toTimestamp(values.dateRange[1]).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: dfb3baec-2056-4faa-a46f-afde9a1266ba
📒 Files selected for processing (3)
- controller/log.go
- model/log.go
- web/src/hooks/usage-logs/useLogStatistics.jsx
🚧 Files skipped from review as they are similar to previous changes (1)
- model/log.go
- Prefer parsing `filename*=UTF-8''` (RFC 5987) and decode it with decodeURIComponent
- In the fallback path, also decode the legacy `filename=` value
- Fixes exported filenames for Chinese usernames appearing as encoded text like %E7%94%A8%E6%88%B7
Actionable comments posted: 1
♻️ Duplicate comments (1)
model/log.go (1)
527-545: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Enforce a bounded time window for statistics queries.
When `startTimestamp` and `endTimestamp` are both zero, this path aggregates over the full `logs` history. That can become a heavy scan for both `GetLogStatistics` and `GetLogStatisticsTrend` and degrade request latency under load.
Suggested fix

```diff
 func buildStatisticsQuery(username string, tokenName string, startTimestamp int64, endTimestamp int64, modelName string) *gorm.DB {
+	const (
+		defaultWindow = int64(30 * 24 * 3600)
+		maxWindow     = int64(90 * 24 * 3600)
+	)
+	now := time.Now().Unix()
+	if endTimestamp == 0 {
+		endTimestamp = now
+	}
+	if startTimestamp == 0 || startTimestamp > endTimestamp {
+		startTimestamp = endTimestamp - defaultWindow
+	}
+	if endTimestamp-startTimestamp > maxWindow {
+		startTimestamp = endTimestamp - maxWindow
+	}
 	tx := LOG_DB.Table("logs").Where("type = ?", LogTypeConsume)
+	tx = tx.Where("created_at >= ?", startTimestamp).Where("created_at <= ?", endTimestamp)
 	if username != "" {
 		tx = tx.Where("username = ?", username)
 	}
 	if tokenName != "" {
 		tx = tx.Where("token_name = ?", tokenName)
 	}
-	if startTimestamp != 0 {
-		tx = tx.Where("created_at >= ?", startTimestamp)
-	}
-	if endTimestamp != 0 {
-		tx = tx.Where("created_at <= ?", endTimestamp)
-	}
 	if modelName != "" {
 		tx = tx.Where("model_name LIKE ?", modelName)
 	}
 	return tx
 }
```

Also applies to: 547-558, 560-561
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@model/log.go` around lines 527 - 545, The query builder currently allows unbounded scans when both startTimestamp and endTimestamp are zero; update buildStatisticsQuery to enforce a default bounded window (e.g., last 30 days): if startTimestamp==0 && endTimestamp==0, set endTimestamp = time.Now().Unix() and startTimestamp = endTimestamp - int64(30*24*3600) before applying the WHEREs; keep existing checks so explicit timestamps still override the default. Ensure you import time and use the same timestamp unit as created_at (Unix seconds) and apply the same defaulting logic to the other similar builders/blocks referenced (the ones used by GetLogStatistics and GetLogStatisticsTrend).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@model/log.go`:
- Around line 541-543: The WHERE clause uses modelName directly with LIKE which
allows unescaped '%' and '_' to alter results; change the code that builds tx to
escape '%' '_' and '\' in modelName (same escape routine used elsewhere in this
file), then build a safe pattern (e.g. "%" + escaped + "%") and call tx.Where
with a LIKE ... ESCAPE clause (match the style used elsewhere in this file) so
the DB treats wildcards literally; also update any callers of the surrounding
function to accept and handle the returned error if you surface one from the
escape/routing logic.
---
Duplicate comments:
In `@model/log.go`:
- Around line 527-545: The query builder currently allows unbounded scans when
both startTimestamp and endTimestamp are zero; update buildStatisticsQuery to
enforce a default bounded window (e.g., last 30 days): if startTimestamp==0 &&
endTimestamp==0, set endTimestamp = time.Now().Unix() and startTimestamp =
endTimestamp - int64(30*24*3600) before applying the WHEREs; keep existing
checks so explicit timestamps still override the default. Ensure you import time
and use the same timestamp unit as created_at (Unix seconds) and apply the same
defaulting logic to the other similar builders/blocks referenced (the ones used
by GetLogStatistics and GetLogStatisticsTrend).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
```go
if modelName != "" {
	tx = tx.Where("model_name LIKE ?", modelName)
}
```
Escape model_name LIKE patterns before querying.
`modelName` is passed directly into `LIKE`, so `%`/`_` change semantics and can unintentionally widen results. Use the same escaped-pattern approach already used elsewhere in this file.
Suggested fix

```diff
-func buildStatisticsQuery(username string, tokenName string, startTimestamp int64, endTimestamp int64, modelName string) *gorm.DB {
+func buildStatisticsQuery(username string, tokenName string, startTimestamp int64, endTimestamp int64, modelName string) (*gorm.DB, error) {
 	tx := LOG_DB.Table("logs").Where("type = ?", LogTypeConsume)
 	...
 	if modelName != "" {
-		tx = tx.Where("model_name LIKE ?", modelName)
+		modelNamePattern, err := sanitizeLikePattern(modelName)
+		if err != nil {
+			return nil, err
+		}
+		tx = tx.Where("model_name LIKE ? ESCAPE '!'", modelNamePattern)
 	}
-	return tx
+	return tx, nil
 }
```

And update callers to handle the returned error.
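The `sanitizeLikePattern` helper referenced above is not shown in the diff. A simplified sketch of what such an escaper could look like, assuming `!` as the escape character and omitting the error return for brevity:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLike makes LIKE wildcards in user input match literally by
// prefixing them with '!'. The query must declare the escape character,
// e.g. tx.Where("model_name LIKE ? ESCAPE '!'", escapeLike(name)+"%").
func escapeLike(s string) string {
	// '!' itself is doubled so a user-supplied '!' stays literal;
	// NewReplacer runs in a single pass, so outputs are not re-replaced.
	return strings.NewReplacer("!", "!!", "%", "!%", "_", "!_").Replace(s)
}

func main() {
	fmt.Println(escapeLike("gpt-4_32k") + "%") // gpt-4!_32k%
}
```

With this shape, a model literally named `gpt-4_32k` no longer matches `gpt-4a32k`, since the `_` is treated as a plain character by the database.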
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@model/log.go` around lines 541 - 543, The WHERE clause uses modelName
directly with LIKE which allows unescaped '%' and '_' to alter results; change
the code that builds tx to escape '%' '_' and '\' in modelName (same escape
routine used elsewhere in this file), then build a safe pattern (e.g. "%" +
escaped + "%") and call tx.Where with a LIKE ... ESCAPE clause (match the style
used elsewhere in this file) so the DB treats wildcards literally; also update
any callers of the surrounding function to accept and handle the returned error
if you surface one from the escape/routing logic.
Looking forward to an export feature being added to the usage logs.
Important
📝 Description
Adds statistics to the usage-logs page by user (required) and token (optional), making it easy for administrators to quickly see the API usage of each user/token.
🚀 Type of change
🔗 Related Issue
🔍 Duplicate Check
Checked related Issues/PRs: #4328 / #4329 both touch token/statistics capabilities, but #4329 is still open and its scope is mainly the Dashboard's per-model/per-user token charts and chart-visibility controls; this PR focuses on the usage-log page's statistics Drawer, aggregation by user/token/time range, and Excel export.
✅ Checklist
Bug fix: I have filed or linked the corresponding Issue, and I do not classify design trade-offs, expectation mismatches, or misunderstandings directly as bugs.
📸 Proof of Work
Summary by CodeRabbit