
feat: Add usage log statistics and export by user/token dimension #4498

Open
kK-2004 wants to merge 4 commits into QuantumNous:main from kK-2004:feat/log-user-token-statistics-export

Conversation


@kK-2004 kK-2004 commented Apr 27, 2026

  • Added a backend statistics API (GET /api/log/statistics) that aggregates logs by user + token + time range
  • Added a backend Excel export API (GET /api/log/statistics/export) that uses excelize to generate a report with a totals row
  • Added a frontend statistics Drawer panel with VChart charts (call count distribution, quota consumption distribution, call trend)
  • The statistics summary table shows consumed quota and token counts in units of millions of tokens
  • Admins see a "统计" (Statistics) button and can export the data to an Excel file

⚠️ PR Notice

Important

  • Please provide a concise, human-written summary; avoid pasting raw, unedited AI output.

📝 Description

Adds statistics by user (required) and token (optional) to the usage logs page, so admins can quickly see the API usage of each user/token.

🚀 Type of change

  • 🐛 Bug fix - please link the corresponding Issue; do not classify design trade-offs, misunderstandings, or mismatched expectations as bugs
  • ✨ New feature - major features should be discussed in an Issue first
  • ⚡ Performance optimization / refactor
  • 📝 Documentation

🔗 Related Issue

🔍 Duplicate Check

Checked related Issues/PRs: #4328 / #4329 also touch token/statistics functionality, but #4329 is still Open and mainly covers the Dashboard's per-model/per-user token charts and chart-visibility controls, whereas this PR focuses on the usage logs page's statistics Drawer, aggregation by user/token/time range, and Excel export.

✅ Checklist

  • Human-written: I organized and wrote this description myself and did not paste raw AI output.
  • Not a duplicate: I searched existing Issues/PRs and confirmed this is not a duplicate submission.
  • Bug fix note: If this PR is marked as a bug fix, I have filed or linked the corresponding Issue and am not classifying design trade-offs, mismatched expectations, or misunderstandings as bugs.
  • Understanding: I understand how these changes work and their possible impact.
  • Focused scope: This PR contains no code changes unrelated to the task at hand.
  • Local verification: I ran the changes locally and verified them via tests or manual checks, so maintainers can reproduce the results.
  • Security & compliance: The code contains no sensitive credentials and follows the project's code conventions.

📸 Proof of Work

(two screenshots attached; not reproduced here)

Summary by CodeRabbit

  • New Features
    • Added a comprehensive log statistics dashboard with filtering by username, token, model, and custom date ranges (hourly/daily trend aggregation).
    • Interactive charts for call distribution, quota consumption, and usage trends with per-model breakdowns and totals.
    • Right-side statistics drawer and an admin-only action to open it from the usage logs UI.
    • Excel export for statistics including formatted columns and a summary totals row.

coderabbitai Bot commented Apr 27, 2026

Walkthrough

Adds backend endpoints and model queries to compute, trend, and export per-model usage statistics as XLSX; updates Go module deps. Frontend adds a statistics drawer, hook, UI actions, chart specs, and an export flow to fetch and download those statistics.

Changes

Cohort / File(s) Summary
Backend Controllers & Routes
controller/log.go, router/api-router.go
Added GetLogStatistics and ExportLogStatistics endpoints; registered /api/log/statistics and /api/log/statistics/export (admin-only). Export builds an in-memory XLSX and returns it with correct Content-Type and UTF‑8 filename handling.
Backend Models & Queries
model/log.go
Added ModelStatistics, TrendPoint, buildStatisticsQuery, GetLogStatistics (per-model aggregates: requests, quota, tokens) and GetLogStatisticsTrend (time-bucketed hourly/daily aggregates with labels).
Go Dependencies
go.mod
Added github.com/xuri/excelize/v2 and updated several golang.org/x/* modules plus indirect deps required by excelize.
Frontend UI Components
web/src/components/table/usage-logs/StatisticsDrawer.jsx, web/src/components/table/usage-logs/UsageLogsActions.jsx, web/src/components/table/usage-logs/index.jsx
New StatisticsDrawer component (form, tabbed charts, table with summary row, export button); added an admin-only "统计" (Statistics) action and integrated the drawer into LogsPage.
Frontend Hooks & Integration
web/src/hooks/usage-logs/useLogStatistics.jsx, web/src/hooks/usage-logs/useUsageLogsData.jsx
New useLogStatistics hook for fetching/exporting stats, building chart specs, and drawer state; integrated into useUsageLogsData and exposed via the hook API.

Sequence Diagram(s)

sequenceDiagram
    participant Client as React Frontend
    participant API as Gin Controller
    participant DB as Database Model
    participant Excel as Excel Library

    rect rgba(100,150,200,0.5)
    Note over Client,DB: Statistics Retrieval Flow
    Client->>Client: Submit stats form (username, token_name, model_name, date range)
    Client->>API: GET /api/log/statistics?...
    API->>DB: GetLogStatistics(...) and GetLogStatisticsTrend(...)
    DB-->>API: ModelStatistics[], TrendPoint[]
    API-->>Client: JSON { data: { models, trend } }
    Client->>Client: Render charts and summary table
    end

    rect rgba(150,100,200,0.5)
    Note over Client,Excel: Excel Export Flow
    Client->>Client: Click "导出 Excel" (Export Excel)
    Client->>API: GET /api/log/statistics/export?...
    API->>DB: GetLogStatistics(...)
    DB-->>API: ModelStatistics[]
    API->>Excel: Build workbook, write headers/rows, append totals
    Excel-->>API: XLSX bytes
    API-->>Client: Response (application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, Content-Disposition)
    Client->>Client: Trigger file download
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • seefs001

Poem

A rabbit hops through logs at dawn, 🐇
Counts the calls and quota drawn,
Sheets of xlsx hum and sing,
Charts and trends take flight — take wing,
Admins smile as numbers dawn. 📊✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): the title clearly and specifically describes the main change: adding statistics and export functionality for usage logs by user/token dimension.
  • Linked Issues check (✅ Passed): check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
web/src/hooks/usage-logs/useUsageLogsData.jsx (1)

837-905: ⚠️ Potential issue | 🔴 Critical

...logStatistics spread at the end clobbers main-page state (loading, formInitValues, setFormApi).

useLogStatistics() (per web/src/hooks/usage-logs/useLogStatistics.jsx) exposes its own loading, formInitValues, and setFormApi for the drawer. By spreading ...logStatistics after the existing return-object literal:

  • loading (returned at line 842 for the main logs table) is overwritten by the drawer’s loading. The skeleton/loading UI of the table will now reflect the statistics drawer’s fetch state instead of the table’s own.
  • formInitValues (line 854, used by the main LogsFilters form) is overwritten by the drawer’s minimal init values. The page’s main filters lose channel, group, request_id, dateRange defaults and logType: '0'.
  • setFormApi (line 853) is overwritten by the drawer’s setter, so when the main filter form mounts and calls setFormApi, it stores its API instance into the drawer’s state — breaking the main page’s getFormValues() and the useEffect([formApi]) initial stat load, while leaving the drawer’s form unset.

Scope the drawer’s state under a non-conflicting prefix instead of spreading.

Proposed fix
   return {
     // ... existing fields ...
     // Translation
     t,
-
-    // Statistics drawer
-    ...logStatistics,
-    setShowStatisticsDrawer: logStatistics.setVisible,
+
+    // Statistics drawer (namespaced to avoid clobbering main-page state)
+    statisticsDrawer: logStatistics,
+    setShowStatisticsDrawer: logStatistics.setVisible,
   };

…and then in web/src/components/table/usage-logs/index.jsx pass <StatisticsDrawer {...logsData.statisticsDrawer} t={logsData.t} /> rather than {...logsData}.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/hooks/usage-logs/useUsageLogsData.jsx` around lines 837 - 905, The
spread of ...logStatistics at the end of the return value clobbers main-page
state (overwriting loading, formInitValues, setFormApi); fix by namespacing the
drawer state instead of spreading it — return a new key (e.g., statisticsDrawer)
whose value contains the logStatistics object (and map setVisible to
setShowStatisticsDrawer if needed) so main hooks keep their own loading/form
state; update consumers (e.g., the StatisticsDrawer usage in
web/src/components/table/usage-logs/index.jsx) to receive props from
logsData.statisticsDrawer (and pass t explicitly) rather than receiving the
whole logsData.
🧹 Nitpick comments (7)
model/log.go (1)

541-543: model_name LIKE ? passes user input as the pattern unchanged.

Unlike GetUserLogs (line 397) which sanitizes wildcards via sanitizeLikePattern(...) ... LIKE ? ESCAPE '!', this helper feeds modelName straight into LIKE. A caller passing % or _ will silently match more rows than intended, and the behavior diverges from the rest of the file. Prefer the same sanitizeLikePattern + ESCAPE pattern, or use exact equality if wildcard matching isn’t required by the UI.
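The escaping this comment asks for can be sketched as follows. The helper name sanitizeLikePattern comes from the review; its exact implementation in the repo is not shown here, so this is an assumed minimal version using '!' as the escape character, matching the LIKE ? ESCAPE '!' clause the review cites:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeLikePattern escapes LIKE wildcards (and the escape character
// itself) so user input is matched literally. Assumed sketch of the
// helper the review refers to, using '!' as the escape character.
func sanitizeLikePattern(s string) string {
	r := strings.NewReplacer("!", "!!", "%", "!%", "_", "!_")
	return r.Replace(s)
}

func main() {
	modelName := "gpt-4%"
	// The sanitized value would then be bound together with the ESCAPE
	// clause, e.g.:
	//   tx.Where("model_name LIKE ? ESCAPE '!'", sanitizeLikePattern(modelName))
	fmt.Println(sanitizeLikePattern(modelName))
}
```

With this in place, a user-supplied "%" matches only a literal percent sign instead of acting as a wildcard, keeping the helper consistent with GetUserLogs.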

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@model/log.go` around lines 541 - 543, The query uses modelName directly in
tx.Where("model_name LIKE ?", modelName) which allows raw wildcards; change it
to use the same sanitization as GetUserLogs by passing
sanitizeLikePattern(modelName) and adding the ESCAPE clause (e.g., "model_name
LIKE ? ESCAPE '!'") so user-supplied %/_ are treated literally, or switch to
exact match ("model_name = ?") if wildcard matching is not required; update the
code around the modelName handling to call sanitizeLikePattern and include the
ESCAPE '!' in the WHERE clause to match the file's existing behavior.
web/src/components/table/usage-logs/StatisticsDrawer.jsx (1)

46-48: Chart specs and summary totals are recomputed on every render.

buildBarSpec(), buildTrendSpec(), buildQuotaBarSpec() (lines 46‑48) and the four statistics.reduce(...) calls (lines 91‑94) all run on every render of the drawer, even when only visible toggles. Wrap them in useMemo keyed on statistics/trend so chart spec generation and summing don’t fire on unrelated parent re‑renders (and so VChart doesn’t see a fresh spec object identity unnecessarily).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/components/table/usage-logs/StatisticsDrawer.jsx` around lines 46 -
48, The chart spec builders (buildBarSpec, buildTrendSpec, buildQuotaBarSpec)
and the four statistics.reduce(...) totals are being recomputed on every render;
wrap each spec creation and each reduce-based summary in React.useMemo and
memoize them by their true inputs (e.g. memoize buildBarSpec and
buildQuotaBarSpec on statistics, memoize buildTrendSpec on trend/statistics as
appropriate, and memoize each reduce result on statistics) so they only
recompute when those inputs change—this prevents unnecessary work and keeps spec
object identities stable for VChart.
controller/log.go (1)

127-159: Both heavy statistics queries run back-to-back on every request.

GetLogStatistics calls both model.GetLogStatistics and model.GetLogStatisticsTrend for every JSON request. Given the performance characteristics flagged in model/log.go (the trend query effectively pulls every matching row), every drawer open / form submit will run two heavy queries back-to-back. Worth fronting the /api/log/statistics endpoint with a tighter rate limit (e.g. middleware.SearchRateLimit() or CriticalRateLimit()), similarly to logRoute.GET("/self/search", ...), to avoid an admin accidentally DOS’ing the DB.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@controller/log.go` around lines 127 - 159, The endpoint handler
GetLogStatistics currently triggers two heavy DB queries (model.GetLogStatistics
and model.GetLogStatisticsTrend) on every request, so add a stricter rate-limit
middleware to the route that registers this handler (e.g., wrap the GET route
that points to GetLogStatistics with middleware.SearchRateLimit() or
middleware.CriticalRateLimit()) so repeated drawer opens/forms cannot DOS the
DB; update the route definition that binds GetLogStatistics to include the
chosen middleware in the route chain.
web/src/hooks/usage-logs/useLogStatistics.jsx (4)

153-153: Avoid shadowing the t translation function with the .map callback parameter.

useTranslation()'s t is in scope here, and .map((t) => …) (also at lines 160 and inside the trend block) shadows it. Today it works because the inner body doesn't call t(...), but it's a footgun for future edits. Rename to item/row.

🛠️ Suggested fix
-    const models = [...new Set(trend.map((t) => t.model_name))];
+    const models = [...new Set(trend.map((row) => row.model_name))];
     ...
-      data: [{ id: 'trendData', values: trend.map((t) => ({
-        Time: t.time,
-        Model: t.model_name,
-        Count: t.request_count,
-      }))}],
+      data: [{ id: 'trendData', values: trend.map((row) => ({
+        Time: row.time,
+        Model: row.model_name,
+        Count: row.request_count,
+      }))}],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/hooks/usage-logs/useLogStatistics.jsx` at line 153, The array mapping
callbacks shadow the translation function t from useTranslation(): rename the
map parameter(s) in the expressions that use trend.map (e.g., the line creating
const models = [...new Set(trend.map((t) => t.model_name))] and other trend.map
callbacks) to a non-conflicting identifier like item or row so they no longer
shadow the outer t; update all occurrences inside those callback bodies to use
the new name (e.g., item.model_name) and leave the useTranslation() t usage
unchanged.

115-117: Surface backend error messages on export failure.

When /api/log/statistics/export returns a JSON error (e.g. {success:false, message:"..."}), the response is still received as a blob due to responseType: 'blob', and the user only sees the generic 导出失败. Consider detecting non-Excel Content-Type and reading the blob as text to surface the real message.

🛠️ Suggested fix
       const res = await API.get(url, { responseType: 'blob' });
+      const ct = res.headers['content-type'] || '';
+      if (ct.includes('application/json')) {
+        const text = await res.data.text();
+        try {
+          const { message } = JSON.parse(text);
+          showError(message || t('导出失败'));
+        } catch {
+          showError(t('导出失败'));
+        }
+        return;
+      }
       const blob = new Blob([res.data], {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/hooks/usage-logs/useLogStatistics.jsx` around lines 115 - 117, The
export catch hides backend JSON errors because the request uses responseType:
'blob'; update the export logic in useLogStatistics (the function that calls
/api/log/statistics/export) to check response.headers['content-type'] (or
Blob.type) after receiving the blob: if the content-type is not an Excel MIME,
read the blob as text (await blob.text()), JSON.parse it, and call
showError(parsed.message || parsed.msg || t('导出失败')); otherwise proceed with the
existing file-download flow; keep the existing catch to handle network errors
but surface parsed server messages instead of always showing t('导出失败').

104-109: Filename regex misses RFC 5987 (filename*=UTF-8''…).

If the backend returns a Unicode filename via filename*=UTF-8''... (common when the username/period contains non-ASCII), the current regex falls back to usage_statistics_${username}.xlsx instead of using the server-provided name.

🛠️ Suggested fix
-      if (contentDisposition) {
-        const match = contentDisposition.match(/filename="?([^"]+)"?/);
-        if (match) filename = match[1];
-      }
+      if (contentDisposition) {
+        const star = contentDisposition.match(/filename\*\s*=\s*[^']*''([^;]+)/i);
+        if (star) {
+          try { filename = decodeURIComponent(star[1]); } catch { /* ignore */ }
+        } else {
+          const plain = contentDisposition.match(/filename\s*=\s*"?([^";]+)"?/i);
+          if (plain) filename = plain[1];
+        }
+      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/hooks/usage-logs/useLogStatistics.jsx` around lines 104 - 109, The
current filename extraction in useLogStatistics.jsx only matches basic
filename="..." and ignores RFC 5987 encoded names (filename*=UTF-8''...), so
Unicode server-provided names are dropped; update the extraction logic around
contentDisposition/filename to first check for an RFC5987 pattern
(filename*=UTF-8''... decode percent-encoding to UTF-8), fall back to the
existing filename="..." regex if not present, and finally default to the
generated usage_statistics_${params.username}.xlsx; locate the block referencing
contentDisposition, filename and the match variable and implement the additional
filename* parsing and decoding step before assigning filename.

36-46: formInitValues is recreated on every render with a fresh now.

new Date() runs each render, so formInitValues.dateRange[1] keeps drifting. Most form libs only consume initValues on mount, but downstream code that compares identity (e.g. memo deps, reset behavior) can misbehave. Wrap in useMemo:

🛠️ Suggested fix
-  const now = new Date();
-  const formInitValues = {
-    username: '',
-    token_name: '',
-    model_name: '',
-    dateRange: [
-      timestamp2string(getTodayStartTimestamp()),
-      timestamp2string(now.getTime() / 1000 + 3600),
-    ],
-  };
+  const formInitValues = useMemo(() => ({
+    username: '',
+    token_name: '',
+    model_name: '',
+    dateRange: [
+      timestamp2string(getTodayStartTimestamp()),
+      timestamp2string(Date.now() / 1000 + 3600),
+    ],
+  }), []);

(Don't forget to add useMemo to the React import.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/hooks/usage-logs/useLogStatistics.jsx` around lines 36 - 46, The
formInitValues object is being recreated every render because now = new Date()
is evaluated inline; wrap the creation of formInitValues (and now) in a useMemo
so its value is stable across renders — import useMemo from React and create a
memoized value (e.g., const formInitValues = useMemo(() => { const now = new
Date(); return { username: '', token_name: '', model_name: '', dateRange:
[timestamp2string(getTodayStartTimestamp()), timestamp2string(now.getTime() /
1000 + 3600)], }; }, [])) so components and memo deps that rely on identity
(formInitValues) won’t drift each render.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@controller/log.go`:
- Around line 240-245: The current Content-Disposition emits raw UTF-8 in
filename (variable filename) which is not allowed by RFC 6266; create an
ASCII-safe fallback (e.g. fallbackFilename) by stripping/transliterating or
replacing non-ASCII chars (or falling back to a safe name like "export.xlsx"),
keep encodedFilename for filename* (the UTF-8 percent-encoded value), and set
the header via c.Header("Content-Disposition", fmt.Sprintf("attachment;
filename=\"%s\"; filename*=UTF-8''%s", fallbackFilename, encodedFilename));
ensure fallbackFilename is strictly ASCII and properly quoted.
- Around line 181-238: The excelize file created with excelize.NewFile() is
never closed, leaking resources; after creating the file (excelize.NewFile())
add a defer f.Close() immediately to ensure resources are released (you can
ignore or log the Close() error), leaving the rest of the code (writing cells
and calling f.WriteToBuffer()) unchanged so the deferred close runs even on
early returns or errors.

In `@model/log.go`:
- Around line 560-620: GetLogStatisticsTrend currently pulls every row and
buckets in Go causing huge memory use; change the SQL to compute a bucketed
timestamp per row (hourly: (created_at/3600)*3600, daily:
(created_at/86400)*86400) and SELECT that bucket (e.g. bucket_ts), model_name,
COALESCE(SUM(quota),0) AS quota_sum, COUNT(*) AS request_count, then GROUP BY
bucket_ts, model_name so the DB collapses rows; update the rawRow to include
BucketTs int64, iterate rows to create TrendPoint by formatting
time.Unix(BucketTs,0) (remove the brittle bucket[:14]+"00" logic and bucketMap
minute truncation), and finally sort the resulting []TrendPoint by Time to
return a deterministic, time-ordered slice.
- Around line 547-558: GetLogStatistics currently calls buildStatisticsQuery
without enforcing a time range which can cause full-table scans; update the call
path so a sane time window is enforced: either validate in
controller.GetLogStatistics/controller.ExportLogStatistics to require non-zero
startTimestamp or endTimestamp and return a 400, or (recommended) default
missing timestamps inside GetLogStatistics/GetLogStatisticsTrend to a bounded
window (e.g., endTimestamp = now, startTimestamp = now - 30*24h) and clamp any
client-supplied range to a max span (e.g., 90 days), so buildStatisticsQuery
always receives a bounded range; apply the same defaulting/clamping logic to
GetLogStatisticsTrend to avoid heavy scans.

In `@web/src/components/table/usage-logs/StatisticsDrawer.jsx`:
- Around line 89-95: The summary aggregation currently builds summaryData with
model_name: t('合计') and sums prompt_tokens/completion_tokens but doesn't mark
the synthesized row, which risks colliding with a real model named the same and
can silently break total_tokens if fields change; update the summaryData object
created in StatisticsDrawer (symbol: summaryData) to include a sentinel flag
(e.g., __isSummary: true) and add a short comment explaining total_tokens is
derived from prompt_tokens + completion_tokens (symbol: total_tokens column).
Then change the rowKey logic (the current record.model_name === t('合计') check)
to detect summary rows using the sentinel (record.__isSummary) instead of
comparing localized text.

In `@web/src/hooks/usage-logs/useLogStatistics.jsx`:
- Around line 77-91: Move the loading flag resets into finally blocks so they
always run even if a synchronous error escapes the try/catch; specifically, wrap
the API call block in useLogStatistics.jsx (the section that calls API.get and
calls setStatistics/setTrend) with try { ... } catch { ... } finally {
setLoading(false); } and likewise adjust the exportExcel function (the block
that uses setExportLoading) to ensure setExportLoading(false) is called in a
finally block; reference the setLoading, setExportLoading, setStatistics,
setTrend and exportExcel symbols when making the edits.
- Around line 110-114: The download anchor is revoked immediately after
link.click(), which can cancel downloads in some browsers; modify the code in
useLogStatistics.jsx so you append the created anchor element (link) to
document.body, call link.click(), then schedule URL.revokeObjectURL(link.href)
with a short timeout (e.g. setTimeout(..., 0 or 1000)) or revoke in a click/load
handler, and finally remove the anchor from the DOM; keep using the same blob
and filename variables and ensure link is cleaned up after revocation.
- Around line 13-14: Remove the unused imports initVChartSemiTheme and VChart
from the top of useLogStatistics.jsx since this hook only builds chart specs and
rendering/theme init happen elsewhere; locate the import statements referencing
initVChartSemiTheme and VChart and delete them so the file no longer imports
these unused symbols.
- Around line 56-67: The dateRange elements can be either strings or Date
objects, so normalize them explicitly before parsing: in the logic that sets
start_timestamp and end_timestamp (refer to values.dateRange, start_timestamp,
end_timestamp), check the element type and, if it's a Date, convert to an
ISO/timestamp string via toISOString() or getTime()/Math.floor(getTime()/1000)
as appropriate; if it's a string (from timestamp2string()), parse it using a
known format/Date constructor or create a Date and then derive seconds
consistently so both branches produce the same epoch seconds instead of relying
on Date.parse(dateObj). Also remove the unused import initVChartSemiTheme from
this file since it’s only used in useDashboardCharts.jsx.
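The time-bucketing suggested above for GetLogStatisticsTrend pushes aggregation into SQL by truncating created_at to the start of its bucket. A minimal sketch of that bucket arithmetic and the corresponding label formatting (the daily/hourly flag and label formats are assumptions, not the repo's actual code):

```go
package main

import (
	"fmt"
	"time"
)

// bucketStart truncates a unix timestamp to the start of its hour or day,
// mirroring the SQL expressions (created_at/3600)*3600 and
// (created_at/86400)*86400 from the review suggestion.
func bucketStart(createdAt int64, daily bool) int64 {
	size := int64(3600)
	if daily {
		size = 86400
	}
	return (createdAt / size) * size
}

// bucketLabel formats a bucket timestamp into a human-readable TrendPoint
// label, replacing the string-slicing (bucket[:14]+"00") approach.
func bucketLabel(bucketTs int64, daily bool) string {
	t := time.Unix(bucketTs, 0).UTC()
	if daily {
		return t.Format("2006-01-02")
	}
	return t.Format("2006-01-02 15:00")
}

func main() {
	ts := int64(1714212345) // 2024-04-27 10:05:45 UTC
	fmt.Println(bucketStart(ts, false), bucketLabel(bucketStart(ts, false), false))
	fmt.Println(bucketStart(ts, true), bucketLabel(bucketStart(ts, true), true))
}
```

Because GROUP BY bucket_ts, model_name collapses rows in the database, only one row per (bucket, model) pair crosses the wire, instead of every matching log row.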


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 1e96e30e-b959-4850-b716-f9a65f740541

📥 Commits

Reviewing files that changed from the base of the PR and between e36d191 and 0bc341d.

⛔ Files ignored due to path filters (2)
  • go.sum is excluded by !**/*.sum
  • web/bun.lock is excluded by !**/*.lock
📒 Files selected for processing (9)
  • controller/log.go
  • go.mod
  • model/log.go
  • router/api-router.go
  • web/src/components/table/usage-logs/StatisticsDrawer.jsx
  • web/src/components/table/usage-logs/UsageLogsActions.jsx
  • web/src/components/table/usage-logs/index.jsx
  • web/src/hooks/usage-logs/useLogStatistics.jsx
  • web/src/hooks/usage-logs/useUsageLogsData.jsx

Comment thread controller/log.go
Comment on lines +240 to +245
filename := title + ".xlsx"
// URL-encode filename for non-ASCII characters
encodedFilename := url.PathEscape(filename)
c.Header("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"; filename*=UTF-8''%s", filename, encodedFilename))
c.Data(http.StatusOK, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", buf.Bytes())
Contributor
⚠️ Potential issue | 🟡 Minor

Content-Disposition filename= should be ASCII; only filename*= should hold UTF-8.

title can contain non‑ASCII characters (e.g. 用户名-令牌-2026-04-27 ~ 2026-04-27), so you’re emitting a filename="..." value with raw UTF‑8 bytes. RFC 6266 requires the unquoted/quoted filename parameter to be ASCII; only filename* carries the encoded UTF-8 form. Some browsers/proxies will mishandle the raw value (garbled names, header parsing errors).

Use an ASCII-safe fallback for filename= and keep the encoded one for filename*:

Proposed fix
-	filename := title + ".xlsx"
-	// URL-encode filename for non-ASCII characters
-	encodedFilename := url.PathEscape(filename)
-	c.Header("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
-	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"; filename*=UTF-8''%s", filename, encodedFilename))
+	filename := title + ".xlsx"
+	asciiFallback := "log_statistics.xlsx"
+	encodedFilename := url.PathEscape(filename)
+	c.Header("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
+	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"; filename*=UTF-8''%s", asciiFallback, encodedFilename))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
filename := title + ".xlsx"
// URL-encode filename for non-ASCII characters
encodedFilename := url.PathEscape(filename)
c.Header("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"; filename*=UTF-8''%s", filename, encodedFilename))
c.Data(http.StatusOK, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", buf.Bytes())

After:
filename := title + ".xlsx"
asciiFallback := "log_statistics.xlsx"
encodedFilename := url.PathEscape(filename)
c.Header("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"; filename*=UTF-8''%s", asciiFallback, encodedFilename))
c.Data(http.StatusOK, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", buf.Bytes())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@controller/log.go` around lines 240 - 245, The current Content-Disposition
emits raw UTF-8 in filename (variable filename) which is not allowed by RFC
6266; create an ASCII-safe fallback (e.g. fallbackFilename) by
stripping/transliterating or replacing non-ASCII chars (or falling back to a
safe name like "export.xlsx"), keep encodedFilename for filename* (the UTF-8
percent-encoded value), and set the header via c.Header("Content-Disposition",
fmt.Sprintf("attachment; filename=\"%s\"; filename*=UTF-8''%s",
fallbackFilename, encodedFilename)); ensure fallbackFilename is strictly ASCII
and properly quoted.

Comment thread model/log.go
Comment on lines +547 to +558
func GetLogStatistics(username string, tokenName string, startTimestamp int64, endTimestamp int64, modelName string) ([]ModelStatistics, error) {
tx := buildStatisticsQuery(username, tokenName, startTimestamp, endTimestamp, modelName)
var results []ModelStatistics
err := tx.Select("model_name, COALESCE(SUM(quota),0) as quota, COALESCE(SUM(prompt_tokens),0) as prompt_tokens, COALESCE(SUM(completion_tokens),0) as completion_tokens, COUNT(*) as request_count").
Group("model_name").
Order("quota DESC").
Find(&results).Error
if err != nil {
return nil, err
}
return results, nil
}
Contributor
⚠️ Potential issue | 🟠 Major

Statistics query lacks an enforced time-range bound and may scan the entire logs table.

When neither startTimestamp nor endTimestamp is provided, buildStatisticsQuery produces a query that filters only by type = LogTypeConsume and aggregates over the entire log history. For a busy deployment this can be an extremely heavy scan/aggregation, easily blocking the request goroutine and stressing the DB.

Consider one of:

  • Requiring a non-zero startTimestamp/endTimestamp at the controller layer (controller.GetLogStatistics/ExportLogStatistics).
  • Defaulting to a sensible window (e.g. last 24h / 30d) when not supplied.
  • Adding a reasonable LIMIT on rows returned and clamping the max range server-side.

Same concern applies to GetLogStatisticsTrend below.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@model/log.go` around lines 547 - 558, GetLogStatistics currently calls
buildStatisticsQuery without enforcing a time range which can cause full-table
scans; update the call path so a sane time window is enforced: either validate
in controller.GetLogStatistics/controller.ExportLogStatistics to require
non-zero startTimestamp or endTimestamp and return a 400, or (recommended)
default missing timestamps inside GetLogStatistics/GetLogStatisticsTrend to a
bounded window (e.g., endTimestamp = now, startTimestamp = now - 30*24h) and
clamp any client-supplied range to a max span (e.g., 90 days), so
buildStatisticsQuery always receives a bounded range; apply the same
defaulting/clamping logic to GetLogStatisticsTrend to avoid heavy scans.
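The defaulting and clamping the prompt recommends is a few lines of plain Go. A minimal sketch, assuming a 30-day default and a 90-day cap (the window sizes and the `clampRange` name are illustrative, not the PR's final values):

```go
package main

import (
	"fmt"
	"time"
)

const (
	defaultWindow = int64(30 * 24 * 3600) // default to the last 30 days
	maxWindow     = int64(90 * 24 * 3600) // cap any client range at 90 days
)

// clampRange bounds a client-supplied [start, end] window so the
// statistics query can never aggregate over the entire logs table.
func clampRange(start, end, now int64) (int64, int64) {
	if end == 0 || end > now {
		end = now
	}
	if start == 0 || start > end {
		start = end - defaultWindow
	}
	if end-start > maxWindow {
		start = end - maxWindow
	}
	return start, end
}

func main() {
	now := time.Now().Unix()
	start, end := clampRange(0, 0, now) // no range supplied by the client
	fmt.Println(end == now, end-start == defaultWindow)
}
```

Explicit, in-range timestamps pass through untouched, so the controller's existing query parameters keep working while the unbounded-scan path disappears.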

Comment thread web/src/components/table/usage-logs/StatisticsDrawer.jsx
Comment on lines +89 to +95
const summaryData = hasData ? [{
model_name: t('合计'),
request_count: statistics.reduce((s, m) => s + (m.request_count || 0), 0),
quota: statistics.reduce((s, m) => s + (m.quota || 0), 0),
prompt_tokens: statistics.reduce((s, m) => s + (m.prompt_tokens || 0), 0),
completion_tokens: statistics.reduce((s, m) => s + (m.completion_tokens || 0), 0),
}] : [];
Contributor
⚠️ Potential issue | 🟡 Minor

Summary row drops total_tokens differentiation; rowKey can collide with a real model named 合计.

Two small issues in the summary aggregation:

  1. The total_tokens column (line 84) is rendered from prompt_tokens + completion_tokens per row, so the summary works — but only because summaryData includes both fields. Worth a comment so a future change doesn’t break it.
  2. rowKey at line 211 compares record.model_name === t('合计') to detect the summary row. If a model is ever literally named 合计 (or whatever the current locale’s translation resolves to), its row will collide with the summary row’s key. Use a sentinel field on the synthesized summary object instead:
Proposed fix
-  const summaryData = hasData ? [{
-    model_name: t('合计'),
+  const summaryData = hasData ? [{
+    __isSummary: true,
+    model_name: t('合计'),
     request_count: statistics.reduce((s, m) => s + (m.request_count || 0), 0),
     quota: statistics.reduce((s, m) => s + (m.quota || 0), 0),
     prompt_tokens: statistics.reduce((s, m) => s + (m.prompt_tokens || 0), 0),
     completion_tokens: statistics.reduce((s, m) => s + (m.completion_tokens || 0), 0),
   }] : [];
@@
-              rowKey={(record, index) => record.model_name === t('合计') ? '__summary__' : record.model_name}
+              rowKey={(record) => (record.__isSummary ? '__summary__' : record.model_name)}

Also applies to: 209-211

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/components/table/usage-logs/StatisticsDrawer.jsx` around lines 89 -
95, The summary aggregation currently builds summaryData with model_name:
t('合计') and sums prompt_tokens/completion_tokens but doesn't mark the
synthesized row, which risks colliding with a real model named the same and can
silently break total_tokens if fields change; update the summaryData object
created in StatisticsDrawer (symbol: summaryData) to include a sentinel flag
(e.g., __isSummary: true) and add a short comment explaining total_tokens is
derived from prompt_tokens + completion_tokens (symbol: total_tokens column).
Then change the rowKey logic (the current record.model_name === t('合计') check)
to detect summary rows using the sentinel (record.__isSummary) instead of
comparing localized text.

Comment thread web/src/hooks/usage-logs/useLogStatistics.jsx Outdated
Comment thread web/src/hooks/usage-logs/useLogStatistics.jsx Outdated
Comment thread web/src/hooks/usage-logs/useLogStatistics.jsx Outdated
Comment on lines +110 to +114
const link = document.createElement('a');
link.href = URL.createObjectURL(blob);
link.download = filename;
link.click();
URL.revokeObjectURL(link.href);
Contributor
⚠️ Potential issue | 🟡 Minor

Revoke the object URL after the download has a chance to start.

URL.revokeObjectURL(link.href) is called synchronously right after link.click(). Some browsers (notably Safari and some mobile browsers) need the URL to remain valid until the download stream is initiated; revoking immediately can cancel the download or produce an empty file. Also, while modern Chromium/Firefox tolerate a detached anchor, appending it to the DOM is the safe pattern.

🛠️ Suggested fix
-      const link = document.createElement('a');
-      link.href = URL.createObjectURL(blob);
-      link.download = filename;
-      link.click();
-      URL.revokeObjectURL(link.href);
+      const objectUrl = URL.createObjectURL(blob);
+      const link = document.createElement('a');
+      link.href = objectUrl;
+      link.download = filename;
+      document.body.appendChild(link);
+      link.click();
+      document.body.removeChild(link);
+      setTimeout(() => URL.revokeObjectURL(objectUrl), 0);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/hooks/usage-logs/useLogStatistics.jsx` around lines 110 - 114, The
download anchor is revoked immediately after link.click(), which can cancel
downloads in some browsers; modify the code in useLogStatistics.jsx so you
append the created anchor element (link) to document.body, call link.click(),
then schedule URL.revokeObjectURL(link.href) with a short timeout (e.g.
setTimeout(..., 0 or 1000)) or revoke in a click/load handler, and finally
remove the anchor from the DOM; keep using the same blob and filename variables
and ensure link is cleaned up after revocation.

- Add a deferred Close for the excelize file to prevent resource leaks
- Fix Content-Disposition compatibility for non-ASCII characters in filename
- Move trend-query time bucketing from in-memory Go processing to SQL-level aggregation for better performance
- Sort trend-query results in ascending time order so chart data stays ordered
- Remove the unused VChart and initVChartSemiTheme imports from the frontend
- Unify date-parsing logic to handle both Date objects and strings
- Move setLoading/setExportLoading into finally blocks so the spinner cannot get stuck on errors
- Defer URL.revokeObjectURL to fix Safari download compatibility
- Fix issues introduced by the previous commit: nine improvements flagged by the CodeRabbit review
Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
web/src/hooks/usage-logs/useLogStatistics.jsx (1)

56-64: Floor seconds when converting from Date to avoid sending a non-integer to the backend.

v.getTime() / 1000 may produce a fractional value (e.g., 1761563456.789) if the picker returns a Date with non-zero milliseconds. The Go side parses with strconv.ParseInt, which silently returns 0 on failure, so the time-range filter would be dropped without any visible error. Defensive Math.floor here makes the contract explicit.

🛠️ Suggested fix
       const toTimestamp = (v) => {
-        if (v instanceof Date) return v.getTime() / 1000;
-        if (typeof v === 'string' && v) return Date.parse(v) / 1000;
+        if (v instanceof Date) return Math.floor(v.getTime() / 1000);
+        if (typeof v === 'string' && v) {
+          const ms = Date.parse(v);
+          return Number.isFinite(ms) ? Math.floor(ms / 1000) : 0;
+        }
         return 0;
       };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/hooks/usage-logs/useLogStatistics.jsx` around lines 56 - 64, The
toTimestamp helper used inside the values.dateRange handling can produce
fractional seconds (v.getTime()/1000 or Date.parse(v)/1000); update toTimestamp
so it returns integer seconds by applying Math.floor to the computed seconds for
both Date and string branches, leaving the fallback return 0 unchanged, and
ensure start_timestamp and end_timestamp are assigned from the now-integer
toTimestamp(values.dateRange[0]) / toTimestamp(values.dateRange[1]).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@web/src/hooks/usage-logs/useLogStatistics.jsx`:
- Around line 108-118: The download filename is being set directly from the
Content-Disposition filename value which may be percent-encoded; update the
logic in useLogStatistics.jsx (the block using contentDisposition, match,
filename and link.download) to prefer parsing the RFC5987 filename* parameter
first, decode it with decodeURIComponent (or URL-decoding) and set that as
filename, and only if filename* is absent fall back to decoding the legacy
filename= capture before assigning to link.download and revoking the object URL.

---

Nitpick comments:
In `@web/src/hooks/usage-logs/useLogStatistics.jsx`:
- Around line 56-64: The toTimestamp helper used inside the values.dateRange
handling can produce fractional seconds (v.getTime()/1000 or
Date.parse(v)/1000); update toTimestamp so it returns integer seconds by
applying Math.floor to the computed seconds for both Date and string branches,
leaving the fallback return 0 unchanged, and ensure start_timestamp and
end_timestamp are assigned from the now-integer toTimestamp(values.dateRange[0])
/ toTimestamp(values.dateRange[1]).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: dfb3baec-2056-4faa-a46f-afde9a1266ba

📥 Commits

Reviewing files that changed from the base of the PR and between 0bc341d and 9df8063.

📒 Files selected for processing (3)
  • controller/log.go
  • model/log.go
  • web/src/hooks/usage-logs/useLogStatistics.jsx
🚧 Files skipped from review as they are similar to previous changes (1)
  • model/log.go

Comment thread web/src/hooks/usage-logs/useLogStatistics.jsx
kK-2004 added 2 commits April 27, 2026 21:41
- Parse filename*=UTF-8'' (RFC 5987) first and decode it with decodeURIComponent
- Also decode the legacy filename= value in the fallback path
- Fix exported filenames for Chinese usernames showing percent-encoded sequences such as %E7%94%A8%E6%88%B7
Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
model/log.go (1)

527-545: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Enforce a bounded time window for statistics queries.

When startTimestamp and endTimestamp are both zero, this path aggregates over the full logs history. That can become a heavy scan for both GetLogStatistics and GetLogStatisticsTrend and degrade request latency under load.

Suggested fix
 func buildStatisticsQuery(username string, tokenName string, startTimestamp int64, endTimestamp int64, modelName string) *gorm.DB {
+	const (
+		defaultWindow = int64(30 * 24 * 3600)
+		maxWindow     = int64(90 * 24 * 3600)
+	)
+	now := time.Now().Unix()
+	if endTimestamp == 0 {
+		endTimestamp = now
+	}
+	if startTimestamp == 0 || startTimestamp > endTimestamp {
+		startTimestamp = endTimestamp - defaultWindow
+	}
+	if endTimestamp-startTimestamp > maxWindow {
+		startTimestamp = endTimestamp - maxWindow
+	}
+
 	tx := LOG_DB.Table("logs").Where("type = ?", LogTypeConsume)
+	tx = tx.Where("created_at >= ?", startTimestamp).Where("created_at <= ?", endTimestamp)
 	if username != "" {
 		tx = tx.Where("username = ?", username)
 	}
 	if tokenName != "" {
 		tx = tx.Where("token_name = ?", tokenName)
 	}
-	if startTimestamp != 0 {
-		tx = tx.Where("created_at >= ?", startTimestamp)
-	}
-	if endTimestamp != 0 {
-		tx = tx.Where("created_at <= ?", endTimestamp)
-	}
 	if modelName != "" {
 		tx = tx.Where("model_name LIKE ?", modelName)
 	}
 	return tx
 }

Also applies to: 547-558, 560-561

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@model/log.go` around lines 527 - 545, The query builder currently allows
unbounded scans when both startTimestamp and endTimestamp are zero; update
buildStatisticsQuery to enforce a default bounded window (e.g., last 30 days):
if startTimestamp==0 && endTimestamp==0, set endTimestamp = time.Now().Unix()
and startTimestamp = endTimestamp - int64(30*24*3600) before applying the
WHEREs; keep existing checks so explicit timestamps still override the default.
Ensure you import time and use the same timestamp unit as created_at (Unix
seconds) and apply the same defaulting logic to the other similar
builders/blocks referenced (the ones used by GetLogStatistics and
GetLogStatisticsTrend).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@model/log.go`:
- Around line 541-543: The WHERE clause uses modelName directly with LIKE which
allows unescaped '%' and '_' to alter results; change the code that builds tx to
escape '%' '_' and '\' in modelName (same escape routine used elsewhere in this
file), then build a safe pattern (e.g. "%" + escaped + "%") and call tx.Where
with a LIKE ... ESCAPE clause (match the style used elsewhere in this file) so
the DB treats wildcards literally; also update any callers of the surrounding
function to accept and handle the returned error if you surface one from the
escape/routing logic.

---

Duplicate comments:
In `@model/log.go`:
- Around line 527-545: The query builder currently allows unbounded scans when
both startTimestamp and endTimestamp are zero; update buildStatisticsQuery to
enforce a default bounded window (e.g., last 30 days): if startTimestamp==0 &&
endTimestamp==0, set endTimestamp = time.Now().Unix() and startTimestamp =
endTimestamp - int64(30*24*3600) before applying the WHEREs; keep existing
checks so explicit timestamps still override the default. Ensure you import time
and use the same timestamp unit as created_at (Unix seconds) and apply the same
defaulting logic to the other similar builders/blocks referenced (the ones used
by GetLogStatistics and GetLogStatisticsTrend).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 74388c6e-c8fc-4069-999f-38389a231636

📥 Commits

Reviewing files that changed from the base of the PR and between ace5020 and 70f467e.

📒 Files selected for processing (1)
  • model/log.go

Comment thread model/log.go
Comment on lines +541 to +543
if modelName != "" {
tx = tx.Where("model_name LIKE ?", modelName)
}
Contributor

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Escape model_name LIKE patterns before querying.

modelName is passed directly into LIKE, so %/_ change semantics and can unintentionally widen results. Use the same escaped pattern approach already used elsewhere in this file.

Suggested fix
-func buildStatisticsQuery(username string, tokenName string, startTimestamp int64, endTimestamp int64, modelName string) *gorm.DB {
+func buildStatisticsQuery(username string, tokenName string, startTimestamp int64, endTimestamp int64, modelName string) (*gorm.DB, error) {
 	tx := LOG_DB.Table("logs").Where("type = ?", LogTypeConsume)
 	...
 	if modelName != "" {
-		tx = tx.Where("model_name LIKE ?", modelName)
+		modelNamePattern, err := sanitizeLikePattern(modelName)
+		if err != nil {
+			return nil, err
+		}
+		tx = tx.Where("model_name LIKE ? ESCAPE '!'", modelNamePattern)
 	}
-	return tx
+	return tx, nil
 }

And update callers to handle the returned error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@model/log.go` around lines 541 - 543, The WHERE clause uses modelName
directly with LIKE which allows unescaped '%' and '_' to alter results; change
the code that builds tx to escape '%' '_' and '\' in modelName (same escape
routine used elsewhere in this file), then build a safe pattern (e.g. "%" +
escaped + "%") and call tx.Where with a LIKE ... ESCAPE clause (match the style
used elsewhere in this file) so the DB treats wildcards literally; also update
any callers of the surrounding function to accept and handle the returned error
if you surface one from the escape/routing logic.
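The escaping itself can be a one-liner with strings.Replacer. A sketch using '!' as the escape character, matching the `LIKE ? ESCAPE '!'` clause suggested above; `escapeLike` is an illustrative name, not the file's actual helper:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLike makes LIKE metacharacters literal by prefixing them with '!',
// the escape character declared in the matching "LIKE ? ESCAPE '!'" clause.
// The escape character itself must be escaped first.
func escapeLike(s string) string {
	return strings.NewReplacer("!", "!!", "%", "!%", "_", "!_").Replace(s)
}

func main() {
	// A model literally named "gpt_4%" must not behave as a wildcard.
	pattern := "%" + escapeLike("gpt_4%") + "%"
	fmt.Println(pattern) // %gpt!_4!%%
	// Bound as: tx.Where("model_name LIKE ? ESCAPE '!'", pattern)
}
```

With the pattern escaped, only the two `%` wildcards added by the code remain active, so a user-supplied `_` or `%` matches itself instead of widening the result set.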

@ytanck

ytanck commented May 11, 2026

Looking forward to an export feature for the usage logs.



2 participants