Conversation
📝 Walkthrough

Adds a 50,000-character result-size budget enforced in the SQL tool and prompts; centralizes ClickHouse error-safety logic into a helper; the frontend maps AI/provider errors to friendlier messages and uses that helper in the Ask-AI UI.

Changes

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant AI_Assistant
    participant Backend
    participant ClickHouse
    participant ErrorLogger
    User->>Frontend: Submit question (may request analytics)
    Frontend->>AI_Assistant: Send prompt (includes SQL guidance & size budget)
    AI_Assistant->>Frontend: Emits tool call -> queryAnalytics(SQL)
    Frontend->>Backend: Invoke queryAnalytics with SQL
    Backend->>ClickHouse: Execute SQL
    ClickHouse-->>Backend: Return rows or error
    Backend->>Backend: Serialize rows -> check SQL_QUERY_RESULT_MAX_CHARS
    alt ≤ 50K chars
        Backend-->>Frontend: { success: true, rowCount, result }
        Frontend->>User: Display results
    else > 50K chars
        Backend-->>Frontend: { success: false, error: "Result too large", rowCount, characters, columnsReturned }
        Frontend->>AI_Assistant: Assistant detects { success: false } -> issue narrower query (per prompt)
        Frontend->>ErrorLogger: captureError (if uncategorized)
        AI_Assistant->>Frontend: New, narrower tool call
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Greptile Summary

This PR adds a 50,000-character result-size guard to the queryAnalytics tool.

Confidence Score: 5/5

Safe to merge — all findings are P2 style suggestions that do not block correctness. The size guard, prompt additions, and frontend error handling are all correct. The only finding is a P2 efficiency note about serialising large result sets before discarding them, which does not affect correctness or user-facing behaviour. No files require special attention.

Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant AI as AI Model
    participant Tool as queryAnalytics Tool
    participant CH as ClickHouse
    AI->>Tool: execute(query)
    Tool->>CH: query (max 10MB, 10k rows)
    CH-->>Tool: resultSet (JSONEachRow)
    Tool->>Tool: rows = await resultSet.json()
    Tool->>Tool: serialized = JSON.stringify(rows)
    alt serialized.length > 50,000 chars
        Tool-->>AI: { success: false, error, rowCount, characters, columnsReturned }
        AI->>AI: Re-query with aggregation / narrower WHERE / smaller LIMIT
        AI->>Tool: execute(refined_query)
    else within budget
        Tool-->>AI: { success: true, rowCount, result: rows }
    end
```
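The budget check in the diagram above can be sketched in TypeScript as follows. This is a minimal sketch, assuming the result shape shown in the diagram; the helper name `buildToolResult` is hypothetical and not taken from the PR code:

```typescript
// Sketch of the size guard described in the diagram.
// SQL_QUERY_RESULT_MAX_CHARS mirrors the 50,000-character budget.
const SQL_QUERY_RESULT_MAX_CHARS = 50_000;

type ToolResult =
  | { success: true; rowCount: number; result: unknown[] }
  | {
      success: false;
      error: string;
      rowCount: number;
      characters: number;
      columnsReturned: number;
    };

// Hypothetical helper: serialize the rows, then either return them or a
// structured error that tells the model how to narrow its next query.
function buildToolResult(rows: Record<string, unknown>[]): ToolResult {
  const serialized = JSON.stringify(rows);
  if (serialized.length > SQL_QUERY_RESULT_MAX_CHARS) {
    return {
      success: false,
      error: `Result too large (${serialized.length} chars > ${SQL_QUERY_RESULT_MAX_CHARS}). Re-query with aggregation, a narrower WHERE, or a smaller LIMIT.`,
      rowCount: rows.length,
      characters: serialized.length,
      columnsReturned: rows[0] ? Object.keys(rows[0]).length : 0,
    };
  }
  return { success: true, rowCount: rows.length, result: rows };
}
```

Measuring the serialized length rather than the row count keeps the budget aligned with what actually drives token usage: a few very wide rows can cost as much as thousands of narrow ones.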
Prompt To Fix All With AI

This is a comment left during a code review.
Path: apps/backend/src/lib/ai/tools/sql-query.ts
Line: 40-41
Comment:
**Full result transferred before size check**
`JSON.stringify(rows)` materialises the entire result set in memory (up to the existing 10 MB ClickHouse cap) before the guard kicks in. Any result between ~50 KB and 10 MB is fully fetched and then discarded. Lowering `max_result_bytes` to something closer to the 50,000-character target (e.g. ~200 KB to give headroom for non-ASCII) would cut wasted bandwidth without changing the existing fallback logic.
How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "initial commit"
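The mitigation the comment proposes (rejecting oversized results server-side instead of fetching and discarding them) can be sketched as a settings object passed to the existing ClickHouse query call. A sketch under stated assumptions: the 200 KB cap is the headroom figure suggested in the comment, `result_overflow_mode: "throw"` is an illustrative choice, and the real client configuration in sql-query.ts is not shown here:

```typescript
// Sketch: tighten the server-side result caps so oversized results are
// rejected by ClickHouse instead of being fully fetched and then discarded.
// 200 KB gives headroom over the 50,000-character budget for non-ASCII data,
// which can serialize to more than one byte per character.
const RESULT_CHAR_BUDGET = 50_000;
const MAX_RESULT_BYTES = 200_000; // down from the existing 10 MB cap

// Shape of the `clickhouse_settings` object a client query would receive.
const clickhouseSettings = {
  max_result_bytes: String(MAX_RESULT_BYTES),
  max_result_rows: "10000",
  result_overflow_mode: "throw", // fail fast rather than truncate silently
} as const;
```

With these settings the existing fallback logic is unchanged: a query that blows past the cap surfaces as an error the tool already knows how to report, but the wasted transfer between ~50 KB and 10 MB goes away.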
Pull request overview
This PR reduces runaway token usage in the dashboard “Ask AI” analytics flow by enforcing a hard cap on the size of SQL tool results and by improving how AI/provider errors are reported to end users.
Changes:
- Add a 50,000-character hard limit for `queryAnalytics` tool responses and return a structured `success: false` error when exceeded.
- Update the AI system prompt with strict “tool result budget” guidance and re-query instructions.
- Improve dashboard error UX by mapping raw AI errors to friendly messages and capturing raw errors via `captureError`.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| apps/dashboard/src/components/commands/ask-ai.tsx | Adds friendly AI error messaging and captures raw errors for observability. |
| apps/backend/src/lib/ai/tools/sql-query.ts | Enforces a maximum serialized result size and returns a guided error object instead of large row payloads. |
| apps/backend/src/lib/ai/prompts.ts | Instructs the AI to prefer aggregation and to re-query when the tool returns a “result too large” response. |
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/src/lib/ai/prompts.ts`:
- Line 120: Update the quantile example in apps/backend/src/lib/ai/prompts.ts to
use a schema-valid numeric value instead of the nonexistent "amount" column:
either replace "amount" with a real numeric column from the table (e.g.,
"duration_ms" or "price") or explicitly extract a numeric value from JSON (e.g.,
use JSONExtractFloat(properties, 'amount') or
toFloat64(JSONExtractRaw(properties, 'amount')) ) so the example query (the
Quantiles line) produces valid ClickHouse SQL; modify the example string where
"Quantiles: SELECT quantile(0.5)(amount), quantile(0.95)(amount) FROM events"
appears accordingly.
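The correction this comment asks for can be sketched by swapping the nonexistent `amount` column for a JSON extraction. A sketch only: whether the real schema stores `amount` inside a JSON `properties` column is taken from the comment's own suggestion, not verified against the code:

```typescript
// Sketch: the prompt example string before and after the fix.
// Before: references an `amount` column that does not exist in the schema.
const badExample =
  "Quantiles: SELECT quantile(0.5)(amount), quantile(0.95)(amount) FROM events";

// After: extract a numeric value from the JSON `properties` column so the
// example produces valid ClickHouse SQL against the real schema.
const fixedExample =
  "Quantiles: SELECT quantile(0.5)(JSONExtractFloat(properties, 'amount')), " +
  "quantile(0.95)(JSONExtractFloat(properties, 'amount')) FROM events";
```

`toFloat64(JSONExtractRaw(properties, 'amount'))` is the alternative the comment mentions; either way the point is that the example the model imitates must be runnable as-is.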
In `@apps/dashboard/src/components/commands/ask-ai.tsx`:
- Around line 485-497: Update getFriendlyAiErrorMessage to explicitly detect
backend "Result too large" errors by checking the combined blob (error.message +
cause message) for patterns like /result too large|limit \d+/i and return a
clear user-facing message such as asking the user to narrow the query or request
fewer rows; modify the function (getFriendlyAiErrorMessage and its blob
variable) to add this new conditional branch before the generic fallback so
those backend errors no longer fall through to "Something went wrong."
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 683f7767-1fd4-4794-879e-1f4188654fb4
📒 Files selected for processing (3)
- apps/backend/src/lib/ai/prompts.ts
- apps/backend/src/lib/ai/tools/sql-query.ts
- apps/dashboard/src/components/commands/ask-ai.tsx
🧹 Nitpick comments (1)
apps/backend/src/lib/clickhouse-errors.ts (1)
39-40: Cache `getNodeEnvironment()` once per call.

Line 39 calls `getNodeEnvironment()` twice in the same branch; storing it once improves clarity and avoids the repeated lookup.

Suggested refactor:

```diff
- if (getNodeEnvironment() === "development" || getNodeEnvironment() === "test") {
+ const nodeEnv = getNodeEnvironment();
+ if (nodeEnv === "development" || nodeEnv === "test") {
    return `${DEFAULT_CLICKHOUSE_ERROR_MESSAGE}${!isKnown ? "\n\nThis error is not known and you should probably add it to the safe or unsafe codes in clickhouse-errors.ts." : ""}\n\nAs you are in development mode, you can see the full error: ${errorCode} ${message}`;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/clickhouse-errors.ts` around lines 39 - 40, The code calls getNodeEnvironment() twice when building the development/test error string; refactor by calling getNodeEnvironment() once into a local const (e.g., const env = getNodeEnvironment()) at the start of the function in clickhouse-errors.ts and use that variable in the if condition and the returned template instead of repeated calls, ensuring getNodeEnvironment() is only invoked once per call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@apps/backend/src/lib/clickhouse-errors.ts`:
- Around line 39-40: The code calls getNodeEnvironment() twice when building the
development/test error string; refactor by calling getNodeEnvironment() once
into a local const (e.g., const env = getNodeEnvironment()) at the start of the
function in clickhouse-errors.ts and use that variable in the if condition and
the returned template instead of repeated calls, ensuring getNodeEnvironment()
is only invoked once per call.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 552d2c5d-5a3f-471d-9e14-acb9939e94d8
📒 Files selected for processing (3)
- apps/backend/src/app/api/latest/internal/analytics/query/route.ts
- apps/backend/src/lib/ai/tools/sql-query.ts
- apps/backend/src/lib/clickhouse-errors.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/backend/src/lib/ai/tools/sql-query.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx`:
- Around line 541-542: The cast on causeMessage bypasses the type system; change
the extraction to runtime-safe narrowing: ensure the thrown value is an Error
(e.g., check error instanceof Error) then inspect error.cause with defensive
checks (if cause instanceof Error -> use cause.message, else if typeof cause ===
"string" -> use cause, else if cause and typeof cause === "object" and "message"
in cause and typeof (cause as any).message === "string" -> use that), and
fallback to an empty string; then build blob from error.message plus that
computed causeMessage (update the code that defines causeMessage and blob
accordingly).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ca0b94d1-416b-4d7c-a14a-db308c1e1b40
📒 Files selected for processing (2)
- apps/dashboard/src/components/commands/ai-chat-shared.tsx
- apps/dashboard/src/components/commands/ask-ai.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/dashboard/src/components/commands/ask-ai.tsx
This PR fixes the bug where the analytics tool returns a large number of rows, resulting in a huge token count. We do this by checking the number of characters in the tool result: if it exceeds 50,000 characters, we send an error message instead of the rows and ask the AI to make more focused queries.
Summary by CodeRabbit
New Features
Bug Fixes & Improvements