feat: Add native Ollama provider adapter and approval mode selection #409
pei-lin-001 wants to merge 2 commits into claude-code-best:main from
Conversation
📝 Walkthrough

Adds an Ollama provider integration (client, model mapping, converters, stream adapter, context caching, web search/fetch adapters, OAuth/settings wiring, and tests) and introduces a new interactive approval-mode selection command.

Changes:
- Ollama Provider Integration
- Approval Command
Sequence Diagram(s)

sequenceDiagram
participant User
participant Client as Claude Code Client
participant Settings
participant OllamaAPI as Ollama API
User->>Client: Select Ollama provider or provide API key & defaults
Client->>Settings: Persist modelType 'ollama' and OLLAMA_* envs
Settings-->>Client: Confirm saved / env updated
User->>Client: Send chat message (may include tool use)
Client->>Client: Convert Anthropic messages/tools → Ollama format
Client->>OllamaAPI: POST /chat (streaming) with tools, think, options
OllamaAPI->>Client: Stream chunks (thinking, content, tool_calls)
Client->>Client: Parse/adapt chunks → Anthropic stream events
Client->>User: Stream thinking/text/tool events in UI
Client->>Client: On completion, assemble final AssistantMessage and record usage
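The "Parse/adapt chunks → Anthropic stream events" step in the diagram can be sketched roughly as follows. This is an illustrative stand-in, not the PR's actual adapter: the chunk shape follows Ollama's `/api/chat` streaming response, and the event names loosely follow Anthropic's streaming vocabulary.

```typescript
// Illustrative sketch of mapping one streamed Ollama chat chunk to
// Anthropic-style stream events (thinking, text, tool use, stop).
interface OllamaChunk {
  message?: { thinking?: string; content?: string; tool_calls?: unknown[] }
  done?: boolean
}

type StreamEvent =
  | { type: 'thinking_delta'; thinking: string }
  | { type: 'text_delta'; text: string }
  | { type: 'tool_use'; calls: unknown[] }
  | { type: 'message_stop' }

function adaptChunk(chunk: OllamaChunk): StreamEvent[] {
  const events: StreamEvent[] = []
  // A single chunk may carry thinking, visible content, and tool calls at once.
  if (chunk.message?.thinking) events.push({ type: 'thinking_delta', thinking: chunk.message.thinking })
  if (chunk.message?.content) events.push({ type: 'text_delta', text: chunk.message.content })
  if (chunk.message?.tool_calls?.length) events.push({ type: 'tool_use', calls: chunk.message.tool_calls })
  if (chunk.done) events.push({ type: 'message_stop' })
  return events
}
```

The real adapter additionally tracks block indices and usage totals to assemble the final AssistantMessage, as the last diagram step shows.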
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Actionable comments posted: 6
🧹 Nitpick comments (15)
src/commands/approval/approvalModes.ts (1)
103-110: 💤 Low value

`getApprovalModeDescriptor` silently falls back to `default` for the `bubble` internal mode.

If the current `PermissionMode` is `'bubble'` (an internal mode not in `APPROVAL_MODE_DESCRIPTORS`), `showCurrentApprovalMode` in `approval.tsx` would display "Current approval mode: Default (Ask before tools that need approval)" — which is misleading. Consider returning a dedicated fallback label or guarding the caller.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/approval/approvalModes.ts` around lines 103 - 110, getApprovalModeDescriptor currently returns the first descriptor as a silent fallback when a mode like the internal 'bubble' isn't present in APPROVAL_MODE_DESCRIPTORS, causing showCurrentApprovalMode (in approval.tsx) to display a misleading "Default" label; update getApprovalModeDescriptor to detect missing modes and return a clear fallback descriptor (e.g., { mode, label: "Internal/Unknown", description: "This mode is internal and not displayed" }) or change its signature to return undefined so the caller (showCurrentApprovalMode) can render an explicit "Internal/Unknown" message; reference the function getApprovalModeDescriptor and the constant APPROVAL_MODE_DESCRIPTORS when making the change.

src/commands/approval/__tests__/approvalModes.test.ts (1)
1-63: ⚡ Quick win

Missing test coverage for the `help`/`-h`/`--help` argument path.

`HELP_ARGS = new Set(['help', '-h', '--help'])` drives a separate `{ type: 'help' }` branch in `parseApprovalModeArg` (and ultimately the help message in `executeApproval`), but there are no tests for it. Additionally, `formatApprovalMode` is only tested for `bypassPermissions`; other modes are untested.

➕ Proposed additional tests
+ test('parses help arguments', () => {
+   expect(parseApprovalModeArg('help')).toEqual({ type: 'help' })
+   expect(parseApprovalModeArg('-h')).toEqual({ type: 'help' })
+   expect(parseApprovalModeArg('--help')).toEqual({ type: 'help' })
+ })
+
+ test('normalizes uppercase and underscore inputs', () => {
+   expect(parseApprovalModeArg('DEFAULT')).toEqual({ type: 'mode', mode: 'default' })
+   expect(parseApprovalModeArg('full_access')).toEqual({ type: 'mode', mode: 'bypassPermissions' })
+ })

And in `describe('formatApprovalMode')`:

+ test('returns the label for non-bypass modes', () => {
+   expect(formatApprovalMode('default')).toBe('Default')
+   expect(formatApprovalMode('plan')).toBe('Plan')
+ })

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/approval/__tests__/approvalModes.test.ts` around lines 1 - 63, Add tests covering the HELP_ARGS branch and additional formatApprovalMode outputs: update approvalModes.test.ts to assert parseApprovalModeArg returns { type: 'help' } for each string in HELP_ARGS (e.g., 'help', '-h', '--help'), and add formatApprovalMode assertions for other modes (e.g., 'default' -> "Default (default)", 'acceptEdits' -> "Accept edits (acceptEdits)", 'dontAsk' -> "Don't ask (dontAsk)", etc.). Reference the parseApprovalModeArg function and HELP_ARGS set to locate the missing branch and the formatApprovalMode function to add the extra expectations. Ensure tests follow the existing bun:test structure and mirror the style used for existing cases.

src/commands/approval/approval.tsx (1)
91-94: 💤 Low value

Help text is hardcoded and will drift from `APPROVAL_MODE_DESCRIPTORS`.

Adding or renaming a mode in `APPROVAL_MODE_DESCRIPTORS` requires a manual update here; they'll easily diverge.

♻️ Proposed refactor — derive from descriptors
- case 'help':
-   return {
-     message:
-       'Usage: /approval [default|accept-edits|plan|auto|dont-ask|full-access]\n\nApproval modes:\n- default: Ask before tools that need approval\n- accept-edits: Auto-approve file edits, still ask for risky actions\n- plan: Plan first, do not make changes until you approve\n- auto: Use the automatic approval classifier when available\n- dont-ask: Deny anything not already pre-approved\n- full-access: Allow tool use without approval prompts',
-   };
+ case 'help': {
+   const modeLines = APPROVAL_MODE_DESCRIPTORS.map(
+     d => `- ${d.aliases[1] ?? d.aliases[0] ?? d.mode}: ${d.description}`,
+   ).join('\n')
+   return {
+     message: `Usage: /approval [default|accept-edits|plan|auto|dont-ask|full-access]\n\nApproval modes:\n${modeLines}`,
+   }
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/approval/approval.tsx` around lines 91 - 94, The help text is hardcoded and can drift from the canonical APPROVAL_MODE_DESCRIPTORS; replace the literal multi-line string returned in the function (the object with key message in src/commands/approval/approval.tsx) with a generated string built from APPROVAL_MODE_DESCRIPTORS (iterate its entries to produce the usage line and a bullet list of mode names and descriptions), ensuring the usage header includes the modes joined from the descriptor keys and that each descriptor's description is used verbatim so adding/renaming a mode in APPROVAL_MODE_DESCRIPTORS automatically updates this help output.

src/utils/auth.ts (1)
115-125: ⚡ Quick win

Replace `(settings as any).modelType` with optional chaining on the raw return value.

All four uses of `(settings as any).modelType` (lines 120–123, including the pre-existing `openai`/`gemini` checks) violate the `as any` prohibition in production TypeScript. The cast is unnecessary — `getSettings_DEPRECATED()` already returns a typed value; the `|| {}` fallback is what defeats the type. Splitting the reads eliminates the cast entirely.

♻️ Proposed refactor
- const settings = getSettings_DEPRECATED() || {}
+ const settings = getSettings_DEPRECATED()
  const is3P =
    isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY) ||
-   (settings as any).modelType === 'openai' ||
-   (settings as any).modelType === 'gemini' ||
-   (settings as any).modelType === 'grok' ||
-   (settings as any).modelType === 'ollama' ||
+   settings?.modelType === 'openai' ||
+   settings?.modelType === 'gemini' ||
+   settings?.modelType === 'grok' ||
+   settings?.modelType === 'ollama' ||
    !!process.env.OPENAI_BASE_URL ||
    !!process.env.GEMINI_BASE_URL
- const apiKeyHelper = settings.apiKeyHelper
+ const apiKeyHelper = settings?.apiKeyHelper
as anytype assertions in production code; useas unknown as SpecificTypedouble assertion or interface supplementation when type mismatches occur."🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/auth.ts` around lines 115 - 125, The code uses (settings as any).modelType to check modelType which violates the no-as-any rule; instead stop masking the typed return by removing the "|| {}" fallback and use optional chaining on the raw getSettings_DEPRECATED() result — e.g., assign const settings = getSettings_DEPRECATED(); then replace each (settings as any).modelType === 'openai'|'gemini'|'grok'|'ollama' with settings?.modelType === 'openai' (and similar) so you preserve types and eliminate the cast while keeping the other env-based checks (CLAUDE_CODE_*, OPENAI_BASE_URL, GEMINI_BASE_URL) intact when computing is3P.

src/utils/status.tsx (1)
291-310: 💤 Low value

LGTM — `'ollama': 'Ollama'` correctly extends the provider label map.

Note: other providers (`gemini`, `grok`, `openai`) each get a follow-up `else if` branch that surfaces their configured base URL in the status pane. Adding an equivalent block for `OLLAMA_BASE_URL` would let users verify which Ollama endpoint is active without leaving the status view.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/status.tsx` around lines 291 - 310, Add an else-if branch in buildAPIProviderProperties to mirror the other providers that surface their base URL: when apiProvider === 'ollama' push a Property with a label like "Ollama base URL" and a value taken from the configured Ollama endpoint (e.g., process.env.OLLAMA_BASE_URL or the project's config/getEnv helper) so the status pane shows the active Ollama endpoint; follow the same pattern and null/undefined fallback used by the existing gemini/grok/openai branches.

src/components/ConsoleOAuthFlow.tsx (1)
6-6: ⚡ Quick win

Use the `src/*` alias for this new import.

This introduces a new relative import in a TSX file, which goes against the repo import-path convention.

Suggested fix

- import { resetModelStringsForTestingOnly } from '../bootstrap/state.js';
+ import { resetModelStringsForTestingOnly } from 'src/bootstrap/state.js';

As per coding guidelines, "Use `src/*` path alias in imports; do not use relative paths like `../../../`."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ConsoleOAuthFlow.tsx` at line 6, The import in ConsoleOAuthFlow.tsx uses a relative path for resetModelStringsForTestingOnly; update the import to use the repository alias (src/*) instead of '../bootstrap/state.js' so it follows the project's import-path convention — locate the import of resetModelStringsForTestingOnly in ConsoleOAuthFlow.tsx and replace the relative path with the aliased path (e.g., import { resetModelStringsForTestingOnly } from 'src/bootstrap/state').

src/utils/context.ts (1)
8-9: 💤 Low value

Consider using the `src/*` path alias for imports.

Per coding guidelines, prefer the `src/*` path alias over relative paths like `../../../`. However, this file already uses relative imports consistently (lines 2-7), so this is a minor consistency note.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/context.ts` around lines 8 - 9, Update the imports in src/utils/context.ts to use the project path alias (src/...) instead of relative paths: replace the import of getCachedOllamaContextLength from '../services/api/ollama/context.js' and getAPIProvider from './model/providers.js' with their aliased equivalents (e.g., import { getCachedOllamaContextLength } from 'src/services/api/ollama/context' and import { getAPIProvider } from 'src/utils/model/providers' or the correct aliased module paths in your project); ensure the module specifiers match the configured TS/webpack path mappings and adjust any file extensions to match project conventions.

src/services/api/ollama/client.ts (1)
106-127: ⚖️ Poor tradeoff

Missing request timeout handling.

The `postOllamaJSON` function accepts an `AbortSignal` but doesn't set a default timeout. For reliability, consider adding a default timeout to prevent indefinite hangs when the Ollama server is unresponsive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/ollama/client.ts` around lines 106 - 127, postOllamaJSON currently accepts an optional AbortSignal but has no default timeout, so add a default timeout by creating an AbortController inside postOllamaJSON when requestOptions?.signal is not provided (or to combine/chain signals if one is provided), set a timer (e.g., 30s) to call controller.abort(), pass controller.signal into fetchOllama, and clear the timer after the fetch completes or errors; reference the postOllamaJSON function and the call to fetchOllama to locate where to create/attach the AbortController and timer.

packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts (1)
59-61: 💤 Low value

Global `getAPIProvider` mock silently activates the Ollama path for any test calling `getURLMarkdownContent`.

After `beforeEach` deletes `OLLAMA_USE_NATIVE_WEB_FETCH`, `shouldUseOllamaWebFetch()` evaluates the provider (`'ollama'`) and the default base URL (ollama.com), returning `true`. This means any future test in this file that calls `getURLMarkdownContent` without explicitly setting `OLLAMA_USE_NATIVE_WEB_FETCH = 'false'` will silently take the Ollama path. The existing tests handle this correctly (redirect/proxy tests bypass `getURLMarkdownContent`; the content-type test opts out explicitly), but the implicit contract is non-obvious.

Consider adding a comment in `beforeEach` calling out why the opt-out is required:

  beforeEach(() => {
    delete process.env.OLLAMA_USE_NATIVE_WEB_FETCH
+   // NOTE: getAPIProvider() is mocked to return 'ollama', so shouldUseOllamaWebFetch()
+   // returns true by default. Tests that exercise the non-Ollama path in
+   // getURLMarkdownContent() must set OLLAMA_USE_NATIVE_WEB_FETCH = 'false'.
    getMock = async () => ({ ... })
  })

Also applies to: 79-87
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts` around lines 59 - 61, The test file globally mocks getAPIProvider to return 'ollama', which combined with beforeEach deleting OLLAMA_USE_NATIVE_WEB_FETCH causes shouldUseOllamaWebFetch() to select the Ollama path for any getURLMarkdownContent calls; update the beforeEach block to add a clear comment explaining that we intentionally delete OLLAMA_USE_NATIVE_WEB_FETCH to force the default provider behavior and that tests which must avoid the Ollama path should explicitly set OLLAMA_USE_NATIVE_WEB_FETCH = 'false' (or otherwise opt out), and add the same explanatory comment near the other beforeEach at lines 79-87 so future readers understand why opt-out is required when calling getURLMarkdownContent.

packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts (1)
29-35: ⚡ Quick win

`isThirdPartyProvider()` checks raw env vars and misses `modelType`-based configuration.

The new Ollama routing correctly uses `getAPIProvider()`, which checks `settings.modelType` before env vars. `isThirdPartyProvider()` does not — it only checks the env-var forms. If a user configures OpenAI/Gemini/Grok via `modelType: 'openai'` in their settings file (without setting `CLAUDE_CODE_USE_OPENAI`), `getAPIProvider()` returns `'openai'` but `isThirdPartyProvider()` returns `false`. This causes the factory to fall through to `isFirstPartyAnthropicBaseUrl()`, likely selecting the `api` adapter (Anthropic server_tools), which doesn't work with OpenAI endpoints.

♻️ Suggested fix
+ import {
+   getAPIProvider,
+   isFirstPartyAnthropicBaseUrl,
+ } from 'src/utils/model/providers.js'

  function isThirdPartyProvider(): boolean {
-   return !!(
-     process.env.CLAUDE_CODE_USE_OPENAI ||
-     process.env.CLAUDE_CODE_USE_GEMINI ||
-     process.env.CLAUDE_CODE_USE_GROK
-   )
+   const provider = getAPIProvider()
+   return provider === 'openai' || provider === 'gemini' || provider === 'grok'
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts` around lines 29 - 35, isThirdPartyProvider() currently checks only raw env vars and can miss cases where the user configured a third-party provider via settings.modelType; update isThirdPartyProvider() to call getAPIProvider() and return true when getAPIProvider() returns 'openai', 'gemini', or 'grok' (in addition to or instead of the existing env-var checks), so the factory routing matches the same provider resolution as getAPIProvider().

packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts (1)
47-52: 💤 Low value

Test 4 doesn't fully isolate "OLLAMA_MODEL is ignored".

Because `OLLAMA_DEFAULT_SONNET_MODEL = 'qwen3-coder'` is also set, the assertion `toBe('qwen3-coder')` would pass even if `OLLAMA_MODEL` were consulted as a lower-priority fallback (both cases yield the same value). A stronger form of the test would set only `OLLAMA_MODEL` (without `OLLAMA_DEFAULT_SONNET_MODEL`) and assert the result is still `'qwen3-coder'` (the hardcoded fallback), proving the override is never read.

🔍 Supplemental test

  test('falls back to qwen3-coder when only OLLAMA_MODEL is set (legacy override ignored)', () => {
    process.env.OLLAMA_MODEL = 'legacy-global-model'
    // OLLAMA_DEFAULT_SONNET_MODEL intentionally not set
    expect(resolveOllamaModel('claude-sonnet-4-6')).toBe('qwen3-coder')
  })

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts` around lines 47 - 52, The test currently sets both process.env.OLLAMA_MODEL and process.env.OLLAMA_DEFAULT_SONNET_MODEL so the assertion can pass regardless of whether resolveOllamaModel consults the legacy OLLAMA_MODEL; update the test in packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts to isolate the legacy override by removing the OLLAMA_DEFAULT_SONNET_MODEL assignment (or add a new test) so only process.env.OLLAMA_MODEL is set and then assert resolveOllamaModel('claude-sonnet-4-6') returns the hardcoded 'qwen3-coder', referencing the resolveOllamaModel function to ensure the legacy global is never read.

src/services/api/ollama/__tests__/client.test.ts (1)
82-116: 💤 Low value

Context-length and caching tests are nested under the wrong `describe` block.

Lines 82–96 test `extractOllamaModelInfoContextLength`/`extractOllamaNumCtxParameter` (both from `context.ts`), and lines 98–116 test `getOllamaContextLength` — none of which are `showOllamaModel`. When one of these tests fails the runner reports `showOllamaModel > caches context length…`, which is misleading.

♻️ Suggested reorganisation

- describe('showOllamaModel', () => {
+ describe('showOllamaModel', () => {
    test('fetches model details via native /api/show', async () => {
      // …existing test…
    })
+ })

- test('extracts context length from model_info and num_ctx parameters', () => {
+ describe('extractOllamaModelInfoContextLength / extractOllamaNumCtxParameter', () => {
+   test('extracts context length from model_info and num_ctx parameters', () => {
      // …
    })
+ })

- test('caches context length after /api/show', async () => {
+ describe('getOllamaContextLength', () => {
+   test('caches context length after first /api/show fetch', async () => {
      // …
    })
  })

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/ollama/__tests__/client.test.ts` around lines 82 - 116, The tests for extractOllamaModelInfoContextLength, extractOllamaNumCtxParameter and getOllamaContextLength are placed under the wrong describe block (showOllamaModel), which causes misleading test names on failure; move the block that contains the tests referencing extractOllamaModelInfoContextLength, extractOllamaNumCtxParameter and getOllamaContextLength out of the describe('showOllamaModel') suite into either a new describe('context') suite or the appropriate existing describe for context.ts so each test is named after the function it tests and the caching test for getOllamaContextLength sits with other context-related tests.

packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts (1)
72-75: 💤 Low value

Accessing `.delta` on a potentially `undefined` `messageDelta` will throw an opaque `TypeError`.

If `adaptOllamaStreamToAnthropic` emits no `message_delta` event, `events.find(...)` returns `undefined`, and `messageDelta.delta.stop_reason` crashes with `TypeError: Cannot read properties of undefined (reading 'delta')` instead of a clear assertion failure.

🛡️ Suggested fix
  const messageDelta = events.find(event => event.type === 'message_delta')
+ expect(messageDelta).toBeDefined()
  expect(messageDelta.delta.stop_reason).toBe('tool_use')
  expect(messageDelta.usage.input_tokens).toBe(12)
  expect(messageDelta.usage.output_tokens).toBe(5)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts` around lines 72 - 75, The test accesses messageDelta.delta without ensuring messageDelta exists, which can throw a TypeError; update the test in streamAdapter.test.ts to assert the find result is defined before reading properties (e.g., assert expect(messageDelta).toBeDefined() or use a guard like if (!messageDelta) fail('expected message_delta event')), then continue to check messageDelta.delta.stop_reason and the usage fields so failures produce clear assertion errors; reference the messageDelta variable and the events.find(...) call in the test for the change.

packages/builtin-tools/src/tools/WebFetchTool/utils.ts (1)
388-399: 💤 Low value

`shouldUseOllamaWebFetch` returns `true` by default for the Ollama provider with no `OLLAMA_BASE_URL` set.

When `getAPIProvider() === 'ollama'` and `OLLAMA_BASE_URL` is unset, the function defaults to `'https://ollama.com/api'`, which passes the hostname check, so native web fetch is silently enabled. This is intentional (cloud Ollama supports it; local Ollama doesn't), but worth an inline comment so future readers understand the implicit contract.

📝 Suggested inline doc
  export function shouldUseOllamaWebFetch(): boolean {
    if (process.env.OLLAMA_USE_NATIVE_WEB_FETCH !== undefined) {
      return isEnvTruthy(process.env.OLLAMA_USE_NATIVE_WEB_FETCH)
    }
    if (getAPIProvider() !== 'ollama') return false
+   // When OLLAMA_BASE_URL is not set the client defaults to ollama.com (cloud),
+   // which provides /api/web_fetch. Local Ollama (e.g. localhost:11434) does not,
+   // so only enable native fetch when the hostname is 'ollama.com'.
+   // Override with OLLAMA_USE_NATIVE_WEB_FETCH=true/false when needed.
    const baseURL = process.env.OLLAMA_BASE_URL || 'https://ollama.com/api'
    try {
      return new URL(baseURL).hostname === 'ollama.com'
    } catch {
      return false
    }
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts` around lines 388 - 399, shouldUseOllamaWebFetch currently defaults to enabling native web fetch for the Ollama provider when OLLAMA_BASE_URL is unset by assuming 'https://ollama.com/api'; add a concise inline comment inside the shouldUseOllamaWebFetch function near the default baseURL logic explaining the implicit contract: cloud Ollama (ollama.com) supports native web fetch while local/offline Ollama instances do not, and mention that OLLAMA_USE_NATIVE_WEB_FETCH env overrides this behavior; reference the function name shouldUseOllamaWebFetch, the env vars OLLAMA_USE_NATIVE_WEB_FETCH and OLLAMA_BASE_URL, and getAPIProvider() to help locate where to add the comment.

src/services/api/ollama/__tests__/think.test.ts (1)
2-6: ⚡ Quick win

Use `src/*` aliases for these test imports.

These deep relative imports make the new test brittle to file moves and diverge from the repo import convention.

Suggested cleanup
- import { getEmptyToolPermissionContext } from '../../../../Tool.js'
- import { asSystemPrompt } from '../../../../utils/systemPromptType.js'
- import type { Options } from '../../claude.js'
- import { resolveOllamaThink } from '../index.js'
- import { queryModelOllama } from '../index.js'
+ import { getEmptyToolPermissionContext } from 'src/Tool.js'
+ import type { Options } from 'src/services/api/claude.js'
+ import { queryModelOllama, resolveOllamaThink } from 'src/services/api/ollama/index.js'
+ import { asSystemPrompt } from 'src/utils/systemPromptType.js'

As per coding guidelines, "Use `src/*` path alias in imports; do not use relative paths like `../../../`."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/ollama/__tests__/think.test.ts` around lines 2 - 6, The test file uses deep relative imports for symbols like getEmptyToolPermissionContext, asSystemPrompt, Options, resolveOllamaThink, and queryModelOllama; update the import paths to use the repo's src/* path aliases instead of '../../../' style paths so the test follows project conventions and is resilient to file moves (replace '../../../../Tool.js', '../../../../utils/systemPromptType.js', '../../claude.js', and '../index.js' imports with their corresponding src/... aliases).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts`:
- Around line 72-99: The domain filter check in matchesDomainFilters compares
URL.hostname (already lowercase) to allowedDomains/blockedDomains without
normalizing them, so mixed-case or whitespace entries can bypass rules;
normalize entries first by mapping allowedDomains and blockedDomains to trimmed,
lowercased strings (and strip any leading '.'), then perform the existing
comparisons against hostname (which you already obtain via new
URL(url).hostname) to ensure correct allow/block behavior; update
matchesDomainFilters to use these normalized arrays in the .some checks.
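The normalization described above can be sketched as follows. The function shape and filter semantics are illustrative (the real adapter lives in `ollamaAdapter.ts`), and treating a leading `.` as a subdomain-suffix filter is an assumption:

```typescript
// Trim, lowercase, and strip a leading '.' from each filter entry so that
// mixed-case or whitespace-padded entries cannot bypass the rules.
function normalizeDomains(domains: string[]): string[] {
  return domains.map(d => d.trim().toLowerCase().replace(/^\./, ''))
}

// Blocklist wins over allowlist; an empty allowlist permits everything.
function matchesDomainFilters(
  url: string,
  allowedDomains: string[],
  blockedDomains: string[],
): boolean {
  const hostname = new URL(url).hostname // already lowercase per the URL spec
  const allowed = normalizeDomains(allowedDomains)
  const blocked = normalizeDomains(blockedDomains)
  if (blocked.some(d => hostname === d || hostname.endsWith(`.${d}`))) return false
  if (allowed.length === 0) return true
  return allowed.some(d => hostname === d || hostname.endsWith(`.${d}`))
}
```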
In `@src/commands/approval/approval.tsx`:
- Around line 42-47: The current synchronous gating uses
isAutoModeGateEnabled(), which is a stub that always returns true, so the block
never runs; change getModeUnavailableMessage (in
src/commands/approval/approval.tsx) to synchronously call
getAutoModeUnavailableReason() for mode === 'auto' and, if it returns a reason,
return the existing "Auto approval is unavailable..." message (using
getAutoModeUnavailableNotification(reason)); remove or stop relying on
isAutoModeGateEnabled() for UI gating so the picker grays out correctly and
prevents applyApprovalMode from optimistically applying auto before
verifyAutoModeGateAccess runs.
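A minimal sketch of the synchronous gating suggested above. The helper names come from the prompt; their bodies here are stand-ins, and only the control flow mirrors the suggestion:

```typescript
type ApprovalMode = 'default' | 'acceptEdits' | 'plan' | 'auto' | 'bypassPermissions'

// Stand-in: in the real code this would inspect feature flags / account state.
function getAutoModeUnavailableReason(): string | undefined {
  return 'classifier not provisioned for this account'
}

// Consult the reason directly instead of an always-true isAutoModeGateEnabled()
// stub, so the picker can gray out 'auto' before it is optimistically applied.
function getModeUnavailableMessage(mode: ApprovalMode): string | undefined {
  if (mode !== 'auto') return undefined
  const reason = getAutoModeUnavailableReason()
  return reason ? `Auto approval is unavailable: ${reason}` : undefined
}
```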
In `@src/commands/provider.ts`:
- Around line 113-124: The branch handling arg === 'ollama' returns early when
OLLAMA_API_KEY is missing, but it never applies saved env vars so a stored
OLLAMA_BASE_URL in settings.env isn't loaded into process.env for the current
session; update the branch in the provider switch to call
applyConfigEnvironmentVariables() (or the equivalent function that applies
settings.env) after updateSettingsForSource('userSettings', { modelType:
'ollama' }) and before returning so the saved OLLAMA_BASE_URL (and other saved
envs) take effect immediately; ensure you still return the same text response
after applying the env.
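The env-application step might look roughly like this. The helper below is a simplified stand-in for the project's `applyConfigEnvironmentVariables`, and the precedence rule (an already-set `process.env` value wins over saved settings) is an assumption:

```typescript
// Apply saved settings.env (e.g. a stored OLLAMA_BASE_URL) to process.env so
// it takes effect in the current session, not just after a restart.
function applyConfigEnvironmentVariables(savedEnv: Record<string, string>): void {
  for (const [key, value] of Object.entries(savedEnv)) {
    // Assumed policy: an explicit shell export still takes precedence.
    if (process.env[key] === undefined) process.env[key] = value
  }
}
```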
In `@src/components/ConsoleOAuthFlow.tsx`:
- Around line 1270-1291: After a successful
updateSettingsForSource('userSettings', { modelType: 'ollama', env }), clear the
in-memory cached model strings so the new Ollama mappings take effect
immediately: after the for (const [k, v] of Object.entries(env)) process.env[k]
= v; and before setOAuthStatus({ state: 'success' }) / void onDone(), call the
same cache-reset helper used in the custom-platform path (e.g.
resetCachedModelStrings() or refreshModelMappings()) — import it if needed and
await it if it returns a Promise — so session/model lookups no longer use stale
strings.
In `@src/services/api/ollama/index.ts`:
- Around line 528-541: The helper resolveOllamaMaxTokens allows an override or
env value to exceed the discovered/default cap; change its logic in
resolveOllamaMaxTokens so any override (override param or parsed
OLLAMA_MAX_TOKENS / CLAUDE_CODE_MAX_OUTPUT_TOKENS) is clamped to
defaultMaxTokens (use Math.min) and only accepted if finite and >0; locate the
function resolveOllamaMaxTokens and replace the early return of override or
parsed with a clamped value (e.g., return Math.min(defaultMaxTokens,
validOverride)) while preserving the final fallback to defaultMaxTokens.
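The clamping behaviour described above could be sketched as follows; this is a simplified stand-in, not the provider's actual implementation:

```typescript
// An override (parameter or env var) is honoured only when finite and > 0,
// and is never allowed to exceed the discovered/default cap.
function resolveOllamaMaxTokens(defaultMaxTokens: number, override?: number): number {
  const candidates = [
    override,
    Number(process.env.OLLAMA_MAX_TOKENS),
    Number(process.env.CLAUDE_CODE_MAX_OUTPUT_TOKENS),
  ]
  for (const value of candidates) {
    if (value !== undefined && Number.isFinite(value) && value > 0) {
      return Math.min(defaultMaxTokens, value) // clamp to the cap
    }
  }
  return defaultMaxTokens
}
```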
In `@src/utils/model/modelOptions.ts`:
- Around line 77-85: The helper getProviderEnvVar incorrectly falls back all
unknown providers to ANTHROPIC_*; change the logic so only the explicit
'anthropic' provider maps to process.env[`ANTHROPIC_${suffix}`] and any other
provider returns undefined. Locate getProviderEnvVar and replace the final
default return with an explicit if (provider === 'anthropic') return
process.env[`ANTHROPIC_${suffix}`]; else return undefined so unsupported
providers (e.g., grok, bedrock, vertex, foundry) do not pick up Anthropic env
vars.
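A sketch of the suggested mapping. Only the `anthropic` branch and the list of excluded providers come from the review; the non-Anthropic prefixes shown are assumptions about the existing branches:

```typescript
// Only 'anthropic' falls back to ANTHROPIC_* variables; unsupported providers
// return undefined instead of silently picking up Anthropic credentials.
function getProviderEnvVar(provider: string, suffix: string): string | undefined {
  if (provider === 'openai') return process.env[`OPENAI_${suffix}`]
  if (provider === 'gemini') return process.env[`GEMINI_${suffix}`]
  if (provider === 'ollama') return process.env[`OLLAMA_${suffix}`]
  if (provider === 'anthropic') return process.env[`ANTHROPIC_${suffix}`]
  return undefined // grok, bedrock, vertex, foundry: no Anthropic fallback
}
```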
---
Nitpick comments:
In
`@packages/`@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts:
- Around line 47-52: The test currently sets both process.env.OLLAMA_MODEL and
process.env.OLLAMA_DEFAULT_SONNET_MODEL so the assertion can pass regardless of
whether resolveOllamaModel consults the legacy OLLAMA_MODEL; update the test in
packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
to isolate the legacy override by removing the OLLAMA_DEFAULT_SONNET_MODEL
assignment (or add a new test) so only process.env.OLLAMA_MODEL is set and then
assert resolveOllamaModel('claude-sonnet-4-6') returns the hardcoded
'qwen3-coder', referencing the resolveOllamaModel function to ensure the legacy
global is never read.
In
`@packages/`@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts:
- Around line 72-75: The test accesses messageDelta.delta without ensuring
messageDelta exists, which can throw a TypeError; update the test in
streamAdapter.test.ts to assert the find result is defined before reading
properties (e.g., assert expect(messageDelta).toBeDefined() or use a guard like
if (!messageDelta) fail('expected message_delta event')), then continue to check
messageDelta.delta.stop_reason and the usage fields so failures produce clear
assertion errors; reference the messageDelta variable and the events.find(...)
call in the test for the change.
In `@packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts`:
- Around line 59-61: The test file globally mocks getAPIProvider to return
'ollama', which combined with beforeEach deleting OLLAMA_USE_NATIVE_WEB_FETCH
causes shouldUseOllamaWebFetch() to select the Ollama path for any
getURLMarkdownContent calls; update the beforeEach block to add a clear comment
explaining that we intentionally delete OLLAMA_USE_NATIVE_WEB_FETCH to force the
default provider behavior and that tests which must avoid the Ollama path should
explicitly set OLLAMA_USE_NATIVE_WEB_FETCH = 'false' (or otherwise opt out), and
add the same explanatory comment near the other beforeEach at lines 79-87 so
future readers understand why opt-out is required when calling
getURLMarkdownContent.
In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts`:
- Around line 388-399: shouldUseOllamaWebFetch currently defaults to enabling
native web fetch for the Ollama provider when OLLAMA_BASE_URL is unset by
assuming 'https://ollama.com/api'; add a concise inline comment inside the
shouldUseOllamaWebFetch function near the default baseURL logic explaining the
implicit contract: cloud Ollama (ollama.com) supports native web fetch while
local/offline Ollama instances do not, and mention that
OLLAMA_USE_NATIVE_WEB_FETCH env overrides this behavior; reference the function
name shouldUseOllamaWebFetch, the env vars OLLAMA_USE_NATIVE_WEB_FETCH and
OLLAMA_BASE_URL, and getAPIProvider() to help locate where to add the comment.
In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts`:
- Around line 29-35: isThirdPartyProvider() currently checks only raw env vars
and can miss cases where the user configured a third-party provider via
settings.modelType; update isThirdPartyProvider() to call getAPIProvider() and
return true when getAPIProvider() returns 'openai', 'gemini', or 'grok' (in
addition to or instead of the existing env-var checks), so the factory routing
matches the same provider resolution as getAPIProvider().
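The suggested routing can be sketched with a stand-in provider union; the real `getAPIProvider` and its return values are assumptions here:

```typescript
// Stand-in for the project's provider union; values are assumptions
type APIProvider = 'firstParty' | 'openai' | 'gemini' | 'grok' | 'ollama'

// Resolve third-party status through the same lookup the factory uses
function isThirdPartyProvider(getAPIProvider: () => APIProvider): boolean {
  const provider = getAPIProvider()
  return provider === 'openai' || provider === 'gemini' || provider === 'grok'
}
```

This keeps the check consistent with `settings.modelType`-based configuration, since that is already folded into the provider resolution.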
In `@src/commands/approval/__tests__/approvalModes.test.ts`:
- Around line 1-63: Add tests covering the HELP_ARGS branch and additional
formatApprovalMode outputs: update approvalModes.test.ts to assert
parseApprovalModeArg returns { type: 'help' } for each string in HELP_ARGS
(e.g., 'help', '-h', '--help'), and add formatApprovalMode assertions for other
modes (e.g., 'default' -> "Default (default)", 'acceptEdits' -> "Accept edits
(acceptEdits)", 'dontAsk' -> "Don't ask (dontAsk)", etc.). Reference the
parseApprovalModeArg function and HELP_ARGS set to locate the missing branch and
the formatApprovalMode function to add the extra expectations. Ensure tests
follow the existing bun:test structure and mirror the style used for existing
cases.
In `@src/commands/approval/approval.tsx`:
- Around line 91-94: The help text is hardcoded and can drift from the canonical
APPROVAL_MODE_DESCRIPTORS; replace the literal multi-line string returned in the
function (the object with key message in src/commands/approval/approval.tsx)
with a generated string built from APPROVAL_MODE_DESCRIPTORS (iterate its
entries to produce the usage line and a bullet list of mode names and
descriptions), ensuring the usage header includes the modes joined from the
descriptor keys and that each descriptor's description is used verbatim so
adding/renaming a mode in APPROVAL_MODE_DESCRIPTORS automatically updates this
help output.
In `@src/commands/approval/approvalModes.ts`:
- Around line 103-110: getApprovalModeDescriptor currently returns the first
descriptor as a silent fallback when a mode like the internal 'bubble' isn't
present in APPROVAL_MODE_DESCRIPTORS, causing showCurrentApprovalMode (in
approval.tsx) to display a misleading "Default" label; update
getApprovalModeDescriptor to detect missing modes and return a clear fallback
descriptor (e.g., { mode, label: "Internal/Unknown", description: "This mode is
internal and not displayed" }) or change its signature to return undefined so
the caller (showCurrentApprovalMode) can render an explicit "Internal/Unknown"
message; reference the function getApprovalModeDescriptor and the constant
APPROVAL_MODE_DESCRIPTORS when making the change.
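The explicit fallback suggested above can be sketched like this; the descriptor shape is assumed from the surrounding comments:

```typescript
// Assumed descriptor shape; the real type lives in approvalModes.ts
interface ApprovalModeDescriptor {
  mode: string
  label: string
  description: string
}

// Return an explicit fallback instead of silently using the first descriptor
function getApprovalModeDescriptor(
  mode: string,
  descriptors: readonly ApprovalModeDescriptor[],
): ApprovalModeDescriptor {
  return (
    descriptors.find(d => d.mode === mode) ?? {
      mode,
      label: 'Internal/Unknown',
      description: 'This mode is internal and not displayed',
    }
  )
}

const descriptors = [{ mode: 'default', label: 'Default', description: 'Ask before tools' }]
```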
In `@src/components/ConsoleOAuthFlow.tsx`:
- Line 6: The import in ConsoleOAuthFlow.tsx uses a relative path for
resetModelStringsForTestingOnly; update the import to use the repository alias
(src/*) instead of '../bootstrap/state.js' so it follows the project's
import-path convention — locate the import of resetModelStringsForTestingOnly in
ConsoleOAuthFlow.tsx and replace the relative path with the aliased path (e.g.,
import { resetModelStringsForTestingOnly } from 'src/bootstrap/state').
In `@src/services/api/ollama/__tests__/client.test.ts`:
- Around line 82-116: The tests for extractOllamaModelInfoContextLength,
extractOllamaNumCtxParameter and getOllamaContextLength are placed under the
wrong describe block (showOllamaModel), which causes misleading test names on
failure; move the block that contains the tests referencing
extractOllamaModelInfoContextLength, extractOllamaNumCtxParameter and
getOllamaContextLength out of the describe('showOllamaModel') suite into either
a new describe('context') suite or the appropriate existing describe for
context.ts so each test is named after the function it tests and the caching
test for getOllamaContextLength sits with other context-related tests.
In `@src/services/api/ollama/__tests__/think.test.ts`:
- Around line 2-6: The test file uses deep relative imports for symbols like
getEmptyToolPermissionContext, asSystemPrompt, Options, resolveOllamaThink, and
queryModelOllama; update the import paths to use the repo's src/* path aliases
instead of '../../../' style paths so the test follows project conventions and
is resilient to file moves (replace '../../../../Tool.js',
'../../../../utils/systemPromptType.js', '../../claude.js', and '../index.js'
imports with their corresponding src/... aliases).
In `@src/services/api/ollama/client.ts`:
- Around line 106-127: postOllamaJSON currently accepts an optional AbortSignal
but has no default timeout, so add a default timeout by creating an
AbortController inside postOllamaJSON when requestOptions?.signal is not
provided (or to combine/chain signals if one is provided), set a timer (e.g.,
30s) to call controller.abort(), pass controller.signal into fetchOllama, and
clear the timer after the fetch completes or errors; reference the
postOllamaJSON function and the call to fetchOllama to locate where to
create/attach the AbortController and timer.
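The timer-plus-signal wiring can be sketched as a generic wrapper; `fetchOllama`'s actual signature is an assumption, and the 30s default comes from the comment above:

```typescript
// Sketch: run an async operation with a default timeout, chaining any caller-supplied signal
async function withDefaultTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  external?: AbortSignal,
  timeoutMs = 30_000,
): Promise<T> {
  const controller = new AbortController()
  // Propagate an externally supplied abort into our controller
  if (external) {
    if (external.aborted) controller.abort()
    else external.addEventListener('abort', () => controller.abort(), { once: true })
  }
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    return await run(controller.signal)
  } finally {
    clearTimeout(timer) // always clear so the process can exit promptly
  }
}
```

postOllamaJSON could then wrap its fetch as `withDefaultTimeout(signal => fetchOllama(path, { signal }), requestOptions?.signal)` (call shape is hypothetical).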
In `@src/utils/auth.ts`:
- Around line 115-125: The code uses (settings as any).modelType to check
modelType which violates the no-as-any rule; instead stop masking the typed
return by removing the "|| {}" fallback and use optional chaining on the raw
getSettings_DEPRECATED() result — e.g., assign const settings =
getSettings_DEPRECATED(); then replace each (settings as any).modelType ===
'openai'|'gemini'|'grok'|'ollama' with settings?.modelType === 'openai' (and
similar) so you preserve types and eliminate the cast while keeping the other
env-based checks (CLAUDE_CODE_*, OPENAI_BASE_URL, GEMINI_BASE_URL) intact when
computing is3P.
In `@src/utils/context.ts`:
- Around line 8-9: Update the imports in src/utils/context.ts to use the project
path alias (src/...) instead of relative paths: replace the import of
getCachedOllamaContextLength from '../services/api/ollama/context.js' and
getAPIProvider from './model/providers.js' with their aliased equivalents (e.g.,
import { getCachedOllamaContextLength } from 'src/services/api/ollama/context'
and import { getAPIProvider } from 'src/utils/model/providers' or the correct
aliased module paths in your project); ensure the module specifiers match the
configured TS/webpack path mappings and adjust any file extensions to match
project conventions.
In `@src/utils/status.tsx`:
- Around line 291-310: Add an else-if branch in buildAPIProviderProperties to
mirror the other providers that surface their base URL: when apiProvider ===
'ollama' push a Property with a label like "Ollama base URL" and a value taken
from the configured Ollama endpoint (e.g., process.env.OLLAMA_BASE_URL or the
project’s config/getEnv helper) so the status pane shows the active Ollama
endpoint; follow the same pattern and null/undefined fallback used by the
existing gemini/grok/openai branches.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 45e50e50-e865-4795-9a2a-f1523ac344e7
📒 Files selected for processing (43)
- README.md
- docs/features/ollama-provider.md
- packages/@ant/model-provider/src/index.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/convertMessages.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/convertTools.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
- packages/@ant/model-provider/src/providers/ollama/convertMessages.ts
- packages/@ant/model-provider/src/providers/ollama/convertTools.ts
- packages/@ant/model-provider/src/providers/ollama/modelMapping.ts
- packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts
- packages/@ant/model-provider/src/providers/ollama/types.ts
- packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
- packages/builtin-tools/src/tools/WebFetchTool/utils.ts
- packages/builtin-tools/src/tools/WebSearchTool/__tests__/adapterFactory.test.ts
- packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts
- packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts
- src/commands.ts
- src/commands/approval/__tests__/approvalModes.test.ts
- src/commands/approval/approval.tsx
- src/commands/approval/approvalModes.ts
- src/commands/approval/index.ts
- src/commands/provider.ts
- src/components/ConsoleOAuthFlow.tsx
- src/services/api/claude.ts
- src/services/api/ollama/__tests__/client.test.ts
- src/services/api/ollama/__tests__/think.test.ts
- src/services/api/ollama/client.ts
- src/services/api/ollama/context.ts
- src/services/api/ollama/index.ts
- src/utils/auth.ts
- src/utils/context.ts
- src/utils/effort.ts
- src/utils/managedEnvConstants.ts
- src/utils/model/__tests__/ollamaModelOptions.test.ts
- src/utils/model/configs.ts
- src/utils/model/model.ts
- src/utils/model/modelOptions.ts
- src/utils/model/providers.ts
- src/utils/permissions/permissionSetup.ts
- src/utils/settings/types.ts
- src/utils/status.tsx
- src/utils/thinking.ts
Actionable comments posted: 2
🧹 Nitpick comments (3)
src/commands/approval/approvalModes.ts (2)
97-100: ⚡ Quick win: Derive "valid options" from descriptors instead of hard-coding

This message duplicates mode names and can drift from APPROVAL_MODE_DESCRIPTORS. Build it from the descriptor list so parser/help/UI stay in sync.

Suggested change

```diff
+const PRIMARY_APPROVAL_ALIASES = APPROVAL_MODE_DESCRIPTORS.map(descriptor => descriptor.aliases[0]).join(', ');
+
 export function parseApprovalModeArg(args: string): ApprovalModeArgResult {
   const normalized = normalizeApprovalArg(args)
   if (CURRENT_ARGS.has(normalized)) {
     return { type: 'current' }
   }
@@
   return {
     type: 'invalid',
-    message: `Invalid approval mode: ${args}. Valid options are: default, accept-edits, plan, auto, dont-ask, full-access`,
+    message: `Invalid approval mode: ${args}. Valid options are: ${PRIMARY_APPROVAL_ALIASES}`,
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/approval/approvalModes.ts` around lines 97 - 100, The error message hard-codes valid mode names and can drift; instead derive the list from APPROVAL_MODE_DESCRIPTORS. In the function that returns the invalid result (the return with type: 'invalid' and message: `Invalid approval mode...`), build the valid options string by mapping APPROVAL_MODE_DESCRIPTORS to their keys/names (e.g., Object.keys or .map over the descriptors) and join them into the message so the message uses the live descriptor list rather than a hard-coded string.
1-1: ⚡ Quick win: Use the `src/*` alias for this TypeScript import

Please replace the relative import with the project alias to match repo conventions and avoid fragile path traversal.

Suggested change

```diff
-import type { PermissionMode } from '../../utils/permissions/PermissionMode.js'
+import type { PermissionMode } from 'src/utils/permissions/PermissionMode.js';
```

As per coding guidelines, "Use `src/*` path alias in imports; do not use relative paths like `../../../`".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/approval/approvalModes.ts` at line 1, Replace the relative import of PermissionMode in approvalModes.ts with the project path alias: change the import of PermissionMode from '../../utils/permissions/PermissionMode.js' to use the 'src/*' alias (e.g., 'src/utils/permissions/PermissionMode.js') so the file imports PermissionMode via the src path alias instead of a relative traversal.src/services/api/ollama/__tests__/think.test.ts (1)
2-4: ⚡ Quick win: Use `src/*` aliases instead of deep relative imports.

Lines 2-4 currently use `../../../../...` paths; switch these to `src/*` aliases to keep imports stable and consistent with repo conventions.

Proposed change

```diff
-import { getEmptyToolPermissionContext } from '../../../../Tool.js'
-import { asSystemPrompt } from '../../../../utils/systemPromptType.js'
-import type { Options } from '../../claude.js'
+import { getEmptyToolPermissionContext } from 'src/Tool.js'
+import { asSystemPrompt } from 'src/utils/systemPromptType.js'
+import type { Options } from 'src/services/api/claude.js'
```

As per coding guidelines, "Use `src/*` path alias in imports; do not use relative paths like `../../../`."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/ollama/__tests__/think.test.ts` around lines 2 - 4, The test file uses deep relative imports for getEmptyToolPermissionContext, asSystemPrompt and Options; update the import statements in src/services/api/ollama/__tests__/think.test.ts to use the repo path aliases (replace '../../../../Tool.js' and '../../../../utils/systemPromptType.js' and '../../claude.js' with their equivalent 'src/...' aliased paths), ensuring you import getEmptyToolPermissionContext, asSystemPrompt and the Options type via the src/* aliases to match project conventions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/`@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts:
- Around line 14-27: The test setup currently deletes envKeys only once at
module load and restores them in afterEach, which allows host environment
variables to leak into later tests; add a beforeEach that clears those keys from
process.env (for each key in envKeys delete process.env[key]) so every test
starts with a clean environment, keep savedEnv and the existing afterEach
restore logic intact; reference envKeys, savedEnv, beforeEach and afterEach to
locate the change.
In `@src/utils/status.tsx`:
- Around line 400-405: The status display shows raw OLLAMA_BASE_URL which can
differ from the actual endpoint because createOllamaClient auto-appends "/api";
update the apiProvider === 'ollama' branch to derive the effective Ollama URL
the same way as createOllamaClient (e.g., take process.env.OLLAMA_BASE_URL or
default, and if it doesn't end with "/api" append "/api") before pushing
properties so the label/value for "Ollama base URL" matches the runtime
endpoint; reference the ollamaBaseUrl variable and createOllamaClient behavior
when making the change.
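The normalization that comment asks for can be sketched like this; the localhost default is an assumption for illustration, since the real default lives in `createOllamaClient`:

```typescript
// Mirror the client's reported behavior: auto-append "/api" when missing
function effectiveOllamaBaseUrl(raw?: string): string {
  // Default value is an assumption; use the client's actual default in real code
  const base = (raw ?? 'http://localhost:11434').replace(/\/+$/, '')
  return base.endsWith('/api') ? base : `${base}/api`
}
```

The status pane would then display `effectiveOllamaBaseUrl(process.env.OLLAMA_BASE_URL)` so the label matches the runtime endpoint.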
---
Nitpick comments:
In `@src/commands/approval/approvalModes.ts`:
- Around line 97-100: The error message hard-codes valid mode names and can
drift; instead derive the list from APPROVAL_MODE_DESCRIPTORS. In the function
that returns the invalid result (the return with type: 'invalid' and message:
`Invalid approval mode...`), build the valid options string by mapping
APPROVAL_MODE_DESCRIPTORS to their keys/names (e.g., Object.keys or .map over
the descriptors) and join them into the message so the message uses the live
descriptor list rather than a hard-coded string.
- Line 1: Replace the relative import of PermissionMode in approvalModes.ts with
the project path alias: change the import of PermissionMode from
'../../utils/permissions/PermissionMode.js' to use the 'src/*' alias (e.g.,
'src/utils/permissions/PermissionMode.js') so the file imports PermissionMode
via the src path alias instead of a relative traversal.
In `@src/services/api/ollama/__tests__/think.test.ts`:
- Around line 2-4: The test file uses deep relative imports for
getEmptyToolPermissionContext, asSystemPrompt and Options; update the import
statements in src/services/api/ollama/__tests__/think.test.ts to use the repo
path aliases (replace '../../../../Tool.js' and
'../../../../utils/systemPromptType.js' and '../../claude.js' with their
equivalent 'src/...' aliased paths), ensuring you import
getEmptyToolPermissionContext, asSystemPrompt and the Options type via the src/*
aliases to match project conventions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 178c29c0-728d-4fc9-9ee8-62add9aa7cdf
📒 Files selected for processing (45)
- README.md
- docs/features/ollama-provider.md
- packages/@ant/model-provider/src/index.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/convertMessages.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/convertTools.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
- packages/@ant/model-provider/src/providers/ollama/convertMessages.ts
- packages/@ant/model-provider/src/providers/ollama/convertTools.ts
- packages/@ant/model-provider/src/providers/ollama/modelMapping.ts
- packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts
- packages/@ant/model-provider/src/providers/ollama/types.ts
- packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
- packages/builtin-tools/src/tools/WebFetchTool/utils.ts
- packages/builtin-tools/src/tools/WebSearchTool/__tests__/adapterFactory.test.ts
- packages/builtin-tools/src/tools/WebSearchTool/__tests__/ollamaAdapter.test.ts
- packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts
- packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts
- src/commands.ts
- src/commands/approval/__tests__/approvalModes.test.ts
- src/commands/approval/approval.tsx
- src/commands/approval/approvalModes.ts
- src/commands/approval/index.ts
- src/commands/provider.ts
- src/components/ConsoleOAuthFlow.tsx
- src/services/api/claude.ts
- src/services/api/ollama/__tests__/client.test.ts
- src/services/api/ollama/__tests__/think.test.ts
- src/services/api/ollama/client.ts
- src/services/api/ollama/context.ts
- src/services/api/ollama/index.ts
- src/utils/auth.ts
- src/utils/context.ts
- src/utils/effort.ts
- src/utils/managedEnvConstants.ts
- src/utils/model/__tests__/ollamaModelOptions.test.ts
- src/utils/model/__tests__/providers.test.ts
- src/utils/model/configs.ts
- src/utils/model/model.ts
- src/utils/model/modelOptions.ts
- src/utils/model/providers.ts
- src/utils/permissions/permissionSetup.ts
- src/utils/settings/types.ts
- src/utils/status.tsx
- src/utils/thinking.ts
✅ Files skipped from review due to trivial changes (8)
- src/commands/approval/index.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/convertMessages.test.ts
- src/utils/model/providers.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/convertTools.test.ts
- src/commands/approval/__tests__/approvalModes.test.ts
- packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts
- packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
- src/services/api/ollama/index.ts
🚧 Files skipped from review as they are similar to previous changes (28)
- src/utils/permissions/permissionSetup.ts
- packages/builtin-tools/src/tools/WebSearchTool/__tests__/adapterFactory.test.ts
- src/commands.ts
- packages/@ant/model-provider/src/providers/ollama/modelMapping.ts
- src/utils/model/__tests__/ollamaModelOptions.test.ts
- src/utils/context.ts
- src/utils/model/model.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
- README.md
- src/services/api/ollama/context.ts
- packages/@ant/model-provider/src/providers/ollama/types.ts
- packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts
- src/components/ConsoleOAuthFlow.tsx
- src/utils/model/configs.ts
- docs/features/ollama-provider.md
- src/services/api/ollama/__tests__/client.test.ts
- packages/@ant/model-provider/src/providers/ollama/convertTools.ts
- src/services/api/claude.ts
- packages/builtin-tools/src/tools/WebFetchTool/utils.ts
- src/commands/provider.ts
- packages/@ant/model-provider/src/index.ts
- src/utils/auth.ts
- src/commands/approval/approval.tsx
- src/utils/model/modelOptions.ts
- src/utils/effort.ts
- src/utils/thinking.ts
- packages/@ant/model-provider/src/providers/ollama/convertMessages.ts
- src/services/api/ollama/client.ts
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/utils/thinking.ts (1)
117-155: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`modelSupportsAdaptiveThinking`: Ollama check placed after legacy-model exclusion block, silently disabling adaptive thinking for Ollama Haiku/Sonnet/Opus tiers.

The legacy exclusion block (lines 132–138) returns `false` for any model whose canonical name contains `"opus"`, `"sonnet"`, or `"haiku"` — with no provider guard. The Ollama `return true` at lines 151–153 is only reachable when those strings are absent. Since the PR explicitly introduces a Haiku/Sonnet/Opus three-tier model mapping for Ollama (via `/login` config), those canonical names are expected to match the exclusion block, and adaptive thinking will be silently disabled for them.

Compare `modelSupportsThinking` (lines 91–114): the Ollama early-return is placed before any canonical-based logic, which is the correct pattern.

🐛 Proposed fix — move `provider` retrieval and Ollama guard before the legacy exclusion block

```diff
 export function modelSupportsAdaptiveThinking(model: string): boolean {
   const supported3P = get3PModelCapabilityOverride(model, 'adaptive_thinking')
   if (supported3P !== undefined) {
     return supported3P
   }
+  const provider = getAPIProvider()
+  if (provider === 'ollama') {
+    return true
+  }
   const canonical = getCanonicalName(model)
   // Supported by a subset of Claude 4 models
   if (
     canonical.includes('opus-4-7') ||
     canonical.includes('opus-4-6') ||
     canonical.includes('sonnet-4-6')
   ) {
     return true
   }
   // Exclude any other known legacy models (allowlist above catches 4-6+ variants first)
   if (
     canonical.includes('opus') ||
     canonical.includes('sonnet') ||
     canonical.includes('haiku')
   ) {
     return false
   }
   // ...
-  const provider = getAPIProvider()
-  if (provider === 'ollama') {
-    return true
-  }
   return provider === 'firstParty' || provider === 'foundry'
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/thinking.ts` around lines 117 - 155, The function modelSupportsAdaptiveThinking currently checks canonical names for "opus"/"sonnet"/"haiku" before considering the API provider, which causes Ollama models to be incorrectly excluded; move the provider lookup (call to getAPIProvider()) and the Ollama early-return (return true for provider === 'ollama') to before the canonical-based legacy exclusion block so Ollama model strings are allowed to opt-in via the three-tier mapping, and ensure subsequent logic (getCanonicalName and the legacy excludes) runs only for non-Ollama providers.
🧹 Nitpick comments (5)
packages/@ant/model-provider/src/providers/ollama/modelMapping.ts (1)
34-40: ⚡ Quick win: No generic `OLLAMA_DEFAULT_MODEL` escape hatch before the hardcoded `qwen3-coder` fallback

`qwen3-coder` exists in the official Ollama registry, but the full model requires a minimum of 250 GB of memory or unified memory — most users won't have it pre-installed. When no family-specific env var is configured (or when `getModelFamily` returns `null` for an unrecognized Claude model family), there is no way to set a single generic override without configuring all three family env vars separately.

Adding `OLLAMA_DEFAULT_MODEL` as a lower-priority tier before the hardcoded name would let users configure one variable for all cases:

♻️ Proposed fix to add a generic fallback override

```diff
 // 2. Per-family env var (OLLAMA_DEFAULT_OPUS_MODEL, etc.)
 if (family) {
   const ollamaEnvVar = `OLLAMA_DEFAULT_${family.toUpperCase()}_MODEL`
   const ollamaOverride = process.env[ollamaEnvVar]
   if (ollamaOverride) return ollamaOverride
 }
-return 'qwen3-coder'
+// 3. Generic override for any remaining Claude ID (e.g. unrecognised family)
+const genericOverride = process.env['OLLAMA_DEFAULT_MODEL']
+if (genericOverride) return genericOverride
+
+return 'qwen3-coder'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/`@ant/model-provider/src/providers/ollama/modelMapping.ts around lines 34 - 40, The current mapping returns the hardcoded 'qwen3-coder' when no family-specific env var is set; add a lower-priority generic override by checking process.env.OLLAMA_DEFAULT_MODEL before falling back to 'qwen3-coder'. Update the logic in the same scope where ollamaEnvVar/ollamaOverride are used (in modelMapping.ts, around the family handling and return) to first return process.env.OLLAMA_DEFAULT_MODEL if defined, otherwise return the hardcoded 'qwen3-coder'.packages/@ant/model-provider/src/providers/ollama/types.ts (2)
34-37: ⚡ Quick win
`num_ctx` is missing from `options` despite the PR's stated dynamic context-length feature.

The PR description states that the provider reads the model's context window via `/api/show` and adjusts the output cap accordingly, but `num_ctx` — one of the valid Ollama `options` keys used to set the context window size — is absent. Any implementation code that sets it will need to bypass the type, risking an `as any` assertion.

♻️ Proposed fix

```diff
 options?: {
   temperature?: number
   num_predict?: number
+  num_ctx?: number
 }
```
Verify each finding against the current code and only fix it if needed. In `@packages/`@ant/model-provider/src/providers/ollama/types.ts around lines 34 - 37, The options type is missing the Ollama key for context window size; update the options declaration (the options object where temperature and num_predict are defined) to include num_ctx?: number so callers can type-safely set the model context length without using any assertions; ensure the new field is optional and typed as number to match Ollama's API.
1-8: ⚡ Quick win: Consider replacing `OllamaMessage` with a role-discriminated union.

The current flat interface allows invalid combinations at the type level (e.g., `role: 'system'` with `tool_calls`, or `role: 'assistant'` with `tool_name`). A discriminated union enforces correct field presence per role at compile time, which is valuable for all callers — especially the stream adapter and any future API clients.

♻️ Proposed discriminated-union refactor

```diff
-export interface OllamaMessage {
-  role: 'system' | 'user' | 'assistant' | 'tool'
-  content?: string
-  thinking?: string
-  tool_name?: string
-  images?: string[]
-  tool_calls?: OllamaToolCall[]
-}
+export type OllamaMessage =
+  | { role: 'system'; content: string }
+  | { role: 'user'; content?: string; images?: string[] }
+  | {
+      role: 'assistant'
+      content?: string
+      thinking?: string
+      tool_calls?: OllamaToolCall[]
+    }
+  | { role: 'tool'; tool_name?: string; content?: string }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/`@ant/model-provider/src/providers/ollama/types.ts around lines 1 - 8, OllamaMessage is currently a flat interface that allows invalid role/field combos; replace it with a role-discriminated union (e.g., type OllamaMessage = SystemMessage | UserMessage | AssistantMessage | ToolMessage) where each variant explicitly declares only the valid fields (for example SystemMessage: { role: 'system'; content?: string }, UserMessage: { role: 'user'; content?: string }, AssistantMessage: { role: 'assistant'; content?: string; thinking?: string }, ToolMessage: { role: 'tool'; tool_name: string; tool_calls?: OllamaToolCall[]; images?: string[] }), export the union as OllamaMessage, and update all consumers (notably the stream adapter and any serializers/parsers) to use the new discriminated types so TypeScript enforces correct field presence by role.packages/builtin-tools/src/tools/WebFetchTool/utils.ts (2)
417-420: ⚡ Quick win: Type assertion on `response.json()` bypasses structural validation.

`Response.json()` resolves to `any`; asserting `any` directly to `OllamaWebFetchResponse` is functionally equivalent to an `as any` cast — it provides no runtime or compile-time guarantee about the payload shape. Per the coding guidelines, objects with unknown structure should be accessed through `Record<string, unknown>` with explicit type guards.

♻️ Proposed refactor

```diff
-const payload = (await response.json()) as OllamaWebFetchResponse
-const title = payload.title?.trim()
-const content = payload.content?.trim() ?? ''
-const links = Array.isArray(payload.links) ? payload.links : []
+const raw = (await response.json()) as Record<string, unknown>
+const title = typeof raw.title === 'string' ? raw.title.trim() : undefined
+const content = typeof raw.content === 'string' ? raw.content.trim() : ''
+const links = Array.isArray(raw.links)
+  ? raw.links.filter((l): l is string => typeof l === 'string')
+  : []
```

As per coding guidelines: "Use `Record<string, unknown>` instead of `any` for objects with unknown structure" and "Use type guards (type narrowing) with union types instead of forced type casting."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts` around lines 417 - 420, The code currently casts response.json() directly to OllamaWebFetchResponse (payload), bypassing structural validation; change to first parse JSON into an unknown/Record<string, unknown> and add a type guard function (e.g., isOllamaWebFetchResponse(obj): obj is OllamaWebFetchResponse) that checks payload.title is string|undefined, payload.content is string|undefined, and payload.links is an array if present. Use that guard before extracting title/content/links (trim only after verifying string), and fall back to safe defaults when the guard fails to avoid unsafe as-casts in the payload/title/content/links handling.
401-438: 💤 Low value
`getURLMarkdownContentFromOllama` does not populate `URL_CACHE`. Every call to this function goes over the network, even for repeated identical URLs. The standard path deduplicates with a 15-minute TTL LRU cache. Whether caching is desirable here depends on whether freshness is critical for the Ollama use-case, but the discrepancy is worth an explicit comment either way.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts` around lines 401 - 438, The function getURLMarkdownContentFromOllama currently never writes to URL_CACHE, causing repeated network calls for the same URL; either populate URL_CACHE after a successful fetch or add an explicit comment explaining why freshness prevents caching. Implement the fix by, inside getURLMarkdownContentFromOllama after building the return object (and before returning), inserting a URL_CACHE.set(url, { bytes, code, codeText, content, contentType }) with the same LRU/TTL semantics used elsewhere (15-minute TTL) so callers using the shared cache are deduplicated; if opting not to cache, add a clear comment in the function explaining why Ollama responses must always bypass URL_CACHE and reference the URL_CACHE symbol in that comment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/`@ant/model-provider/src/providers/ollama/modelMapping.ts:
- Around line 26-29: The current clean-up only removes a trailing "[1m]" and can
leave ANSI artifacts that break isClaudeModelId; update the sanitization around
selectedModel (the cleanModel assignment) to remove ANSI escape sequences more
robustly (strip any leading/trailing or embedded ANSI CSI sequences) before
calling isClaudeModelId so model-family detection works reliably; modify the
cleanModel logic used by isClaudeModelId and any callers in modelMapping.ts to
use the broader strip, or add a small helper (e.g., sanitizeModelName) and use
it wherever selectedModel is normalized.
In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts`:
- Around line 448-450: The Ollama branch currently returns early from
shouldUseOllamaWebFetch() to call getURLMarkdownContentFromOllama(), which
bypasses the checkDomainBlocklist() preflight when skipWebFetchPreflight is
false; move or add the same blocklist guard used in the non-Ollama path so that
before calling getURLMarkdownContentFromOllama() you conditionally call
checkDomainBlocklist(url) (respecting skipWebFetchPreflight) and
abort/throw/return the same way the other branch does if the domain is blocked.
In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts`:
- Around line 41-52: The env override check for adapterKey currently ignores
WEB_SEARCH_ADAPTER=ollama because the whitelist only includes
'api','bing','brave','exa'; update the conditional that sets adapterKey (the
envAdapter === ... branch) to also accept 'ollama' so an explicit envAdapter ===
'ollama' returns 'ollama' instead of falling through to getAPIProvider(); keep
the existing fallback to getAPIProvider() and the ternary structure intact.
In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts`:
- Around line 25-30: The adapter uses Math.min(Math.max(options.numResults ?? 5,
1), 10) which defaults to 5, but SearchOptions documents a default of 8; update
the fallback to use 8 so callers who omit numResults get the expected default.
Modify the expression that computes maxResults (referenced in ollamaAdapter.ts
as maxResults and options.numResults) to use ?? 8 while keeping the same min/max
bounds before calling client.webSearch.
- Around line 45-47: The code force-casts the fetch body to
OllamaWebSearchResponse and iterates payload.results without runtime validation;
replace that with a type guard: check Array.isArray(payload.results) before the
for...of loop so you only iterate when payload.results is an array, keeping the
existing per-item guards inside the loop (referencing payload.results,
OllamaWebSearchResponse, and SearchResult) and leaving results: SearchResult[]
initialization unchanged.
In `@src/commands/approval/approval.tsx`:
- Around line 117-125: The effect in ApplyApprovalAndClose (the React.useEffect
that calls setAppState and onDone) can call onDone twice due to message
changing; add a run-once ref guard: create a ref (e.g., doneCalledRef =
useRef(false)) in the component, check if doneCalledRef.current is false before
invoking onDone(message), set doneCalledRef.current = true immediately after
calling onDone, and keep the existing setAppState/applyModeUpdate logic intact;
reference applyModeUpdate, executeApproval, and ApprovalCommandWithArgs to
locate the surrounding code.
---
Outside diff comments:
In `@src/utils/thinking.ts`:
- Around line 117-155: The function modelSupportsAdaptiveThinking currently
checks canonical names for "opus"/"sonnet"/"haiku" before considering the API
provider, which causes Ollama models to be incorrectly excluded; move the
provider lookup (call to getAPIProvider()) and the Ollama early-return (return
true for provider === 'ollama') to before the canonical-based legacy exclusion
block so Ollama model strings are allowed to opt-in via the three-tier mapping,
and ensure subsequent logic (getCanonicalName and the legacy excludes) runs only
for non-Ollama providers.
---
Nitpick comments:
In `@packages/@ant/model-provider/src/providers/ollama/modelMapping.ts`:
- Around line 34-40: The current mapping returns the hardcoded 'qwen3-coder'
when no family-specific env var is set; add a lower-priority generic override by
checking process.env.OLLAMA_DEFAULT_MODEL before falling back to 'qwen3-coder'.
Update the logic in the same scope where ollamaEnvVar/ollamaOverride are used
(in modelMapping.ts, around the family handling and return) to first return
process.env.OLLAMA_DEFAULT_MODEL if defined, otherwise return the hardcoded
'qwen3-coder'.
In `@packages/@ant/model-provider/src/providers/ollama/types.ts`:
- Around line 34-37: The options type is missing the Ollama key for context
window size; update the options declaration (the options object where
temperature and num_predict are defined) to include num_ctx?: number so callers
can type-safely set the model context length without using any assertions;
ensure the new field is optional and typed as number to match Ollama's API.
- Around line 1-8: OllamaMessage is currently a flat interface that allows
invalid role/field combos; replace it with a role-discriminated union (e.g.,
type OllamaMessage = SystemMessage | UserMessage | AssistantMessage |
ToolMessage) where each variant explicitly declares only the valid fields (for
example SystemMessage: { role: 'system'; content?: string }, UserMessage: {
role: 'user'; content?: string }, AssistantMessage: { role: 'assistant';
content?: string; thinking?: string }, ToolMessage: { role: 'tool'; tool_name:
string; tool_calls?: OllamaToolCall[]; images?: string[] }), export the union as
OllamaMessage, and update all consumers (notably the stream adapter and any
serializers/parsers) to use the new discriminated types so TypeScript enforces
correct field presence by role.
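As an illustrative sketch of that suggestion (the field names below are assumptions, not the PR's actual `OllamaMessage` definition), a role-discriminated union lets the compiler reject invalid role/field combinations:

```typescript
// Sketch only: variant fields are assumptions, not the PR's actual types.
interface OllamaToolCall {
  function: { name: string; arguments: Record<string, unknown> }
}

type OllamaMessage =
  | { role: 'system'; content: string }
  | { role: 'user'; content: string; images?: string[] }
  | { role: 'assistant'; content?: string; thinking?: string; tool_calls?: OllamaToolCall[] }
  | { role: 'tool'; tool_name: string; content: string }

// Narrowing on `role` exposes only the fields valid for that variant.
function describeMessage(msg: OllamaMessage): string {
  switch (msg.role) {
    case 'assistant':
      return msg.tool_calls?.length ? 'assistant:tool_use' : 'assistant:text'
    case 'tool':
      return `tool:${msg.tool_name}` // tool_name exists only on this variant
    default:
      return msg.role
  }
}
```

With this shape, a `'user'` message carrying `tool_calls` becomes a compile-time error instead of a silently accepted payload.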
In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts`:
- Around line 417-420: The code currently casts response.json() directly to
OllamaWebFetchResponse (payload), bypassing structural validation; change to
first parse JSON into an unknown/Record<string, unknown> and add a type guard
function (e.g., isOllamaWebFetchResponse(obj): obj is OllamaWebFetchResponse)
that checks payload.title is string|undefined, payload.content is
string|undefined, and payload.links is an array if present. Use that guard
before extracting title/content/links (trim only after verifying string), and
fall back to safe defaults when the guard fails to avoid unsafe as-casts in the
payload/title/content/links handling.
- Around line 401-438: The function getURLMarkdownContentFromOllama currently
never writes to URL_CACHE, causing repeated network calls for the same URL;
either populate URL_CACHE after a successful fetch or add an explicit comment
explaining why freshness prevents caching. Implement the fix by, inside
getURLMarkdownContentFromOllama after building the return object (and before
returning), inserting a URL_CACHE.set(url, { bytes, code, codeText, content,
contentType }) with the same LRU/TTL semantics used elsewhere (15-minute TTL) so
callers using the shared cache are deduplicated; if opting not to cache, add a
clear comment in the function explaining why Ollama responses must always bypass
URL_CACHE and reference the URL_CACHE symbol in that comment.
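For the caching option, a minimal Map-based sketch with a 15-minute TTL might look like the following. `CacheEntry` here is a simplified stand-in; the real `URL_CACHE` value carries bytes, status code, and content type as well:

```typescript
// Sketch only: CacheEntry is a stand-in for URL_CACHE's richer value shape.
interface CacheEntry {
  content: string
  storedAt: number
}

const TTL_MS = 15 * 60 * 1000 // 15-minute TTL, matching the shared cache semantics
const urlCache = new Map<string, CacheEntry>()

function cacheSet(url: string, content: string, now: number = Date.now()): void {
  urlCache.set(url, { content, storedAt: now })
}

function cacheGet(url: string, now: number = Date.now()): string | undefined {
  const entry = urlCache.get(url)
  if (!entry) return undefined
  if (now - entry.storedAt > TTL_MS) {
    urlCache.delete(url) // expire lazily on read
    return undefined
  }
  return entry.content
}
```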
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6243cfe1-ea88-41ab-9c7c-a2450f9025cf
📒 Files selected for processing (45)
- README.md
- docs/features/ollama-provider.md
- packages/@ant/model-provider/src/index.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/convertMessages.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/convertTools.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
- packages/@ant/model-provider/src/providers/ollama/convertMessages.ts
- packages/@ant/model-provider/src/providers/ollama/convertTools.ts
- packages/@ant/model-provider/src/providers/ollama/modelMapping.ts
- packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts
- packages/@ant/model-provider/src/providers/ollama/types.ts
- packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
- packages/builtin-tools/src/tools/WebFetchTool/utils.ts
- packages/builtin-tools/src/tools/WebSearchTool/__tests__/adapterFactory.test.ts
- packages/builtin-tools/src/tools/WebSearchTool/__tests__/ollamaAdapter.test.ts
- packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts
- packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts
- src/commands.ts
- src/commands/approval/__tests__/approvalModes.test.ts
- src/commands/approval/approval.tsx
- src/commands/approval/approvalModes.ts
- src/commands/approval/index.ts
- src/commands/provider.ts
- src/components/ConsoleOAuthFlow.tsx
- src/services/api/claude.ts
- src/services/api/ollama/__tests__/client.test.ts
- src/services/api/ollama/__tests__/think.test.ts
- src/services/api/ollama/client.ts
- src/services/api/ollama/context.ts
- src/services/api/ollama/index.ts
- src/utils/auth.ts
- src/utils/context.ts
- src/utils/effort.ts
- src/utils/managedEnvConstants.ts
- src/utils/model/__tests__/ollamaModelOptions.test.ts
- src/utils/model/__tests__/providers.test.ts
- src/utils/model/configs.ts
- src/utils/model/model.ts
- src/utils/model/modelOptions.ts
- src/utils/model/providers.ts
- src/utils/permissions/permissionSetup.ts
- src/utils/settings/types.ts
- src/utils/status.tsx
- src/utils/thinking.ts
✅ Files skipped from review due to trivial changes (16)
- packages/builtin-tools/src/tools/WebSearchTool/__tests__/ollamaAdapter.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/convertMessages.test.ts
- src/services/api/ollama/context.ts
- packages/builtin-tools/src/tools/WebSearchTool/__tests__/adapterFactory.test.ts
- src/utils/effort.ts
- src/utils/settings/types.ts
- src/utils/permissions/permissionSetup.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/convertTools.test.ts
- docs/features/ollama-provider.md
- src/services/api/ollama/__tests__/think.test.ts
- src/utils/model/model.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
- src/commands/approval/approvalModes.ts
- packages/@ant/model-provider/src/index.ts
- src/commands.ts
- src/utils/model/__tests__/providers.test.ts
🚧 Files skipped from review as they are similar to previous changes (18)
- README.md
- src/commands/approval/__tests__/approvalModes.test.ts
- src/commands/approval/index.ts
- src/utils/auth.ts
- src/utils/model/configs.ts
- src/utils/model/__tests__/ollamaModelOptions.test.ts
- src/utils/context.ts
- packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
- src/services/api/ollama/__tests__/client.test.ts
- packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
- src/commands/provider.ts
- src/utils/status.tsx
- packages/@ant/model-provider/src/providers/ollama/convertTools.ts
- src/services/api/ollama/client.ts
- packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts
- src/components/ConsoleOAuthFlow.tsx
- src/services/api/ollama/index.ts
- src/utils/model/providers.ts
- src/utils/model/providers.ts
```ts
const cleanModel = selectedModel.replace(/\[1m\]$/, '')
if (!isClaudeModelId(cleanModel)) {
  return cleanModel
}
```
Incomplete ANSI artifact stripping may silently bypass Claude-ID detection
The regex /\[1m\]$/ only removes a trailing [1m] suffix. If the terminal UI also prepends the same escape code (e.g. the model name arrives as [1mclaude-3-5-sonnet-20241022), isClaudeModelId will fail to match: \bclaude requires a word boundary, but m and c are both word characters so the boundary does not exist, causing the raw artifact-laden string to be forwarded verbatim to Ollama — resulting in a silent model-not-found error rather than the expected family-mapped fallback.
Consider stripping leading/trailing ANSI codes more broadly, or asserting via test that selectedModel is always clean before reaching this function:
🛡️ Proposed fix for more robust ANSI stripping
```diff
- const cleanModel = selectedModel.replace(/\[1m\]$/, '')
+ // Strip common ANSI SGR escape sequences (e.g. \x1b[1m bold, \x1b[0m reset)
+ // that may appear when the model name is taken from a terminal UI display string.
+ const cleanModel = selectedModel.replace(/(\x1b\[[\d;]*m|\[[\d;]*m)/g, '').trim()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/@ant/model-provider/src/providers/ollama/modelMapping.ts` around
lines 26 - 29, The current clean-up only removes a trailing "[1m]" and can leave
ANSI artifacts that break isClaudeModelId; update the sanitization around
selectedModel (the cleanModel assignment) to remove ANSI escape sequences more
robustly (strip any leading/trailing or embedded ANSI CSI sequences) before
calling isClaudeModelId so model-family detection works reliably; modify the
cleanModel logic used by isClaudeModelId and any callers in modelMapping.ts to
use the broader strip, or add a small helper (e.g., sanitizeModelName) and use
it wherever selectedModel is normalized.
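As a sketch of the broader stripping this prompt asks for, such a helper could look like the following (`sanitizeModelName` is an illustrative name, not code from this PR):

```typescript
// Sketch only: sanitizeModelName is an illustrative helper, not PR code.
function sanitizeModelName(raw: string): string {
  return raw
    .replace(/\x1b\[[0-9;]*m/g, '') // real ESC-prefixed SGR sequences
    .replace(/\[[0-9;]*m\]?/g, '') // degraded artifacts such as "[1m]" or a bare "[1m"
    .trim()
}
```

This removes both genuine escape sequences and the bracketed residue left after the ESC byte is lost, so `isClaudeModelId` sees a clean model name.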
```ts
if (shouldUseOllamaWebFetch()) {
  return getURLMarkdownContentFromOllama(url, abortController)
}
```
Ollama path silently bypasses the domain blocklist preflight check.
The early return at line 449 exits before the checkDomainBlocklist call at line 485. Because skipWebFetchPreflight === false is the effective default (meaning "do check"), most users would have the blocklist enforcement active on the non-Ollama path, but switching the provider to Ollama silently removes it. A domain Anthropic has flagged as blocked becomes reachable via the Ollama native endpoint.
The fix is to run the same blocklist gate (conditioned on skipWebFetchPreflight) before dispatching to getURLMarkdownContentFromOllama, mirroring the existing pattern:
🛡️ Proposed fix
```diff
+ if (shouldUseOllamaWebFetch()) {
+   // Domain blocklist check still applies even for the Ollama native path
+   if (getSettings_DEPRECATED().skipWebFetchPreflight === false) {
+     try {
+       const hostname = new URL(url).hostname
+       const checkResult = await checkDomainBlocklist(hostname)
+       switch (checkResult.status) {
+         case 'blocked':
+           throw new DomainBlockedError(hostname)
+         case 'check_failed':
+           throw new DomainCheckFailedError(hostname)
+       }
+     } catch (e) {
+       if (e instanceof DomainBlockedError || e instanceof DomainCheckFailedError) throw e
+       logError(e)
+     }
+   }
- if (shouldUseOllamaWebFetch()) {
    return getURLMarkdownContentFromOllama(url, abortController)
  }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

```ts
if (shouldUseOllamaWebFetch()) {
  return getURLMarkdownContentFromOllama(url, abortController)
}
```

Suggested:

```ts
if (shouldUseOllamaWebFetch()) {
  // Domain blocklist check still applies even for the Ollama native path
  if (getSettings_DEPRECATED().skipWebFetchPreflight === false) {
    try {
      const hostname = new URL(url).hostname
      const checkResult = await checkDomainBlocklist(hostname)
      switch (checkResult.status) {
        case 'blocked':
          throw new DomainBlockedError(hostname)
        case 'check_failed':
          throw new DomainCheckFailedError(hostname)
      }
    } catch (e) {
      if (e instanceof DomainBlockedError || e instanceof DomainCheckFailedError) throw e
      logError(e)
    }
  }
  return getURLMarkdownContentFromOllama(url, abortController)
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts` around lines 448 -
450, The Ollama branch currently returns early from shouldUseOllamaWebFetch() to
call getURLMarkdownContentFromOllama(), which bypasses the
checkDomainBlocklist() preflight when skipWebFetchPreflight is false; move or
add the same blocklist guard used in the non-Ollama path so that before calling
getURLMarkdownContentFromOllama() you conditionally call
checkDomainBlocklist(url) (respecting skipWebFetchPreflight) and
abort/throw/return the same way the other branch does if the domain is blocked.
```ts
// 2. Ollama provider → ollama (native /api/web_search)
// 3. Third-party provider (OpenAI/Gemini/Grok) → bing (no server_tools support)
// 4. First-party Anthropic API → api (server-side web search + connector_text)
// 5. Fallback → exa
const adapterKey =
  envAdapter === 'api' ||
  envAdapter === 'bing' ||
  envAdapter === 'brave' ||
  envAdapter === 'exa'
    ? envAdapter
    : getAPIProvider() === 'ollama'
      ? 'ollama'
      : isThirdPartyProvider()
        ? 'bing'
        : isFirstPartyAnthropicBaseUrl()
          ? 'api'
          : 'exa'
```
Honor WEB_SEARCH_ADAPTER=ollama in the explicit override path.
The factory now supports an ollama adapter, but the override whitelist still excludes it. With the current ternary, WEB_SEARCH_ADAPTER=ollama is silently ignored and provider detection takes over instead.
Suggested fix
```diff
  const adapterKey =
    envAdapter === 'api' ||
    envAdapter === 'bing' ||
    envAdapter === 'brave' ||
-   envAdapter === 'exa'
+   envAdapter === 'exa' ||
+   envAdapter === 'ollama'
      ? envAdapter
      : getAPIProvider() === 'ollama'
        ? 'ollama'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts` around
lines 41 - 52, The env override check for adapterKey currently ignores
WEB_SEARCH_ADAPTER=ollama because the whitelist only includes
'api','bing','brave','exa'; update the conditional that sets adapterKey (the
envAdapter === ... branch) to also accept 'ollama' so an explicit envAdapter ===
'ollama' returns 'ollama' instead of falling through to getAPIProvider(); keep
the existing fallback to getAPIProvider() and the ternary structure intact.
```ts
const maxResults = Math.min(Math.max(options.numResults ?? 5, 1), 10)
const response = await client.webSearch(
  {
    query,
    max_results: maxResults,
  },
```
Keep the default numResults aligned with SearchOptions.
SearchOptions.numResults documents a default of 8, but this adapter falls back to 5. When callers omit numResults, Ollama search will under-fetch compared with the shared contract.
Suggested fix
```diff
- const maxResults = Math.min(Math.max(options.numResults ?? 5, 1), 10)
+ const maxResults = Math.min(Math.max(options.numResults ?? 8, 1), 10)
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

```ts
const maxResults = Math.min(Math.max(options.numResults ?? 5, 1), 10)
const response = await client.webSearch(
  {
    query,
    max_results: maxResults,
  },
```

Suggested:

```ts
const maxResults = Math.min(Math.max(options.numResults ?? 8, 1), 10)
const response = await client.webSearch(
  {
    query,
    max_results: maxResults,
  },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts`
around lines 25 - 30, The adapter uses Math.min(Math.max(options.numResults ??
5, 1), 10) which defaults to 5, but SearchOptions documents a default of 8;
update the fallback to use 8 so callers who omit numResults get the expected
default. Modify the expression that computes maxResults (referenced in
ollamaAdapter.ts as maxResults and options.numResults) to use ?? 8 while keeping
the same min/max bounds before calling client.webSearch.
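The computation under discussion is a simple clamp; with the documented default of 8 it reduces to a small helper like this (a sketch, not the adapter's actual code):

```typescript
// Clamp to [1, 10] with the documented SearchOptions default of 8.
function resolveMaxResults(numResults?: number): number {
  return Math.min(Math.max(numResults ?? 8, 1), 10)
}
```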
```ts
const payload = (await response.json()) as OllamaWebSearchResponse
const results: SearchResult[] = []
for (const result of payload.results ?? []) {
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's look at the file structure and find the adapter file
git ls-files | grep -i ollama | head -20
```
Repository: claude-code-best/claude-code
Length of output: 1164
🏁 Script executed:
```shell
# Let's read the specific file and examine the lines in question
cat -n packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts | head -60
```
Repository: claude-code-best/claude-code
Length of output: 2253
🏁 Script executed:
```shell
# Look for the OllamaWebSearchResponse type definition
rg "OllamaWebSearchResponse" --type ts -A 5 -B 2
```
Repository: claude-code-best/claude-code
Length of output: 1779
🏁 Script executed:
```shell
cat -n packages/builtin-tools/src/tools/WebSearchTool/__tests__/ollamaAdapter.test.ts | head -100
```
Repository: claude-code-best/claude-code
Length of output: 1526
Use a type guard to validate payload.results is an array before iteration.
The forced cast on line 45 doesn't validate the response shape at runtime. The nullish coalescing operator payload.results ?? [] only checks for null or undefined, not whether results is actually an array. If the API returns { results: "string" } or any non-iterable value, the for...of loop will throw before the per-item guards on line 48 execute.
Add proper type narrowing to check that results is an array:
```ts
const results: SearchResult[] = []
if (Array.isArray(payload.results)) {
  for (const result of payload.results) {
    // existing logic
  }
}
```

This aligns with the coding guideline: use type guards instead of forced type casting with union types.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts`
around lines 45 - 47, The code force-casts the fetch body to
OllamaWebSearchResponse and iterates payload.results without runtime validation;
replace that with a type guard: check Array.isArray(payload.results) before the
for...of loop so you only iterate when payload.results is an array, keeping the
existing per-item guards inside the loop (referencing payload.results,
OllamaWebSearchResponse, and SearchResult) and leaving results: SearchResult[]
initialization unchanged.
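A sketch of such a guard against an assumed response shape (the real `OllamaWebSearchResponse` lives in the adapter's types):

```typescript
// Sketch only: the response shape here is assumed for illustration.
interface OllamaWebSearchResult {
  title?: string
  url?: string
  content?: string
}

interface OllamaWebSearchResponse {
  results?: OllamaWebSearchResult[]
}

// Narrows an unknown payload to one whose `results` is definitely an array.
function hasResultsArray(
  payload: unknown,
): payload is OllamaWebSearchResponse & { results: OllamaWebSearchResult[] } {
  return (
    typeof payload === 'object' &&
    payload !== null &&
    Array.isArray((payload as { results?: unknown }).results)
  )
}
```

Parsing `response.json()` into `unknown` and gating iteration on this guard means a malformed body yields zero results instead of a thrown `for...of` error.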
```tsx
React.useEffect(() => {
  if (modeUpdate) {
    setAppState(prev => ({
      ...prev,
      toolPermissionContext: applyModeUpdate(prev.toolPermissionContext, modeUpdate),
    }));
  }
  onDone(message);
}, [setAppState, message, modeUpdate, onDone]);
```
onDone can be called twice with conflicting messages — add a useRef run-once guard.
The effect's message dependency causes a second firing: after the first setAppState call commits the mode update, ApprovalCommandWithArgs re-renders with the updated toolPermissionContext; executeApproval now returns { message: "Approval mode is already X.", modeUpdate: undefined }; the changed message prop triggers the effect again, calling onDone a second time with the wrong "already set" text.
Updates queued in the passive-effects phase are deferred to the end of the effects phase, so the setAppState queued in the first firing and any unmount triggered by onDone are processed in the same deferred batch — but whether the unmount beats the re-render of ApprovalCommandWithArgs is not guaranteed and depends on external command-system behaviour this component does not control.
A useRef run-once guard is the idiomatic fix:
🐛 Proposed fix — run-once guard in ApplyApprovalAndClose
```diff
  function ApplyApprovalAndClose({
    result,
    onDone,
  }: {
    result: ApprovalCommandResult;
    onDone: (result: string) => void;
  }): React.ReactNode {
    const setAppState = useSetAppState();
    const { message, modeUpdate } = result;
+   const firedRef = React.useRef(false);
    React.useEffect(() => {
+     if (firedRef.current) return;
+     firedRef.current = true;
      if (modeUpdate) {
        setAppState(prev => ({
          ...prev,
          toolPermissionContext: applyModeUpdate(prev.toolPermissionContext, modeUpdate),
        }));
      }
      onDone(message);
    }, [setAppState, message, modeUpdate, onDone]);
    return null;
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/commands/approval/approval.tsx` around lines 117 - 125, The effect in
ApplyApprovalAndClose (the React.useEffect that calls setAppState and onDone)
can call onDone twice due to message changing; add a run-once ref guard: create
a ref (e.g., doneCalledRef = useRef(false)) in the component, check if
doneCalledRef.current is false before invoking onDone(message), set
doneCalledRef.current = true immediately after calling onDone, and keep the
existing setAppState/applyModeUpdate logic intact; reference applyModeUpdate,
executeApproval, and ApprovalCommandWithArgs to locate the surrounding code.
Changes

This PR adds native Ollama API provider support and introduces a new /approval command for manually switching the permission approval mode.

Ollama native provider
- Supports the official https://ollama.com/api endpoint, or a local instance via OLLAMA_BASE_URL=http://localhost:11434/api
- Streaming chat through /api/chat
- Native web search and fetch through /api/web_search and /api/web_fetch
- Reads the model context length from /api/show and dynamically adjusts the output limit
- /login configures the Ollama API key, base URL, and the Haiku/Sonnet/Opus model mappings
- Drops the dependency on OLLAMA_MODEL in favor of the same three-tier model configuration used by other third-party providers

Approval mode selection
- New /approval command
- Subcommands: /approval default, /approval accept-edits, /approval plan, /approval auto, /approval dont-ask, /approval full-access
- full-access reuses the existing bypassPermissions mode and keeps the original gate for enabling this dangerous mode

Tests

Passing:
- Unit tests for /approval argument parsing
- bun run typecheck

Summary by CodeRabbit
New Features
Documentation