
feat: Add Ollama native provider adapter and approval mode selection #409

Open
pei-lin-001 wants to merge 2 commits into claude-code-best:main from pei-lin-001:main

Conversation


@pei-lin-001 pei-lin-001 commented May 4, 2026

Changes

This PR adds support for the Ollama native API as a provider and introduces a new /approval command for manually switching the permission approval mode.

Ollama native provider

  • Uses the Ollama native API, with https://ollama.com/api as the default base URL
  • Supports local Ollama: OLLAMA_BASE_URL=http://localhost:11434/api
  • Supports streaming chat via /api/chat
  • Supports Ollama tool calling
  • Supports the Ollama thinking parameter
  • Supports the native /api/web_search and /api/web_fetch endpoints
  • Reads the model context length via /api/show and adjusts the output limit dynamically
  • Supports configuring the Ollama API key, base URL, and the Haiku/Sonnet/Opus model mappings via /login
  • Removes the dependency on a single OLLAMA_MODEL in favor of the same three-tier model configuration used by other third-party providers
  • Adds Ollama provider usage documentation
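As an illustration of the streaming contract above: Ollama's /api/chat returns newline-delimited JSON chunks, which a client has to split and buffer. A minimal parsing sketch follows; the chunk field names come from the public Ollama API, while the helper itself is illustrative and not this PR's code.

```typescript
// Shape of one newline-delimited JSON chunk from Ollama's POST /api/chat
// with stream: true. Field names follow the public Ollama API docs.
interface OllamaChatChunk {
  model: string
  message?: { role: string; content: string; thinking?: string }
  done: boolean
  done_reason?: string
  prompt_eval_count?: number // input tokens, present on the final chunk
  eval_count?: number        // output tokens, present on the final chunk
}

// Split a raw network buffer into parsed chunks, tolerating a trailing
// partial line (returned as `rest` to prepend to the next read).
function parseNdjson(buffer: string): { chunks: OllamaChatChunk[]; rest: string } {
  const lines = buffer.split('\n')
  const rest = lines.pop() ?? ''
  const chunks = lines
    .filter(l => l.trim() !== '')
    .map(l => JSON.parse(l) as OllamaChatChunk)
  return { chunks, rest }
}
```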

Approval mode selection

  • Adds a /approval command
  • Supports interactive selection of the permission mode
  • Supports switching directly via an argument, for example:
    • /approval default
    • /approval accept-edits
    • /approval plan
    • /approval auto
    • /approval dont-ask
    • /approval full-access
  • full-access reuses the existing bypassPermissions mode and keeps the original gate for enabling this dangerous mode
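The argument-to-mode mapping above might look roughly like the following sketch. The alias spellings mirror the list above; the internal mode names, result shapes, and helper signature are assumptions, not the PR's actual approvalModes.ts.

```typescript
// Illustrative alias table mapping /approval CLI arguments to internal
// permission modes. 'full-access' reuses the existing bypassPermissions mode.
type PermissionMode = 'default' | 'acceptEdits' | 'plan' | 'auto' | 'dontAsk' | 'bypassPermissions'

const ALIASES: Record<string, PermissionMode> = {
  'default': 'default',
  'accept-edits': 'acceptEdits',
  'plan': 'plan',
  'auto': 'auto',
  'dont-ask': 'dontAsk',
  'full-access': 'bypassPermissions',
}

type ParseResult =
  | { type: 'mode'; mode: PermissionMode }
  | { type: 'help' }
  | { type: 'invalid'; input: string }

function parseApprovalArg(arg: string): ParseResult {
  // Normalize case and underscore spellings before lookup.
  const normalized = arg.trim().toLowerCase().replace(/_/g, '-')
  if (['help', '-h', '--help'].includes(normalized)) return { type: 'help' }
  const mode = ALIASES[normalized]
  return mode ? { type: 'mode', mode } : { type: 'invalid', input: arg }
}
```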

Testing

Passing:

  • Ollama message/tool/stream/model-mapping unit tests
  • Ollama API client and thinking unit tests
  • Ollama WebFetch/WebSearch adapter unit tests
  • /approval argument-parsing unit tests
  • bun run typecheck

Summary by CodeRabbit

  • New Features

    • Ollama provider selectable (via login or settings) with UI to enter base URL, API key, and default models; Ollama-backed web search and web-fetch paths when enabled.
    • Added interactive /approval command and picker to manage tool permission modes.
  • Documentation

    • README updated and new Ollama provider guide added covering setup, model routing, supported native features, and limits.


coderabbitai Bot commented May 4, 2026

📝 Walkthrough

Adds an Ollama provider integration (client, model mapping, converters, stream adapter, context caching, web search/fetch adapters, OAuth/settings wiring, and tests) and introduces a new interactive /approval command with parsing, UI, and permission-context changes.

Changes

Ollama Provider Integration

Layer / File(s) Summary
Type Definitions
packages/@ant/model-provider/src/providers/ollama/types.ts
Adds Ollama request/response types: messages, tools, tool calls, chat request, and streaming chunk shapes.
Message/Tool Converters
packages/@ant/model-provider/src/providers/ollama/convertMessages.ts, packages/@ant/model-provider/src/providers/ollama/convertTools.ts
Implements Anthropic→Ollama message and tool conversions, input/result normalization, and JSON-schema sanitization for function parameters.
Model Routing & Mapping
packages/@ant/model-provider/src/providers/ollama/modelMapping.ts, src/utils/model/configs.ts, src/utils/model/modelOptions.ts
Detects Claude family IDs, maps to per-family OLLAMA_DEFAULT_*_MODEL env overrides, provides getOllamaConfiguredModelOption, and updates provider mappings in model configs.
Stream Adapter
packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts, packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
Converts Ollama line-delimited JSON chat chunks into Anthropic BetaRawMessageStreamEvent sequence, mapping thinking/text/tool blocks and done reasons.
Public Exports
packages/@ant/model-provider/src/index.ts
Re-exports resolve/convert/adapter helpers and Ollama types for consumers.
Client & Context Caching
src/services/api/ollama/client.ts, src/services/api/ollama/context.ts
Implements getOllamaClient, HTTP helpers, tags/show endpoints, getOllamaContextLength caching/extractors, and cache invalidation.
Query Streaming Implementation
src/services/api/ollama/index.ts, src/services/api/ollama/__tests__/*
Adds queryModelOllama: tool/message normalization, think/num_predict resolution, assemble/parse Ollama stream, usage merging, cost/langfuse recording, and tests (think/context behaviors).
Main Routing
src/services/api/claude.ts
Routes model queries to Ollama when getAPIProvider() === 'ollama'.
Web Search / Fetch Adapters
packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts, packages/builtin-tools/src/tools/WebFetchTool/utils.ts, packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
Adds OllamaSearchAdapter, Ollama-backed web_fetch path that returns markdown, domain filtering, and tests; selection wired into the adapter factory.
Provider Detection & Env Management
src/utils/model/providers.ts, src/utils/managedEnvConstants.ts, src/utils/settings/types.ts
Adds ollama to APIProvider, maps env flag CLAUDE_CODE_USE_OLLAMA, and includes Ollama env keys in provider-managed / safe lists and settings schema.
Capabilities & Context Integration
src/utils/thinking.ts, src/utils/effort.ts, src/utils/context.ts, src/utils/auth.ts
Treats Ollama provider as supporting thinking/adaptive-thinking/effort and consults cached Ollama context length when resolving model context window; disables Anthropic auth when Ollama is active.
OAuth & UI Wiring
src/components/ConsoleOAuthFlow.tsx
Adds an Ollama API login option, validates base URL, collects API key/default-model overrides, persists settings/env, and resets model strings post-save.
Tests & Docs
packages/@ant/model-provider/src/providers/ollama/__tests__/*, src/services/api/ollama/__tests__/*, packages/builtin-tools/src/tools/WebSearchTool/__tests__/*, docs/features/ollama-provider.md, README.md
Adds unit/integration tests for converters, client, stream adapter, search/fetch adapters; adds Ollama provider documentation and updates README feature row link.
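To make the stream-adapter row above concrete, here is a minimal sketch of the chunk-to-event idea: the thinking/content fields of each Ollama chunk become Anthropic-style delta events, and the final done chunk becomes a message_delta carrying the stop reason and usage. The event shapes are simplified and the stop-reason mapping is an assumption, not the adapter's actual table.

```typescript
// Simplified Ollama chunk and Anthropic-style stream event shapes.
interface Chunk {
  message?: { content?: string; thinking?: string }
  done: boolean
  done_reason?: string
  prompt_eval_count?: number
  eval_count?: number
}
type StreamEvent =
  | { type: 'content_block_delta'; delta: { type: 'thinking_delta' | 'text_delta'; text: string } }
  | { type: 'message_delta'; delta: { stop_reason: string }; usage: { input_tokens: number; output_tokens: number } }

function adaptChunk(chunk: Chunk): StreamEvent[] {
  const events: StreamEvent[] = []
  if (chunk.message?.thinking)
    events.push({ type: 'content_block_delta', delta: { type: 'thinking_delta', text: chunk.message.thinking } })
  if (chunk.message?.content)
    events.push({ type: 'content_block_delta', delta: { type: 'text_delta', text: chunk.message.content } })
  if (chunk.done)
    // Illustrative stop-reason mapping; the real adapter also handles tool calls.
    events.push({
      type: 'message_delta',
      delta: { stop_reason: chunk.done_reason === 'stop' ? 'end_turn' : 'tool_use' },
      usage: { input_tokens: chunk.prompt_eval_count ?? 0, output_tokens: chunk.eval_count ?? 0 },
    })
  return events
}
```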

Approval Command

Layer / File(s) Summary
Mode Registry & Parsing
src/commands/approval/approvalModes.ts
Adds approval-mode descriptors, alias maps, normalization, parse result types, and formatting helpers.
Command Logic & UI
src/commands/approval/approval.tsx
Implements execute/show/apply flows, interactive ApprovalPicker UI, availability gating, analytics event on change, and ApplyApprovalAndClose effect wiring.
Command Registration
src/commands/approval/index.ts, src/commands.ts
Adds command metadata and registers approval in the built-in commands list.
Permission Context
src/utils/permissions/permissionSetup.ts
Makes isBypassPermissionsModeAvailable conditional based on the requested mode or explicit allow flag.
Tests
src/commands/approval/__tests__/approvalModes.test.ts
Covers parsing, alias normalization, formatting, and invalid-input handling.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Client as Claude Code Client
    participant Settings
    participant OllamaAPI as Ollama API

    User->>Client: Select Ollama provider or provide API key & defaults
    Client->>Settings: Persist modelType 'ollama' and OLLAMA_* envs
    Settings-->>Client: Confirm saved / env updated

    User->>Client: Send chat message (may include tool use)
    Client->>Client: Convert Anthropic messages/tools → Ollama format
    Client->>OllamaAPI: POST /chat (streaming) with tools, think, options
    OllamaAPI->>Client: Stream chunks (thinking, content, tool_calls)
    Client->>Client: Parse/adapt chunks → Anthropic stream events
    Client->>User: Stream thinking/text/tool events in UI
    Client->>Client: On completion, assemble final AssistantMessage and record usage

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I hopped through code with nimble feet,

Mapped models, streams, and envs complete.
Messages flip to Ollama's tongue,
Tool calls hum and streams are sung.
A tiny rabbit, change made sweet.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (15)
src/commands/approval/approvalModes.ts (1)

103-110: 💤 Low value

getApprovalModeDescriptor silently falls back to default for the bubble internal mode.

If the current PermissionMode is 'bubble' (an internal mode not in APPROVAL_MODE_DESCRIPTORS), showCurrentApprovalMode in approval.tsx would display "Current approval mode: Default (Ask before tools that need approval)" — which is misleading. Consider returning a dedicated fallback label or guarding the caller.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/approval/approvalModes.ts` around lines 103 - 110,
getApprovalModeDescriptor currently returns the first descriptor as a silent
fallback when a mode like the internal 'bubble' isn't present in
APPROVAL_MODE_DESCRIPTORS, causing showCurrentApprovalMode (in approval.tsx) to
display a misleading "Default" label; update getApprovalModeDescriptor to detect
missing modes and return a clear fallback descriptor (e.g., { mode, label:
"Internal/Unknown", description: "This mode is internal and not displayed" }) or
change its signature to return undefined so the caller (showCurrentApprovalMode)
can render an explicit "Internal/Unknown" message; reference the function
getApprovalModeDescriptor and the constant APPROVAL_MODE_DESCRIPTORS when making
the change.
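A sketch of the second option the reviewer proposes: have the lookup return undefined for modes missing from the descriptor table (such as the internal 'bubble' mode) so the caller can render an explicit fallback instead of a misleading "Default". The descriptor contents below are illustrative.

```typescript
// Illustrative descriptor table; the real APPROVAL_MODE_DESCRIPTORS lives
// in approvalModes.ts and covers all six modes.
interface ModeDescriptor { mode: string; label: string; description: string }

const DESCRIPTORS: ModeDescriptor[] = [
  { mode: 'default', label: 'Default', description: 'Ask before tools that need approval' },
  { mode: 'plan', label: 'Plan', description: 'Plan first, do not make changes until you approve' },
]

// Return undefined instead of silently falling back to the first entry.
function getDescriptor(mode: string): ModeDescriptor | undefined {
  return DESCRIPTORS.find(d => d.mode === mode)
}

function formatCurrentMode(mode: string): string {
  const d = getDescriptor(mode)
  return d
    ? `Current approval mode: ${d.label} (${d.description})`
    : `Current approval mode: ${mode} (internal)`
}
```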
src/commands/approval/__tests__/approvalModes.test.ts (1)

1-63: ⚡ Quick win

Missing test coverage for the help/-h/--help argument path.

HELP_ARGS = new Set(['help', '-h', '--help']) drives a separate { type: 'help' } branch in parseApprovalModeArg (and ultimately the help message in executeApproval), but there are no tests for it. Additionally, formatApprovalMode is only tested for bypassPermissions; other modes are untested.

➕ Proposed additional tests
+  test('parses help arguments', () => {
+    expect(parseApprovalModeArg('help')).toEqual({ type: 'help' })
+    expect(parseApprovalModeArg('-h')).toEqual({ type: 'help' })
+    expect(parseApprovalModeArg('--help')).toEqual({ type: 'help' })
+  })
+
+  test('normalizes uppercase and underscore inputs', () => {
+    expect(parseApprovalModeArg('DEFAULT')).toEqual({ type: 'mode', mode: 'default' })
+    expect(parseApprovalModeArg('full_access')).toEqual({ type: 'mode', mode: 'bypassPermissions' })
+  })

And in describe('formatApprovalMode'):

+  test('returns the label for non-bypass modes', () => {
+    expect(formatApprovalMode('default')).toBe('Default')
+    expect(formatApprovalMode('plan')).toBe('Plan')
+  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/approval/__tests__/approvalModes.test.ts` around lines 1 - 63,
Add tests covering the HELP_ARGS branch and additional formatApprovalMode
outputs: update approvalModes.test.ts to assert parseApprovalModeArg returns {
type: 'help' } for each string in HELP_ARGS (e.g., 'help', '-h', '--help'), and
add formatApprovalMode assertions for other modes (e.g., 'default' -> "Default
(default)", 'acceptEdits' -> "Accept edits (acceptEdits)", 'dontAsk' -> "Don't
ask (dontAsk)", etc.). Reference the parseApprovalModeArg function and HELP_ARGS
set to locate the missing branch and the formatApprovalMode function to add the
extra expectations. Ensure tests follow the existing bun:test structure and
mirror the style used for existing cases.
src/commands/approval/approval.tsx (1)

91-94: 💤 Low value

Help text is hardcoded and will drift from APPROVAL_MODE_DESCRIPTORS.

Adding or renaming a mode in APPROVAL_MODE_DESCRIPTORS requires a manual update here; they'll easily diverge.

♻️ Proposed refactor — derive from descriptors
-    case 'help':
-      return {
-        message:
-          'Usage: /approval [default|accept-edits|plan|auto|dont-ask|full-access]\n\nApproval modes:\n- default: Ask before tools that need approval\n- accept-edits: Auto-approve file edits, still ask for risky actions\n- plan: Plan first, do not make changes until you approve\n- auto: Use the automatic approval classifier when available\n- dont-ask: Deny anything not already pre-approved\n- full-access: Allow tool use without approval prompts',
-      };
+    case 'help': {
+      const modeLines = APPROVAL_MODE_DESCRIPTORS.map(
+        d => `- ${d.aliases[1] ?? d.aliases[0] ?? d.mode}: ${d.description}`,
+      ).join('\n')
+      return {
+        message: `Usage: /approval [default|accept-edits|plan|auto|dont-ask|full-access]\n\nApproval modes:\n${modeLines}`,
+      }
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/approval/approval.tsx` around lines 91 - 94, The help text is
hardcoded and can drift from the canonical APPROVAL_MODE_DESCRIPTORS; replace
the literal multi-line string returned in the function (the object with key
message in src/commands/approval/approval.tsx) with a generated string built
from APPROVAL_MODE_DESCRIPTORS (iterate its entries to produce the usage line
and a bullet list of mode names and descriptions), ensuring the usage header
includes the modes joined from the descriptor keys and that each descriptor's
description is used verbatim so adding/renaming a mode in
APPROVAL_MODE_DESCRIPTORS automatically updates this help output.
src/utils/auth.ts (1)

115-125: ⚡ Quick win

Replace (settings as any).modelType with optional chaining on the raw return value.

All four uses of (settings as any).modelType (lines 120–123, including the pre-existing openai/gemini checks) violate the as any prohibition in production TypeScript. The cast is unnecessary — getSettings_DEPRECATED() already returns a typed value; the || {} fallback is what defeats the type. Splitting the reads eliminates the cast entirely.

♻️ Proposed refactor
-  const settings = getSettings_DEPRECATED() || {}
+  const settings = getSettings_DEPRECATED()
   const is3P =
     isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK) ||
     isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX) ||
     isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY) ||
-    (settings as any).modelType === 'openai' ||
-    (settings as any).modelType === 'gemini' ||
-    (settings as any).modelType === 'grok' ||
-    (settings as any).modelType === 'ollama' ||
+    settings?.modelType === 'openai' ||
+    settings?.modelType === 'gemini' ||
+    settings?.modelType === 'grok' ||
+    settings?.modelType === 'ollama' ||
     !!process.env.OPENAI_BASE_URL ||
     !!process.env.GEMINI_BASE_URL
-  const apiKeyHelper = settings.apiKeyHelper
+  const apiKeyHelper = settings?.apiKeyHelper

As per coding guidelines: "Prohibit as any type assertions in production code; use as unknown as SpecificType double assertion or interface supplementation when type mismatches occur."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/auth.ts` around lines 115 - 125, The code uses (settings as
any).modelType to check modelType which violates the no-as-any rule; instead
stop masking the typed return by removing the "|| {}" fallback and use optional
chaining on the raw getSettings_DEPRECATED() result — e.g., assign const
settings = getSettings_DEPRECATED(); then replace each (settings as
any).modelType === 'openai'|'gemini'|'grok'|'ollama' with settings?.modelType
=== 'openai' (and similar) so you preserve types and eliminate the cast while
keeping the other env-based checks (CLAUDE_CODE_*, OPENAI_BASE_URL,
GEMINI_BASE_URL) intact when computing is3P.
src/utils/status.tsx (1)

291-310: 💤 Low value

LGTM — 'ollama': 'Ollama' correctly extends the provider label map.

Note: other providers (gemini, grok, openai) each get a follow-up else if branch that surfaces their configured base URL in the status pane. Adding an equivalent block for OLLAMA_BASE_URL would let users verify which Ollama endpoint is active without leaving the status view.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/status.tsx` around lines 291 - 310, Add an else-if branch in
buildAPIProviderProperties to mirror the other providers that surface their base
URL: when apiProvider === 'ollama' push a Property with a label like "Ollama
base URL" and a value taken from the configured Ollama endpoint (e.g.,
process.env.OLLAMA_BASE_URL or the project’s config/getEnv helper) so the status
pane shows the active Ollama endpoint; follow the same pattern and
null/undefined fallback used by the existing gemini/grok/openai branches.
src/components/ConsoleOAuthFlow.tsx (1)

6-6: ⚡ Quick win

Use the src/* alias for this new import.

This introduces a new relative import in a TSX file, which goes against the repo import-path convention.

Suggested fix
-import { resetModelStringsForTestingOnly } from '../bootstrap/state.js';
+import { resetModelStringsForTestingOnly } from 'src/bootstrap/state.js';

As per coding guidelines, Use src/* path alias in imports; do not use relative paths like ../../../.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ConsoleOAuthFlow.tsx` at line 6, The import in
ConsoleOAuthFlow.tsx uses a relative path for resetModelStringsForTestingOnly;
update the import to use the repository alias (src/*) instead of
'../bootstrap/state.js' so it follows the project's import-path convention —
locate the import of resetModelStringsForTestingOnly in ConsoleOAuthFlow.tsx and
replace the relative path with the aliased path (e.g., import {
resetModelStringsForTestingOnly } from 'src/bootstrap/state').
src/utils/context.ts (1)

8-9: 💤 Low value

Consider using src/* path alias for imports.

Per coding guidelines, prefer src/* path alias over relative paths like ../../../. However, this file already uses relative imports consistently (lines 2-7), so this is a minor consistency note.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/context.ts` around lines 8 - 9, Update the imports in
src/utils/context.ts to use the project path alias (src/...) instead of relative
paths: replace the import of getCachedOllamaContextLength from
'../services/api/ollama/context.js' and getAPIProvider from
'./model/providers.js' with their aliased equivalents (e.g., import {
getCachedOllamaContextLength } from 'src/services/api/ollama/context' and import
{ getAPIProvider } from 'src/utils/model/providers' or the correct aliased
module paths in your project); ensure the module specifiers match the configured
TS/webpack path mappings and adjust any file extensions to match project
conventions.
src/services/api/ollama/client.ts (1)

106-127: ⚖️ Poor tradeoff

Missing request timeout handling.

The postOllamaJSON function accepts an AbortSignal but doesn't set a default timeout. For reliability, consider adding a default timeout to prevent indefinite hangs when the Ollama server is unresponsive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/ollama/client.ts` around lines 106 - 127, postOllamaJSON
currently accepts an optional AbortSignal but has no default timeout, so add a
default timeout by creating an AbortController inside postOllamaJSON when
requestOptions?.signal is not provided (or to combine/chain signals if one is
provided), set a timer (e.g., 30s) to call controller.abort(), pass
controller.signal into fetchOllama, and clear the timer after the fetch
completes or errors; reference the postOllamaJSON function and the call to
fetchOllama to locate where to create/attach the AbortController and timer.
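A sketch of the timeout pattern this comment suggests, assuming a helper named withTimeout and a 30-second default (both illustrative):

```typescript
// Create a signal that aborts after `ms`, and also aborts if a
// caller-supplied signal fires first. The caller invokes done() once the
// fetch settles so the timer never outlives the request.
function withTimeout(
  parent: AbortSignal | undefined,
  ms = 30_000,
): { signal: AbortSignal; done: () => void } {
  const controller = new AbortController()
  const timer = setTimeout(
    () => controller.abort(new Error(`Ollama request timed out after ${ms}ms`)),
    ms,
  )
  // Propagate an external abort (e.g. user cancellation) into our signal.
  const p = parent
  if (p) p.addEventListener('abort', () => controller.abort(p.reason), { once: true })
  return { signal: controller.signal, done: () => clearTimeout(timer) }
}
```

A caller would pass `signal` into fetch and call `done()` in a finally block; on Node 20+, AbortSignal.any would be an alternative way to combine the two signals.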
packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts (1)

59-61: 💤 Low value

Global getAPIProvider mock silently activates the Ollama path for any test calling getURLMarkdownContent

After beforeEach deletes OLLAMA_USE_NATIVE_WEB_FETCH, shouldUseOllamaWebFetch() evaluates the provider ('ollama') and the default base URL (ollama.com), returning true. This means any future test in this file that calls getURLMarkdownContent without explicitly setting OLLAMA_USE_NATIVE_WEB_FETCH = 'false' will silently take the Ollama path. The existing tests handle this correctly (redirect/proxy tests bypass getURLMarkdownContent; the content-type test opts out explicitly), but the implicit contract is non-obvious.

Consider adding a comment in beforeEach calling out why the opt-out is required:

 beforeEach(() => {
   delete process.env.OLLAMA_USE_NATIVE_WEB_FETCH
+  // NOTE: getAPIProvider() is mocked to return 'ollama', so shouldUseOllamaWebFetch()
+  // returns true by default. Tests that exercise the non-Ollama path in
+  // getURLMarkdownContent() must set OLLAMA_USE_NATIVE_WEB_FETCH = 'false'.
   getMock = async () => ({ ... })
 })

Also applies to: 79-87

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts`
around lines 59 - 61, The test file globally mocks getAPIProvider to return
'ollama', which combined with beforeEach deleting OLLAMA_USE_NATIVE_WEB_FETCH
causes shouldUseOllamaWebFetch() to select the Ollama path for any
getURLMarkdownContent calls; update the beforeEach block to add a clear comment
explaining that we intentionally delete OLLAMA_USE_NATIVE_WEB_FETCH to force the
default provider behavior and that tests which must avoid the Ollama path should
explicitly set OLLAMA_USE_NATIVE_WEB_FETCH = 'false' (or otherwise opt out), and
add the same explanatory comment near the other beforeEach at lines 79-87 so
future readers understand why opt-out is required when calling
getURLMarkdownContent.
packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts (1)

29-35: ⚡ Quick win

isThirdPartyProvider() checks raw env vars and misses modelType-based configuration

The new Ollama routing correctly uses getAPIProvider(), which checks settings.modelType before env vars. isThirdPartyProvider() does not — it only checks the env-var forms. If a user configures OpenAI/Gemini/Grok via modelType: 'openai' in their settings file (without setting CLAUDE_CODE_USE_OPENAI), getAPIProvider() returns 'openai' but isThirdPartyProvider() returns false. This causes the factory to fall through to isFirstPartyAnthropicBaseUrl(), likely selecting the api adapter (Anthropic server_tools), which doesn't work with OpenAI endpoints.

♻️ Suggested fix
+import {
+  getAPIProvider,
+  isFirstPartyAnthropicBaseUrl,
+} from 'src/utils/model/providers.js'

 function isThirdPartyProvider(): boolean {
-  return !!(
-    process.env.CLAUDE_CODE_USE_OPENAI ||
-    process.env.CLAUDE_CODE_USE_GEMINI ||
-    process.env.CLAUDE_CODE_USE_GROK
-  )
+  const provider = getAPIProvider()
+  return provider === 'openai' || provider === 'gemini' || provider === 'grok'
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts` around
lines 29 - 35, isThirdPartyProvider() currently checks only raw env vars and can
miss cases where the user configured a third-party provider via
settings.modelType; update isThirdPartyProvider() to call getAPIProvider() and
return true when getAPIProvider() returns 'openai', 'gemini', or 'grok' (in
addition to or instead of the existing env-var checks), so the factory routing
matches the same provider resolution as getAPIProvider().
packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts (1)

47-52: 💤 Low value

Test 4 doesn't fully isolate "OLLAMA_MODEL is ignored"

Because OLLAMA_DEFAULT_SONNET_MODEL = 'qwen3-coder' is also set, the assertion toBe('qwen3-coder') would pass even if OLLAMA_MODEL were consulted as a lower-priority fallback (both cases yield the same value). A stronger form of the test would set only OLLAMA_MODEL (without OLLAMA_DEFAULT_SONNET_MODEL) and assert the result is still 'qwen3-coder' (the hardcoded fallback), proving the override is never read.

🔍 Supplemental test
test('falls back to qwen3-coder when only OLLAMA_MODEL is set (legacy override ignored)', () => {
  process.env.OLLAMA_MODEL = 'legacy-global-model'
  // OLLAMA_DEFAULT_SONNET_MODEL intentionally not set

  expect(resolveOllamaModel('claude-sonnet-4-6')).toBe('qwen3-coder')
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts`
around lines 47 - 52, The test currently sets both process.env.OLLAMA_MODEL and
process.env.OLLAMA_DEFAULT_SONNET_MODEL so the assertion can pass regardless of
whether resolveOllamaModel consults the legacy OLLAMA_MODEL; update the test in
packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
to isolate the legacy override by removing the OLLAMA_DEFAULT_SONNET_MODEL
assignment (or add a new test) so only process.env.OLLAMA_MODEL is set and then
assert resolveOllamaModel('claude-sonnet-4-6') returns the hardcoded
'qwen3-coder', referencing the resolveOllamaModel function to ensure the legacy
global is never read.
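For reference, the resolution order under discussion might be sketched like this; the 'qwen3-coder' fallback and the ignored legacy OLLAMA_MODEL follow the test above, while the family detection and per-family fallback are assumptions.

```typescript
// Map a Claude model ID to an Ollama model: detect the family from the ID,
// consult the matching OLLAMA_DEFAULT_*_MODEL override, and fall back to a
// hardcoded default. Illustrative sketch of modelMapping.ts, not its code.
function resolveOllamaModel(claudeModelId: string): string {
  const id = claudeModelId.toLowerCase()
  const family = id.includes('haiku') ? 'HAIKU' : id.includes('opus') ? 'OPUS' : 'SONNET'
  // The legacy OLLAMA_MODEL global is intentionally never consulted.
  return process.env[`OLLAMA_DEFAULT_${family}_MODEL`] ?? 'qwen3-coder'
}
```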
src/services/api/ollama/__tests__/client.test.ts (1)

82-116: 💤 Low value

Context-length and caching tests are nested under the wrong describe block

Lines 82–96 test extractOllamaModelInfoContextLength / extractOllamaNumCtxParameter (both from context.ts), and lines 98–116 test getOllamaContextLength — none of which are showOllamaModel. When one of these tests fails the runner reports showOllamaModel > caches context length…, which is misleading.

♻️ Suggested reorganisation
-describe('showOllamaModel', () => {
+describe('showOllamaModel', () => {
   test('fetches model details via native /api/show', async () => {
     // …existing test…
   })
+})
 
-  test('extracts context length from model_info and num_ctx parameters', () => {
+describe('extractOllamaModelInfoContextLength / extractOllamaNumCtxParameter', () => {
+  test('extracts context length from model_info and num_ctx parameters', () => {
     // …
   })
+})
 
-  test('caches context length after /api/show', async () => {
+describe('getOllamaContextLength', () => {
+  test('caches context length after first /api/show fetch', async () => {
     // …
   })
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/ollama/__tests__/client.test.ts` around lines 82 - 116, The
tests for extractOllamaModelInfoContextLength, extractOllamaNumCtxParameter and
getOllamaContextLength are placed under the wrong describe block
(showOllamaModel), which causes misleading test names on failure; move the block
that contains the tests referencing extractOllamaModelInfoContextLength,
extractOllamaNumCtxParameter and getOllamaContextLength out of the
describe('showOllamaModel') suite into either a new describe('context') suite or
the appropriate existing describe for context.ts so each test is named after the
function it tests and the caching test for getOllamaContextLength sits with
other context-related tests.
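A sketch of the extraction logic these tests exercise: /api/show exposes an architecture-prefixed context-length key in model_info (e.g. "llama.context_length") and optionally a num_ctx parameter. The response shapes and the num_ctx-wins priority below are assumptions, not the PR's code.

```typescript
// Simplified /api/show response: model_info is a flat key/value map, and
// parameters is a text blob of Modelfile parameters such as "num_ctx 8192".
interface ShowResponse {
  model_info?: Record<string, unknown>
  parameters?: string
}

function extractContextLength(show: ShowResponse): number | undefined {
  // Prefer an explicit num_ctx parameter over the architecture default.
  const numCtx = show.parameters?.match(/^num_ctx\s+(\d+)/m)
  if (numCtx) return Number(numCtx[1])
  // Otherwise look for any "<arch>.context_length" key in model_info.
  for (const [key, value] of Object.entries(show.model_info ?? {})) {
    if (key.endsWith('.context_length') && typeof value === 'number') return value
  }
  return undefined
}
```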
packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts (1)

72-75: 💤 Low value

Accessing .delta on a potentially undefined messageDelta will throw an opaque TypeError

If adaptOllamaStreamToAnthropic emits no message_delta event, events.find(...) returns undefined, and messageDelta.delta.stop_reason crashes with TypeError: Cannot read properties of undefined (reading 'delta') instead of a clear assertion failure.

🛡️ Suggested fix
 const messageDelta = events.find(event => event.type === 'message_delta')
+expect(messageDelta).toBeDefined()
 expect(messageDelta.delta.stop_reason).toBe('tool_use')
 expect(messageDelta.usage.input_tokens).toBe(12)
 expect(messageDelta.usage.output_tokens).toBe(5)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts`
around lines 72 - 75, The test accesses messageDelta.delta without ensuring
messageDelta exists, which can throw a TypeError; update the test in
streamAdapter.test.ts to assert the find result is defined before reading
properties (e.g., assert expect(messageDelta).toBeDefined() or use a guard like
if (!messageDelta) fail('expected message_delta event')), then continue to check
messageDelta.delta.stop_reason and the usage fields so failures produce clear
assertion errors; reference the messageDelta variable and the events.find(...)
call in the test for the change.
packages/builtin-tools/src/tools/WebFetchTool/utils.ts (1)

388-399: 💤 Low value

shouldUseOllamaWebFetch returns true by default for the Ollama provider with no OLLAMA_BASE_URL set

When getAPIProvider() === 'ollama' and OLLAMA_BASE_URL is unset, the function defaults to 'https://ollama.com/api', which passes the hostname check, so native web fetch is silently enabled. This is intentional (cloud Ollama supports it; local Ollama doesn't), but worth an inline comment so future readers understand the implicit contract.

📝 Suggested inline doc
 export function shouldUseOllamaWebFetch(): boolean {
   if (process.env.OLLAMA_USE_NATIVE_WEB_FETCH !== undefined) {
     return isEnvTruthy(process.env.OLLAMA_USE_NATIVE_WEB_FETCH)
   }
   if (getAPIProvider() !== 'ollama') return false
+  // When OLLAMA_BASE_URL is not set the client defaults to ollama.com (cloud),
+  // which provides /api/web_fetch. Local Ollama (e.g. localhost:11434) does not,
+  // so only enable native fetch when the hostname is 'ollama.com'.
+  // Override with OLLAMA_USE_NATIVE_WEB_FETCH=true/false when needed.
   const baseURL = process.env.OLLAMA_BASE_URL || 'https://ollama.com/api'
   try {
     return new URL(baseURL).hostname === 'ollama.com'
   } catch {
     return false
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts` around lines 388 -
399, shouldUseOllamaWebFetch currently defaults to enabling native web fetch for
the Ollama provider when OLLAMA_BASE_URL is unset by assuming
'https://ollama.com/api'; add a concise inline comment inside the
shouldUseOllamaWebFetch function near the default baseURL logic explaining the
implicit contract: cloud Ollama (ollama.com) supports native web fetch while
local/offline Ollama instances do not, and mention that
OLLAMA_USE_NATIVE_WEB_FETCH env overrides this behavior; reference the function
name shouldUseOllamaWebFetch, the env vars OLLAMA_USE_NATIVE_WEB_FETCH and
OLLAMA_BASE_URL, and getAPIProvider() to help locate where to add the comment.
src/services/api/ollama/__tests__/think.test.ts (1)

2-6: ⚡ Quick win

Use src/* aliases for these test imports.

These deep relative imports make the new test brittle to file moves and diverge from the repo import convention.

Suggested cleanup
-import { getEmptyToolPermissionContext } from '../../../../Tool.js'
-import { asSystemPrompt } from '../../../../utils/systemPromptType.js'
-import type { Options } from '../../claude.js'
-import { resolveOllamaThink } from '../index.js'
-import { queryModelOllama } from '../index.js'
+import { getEmptyToolPermissionContext } from 'src/Tool.js'
+import type { Options } from 'src/services/api/claude.js'
+import { queryModelOllama, resolveOllamaThink } from 'src/services/api/ollama/index.js'
+import { asSystemPrompt } from 'src/utils/systemPromptType.js'

As per coding guidelines, "Use src/* path alias in imports; do not use relative paths like ../../../".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/ollama/__tests__/think.test.ts` around lines 2 - 6, The test
file uses deep relative imports for symbols like getEmptyToolPermissionContext,
asSystemPrompt, Options, resolveOllamaThink, and queryModelOllama; update the
import paths to use the repo's src/* path aliases instead of '../../../' style
paths so the test follows project conventions and is resilient to file moves
(replace '../../../../Tool.js', '../../../../utils/systemPromptType.js',
'../../claude.js', and '../index.js' imports with their corresponding src/...
aliases).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts`:
- Around line 72-99: The domain filter check in matchesDomainFilters compares
URL.hostname (already lowercase) to allowedDomains/blockedDomains without
normalizing them, so mixed-case or whitespace entries can bypass rules;
normalize entries first by mapping allowedDomains and blockedDomains to trimmed,
lowercased strings (and strip any leading '.'), then perform the existing
comparisons against hostname (which you already obtain via new
URL(url).hostname) to ensure correct allow/block behavior; update
matchesDomainFilters to use these normalized arrays in the .some checks.
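A minimal sketch of the normalization the comment asks for. The function shape and parameter names follow the review comment, not the actual adapter code; `matchesDomainFilters` here is a hypothetical standalone version:

```typescript
// Hypothetical sketch: normalize configured domain entries (trim,
// lowercase, strip a leading '.') before comparing them against
// URL.hostname, which the URL parser already lowercases.
function normalizeDomains(domains: string[]): string[] {
  return domains.map(d => d.trim().toLowerCase().replace(/^\./, ''));
}

function matchesDomainFilters(
  url: string,
  allowedDomains: string[],
  blockedDomains: string[],
): boolean {
  const hostname = new URL(url).hostname;
  const allowed = normalizeDomains(allowedDomains);
  const blocked = normalizeDomains(blockedDomains);
  const matches = (domain: string) =>
    hostname === domain || hostname.endsWith(`.${domain}`);
  if (blocked.some(matches)) return false;
  if (allowed.length > 0) return allowed.some(matches);
  return true;
}
```

With this normalization, a mixed-case or padded entry such as `' Example.COM '` still matches `docs.example.com`, and a blocked entry written as `'.evil.com'` still blocks `evil.com` itself.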

In `@src/commands/approval/approval.tsx`:
- Around line 42-47: The current synchronous gating uses
isAutoModeGateEnabled(), which is a stub that always returns true, so the block
never runs; change getModeUnavailableMessage (in
src/commands/approval/approval.tsx) to synchronously call
getAutoModeUnavailableReason() for mode === 'auto' and, if it returns a reason,
return the existing "Auto approval is unavailable..." message (using
getAutoModeUnavailableNotification(reason)); remove or stop relying on
isAutoModeGateEnabled() for UI gating so the picker grays out correctly and
prevents applyApprovalMode from optimistically applying auto before
verifyAutoModeGateAccess runs.

In `@src/commands/provider.ts`:
- Around line 113-124: The branch handling arg === 'ollama' returns early when
OLLAMA_API_KEY is missing, but it never applies saved env vars so a stored
OLLAMA_BASE_URL in settings.env isn't loaded into process.env for the current
session; update the branch in the provider switch to call
applyConfigEnvironmentVariables() (or the equivalent function that applies
settings.env) after updateSettingsForSource('userSettings', { modelType:
'ollama' }) and before returning so the saved OLLAMA_BASE_URL (and other saved
envs) take effect immediately; ensure you still return the same text response
after applying the env.

In `@src/components/ConsoleOAuthFlow.tsx`:
- Around line 1270-1291: After a successful
updateSettingsForSource('userSettings', { modelType: 'ollama', env }), clear the
in-memory cached model strings so the new Ollama mappings take effect
immediately: after the for (const [k, v] of Object.entries(env)) process.env[k]
= v; and before setOAuthStatus({ state: 'success' }) / void onDone(), call the
same cache-reset helper used in the custom-platform path (e.g.
resetCachedModelStrings() or refreshModelMappings()) — import it if needed and
await it if it returns a Promise — so session/model lookups no longer use stale
strings.

In `@src/services/api/ollama/index.ts`:
- Around line 528-541: The helper resolveOllamaMaxTokens allows an override or
env value to exceed the discovered/default cap; change its logic in
resolveOllamaMaxTokens so any override (override param or parsed
OLLAMA_MAX_TOKENS / CLAUDE_CODE_MAX_OUTPUT_TOKENS) is clamped to
defaultMaxTokens (use Math.min) and only accepted if finite and >0; locate the
function resolveOllamaMaxTokens and replace the early return of override or
parsed with a clamped value (e.g., return Math.min(defaultMaxTokens,
validOverride)) while preserving the final fallback to defaultMaxTokens.
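A sketch of the suggested clamping behavior. The signature and env-var names are taken from the review comment; the real `resolveOllamaMaxTokens` may differ:

```typescript
// Hypothetical sketch: any override (parameter or env var) is only
// accepted when finite and positive, and is clamped to the
// discovered/default cap rather than being allowed to exceed it.
function resolveOllamaMaxTokens(
  defaultMaxTokens: number,
  override?: number,
): number {
  const fromEnv = Number(
    process.env.OLLAMA_MAX_TOKENS ?? process.env.CLAUDE_CODE_MAX_OUTPUT_TOKENS,
  );
  const candidate = override ?? fromEnv;
  if (Number.isFinite(candidate) && candidate > 0) {
    return Math.min(defaultMaxTokens, candidate); // clamp, never exceed
  }
  return defaultMaxTokens; // final fallback
}
```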

In `@src/utils/model/modelOptions.ts`:
- Around line 77-85: The helper getProviderEnvVar incorrectly falls back all
unknown providers to ANTHROPIC_*; change the logic so only the explicit
'anthropic' provider maps to process.env[`ANTHROPIC_${suffix}`] and any other
provider returns undefined. Locate getProviderEnvVar and replace the final
default return with an explicit if (provider === 'anthropic') return
process.env[`ANTHROPIC_${suffix}`]; else return undefined so unsupported
providers (e.g., grok, bedrock, vertex, foundry) do not pick up Anthropic env
vars.
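A sketch of the narrowed fallback. The provider cases other than `'anthropic'` are assumptions for illustration; only the default-branch change is what the comment requires:

```typescript
// Hypothetical sketch: only 'anthropic' falls back to ANTHROPIC_* env
// vars; unknown providers (grok, bedrock, vertex, foundry, ...) return
// undefined instead of silently inheriting Anthropic credentials.
function getProviderEnvVar(provider: string, suffix: string): string | undefined {
  switch (provider) {
    case 'openai':
      return process.env[`OPENAI_${suffix}`];
    case 'ollama':
      return process.env[`OLLAMA_${suffix}`];
    case 'anthropic':
      return process.env[`ANTHROPIC_${suffix}`];
    default:
      return undefined;
  }
}
```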

---

Nitpick comments:
In `@packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts`:
- Around line 47-52: The test currently sets both process.env.OLLAMA_MODEL and
process.env.OLLAMA_DEFAULT_SONNET_MODEL so the assertion can pass regardless of
whether resolveOllamaModel consults the legacy OLLAMA_MODEL; update the test in
packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
to isolate the legacy override by removing the OLLAMA_DEFAULT_SONNET_MODEL
assignment (or add a new test) so only process.env.OLLAMA_MODEL is set and then
assert resolveOllamaModel('claude-sonnet-4-6') returns the hardcoded
'qwen3-coder', referencing the resolveOllamaModel function to ensure the legacy
global is never read.

In `@packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts`:
- Around line 72-75: The test accesses messageDelta.delta without ensuring
messageDelta exists, which can throw a TypeError; update the test in
streamAdapter.test.ts to assert the find result is defined before reading
properties (e.g., assert expect(messageDelta).toBeDefined() or use a guard like
if (!messageDelta) fail('expected message_delta event')), then continue to check
messageDelta.delta.stop_reason and the usage fields so failures produce clear
assertion errors; reference the messageDelta variable and the events.find(...)
call in the test for the change.

In `@packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts`:
- Around line 59-61: The test file globally mocks getAPIProvider to return
'ollama', which combined with beforeEach deleting OLLAMA_USE_NATIVE_WEB_FETCH
causes shouldUseOllamaWebFetch() to select the Ollama path for any
getURLMarkdownContent calls; update the beforeEach block to add a clear comment
explaining that we intentionally delete OLLAMA_USE_NATIVE_WEB_FETCH to force the
default provider behavior and that tests which must avoid the Ollama path should
explicitly set OLLAMA_USE_NATIVE_WEB_FETCH = 'false' (or otherwise opt out), and
add the same explanatory comment near the other beforeEach at lines 79-87 so
future readers understand why opt-out is required when calling
getURLMarkdownContent.

In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts`:
- Around line 388-399: shouldUseOllamaWebFetch currently defaults to enabling
native web fetch for the Ollama provider when OLLAMA_BASE_URL is unset by
assuming 'https://ollama.com/api'; add a concise inline comment inside the
shouldUseOllamaWebFetch function near the default baseURL logic explaining the
implicit contract: cloud Ollama (ollama.com) supports native web fetch while
local/offline Ollama instances do not, and mention that
OLLAMA_USE_NATIVE_WEB_FETCH env overrides this behavior; reference the function
name shouldUseOllamaWebFetch, the env vars OLLAMA_USE_NATIVE_WEB_FETCH and
OLLAMA_BASE_URL, and getAPIProvider() to help locate where to add the comment.

In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts`:
- Around line 29-35: isThirdPartyProvider() currently checks only raw env vars
and can miss cases where the user configured a third-party provider via
settings.modelType; update isThirdPartyProvider() to call getAPIProvider() and
return true when getAPIProvider() returns 'openai', 'gemini', or 'grok' (in
addition to or instead of the existing env-var checks), so the factory routing
matches the same provider resolution as getAPIProvider().

In `@src/commands/approval/__tests__/approvalModes.test.ts`:
- Around line 1-63: Add tests covering the HELP_ARGS branch and additional
formatApprovalMode outputs: update approvalModes.test.ts to assert
parseApprovalModeArg returns { type: 'help' } for each string in HELP_ARGS
(e.g., 'help', '-h', '--help'), and add formatApprovalMode assertions for other
modes (e.g., 'default' -> "Default (default)", 'acceptEdits' -> "Accept edits
(acceptEdits)", 'dontAsk' -> "Don't ask (dontAsk)", etc.). Reference the
parseApprovalModeArg function and HELP_ARGS set to locate the missing branch and
the formatApprovalMode function to add the extra expectations. Ensure tests
follow the existing bun:test structure and mirror the style used for existing
cases.

In `@src/commands/approval/approval.tsx`:
- Around line 91-94: The help text is hardcoded and can drift from the canonical
APPROVAL_MODE_DESCRIPTORS; replace the literal multi-line string returned in the
function (the object with key message in src/commands/approval/approval.tsx)
with a generated string built from APPROVAL_MODE_DESCRIPTORS (iterate its
entries to produce the usage line and a bullet list of mode names and
descriptions), ensuring the usage header includes the modes joined from the
descriptor keys and that each descriptor's description is used verbatim so
adding/renaming a mode in APPROVAL_MODE_DESCRIPTORS automatically updates this
help output.

In `@src/commands/approval/approvalModes.ts`:
- Around line 103-110: getApprovalModeDescriptor currently returns the first
descriptor as a silent fallback when a mode like the internal 'bubble' isn't
present in APPROVAL_MODE_DESCRIPTORS, causing showCurrentApprovalMode (in
approval.tsx) to display a misleading "Default" label; update
getApprovalModeDescriptor to detect missing modes and return a clear fallback
descriptor (e.g., { mode, label: "Internal/Unknown", description: "This mode is
internal and not displayed" }) or change its signature to return undefined so
the caller (showCurrentApprovalMode) can render an explicit "Internal/Unknown"
message; reference the function getApprovalModeDescriptor and the constant
APPROVAL_MODE_DESCRIPTORS when making the change.

In `@src/components/ConsoleOAuthFlow.tsx`:
- Line 6: The import in ConsoleOAuthFlow.tsx uses a relative path for
resetModelStringsForTestingOnly; update the import to use the repository alias
(src/*) instead of '../bootstrap/state.js' so it follows the project's
import-path convention — locate the import of resetModelStringsForTestingOnly in
ConsoleOAuthFlow.tsx and replace the relative path with the aliased path (e.g.,
import { resetModelStringsForTestingOnly } from 'src/bootstrap/state').

In `@src/services/api/ollama/__tests__/client.test.ts`:
- Around line 82-116: The tests for extractOllamaModelInfoContextLength,
extractOllamaNumCtxParameter and getOllamaContextLength are placed under the
wrong describe block (showOllamaModel), which causes misleading test names on
failure; move the block that contains the tests referencing
extractOllamaModelInfoContextLength, extractOllamaNumCtxParameter and
getOllamaContextLength out of the describe('showOllamaModel') suite into either
a new describe('context') suite or the appropriate existing describe for
context.ts so each test is named after the function it tests and the caching
test for getOllamaContextLength sits with other context-related tests.

In `@src/services/api/ollama/__tests__/think.test.ts`:
- Around line 2-6: The test file uses deep relative imports for symbols like
getEmptyToolPermissionContext, asSystemPrompt, Options, resolveOllamaThink, and
queryModelOllama; update the import paths to use the repo's src/* path aliases
instead of '../../../' style paths so the test follows project conventions and
is resilient to file moves (replace '../../../../Tool.js',
'../../../../utils/systemPromptType.js', '../../claude.js', and '../index.js'
imports with their corresponding src/... aliases).

In `@src/services/api/ollama/client.ts`:
- Around line 106-127: postOllamaJSON currently accepts an optional AbortSignal
but has no default timeout, so add a default timeout by creating an
AbortController inside postOllamaJSON when requestOptions?.signal is not
provided (or to combine/chain signals if one is provided), set a timer (e.g.,
30s) to call controller.abort(), pass controller.signal into fetchOllama, and
clear the timer after the fetch completes or errors; reference the
postOllamaJSON function and the call to fetchOllama to locate where to
create/attach the AbortController and timer.

In `@src/utils/auth.ts`:
- Around line 115-125: The code uses (settings as any).modelType to check
modelType which violates the no-as-any rule; instead stop masking the typed
return by removing the "|| {}" fallback and use optional chaining on the raw
getSettings_DEPRECATED() result — e.g., assign const settings =
getSettings_DEPRECATED(); then replace each (settings as any).modelType ===
'openai'|'gemini'|'grok'|'ollama' with settings?.modelType === 'openai' (and
similar) so you preserve types and eliminate the cast while keeping the other
env-based checks (CLAUDE_CODE_*, OPENAI_BASE_URL, GEMINI_BASE_URL) intact when
computing is3P.

In `@src/utils/context.ts`:
- Around line 8-9: Update the imports in src/utils/context.ts to use the project
path alias (src/...) instead of relative paths: replace the import of
getCachedOllamaContextLength from '../services/api/ollama/context.js' and
getAPIProvider from './model/providers.js' with their aliased equivalents (e.g.,
import { getCachedOllamaContextLength } from 'src/services/api/ollama/context'
and import { getAPIProvider } from 'src/utils/model/providers' or the correct
aliased module paths in your project); ensure the module specifiers match the
configured TS/webpack path mappings and adjust any file extensions to match
project conventions.

In `@src/utils/status.tsx`:
- Around line 291-310: Add an else-if branch in buildAPIProviderProperties to
mirror the other providers that surface their base URL: when apiProvider ===
'ollama' push a Property with a label like "Ollama base URL" and a value taken
from the configured Ollama endpoint (e.g., process.env.OLLAMA_BASE_URL or the
project’s config/getEnv helper) so the status pane shows the active Ollama
endpoint; follow the same pattern and null/undefined fallback used by the
existing gemini/grok/openai branches.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 45e50e50-e865-4795-9a2a-f1523ac344e7

📥 Commits

Reviewing files that changed from the base of the PR and between 2545dca and a82fdd7.

📒 Files selected for processing (43)
  • README.md
  • docs/features/ollama-provider.md
  • packages/@ant/model-provider/src/index.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/convertMessages.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/convertTools.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
  • packages/@ant/model-provider/src/providers/ollama/convertMessages.ts
  • packages/@ant/model-provider/src/providers/ollama/convertTools.ts
  • packages/@ant/model-provider/src/providers/ollama/modelMapping.ts
  • packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts
  • packages/@ant/model-provider/src/providers/ollama/types.ts
  • packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
  • packages/builtin-tools/src/tools/WebFetchTool/utils.ts
  • packages/builtin-tools/src/tools/WebSearchTool/__tests__/adapterFactory.test.ts
  • packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts
  • packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts
  • src/commands.ts
  • src/commands/approval/__tests__/approvalModes.test.ts
  • src/commands/approval/approval.tsx
  • src/commands/approval/approvalModes.ts
  • src/commands/approval/index.ts
  • src/commands/provider.ts
  • src/components/ConsoleOAuthFlow.tsx
  • src/services/api/claude.ts
  • src/services/api/ollama/__tests__/client.test.ts
  • src/services/api/ollama/__tests__/think.test.ts
  • src/services/api/ollama/client.ts
  • src/services/api/ollama/context.ts
  • src/services/api/ollama/index.ts
  • src/utils/auth.ts
  • src/utils/context.ts
  • src/utils/effort.ts
  • src/utils/managedEnvConstants.ts
  • src/utils/model/__tests__/ollamaModelOptions.test.ts
  • src/utils/model/configs.ts
  • src/utils/model/model.ts
  • src/utils/model/modelOptions.ts
  • src/utils/model/providers.ts
  • src/utils/permissions/permissionSetup.ts
  • src/utils/settings/types.ts
  • src/utils/status.tsx
  • src/utils/thinking.ts

Comment thread src/commands/approval/approval.tsx Outdated
Comment thread src/commands/provider.ts
Comment thread src/components/ConsoleOAuthFlow.tsx
Comment thread src/services/api/ollama/index.ts
Comment thread src/utils/model/modelOptions.ts

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
src/commands/approval/approvalModes.ts (2)

97-100: ⚡ Quick win

Derive “valid options” from descriptors instead of hard-coding

This message duplicates mode names and can drift from APPROVAL_MODE_DESCRIPTORS. Build it from the descriptor list so parser/help/UI stay in sync.

Suggested change
+const PRIMARY_APPROVAL_ALIASES = APPROVAL_MODE_DESCRIPTORS.map(descriptor => descriptor.aliases[0]).join(', ');
+
 export function parseApprovalModeArg(args: string): ApprovalModeArgResult {
   const normalized = normalizeApprovalArg(args)
   if (CURRENT_ARGS.has(normalized)) {
     return { type: 'current' }
@@
   return {
     type: 'invalid',
-    message: `Invalid approval mode: ${args}. Valid options are: default, accept-edits, plan, auto, dont-ask, full-access`,
+    message: `Invalid approval mode: ${args}. Valid options are: ${PRIMARY_APPROVAL_ALIASES}`,
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/approval/approvalModes.ts` around lines 97 - 100, The error
message hard-codes valid mode names and can drift; instead derive the list from
APPROVAL_MODE_DESCRIPTORS. In the function that returns the invalid result (the
return with type: 'invalid' and message: `Invalid approval mode...`), build the
valid options string by mapping APPROVAL_MODE_DESCRIPTORS to their keys/names
(e.g., Object.keys or .map over the descriptors) and join them into the message
so the message uses the live descriptor list rather than a hard-coded string.

1-1: ⚡ Quick win

Use the src/* alias for this TypeScript import

Please replace the relative import with the project alias to match repo conventions and avoid fragile path traversal.

Suggested change
-import type { PermissionMode } from '../../utils/permissions/PermissionMode.js'
+import type { PermissionMode } from 'src/utils/permissions/PermissionMode.js';

As per coding guidelines, "Use src/* path alias in imports; do not use relative paths like ../../../".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/approval/approvalModes.ts` at line 1, Replace the relative
import of PermissionMode in approvalModes.ts with the project path alias: change
the import of PermissionMode from '../../utils/permissions/PermissionMode.js' to
use the 'src/*' alias (e.g., 'src/utils/permissions/PermissionMode.js') so the
file imports PermissionMode via the src path alias instead of a relative
traversal.
src/services/api/ollama/__tests__/think.test.ts (1)

2-4: ⚡ Quick win

Use src/* aliases instead of deep relative imports.

Lines 2-4 currently use ../../../../... paths; switch these to src/* aliases to keep imports stable and consistent with repo conventions.

Proposed change
-import { getEmptyToolPermissionContext } from '../../../../Tool.js'
-import { asSystemPrompt } from '../../../../utils/systemPromptType.js'
-import type { Options } from '../../claude.js'
+import { getEmptyToolPermissionContext } from 'src/Tool.js'
+import { asSystemPrompt } from 'src/utils/systemPromptType.js'
+import type { Options } from 'src/services/api/claude.js'

As per coding guidelines, Use src/* path alias in imports; do not use relative paths like ../../../.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/ollama/__tests__/think.test.ts` around lines 2 - 4, The test
file uses deep relative imports for getEmptyToolPermissionContext,
asSystemPrompt and Options; update the import statements in
src/services/api/ollama/__tests__/think.test.ts to use the repo path aliases
(replace '../../../../Tool.js' and '../../../../utils/systemPromptType.js' and
'../../claude.js' with their equivalent 'src/...' aliased paths), ensuring you
import getEmptyToolPermissionContext, asSystemPrompt and the Options type via
the src/* aliases to match project conventions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts`:
- Around line 14-27: The test setup currently deletes envKeys only once at
module load and restores them in afterEach, which allows host environment
variables to leak into later tests; add a beforeEach that clears those keys from
process.env (for each key in envKeys delete process.env[key]) so every test
starts with a clean environment, keep savedEnv and the existing afterEach
restore logic intact; reference envKeys, savedEnv, beforeEach and afterEach to
locate the change.

In `@src/utils/status.tsx`:
- Around line 400-405: The status display shows raw OLLAMA_BASE_URL which can
differ from the actual endpoint because createOllamaClient auto-appends "/api";
update the apiProvider === 'ollama' branch to derive the effective Ollama URL
the same way as createOllamaClient (e.g., take process.env.OLLAMA_BASE_URL or
default, and if it doesn't end with "/api" append "/api") before pushing
properties so the label/value for "Ollama base URL" matches the runtime
endpoint; reference the ollamaBaseUrl variable and createOllamaClient behavior
when making the change.
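A sketch of deriving the effective endpoint. That `createOllamaClient` appends `/api` when missing is taken from the review comment; the helper name and default URL are illustrative:

```typescript
// Hypothetical sketch: mirror the assumed createOllamaClient behavior
// so the status pane shows the endpoint actually used at runtime,
// not the raw OLLAMA_BASE_URL value.
function effectiveOllamaBaseUrl(raw?: string): string {
  const base = (raw ?? 'https://ollama.com/api').replace(/\/+$/, '');
  return base.endsWith('/api') ? base : `${base}/api`;
}
```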

---

Nitpick comments:
In `@src/commands/approval/approvalModes.ts`:
- Around line 97-100: The error message hard-codes valid mode names and can
drift; instead derive the list from APPROVAL_MODE_DESCRIPTORS. In the function
that returns the invalid result (the return with type: 'invalid' and message:
`Invalid approval mode...`), build the valid options string by mapping
APPROVAL_MODE_DESCRIPTORS to their keys/names (e.g., Object.keys or .map over
the descriptors) and join them into the message so the message uses the live
descriptor list rather than a hard-coded string.
- Line 1: Replace the relative import of PermissionMode in approvalModes.ts with
the project path alias: change the import of PermissionMode from
'../../utils/permissions/PermissionMode.js' to use the 'src/*' alias (e.g.,
'src/utils/permissions/PermissionMode.js') so the file imports PermissionMode
via the src path alias instead of a relative traversal.

In `@src/services/api/ollama/__tests__/think.test.ts`:
- Around line 2-4: The test file uses deep relative imports for
getEmptyToolPermissionContext, asSystemPrompt and Options; update the import
statements in src/services/api/ollama/__tests__/think.test.ts to use the repo
path aliases (replace '../../../../Tool.js' and
'../../../../utils/systemPromptType.js' and '../../claude.js' with their
equivalent 'src/...' aliased paths), ensuring you import
getEmptyToolPermissionContext, asSystemPrompt and the Options type via the src/*
aliases to match project conventions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 178c29c0-728d-4fc9-9ee8-62add9aa7cdf

📥 Commits

Reviewing files that changed from the base of the PR and between a82fdd7 and 238dbd1.

📒 Files selected for processing (45)
  • README.md
  • docs/features/ollama-provider.md
  • packages/@ant/model-provider/src/index.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/convertMessages.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/convertTools.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
  • packages/@ant/model-provider/src/providers/ollama/convertMessages.ts
  • packages/@ant/model-provider/src/providers/ollama/convertTools.ts
  • packages/@ant/model-provider/src/providers/ollama/modelMapping.ts
  • packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts
  • packages/@ant/model-provider/src/providers/ollama/types.ts
  • packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
  • packages/builtin-tools/src/tools/WebFetchTool/utils.ts
  • packages/builtin-tools/src/tools/WebSearchTool/__tests__/adapterFactory.test.ts
  • packages/builtin-tools/src/tools/WebSearchTool/__tests__/ollamaAdapter.test.ts
  • packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts
  • packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts
  • src/commands.ts
  • src/commands/approval/__tests__/approvalModes.test.ts
  • src/commands/approval/approval.tsx
  • src/commands/approval/approvalModes.ts
  • src/commands/approval/index.ts
  • src/commands/provider.ts
  • src/components/ConsoleOAuthFlow.tsx
  • src/services/api/claude.ts
  • src/services/api/ollama/__tests__/client.test.ts
  • src/services/api/ollama/__tests__/think.test.ts
  • src/services/api/ollama/client.ts
  • src/services/api/ollama/context.ts
  • src/services/api/ollama/index.ts
  • src/utils/auth.ts
  • src/utils/context.ts
  • src/utils/effort.ts
  • src/utils/managedEnvConstants.ts
  • src/utils/model/__tests__/ollamaModelOptions.test.ts
  • src/utils/model/__tests__/providers.test.ts
  • src/utils/model/configs.ts
  • src/utils/model/model.ts
  • src/utils/model/modelOptions.ts
  • src/utils/model/providers.ts
  • src/utils/permissions/permissionSetup.ts
  • src/utils/settings/types.ts
  • src/utils/status.tsx
  • src/utils/thinking.ts
✅ Files skipped from review due to trivial changes (8)
  • src/commands/approval/index.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/convertMessages.test.ts
  • src/utils/model/providers.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/convertTools.test.ts
  • src/commands/approval/__tests__/approvalModes.test.ts
  • packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts
  • packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
  • src/services/api/ollama/index.ts
🚧 Files skipped from review as they are similar to previous changes (28)
  • src/utils/permissions/permissionSetup.ts
  • packages/builtin-tools/src/tools/WebSearchTool/__tests__/adapterFactory.test.ts
  • src/commands.ts
  • packages/@ant/model-provider/src/providers/ollama/modelMapping.ts
  • src/utils/model/__tests__/ollamaModelOptions.test.ts
  • src/utils/context.ts
  • src/utils/model/model.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
  • README.md
  • src/services/api/ollama/context.ts
  • packages/@ant/model-provider/src/providers/ollama/types.ts
  • packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts
  • src/components/ConsoleOAuthFlow.tsx
  • src/utils/model/configs.ts
  • docs/features/ollama-provider.md
  • src/services/api/ollama/__tests__/client.test.ts
  • packages/@ant/model-provider/src/providers/ollama/convertTools.ts
  • src/services/api/claude.ts
  • packages/builtin-tools/src/tools/WebFetchTool/utils.ts
  • src/commands/provider.ts
  • packages/@ant/model-provider/src/index.ts
  • src/utils/auth.ts
  • src/commands/approval/approval.tsx
  • src/utils/model/modelOptions.ts
  • src/utils/effort.ts
  • src/utils/thinking.ts
  • packages/@ant/model-provider/src/providers/ollama/convertMessages.ts
  • src/services/api/ollama/client.ts

Comment thread src/utils/status.tsx

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/utils/thinking.ts (1)

117-155: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

modelSupportsAdaptiveThinking — Ollama check placed after legacy-model exclusion block, silently disabling adaptive thinking for Ollama Haiku/Sonnet/Opus tiers.

The legacy exclusion block (lines 132–138) returns false for any model whose canonical name contains "opus", "sonnet", or "haiku" — with no provider guard. The Ollama return true at lines 151–153 is only reachable when those strings are absent. Since the PR explicitly introduces a Haiku/Sonnet/Opus three-tier model mapping for Ollama (via /login config), those canonical names are expected to match the exclusion block, and adaptive thinking will be silently disabled for them.

Compare modelSupportsThinking (lines 91–114): the Ollama early-return is placed before any canonical-based logic, which is the correct pattern.

🐛 Proposed fix — move `provider` retrieval and Ollama guard before the legacy exclusion block
 export function modelSupportsAdaptiveThinking(model: string): boolean {
   const supported3P = get3PModelCapabilityOverride(model, 'adaptive_thinking')
   if (supported3P !== undefined) {
     return supported3P
   }
+  const provider = getAPIProvider()
+  if (provider === 'ollama') {
+    return true
+  }
   const canonical = getCanonicalName(model)
   // Supported by a subset of Claude 4 models
   if (
     canonical.includes('opus-4-7') ||
     canonical.includes('opus-4-6') ||
     canonical.includes('sonnet-4-6')
   ) {
     return true
   }
   // Exclude any other known legacy models (allowlist above catches 4-6+ variants first)
   if (
     canonical.includes('opus') ||
     canonical.includes('sonnet') ||
     canonical.includes('haiku')
   ) {
     return false
   }
   // ...
-  const provider = getAPIProvider()
-  if (provider === 'ollama') {
-    return true
-  }
   return provider === 'firstParty' || provider === 'foundry'
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/thinking.ts` around lines 117 - 155, The function
modelSupportsAdaptiveThinking currently checks canonical names for
"opus"/"sonnet"/"haiku" before considering the API provider, which causes Ollama
models to be incorrectly excluded; move the provider lookup (call to
getAPIProvider()) and the Ollama early-return (return true for provider ===
'ollama') to before the canonical-based legacy exclusion block so Ollama model
strings are allowed to opt-in via the three-tier mapping, and ensure subsequent
logic (getCanonicalName and the legacy excludes) runs only for non-Ollama
providers.
🧹 Nitpick comments (5)
packages/@ant/model-provider/src/providers/ollama/modelMapping.ts (1)

34-40: ⚡ Quick win

No generic OLLAMA_DEFAULT_MODEL escape hatch before the hardcoded qwen3-coder fallback

qwen3-coder exists in the official Ollama registry, but the full model requires a minimum of 250 GB of memory or unified memory — most users won't have it pre-installed. When no family-specific env var is configured (or when getModelFamily returns null for an unrecognized Claude model family), there is no way to set a single generic override without configuring all three family env vars separately.

Adding OLLAMA_DEFAULT_MODEL as a lower-priority tier before the hardcoded name would let users configure one variable for all cases:

♻️ Proposed fix to add a generic fallback override
   // 2. Per-family env var (OLLAMA_DEFAULT_OPUS_MODEL, etc.)
   if (family) {
     const ollamaEnvVar = `OLLAMA_DEFAULT_${family.toUpperCase()}_MODEL`
     const ollamaOverride = process.env[ollamaEnvVar]
     if (ollamaOverride) return ollamaOverride
   }

-  return 'qwen3-coder'
+  // 3. Generic override for any remaining Claude ID (e.g. unrecognised family)
+  const genericOverride = process.env['OLLAMA_DEFAULT_MODEL']
+  if (genericOverride) return genericOverride
+
+  return 'qwen3-coder'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/@ant/model-provider/src/providers/ollama/modelMapping.ts` around
lines 34 - 40, The current mapping returns the hardcoded 'qwen3-coder' when no
family-specific env var is set; add a lower-priority generic override by
checking process.env.OLLAMA_DEFAULT_MODEL before falling back to 'qwen3-coder'.
Update the logic in the same scope where ollamaEnvVar/ollamaOverride are used
(in modelMapping.ts, around the family handling and return) to first return
process.env.OLLAMA_DEFAULT_MODEL if defined, otherwise return the hardcoded
'qwen3-coder'.
packages/@ant/model-provider/src/providers/ollama/types.ts (2)

34-37: ⚡ Quick win

num_ctx is missing from options despite the PR's stated dynamic context-length feature.

The PR description states that the provider reads the model's context window via /api/show and adjusts the output cap accordingly, but num_ctx — one of the valid Ollama options keys used to set the context window size — is absent. Any implementation code that sets it will need to bypass the type, risking an as any assertion.

♻️ Proposed fix
 options?: {
   temperature?: number
   num_predict?: number
+  num_ctx?: number
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/@ant/model-provider/src/providers/ollama/types.ts` around lines 34 -
37, The options type is missing the Ollama key for context window size; update
the options declaration (the options object where temperature and num_predict
are defined) to include num_ctx?: number so callers can type-safely set the
model context length without using any assertions; ensure the new field is
optional and typed as number to match Ollama's API.

1-8: ⚡ Quick win

Consider replacing OllamaMessage with a role-discriminated union.

The current flat interface allows invalid combinations at the type level (e.g., role: 'system' with tool_calls, or role: 'assistant' with tool_name). A discriminated union enforces correct field presence per role at compile time, which is valuable for all callers — especially the stream adapter and any future API clients.

♻️ Proposed discriminated-union refactor
-export interface OllamaMessage {
-  role: 'system' | 'user' | 'assistant' | 'tool'
-  content?: string
-  thinking?: string
-  tool_name?: string
-  images?: string[]
-  tool_calls?: OllamaToolCall[]
-}
+export type OllamaMessage =
+  | { role: 'system'; content: string }
+  | { role: 'user'; content?: string; images?: string[] }
+  | {
+      role: 'assistant'
+      content?: string
+      thinking?: string
+      tool_calls?: OllamaToolCall[]
+    }
+  | { role: 'tool'; tool_name?: string; content?: string }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/@ant/model-provider/src/providers/ollama/types.ts` around lines 1 -
8, OllamaMessage is currently a flat interface that allows invalid role/field
combos; replace it with a role-discriminated union (e.g., type OllamaMessage =
SystemMessage | UserMessage | AssistantMessage | ToolMessage) where each variant
explicitly declares only the valid fields (for example SystemMessage: { role:
'system'; content?: string }, UserMessage: { role: 'user'; content?: string },
AssistantMessage: { role: 'assistant'; content?: string; thinking?: string },
ToolMessage: { role: 'tool'; tool_name: string; tool_calls?: OllamaToolCall[];
images?: string[] }), export the union as OllamaMessage, and update all
consumers (notably the stream adapter and any serializers/parsers) to use the
new discriminated types so TypeScript enforces correct field presence by role.
packages/builtin-tools/src/tools/WebFetchTool/utils.ts (2)

417-420: ⚡ Quick win

Type assertion on response.json() bypasses structural validation.

Response.json() resolves to any; asserting any directly to OllamaWebFetchResponse is functionally equivalent to an as any cast — it provides no runtime or compile-time guarantee about the payload shape. Per the coding guidelines, objects with unknown structure should be accessed through Record<string, unknown> with explicit type guards.

♻️ Proposed refactor
-  const payload = (await response.json()) as OllamaWebFetchResponse
-  const title = payload.title?.trim()
-  const content = payload.content?.trim() ?? ''
-  const links = Array.isArray(payload.links) ? payload.links : []
+  const raw = (await response.json()) as Record<string, unknown>
+  const title = typeof raw.title === 'string' ? raw.title.trim() : undefined
+  const content = typeof raw.content === 'string' ? raw.content.trim() : ''
+  const links = Array.isArray(raw.links)
+    ? raw.links.filter((l): l is string => typeof l === 'string')
+    : []

As per coding guidelines: "Use Record<string, unknown> instead of any for objects with unknown structure" and "Use type guards (type narrowing) with union types instead of forced type casting."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts` around lines 417 -
420, The code currently casts response.json() directly to OllamaWebFetchResponse
(payload), bypassing structural validation; change to first parse JSON into an
unknown/Record<string, unknown> and add a type guard function (e.g.,
isOllamaWebFetchResponse(obj): obj is OllamaWebFetchResponse) that checks
payload.title is string|undefined, payload.content is string|undefined, and
payload.links is an array if present. Use that guard before extracting
title/content/links (trim only after verifying string), and fall back to safe
defaults when the guard fails to avoid unsafe as-casts in the
payload/title/content/links handling.

401-438: 💤 Low value

getURLMarkdownContentFromOllama does not populate URL_CACHE.

Every call to this function goes over the network, even for repeated identical URLs. The standard path deduplicates with a 15-minute TTL LRU cache. Whether caching is desirable here depends on whether freshness is critical for the Ollama use-case, but the discrepancy is worth an explicit comment either way.
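If caching is adopted, the 15-minute TTL semantics the standard path uses can be sketched as follows (an illustration only; the real URL_CACHE implementation and entry shape may differ):

```typescript
// Minimal TTL cache sketch: entries expire after 15 minutes; Map insertion
// order gives a cheap oldest-first eviction when the cache is full.
const TTL_MS = 15 * 60 * 1000
const MAX_ENTRIES = 100

interface CacheEntry<T> {
  value: T
  storedAt: number
}

class TtlCache<T> {
  private entries = new Map<string, CacheEntry<T>>()

  get(key: string, now: number = Date.now()): T | undefined {
    const entry = this.entries.get(key)
    if (!entry) return undefined
    if (now - entry.storedAt > TTL_MS) {
      this.entries.delete(key) // expired: caller must re-fetch
      return undefined
    }
    return entry.value
  }

  set(key: string, value: T, now: number = Date.now()): void {
    if (this.entries.size >= MAX_ENTRIES && !this.entries.has(key)) {
      const oldest = this.entries.keys().next().value
      if (oldest !== undefined) this.entries.delete(oldest)
    }
    this.entries.set(key, { value, storedAt: now })
  }
}
```

A getURLMarkdownContentFromOllama wrapper would consult such a cache before hitting /api/web_fetch and call set on success, mirroring the deduplication of the non-Ollama path.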

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts` around lines 401 -
438, The function getURLMarkdownContentFromOllama currently never writes to
URL_CACHE, causing repeated network calls for the same URL; either populate
URL_CACHE after a successful fetch or add an explicit comment explaining why
freshness prevents caching. Implement the fix by, inside
getURLMarkdownContentFromOllama after building the return object (and before
returning), inserting a URL_CACHE.set(url, { bytes, code, codeText, content,
contentType }) with the same LRU/TTL semantics used elsewhere (15-minute TTL) so
callers using the shared cache are deduplicated; if opting not to cache, add a
clear comment in the function explaining why Ollama responses must always bypass
URL_CACHE and reference the URL_CACHE symbol in that comment.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6243cfe1-ea88-41ab-9c7c-a2450f9025cf

📥 Commits

Reviewing files that changed from the base of the PR and between 238dbd1 and ad674a0.

📒 Files selected for processing (45)
  • README.md
  • docs/features/ollama-provider.md
  • packages/@ant/model-provider/src/index.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/convertMessages.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/convertTools.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
  • packages/@ant/model-provider/src/providers/ollama/convertMessages.ts
  • packages/@ant/model-provider/src/providers/ollama/convertTools.ts
  • packages/@ant/model-provider/src/providers/ollama/modelMapping.ts
  • packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts
  • packages/@ant/model-provider/src/providers/ollama/types.ts
  • packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
  • packages/builtin-tools/src/tools/WebFetchTool/utils.ts
  • packages/builtin-tools/src/tools/WebSearchTool/__tests__/adapterFactory.test.ts
  • packages/builtin-tools/src/tools/WebSearchTool/__tests__/ollamaAdapter.test.ts
  • packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts
  • packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts
  • src/commands.ts
  • src/commands/approval/__tests__/approvalModes.test.ts
  • src/commands/approval/approval.tsx
  • src/commands/approval/approvalModes.ts
  • src/commands/approval/index.ts
  • src/commands/provider.ts
  • src/components/ConsoleOAuthFlow.tsx
  • src/services/api/claude.ts
  • src/services/api/ollama/__tests__/client.test.ts
  • src/services/api/ollama/__tests__/think.test.ts
  • src/services/api/ollama/client.ts
  • src/services/api/ollama/context.ts
  • src/services/api/ollama/index.ts
  • src/utils/auth.ts
  • src/utils/context.ts
  • src/utils/effort.ts
  • src/utils/managedEnvConstants.ts
  • src/utils/model/__tests__/ollamaModelOptions.test.ts
  • src/utils/model/__tests__/providers.test.ts
  • src/utils/model/configs.ts
  • src/utils/model/model.ts
  • src/utils/model/modelOptions.ts
  • src/utils/model/providers.ts
  • src/utils/permissions/permissionSetup.ts
  • src/utils/settings/types.ts
  • src/utils/status.tsx
  • src/utils/thinking.ts
✅ Files skipped from review due to trivial changes (16)
  • packages/builtin-tools/src/tools/WebSearchTool/__tests__/ollamaAdapter.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/convertMessages.test.ts
  • src/services/api/ollama/context.ts
  • packages/builtin-tools/src/tools/WebSearchTool/__tests__/adapterFactory.test.ts
  • src/utils/effort.ts
  • src/utils/settings/types.ts
  • src/utils/permissions/permissionSetup.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/convertTools.test.ts
  • docs/features/ollama-provider.md
  • src/services/api/ollama/__tests__/think.test.ts
  • src/utils/model/model.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/streamAdapter.test.ts
  • src/commands/approval/approvalModes.ts
  • packages/@ant/model-provider/src/index.ts
  • src/commands.ts
  • src/utils/model/__tests__/providers.test.ts
🚧 Files skipped from review as they are similar to previous changes (18)
  • README.md
  • src/commands/approval/__tests__/approvalModes.test.ts
  • src/commands/approval/index.ts
  • src/utils/auth.ts
  • src/utils/model/configs.ts
  • src/utils/model/__tests__/ollamaModelOptions.test.ts
  • src/utils/context.ts
  • packages/builtin-tools/src/tools/WebFetchTool/__tests__/headers.test.ts
  • src/services/api/ollama/__tests__/client.test.ts
  • packages/@ant/model-provider/src/providers/ollama/__tests__/modelMapping.test.ts
  • src/commands/provider.ts
  • src/utils/status.tsx
  • packages/@ant/model-provider/src/providers/ollama/convertTools.ts
  • src/services/api/ollama/client.ts
  • packages/@ant/model-provider/src/providers/ollama/streamAdapter.ts
  • src/components/ConsoleOAuthFlow.tsx
  • src/services/api/ollama/index.ts
  • src/utils/model/providers.ts

Comment on lines +26 to +29
const cleanModel = selectedModel.replace(/\[1m\]$/, '')
if (!isClaudeModelId(cleanModel)) {
  return cleanModel
}

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Incomplete ANSI artifact stripping may silently bypass Claude-ID detection

The regex /\[1m\]$/ only removes a trailing [1m] suffix. If the terminal UI also prepends the same escape code (e.g. the model name arrives as [1mclaude-3-5-sonnet-20241022), isClaudeModelId will fail to match: \bclaude requires a word boundary, but m and c are both word characters so the boundary does not exist, causing the raw artifact-laden string to be forwarded verbatim to Ollama — resulting in a silent model-not-found error rather than the expected family-mapped fallback.

Consider stripping leading/trailing ANSI codes more broadly, or asserting via test that selectedModel is always clean before reaching this function:

🛡️ Proposed fix for more robust ANSI stripping
-  const cleanModel = selectedModel.replace(/\[1m\]$/, '')
+  // Strip common ANSI SGR escape sequences (e.g. \x1b[1m bold, \x1b[0m reset)
+  // that may appear when the model name is taken from a terminal UI display string.
+  const cleanModel = selectedModel.replace(/(\x1b\[[\d;]*m|\[[\d;]*m)/g, '').trim()
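The word-boundary failure is easy to reproduce in isolation. The snippet below (illustrative, not from the PR) shows that `\bclaude` fails to match once an SGR prefix is attached, and matches again after escape sequences are stripped:

```typescript
// 'm' and 'c' are both word characters, so \b finds no boundary between the
// SGR prefix "\x1b[1m" and "claude" — the Claude-ID check silently fails.
const raw = '\x1b[1mclaude-3-5-sonnet-20241022\x1b[0m'
console.log(/\bclaude/.test(raw)) // false

// Stripping ANSI SGR sequences (as in the proposed fix) restores the match.
const stripped = raw.replace(/\x1b\[[\d;]*m/g, '').trim()
console.log(stripped) // claude-3-5-sonnet-20241022
console.log(/\bclaude/.test(stripped)) // true
```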
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/@ant/model-provider/src/providers/ollama/modelMapping.ts` around
lines 26 - 29, The current clean-up only removes a trailing "[1m]" and can leave
ANSI artifacts that break isClaudeModelId; update the sanitization around
selectedModel (the cleanModel assignment) to remove ANSI escape sequences more
robustly (strip any leading/trailing or embedded ANSI CSI sequences) before
calling isClaudeModelId so model-family detection works reliably; modify the
cleanModel logic used by isClaudeModelId and any callers in modelMapping.ts to
use the broader strip, or add a small helper (e.g., sanitizeModelName) and use
it wherever selectedModel is normalized.

Comment on lines +448 to +450
if (shouldUseOllamaWebFetch()) {
  return getURLMarkdownContentFromOllama(url, abortController)
}

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Ollama path silently bypasses the domain blocklist preflight check.

The early return at line 449 exits before the checkDomainBlocklist call at line 485. Because skipWebFetchPreflight === false is the effective default (meaning "do check"), most users would have the blocklist enforcement active on the non-Ollama path, but switching the provider to Ollama silently removes it. A domain Anthropic has flagged as blocked becomes reachable via the Ollama native endpoint.

The fix is to run the same blocklist gate (conditioned on skipWebFetchPreflight) before dispatching to getURLMarkdownContentFromOllama, mirroring the existing pattern:

🛡️ Proposed fix
+  if (shouldUseOllamaWebFetch()) {
+    // Domain blocklist check still applies even for the Ollama native path
+    if (getSettings_DEPRECATED().skipWebFetchPreflight === false) {
+      try {
+        const hostname = new URL(url).hostname
+        const checkResult = await checkDomainBlocklist(hostname)
+        switch (checkResult.status) {
+          case 'blocked':
+            throw new DomainBlockedError(hostname)
+          case 'check_failed':
+            throw new DomainCheckFailedError(hostname)
+        }
+      } catch (e) {
+        if (e instanceof DomainBlockedError || e instanceof DomainCheckFailedError) throw e
+        logError(e)
+      }
+    }
-  if (shouldUseOllamaWebFetch()) {
     return getURLMarkdownContentFromOllama(url, abortController)
   }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (shouldUseOllamaWebFetch()) {
  return getURLMarkdownContentFromOllama(url, abortController)
}
if (shouldUseOllamaWebFetch()) {
  // Domain blocklist check still applies even for the Ollama native path
  if (getSettings_DEPRECATED().skipWebFetchPreflight === false) {
    try {
      const hostname = new URL(url).hostname
      const checkResult = await checkDomainBlocklist(hostname)
      switch (checkResult.status) {
        case 'blocked':
          throw new DomainBlockedError(hostname)
        case 'check_failed':
          throw new DomainCheckFailedError(hostname)
      }
    } catch (e) {
      if (e instanceof DomainBlockedError || e instanceof DomainCheckFailedError) throw e
      logError(e)
    }
  }
  return getURLMarkdownContentFromOllama(url, abortController)
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/builtin-tools/src/tools/WebFetchTool/utils.ts` around lines 448 -
450, The Ollama branch currently returns early from shouldUseOllamaWebFetch() to
call getURLMarkdownContentFromOllama(), which bypasses the
checkDomainBlocklist() preflight when skipWebFetchPreflight is false; move or
add the same blocklist guard used in the non-Ollama path so that before calling
getURLMarkdownContentFromOllama() you conditionally call
checkDomainBlocklist(url) (respecting skipWebFetchPreflight) and
abort/throw/return the same way the other branch does if the domain is blocked.

Comment on lines +41 to +52
// 2. Ollama provider → ollama (native /api/web_search)
// 3. Third-party provider (OpenAI/Gemini/Grok) → bing (no server_tools support)
// 4. First-party Anthropic API → api (server-side web search + connector_text)
// 5. Fallback → exa
const adapterKey =
envAdapter === 'api' ||
envAdapter === 'bing' ||
envAdapter === 'brave' ||
envAdapter === 'exa'
? envAdapter
: isThirdPartyProvider()
? 'bing'
: isFirstPartyAnthropicBaseUrl()
? 'api'
: 'exa'
: getAPIProvider() === 'ollama'
? 'ollama'

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Honor WEB_SEARCH_ADAPTER=ollama in the explicit override path.

The factory now supports an ollama adapter, but the override whitelist still excludes it. With the current ternary, WEB_SEARCH_ADAPTER=ollama is silently ignored and provider detection takes over instead.

Suggested fix
   const adapterKey =
     envAdapter === 'api' ||
     envAdapter === 'bing' ||
     envAdapter === 'brave' ||
-    envAdapter === 'exa'
+    envAdapter === 'exa' ||
+    envAdapter === 'ollama'
       ? envAdapter
       : getAPIProvider() === 'ollama'
         ? 'ollama'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/index.ts` around
lines 41 - 52, The env override check for adapterKey currently ignores
WEB_SEARCH_ADAPTER=ollama because the whitelist only includes
'api','bing','brave','exa'; update the conditional that sets adapterKey (the
envAdapter === ... branch) to also accept 'ollama' so an explicit envAdapter ===
'ollama' returns 'ollama' instead of falling through to getAPIProvider(); keep
the existing fallback to getAPIProvider() and the ternary structure intact.

Comment on lines +25 to +30
const maxResults = Math.min(Math.max(options.numResults ?? 5, 1), 10)
const response = await client.webSearch(
{
query,
max_results: maxResults,
},

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Keep the default numResults aligned with SearchOptions.

SearchOptions.numResults documents a default of 8, but this adapter falls back to 5. When callers omit numResults, Ollama search will under-fetch compared with the shared contract.

Suggested fix
-    const maxResults = Math.min(Math.max(options.numResults ?? 5, 1), 10)
+    const maxResults = Math.min(Math.max(options.numResults ?? 8, 1), 10)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const maxResults = Math.min(Math.max(options.numResults ?? 5, 1), 10)
const response = await client.webSearch(
  {
    query,
    max_results: maxResults,
  },
const maxResults = Math.min(Math.max(options.numResults ?? 8, 1), 10)
const response = await client.webSearch(
  {
    query,
    max_results: maxResults,
  },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts`
around lines 25 - 30, The adapter uses Math.min(Math.max(options.numResults ??
5, 1), 10) which defaults to 5, but SearchOptions documents a default of 8;
update the fallback to use 8 so callers who omit numResults get the expected
default. Modify the expression that computes maxResults (referenced in
ollamaAdapter.ts as maxResults and options.numResults) to use ?? 8 while keeping
the same min/max bounds before calling client.webSearch.

Comment on lines +45 to +47
const payload = (await response.json()) as OllamaWebSearchResponse
const results: SearchResult[] = []
for (const result of payload.results ?? []) {

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's look at the file structure and find the adapter file
git ls-files | grep -i ollama | head -20

Repository: claude-code-best/claude-code

Length of output: 1164


🏁 Script executed:

# Let's read the specific file and examine the lines in question
cat -n packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts | head -60

Repository: claude-code-best/claude-code

Length of output: 2253


🏁 Script executed:

# Look for the OllamaWebSearchResponse type definition
rg "OllamaWebSearchResponse" --type ts -A 5 -B 2

Repository: claude-code-best/claude-code

Length of output: 1779


🏁 Script executed:

cat -n packages/builtin-tools/src/tools/WebSearchTool/__tests__/ollamaAdapter.test.ts | head -100

Repository: claude-code-best/claude-code

Length of output: 1526


Use a type guard to validate payload.results is an array before iteration.

The forced cast on line 45 doesn't validate the response shape at runtime. The nullish coalescing operator payload.results ?? [] only checks for null or undefined, not whether results is actually an array. If the API returns a non-iterable value such as { results: 42 }, the for...of loop will throw before the per-item guards on line 48 execute (a string value would not throw, but would be iterated character by character, which is equally wrong).

Add proper type narrowing to check that results is an array:

const results: SearchResult[] = []
if (Array.isArray(payload.results)) {
  for (const result of payload.results) {
    // existing logic
  }
}

This aligns with the coding guideline: use type guards instead of forced type casting with union types.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/builtin-tools/src/tools/WebSearchTool/adapters/ollamaAdapter.ts`
around lines 45 - 47, The code force-casts the fetch body to
OllamaWebSearchResponse and iterates payload.results without runtime validation;
replace that with a type guard: check Array.isArray(payload.results) before the
for...of loop so you only iterate when payload.results is an array, keeping the
existing per-item guards inside the loop (referencing payload.results,
OllamaWebSearchResponse, and SearchResult) and leaving results: SearchResult[]
initialization unchanged.

Comment on lines +117 to +125
React.useEffect(() => {
  if (modeUpdate) {
    setAppState(prev => ({
      ...prev,
      toolPermissionContext: applyModeUpdate(prev.toolPermissionContext, modeUpdate),
    }));
  }
  onDone(message);
}, [setAppState, message, modeUpdate, onDone]);

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

onDone can be called twice with conflicting messages — add a useRef run-once guard.

The effect's message dependency causes a second firing. After the first setAppState call commits the mode update, ApprovalCommandWithArgs re-renders with the updated toolPermissionContext, and executeApproval now returns { message: "Approval mode is already X.", modeUpdate: undefined }. The changed message prop triggers the effect again, calling onDone a second time with the wrong "already set" text.

Updates queued in the passive-effects phase are deferred to the end of the effects phase, so the setAppState queued in the first firing and any unmount triggered by onDone are processed in the same deferred batch. Whether the unmount beats the re-render of ApprovalCommandWithArgs is not guaranteed, however; it depends on external command-system behaviour this component does not control.

A useRef run-once guard is the idiomatic fix:

🐛 Proposed fix — run-once guard in ApplyApprovalAndClose
 function ApplyApprovalAndClose({
   result,
   onDone,
 }: {
   result: ApprovalCommandResult;
   onDone: (result: string) => void;
 }): React.ReactNode {
   const setAppState = useSetAppState();
   const { message, modeUpdate } = result;
+  const firedRef = React.useRef(false);
   React.useEffect(() => {
+    if (firedRef.current) return;
+    firedRef.current = true;
     if (modeUpdate) {
       setAppState(prev => ({
         ...prev,
         toolPermissionContext: applyModeUpdate(prev.toolPermissionContext, modeUpdate),
       }));
     }
     onDone(message);
   }, [setAppState, message, modeUpdate, onDone]);
   return null;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/approval/approval.tsx` around lines 117 - 125, The effect in
ApplyApprovalAndClose (the React.useEffect that calls setAppState and onDone)
can call onDone twice due to message changing; add a run-once ref guard: create
a ref (e.g., doneCalledRef = useRef(false)) in the component, check if
doneCalledRef.current is false before invoking onDone(message), set
doneCalledRef.current = true immediately after calling onDone, and keep the
existing setAppState/applyModeUpdate logic intact; reference applyModeUpdate,
executeApproval, and ApprovalCommandWithArgs to locate the surrounding code.
