Conversation
Based on the official OpenAI Codex CLI implementation; supports the PKCE flow and manual code entry. The API-key exchange is a non-fatal step, so personal accounts without an organization are supported. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Add the Codex ChatGPT menu item, an OAuth waiting screen, and manual code entry support. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- providers.ts: add codex to the APIProvider type and routing
- provider.ts: /provider codex switching, with a CODEX_API_KEY check
- configs.ts: add the codex field to all 12 model configs
- status.tsx: show Codex API in the status bar
- managedEnvConstants.ts: register the CODEX_* environment variables

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Add the codex API client, stream adaptation, message/tool conversion, and model mapping
- Support dual-auth fallback between CODEX_API_KEY and CODEX_ACCESS_TOKEN
- Integrate into the claude.ts dispatch chain and Langfuse observability
- Include model-mapping unit tests (16 cases)

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- configs.ts: change the codex fields from Anthropic model names to the actual OpenAI model names (opus→gpt-5.4, sonnet→gpt-5.4-mini, haiku→gpt-5.4-mini, opus47→gpt-5.5)
- modelMapping.ts: remove the nonexistent gpt-5.4-nano, fix the haiku mapping, add opus47
- ConsoleOAuthFlow.tsx: show a model-configuration panel after OAuth succeeds, with all three model names editable
- Users who are already logged in and select Codex again skip OAuth and go straight to model configuration
- Ctrl+R shortcut clears the login state and re-runs OAuth
- modelOptions.ts: the codex provider honors the CODEX_DEFAULT_*_MODEL environment variables

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
📝 Walkthrough

This PR adds comprehensive support for OpenAI's Codex (Responses API) as a new provider alongside the existing Anthropic, OpenAI, Gemini, and Grok integrations. It includes OAuth authentication, message/tool conversion, streaming, error handling, and all necessary provider configuration and routing infrastructure.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant ConsoleOAuth as ConsoleOAuthFlow
    participant OAuthSvc as OAuth Service
    participant Browser
    participant OpenAI as OpenAI Auth
    participant API as Codex API
    participant Claude as queryModelCodex
    participant Convert as Convert Module
    participant Stream as Streaming Handler

    User->>ConsoleOAuth: Select Codex provider
    ConsoleOAuth->>OAuthSvc: Start OAuth flow
    OAuthSvc->>Browser: Open authorization URL
    OAuthSvc->>OpenAI: PKCE/state handshake
    Browser->>OpenAI: User authorizes
    OpenAI->>OAuthSvc: Return code + state
    OAuthSvc->>OpenAI: Exchange code for tokens
    OpenAI-->>OAuthSvc: id_token, access_token
    OAuthSvc->>OpenAI: Exchange id_token for API key
    OpenAI-->>OAuthSvc: API key response
    OAuthSvc-->>ConsoleOAuth: OAuth result

    User->>Claude: Send message
    Claude->>Claude: Preflight validation
    Claude->>Convert: Convert messages/tools
    Convert-->>Claude: Codex input format
    Claude->>API: Stream responses
    API-->>Stream: Response chunks
    Stream->>Stream: Build assistant blocks
    Stream-->>Claude: Yield events
    Claude->>Claude: Check if incomplete
    alt Incomplete Response
        Claude->>API: Send continuation (replay + synthesis)
        API-->>Stream: Continue stream
    end
    Claude-->>User: Final AssistantMessage with usage
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Codecov Report: ❌ patch coverage check failed.
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/components/ConsoleOAuthFlow.tsx (1)
20-20: ⚠️ Potential issue | 🔴 Critical: Remove the unused Zod locale import.

Line 20 contains a stray import: `import { fi } from 'zod/v4/locales'`. The `fi` identifier is not used anywhere in the file and may break the build depending on Zod's export map.

Proposed fix

```diff
-import { fi } from 'zod/v4/locales'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ConsoleOAuthFlow.tsx` at line 20, Remove the unused Zod locale import by deleting the stray line importing "fi" from 'zod/v4/locales' in the ConsoleOAuthFlow module; locate the import statement "import { fi } from 'zod/v4/locales'" at the top of the file (near the ConsoleOAuthFlow component) and remove it so the unused identifier is not present and the build/export map issues are avoided.
🧹 Nitpick comments (12)
src/utils/managedEnvConstants.ts (1)
95-108: Duplicate Codex entries in `PROVIDER_MANAGED_ENV_VARS`.

`CODEX_BASE_URL` is listed both at line 35 (under "Endpoint config") and line 99, and `CODEX_API_KEY` at line 48 (under "Auth") and line 100. The Set deduplicates, so behavior is unaffected, but the duplication is misleading during audits; pick one location (the per-provider block at the bottom is the established pattern, see the OPENAI_/GEMINI_ groupings) and remove the strays from the upper sections.

🧹 Proposed cleanup

```diff
   // Provider selection
   'CLAUDE_CODE_USE_BEDROCK',
   'CLAUDE_CODE_USE_VERTEX',
   'CLAUDE_CODE_USE_FOUNDRY',
   'CLAUDE_CODE_USE_GEMINI',
   'CLAUDE_CODE_USE_CODEX',
   // Endpoint config (base URLs, project/resource identifiers)
   'ANTHROPIC_BASE_URL',
   'ANTHROPIC_BEDROCK_BASE_URL',
   'ANTHROPIC_VERTEX_BASE_URL',
   'ANTHROPIC_FOUNDRY_BASE_URL',
   'ANTHROPIC_FOUNDRY_RESOURCE',
   'ANTHROPIC_VERTEX_PROJECT_ID',
   'GEMINI_BASE_URL',
-  'CODEX_BASE_URL',
   // Region routing (per-model VERTEX_REGION_CLAUDE_* handled by prefix below)
   'CLOUD_ML_REGION',
   // Auth
   ...
   'GEMINI_API_KEY',
-  'CODEX_API_KEY',
```

(retain the entries in the per-provider Codex block on lines 99–108)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/managedEnvConstants.ts` around lines 95 - 108, Duplicate Codex env var entries exist in the Set PROVIDER_MANAGED_ENV_VARS: remove the stray CODEX_BASE_URL and CODEX_API_KEY occurrences from the earlier "Endpoint config" and "Auth" sections and keep only the per-provider Codex block entries (CODEX_BASE_URL, CODEX_API_KEY, CODEX_MODEL, CODEX_DEFAULT_HAIKU_MODEL, CODEX_DEFAULT_SONNET_MODEL, CODEX_DEFAULT_OPUS_MODEL, CODEX_IMGBB_API_KEY, CODEX_LOGIN_METHOD, CODEX_ACCESS_TOKEN, CODEX_REFRESH_TOKEN) so the Set remains readable and follows the established per-provider grouping.

src/commands/provider.ts (1)
137-140: Stale comment doesn't reflect the providers actually persisted to settings.json.

The comment block on lines 138–139 still lists only `'anthropic', 'openai', 'gemini'`, but the condition now matches `'anthropic' || 'openai' || 'codex' || 'gemini' || 'grok'`. Update the comment so future readers don't misread the persistent vs env-only split.

📝 Proposed comment update

```diff
   // Handle different provider types
-  // - 'anthropic', 'openai', 'gemini' are stored in settings.json (persistent)
-  // - 'bedrock', 'vertex', 'foundry' are env-only (do NOT touch settings.json)
-  if (arg === 'anthropic' || arg === 'openai' || arg === 'codex' || arg === 'gemini' || arg === 'grok') {
+  // - 'anthropic', 'openai', 'codex', 'gemini', 'grok' are stored in settings.json (persistent)
+  // - 'bedrock', 'vertex', 'foundry' are env-only (do NOT touch settings.json)
+  if (arg === 'anthropic' || arg === 'openai' || arg === 'codex' || arg === 'gemini' || arg === 'grok') {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/provider.ts` around lines 137 - 140, The comment above the provider handling no longer matches the condition in the if statement (if (arg === 'anthropic' || arg === 'openai' || arg === 'codex' || arg === 'gemini' || arg === 'grok')), so update that comment in src/commands/provider.ts to list the exact providers persisted to settings.json (anthropic, openai, codex, gemini, grok) and clarify that bedrock, vertex, foundry remain env-only; adjust the comment text near the provider handling block to reflect this precise persistent vs env-only split.

src/services/api/codex/imageUpload.ts (2)
4-4: Unbounded in-memory cache.

`resolvedImageUrls` is a module-level `Map` that's only ever written to. In long-running sessions where a user pastes many distinct images, this grows without bound (each entry: 32-byte hash key + ImgBB URL string). Bound it with a simple LRU or a max size + FIFO eviction.

🧹 Suggested bound

```diff
-const resolvedImageUrls = new Map<string, string>()
+const MAX_CACHE_ENTRIES = 256
+const resolvedImageUrls = new Map<string, string>()
+
+function setCache(key: string, value: string): void {
+  if (resolvedImageUrls.size >= MAX_CACHE_ENTRIES) {
+    const oldest = resolvedImageUrls.keys().next().value
+    if (oldest !== undefined) resolvedImageUrls.delete(oldest)
+  }
+  resolvedImageUrls.set(key, value)
+}
```
resolvedImageUrls.set(cacheKey, url)on line 130 withsetCache(cacheKey, url).🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/codex/imageUpload.ts` at line 4, resolvedImageUrls is an unbounded module-level Map; replace it with a bounded cache (LRU or FIFO) to avoid memory growth: create a small helper (e.g., setCache/getCache using an LRUMap or a Map with MAX_RESOLVED_IMAGE_URLS and eviction policy) and replace all direct reads/writes to resolvedImageUrls (notably the resolvedImageUrls.set(cacheKey, url) call) with getCache(cacheKey) / setCache(cacheKey, url); ensure the eviction removes oldest or least-recently-used entries and expose a single MAX_RESOLVED_IMAGE_URLS constant to control the bound.
115-132: Add a privacy/exfiltration note at the top of this file.

This module sends every base64 image the model receives to `api.imgbb.com`. While gated by `CODEX_IMGBB_API_KEY` (so it's opt-in), nothing in this file warns future maintainers that user-pasted screenshots leave the user's machine. A short header comment would help, and pairs with the security concern raised on `managedEnvConstants.ts` about `CODEX_IMGBB_API_KEY` belonging in `SAFE_ENV_VARS`.

📝 Suggested header

```diff
+/**
+ * Codex image upload helper.
+ *
+ * ⚠️ Privacy: when CODEX_IMGBB_API_KEY is set, base64 images from the
+ * conversation are POSTed to https://api.imgbb.com/ to obtain a hosted URL
+ * (the Codex/Responses API does not accept raw base64 images directly).
+ * Behavior is opt-in via env var; if the key is unset this module returns
+ * null and the caller falls back to skipping the image block.
+ */
 import { createHash } from 'crypto'
 import { logForDebugging } from '../../../utils/debug.js'
```
Verify each finding against the current code and only fix it if needed. In `@src/services/api/codex/imageUpload.ts` around lines 115 - 132, Add a short header comment at the top of this file that warns maintainers that uploadCodexBase64Image sends base64 image data off-device to api.imgbb.com via uploadToImgbb (controlled only by the CODEX_IMGBB_API_KEY feature flag), notes that user-pasted screenshots may be exfiltrated and to avoid pasting sensitive data, and references the opt-in nature of the API key and store (resolvedImageUrls) caching; place the comment above the function definitions so it's immediately visible to future readers.

packages/@ant/model-provider/src/providers/codex/modelMapping.ts (1)
72-86: Optional: add a JSDoc to `resolveCodexMaxTokens` documenting precedence and the 0/NaN fallback.

`resolveCodexMaxTokens` lacks a docblock, unlike `resolveCodexModel`, and the `parseInt(...) || undefined` idiom silently treats `CODEX_MAX_TOKENS=0` (or any non-numeric value) as "unset" and falls through to `CLAUDE_CODE_MAX_OUTPUT_TOKENS`, then `upperLimit`. Capturing this in a docblock prevents users from being surprised when they set `CODEX_MAX_TOKENS=0` or `CODEX_MAX_TOKENS=foo` and see `upperLimit` instead.

📝 Proposed docblock

```diff
+/**
+ * Resolve the Codex max output tokens.
+ *
+ * Priority:
+ * 1. `maxOutputTokensOverride` argument
+ * 2. `CODEX_MAX_TOKENS` env var (positive integer; 0/NaN/invalid are ignored)
+ * 3. `CLAUDE_CODE_MAX_OUTPUT_TOKENS` env var (positive integer; 0/NaN/invalid are ignored)
+ * 4. `upperLimit` (caller-provided ceiling)
+ */
 export function resolveCodexMaxTokens(
```
Verify each finding against the current code and only fix it if needed. In `packages/@ant/model-provider/src/providers/codex/modelMapping.ts` around lines 72 - 86, Add a JSDoc above resolveCodexMaxTokens that documents the precedence and fallback behavior: maxOutputTokensOverride (explicit argument) takes highest precedence, then the CODEX_MAX_TOKENS env var (parsed with parseInt and using the current parseInt(...) || undefined idiom), then CLAUDE_CODE_MAX_OUTPUT_TOKENS, and finally the upperLimit; explicitly note that the existing parseInt(...) || undefined logic treats 0 or any non-numeric value as "unset" (falls through to the next source), so callers or maintainers understand why CODEX_MAX_TOKENS=0 or invalid values result in upperLimit being used.

src/services/api/codex/errors.ts (1)
80-114: Missing 4xx classification for 400/422.

The status-based branches handle 401/403, 404, and 429 explicitly, but 400 (bad request) and 422 (unprocessable entity) fall through to the generic `unknown` bucket. Both are clearly invalid-request shapes; classifying them as such gives the user a more actionable message and lets retry/back-off code (if any) treat them as non-retriable.

Proposed fix

```diff
   if (status === 404) {
     return {
       content: 'Codex endpoint not found (404). Verify CODEX_BASE_URL points to a Responses API root.',
       error: 'invalid_request',
     }
   }
+
+  if (status === 400 || status === 422) {
+    return {
+      content: `Codex rejected the request (${status}): ${message}`,
+      error: 'invalid_request',
+    }
+  }
```
Verify each finding against the current code and only fix it if needed. In `@src/services/api/codex/errors.ts` around lines 80 - 114, The error mapping currently treats 400 and 422 as unknown errors; update the status-based branches in the same error-normalization function in src/services/api/codex/errors.ts to classify 400 (Bad Request) and 422 (Unprocessable Entity) as invalid requests: add a branch (similar to the 404/429 checks) that returns error: 'invalid_request' with a clear content message like "Codex invalid request (400/422). Check request payload and parameters." Ensure this check runs before the generic 5xx and unknown branches so 400/422 no longer fall through to 'unknown'.

src/services/api/codex/client.ts (1)
42-42: `parseInt` has no NaN guard.

If `API_TIMEOUT_MS` is set to a non-numeric value (e.g. `""` or `"abc"`, which yield `NaN`, or `"10s"`, which parses "fine" but silently truncates to 10), the client can end up with `NaN`, which the OpenAI client may treat as "no timeout". Use `Number.parseInt(...)` with a fallback:

Proposed fix

```diff
-  timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
+  timeout: (() => {
+    const parsed = Number.parseInt(process.env.API_TIMEOUT_MS ?? '', 10)
+    return Number.isFinite(parsed) && parsed > 0 ? parsed : 600_000
+  })(),
```
Verify each finding against the current code and only fix it if needed. In `@src/services/api/codex/client.ts` at line 42, The timeout assignment for the OpenAI client (the timeout property using parseInt on process.env.API_TIMEOUT_MS) can produce NaN for non-numeric env values; change it to parse the env with Number.parseInt (or Number(...)) and provide a fallback default by checking Number.isNaN — e.g., compute a local const timeoutVal = Number.parseInt(process.env.API_TIMEOUT_MS ?? '600000', 10); if Number.isNaN(timeoutVal) set timeoutVal = 600000, then use that timeoutVal for the timeout property so the client never receives NaN.

src/services/api/codex/preflight.ts (1)
21-30: Shallow message validation: content shape not checked.

`sanitizeMessageItem` only verifies that `role` is `'user' | 'assistant'` and that `content` is an array, then returns the original `item` cast to `ResponseInputItem`. Each content part can still have an arbitrary shape, so a malformed `input_text`/`input_image` block will only fail at the OpenAI request site (where the error message is far less actionable than the `Codex preflight:` prefix you've established). Consider asserting that each part is a `Record<string, unknown>` with a known `type` here, mirroring the strictness applied to function/function_call_output items.
Verify each finding against the current code and only fix it if needed. In `@src/services/api/codex/preflight.ts` around lines 21 - 30, sanitizeMessageItem currently only checks role and that content is an array but returns the original item cast to ResponseInputItem, leaving each content part unchecked; update sanitizeMessageItem to iterate over item.content, assert each element is a Record<string, unknown> and that it has a string "type" property, then validate that type is one of the allowed message part types (e.g., 'input_text', 'input_image', etc.) and optionally assert presence of required fields per type (like "text" for input_text or "url"/"bytes" for input_image) before returning the typed ResponseInputItem; reference sanitizeMessageItem, ResponseInputItem and item.content when adding these checks so malformed parts fail fast with a clear "Codex preflight" error.src/services/api/codex/index.ts (1)
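As a sketch of what the stricter per-part check could look like (the helper name `assertMessagePart` and the allowed `type` set are illustrative, not the module's actual API):

```typescript
// Sketch only: a per-part validator mirroring the strictness the review asks
// for. The allowed part types are assumptions about the Responses API shapes.
type ContentPart = Record<string, unknown>

const ALLOWED_PART_TYPES = new Set(['input_text', 'input_image', 'output_text'])

function assertMessagePart(part: unknown, index: number): ContentPart {
  if (typeof part !== 'object' || part === null || Array.isArray(part)) {
    throw new Error(`Codex preflight: content[${index}] is not an object`)
  }
  const record = part as ContentPart
  if (typeof record.type !== 'string' || !ALLOWED_PART_TYPES.has(record.type)) {
    throw new Error(`Codex preflight: content[${index}] has unknown type ${String(record.type)}`)
  }
  // Per-type field checks: input_text must carry a string body.
  if (record.type === 'input_text' && typeof record.text !== 'string') {
    throw new Error(`Codex preflight: content[${index}] input_text is missing a string "text"`)
  }
  return record
}
```

This keeps failures at the preflight boundary, with the established `Codex preflight:` prefix, instead of surfacing as an opaque OpenAI request error.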
src/services/api/codex/index.ts (1)

261-274: PII exposure: `provider: 'codex-chatgpt'` is emitted from env at observation time.
`process.env.CODEX_LOGIN_METHOD === 'chatgpt_subscription' ? 'codex-chatgpt' : 'codex'` correctly reflects the auth path, but Langfuse will record one or the other based on whatever happens to be in `process.env` at that instant. If multiple providers / multi-account workflows mutate env between the request start (`start = Date.now()` at line 140) and the observation here, you can mis-attribute the trace. Capture the value once, near `start`, and reuse it.
Verify each finding against the current code and only fix it if needed. In `@src/services/api/codex/index.ts` around lines 261 - 274, The provider value from process.env.CODEX_LOGIN_METHOD should be captured once at request start and reused when calling recordLLMObservation to avoid PII/mis-attribution if env changes mid-request; locate where start is set (start = Date.now()) and introduce a local const like capturedCodexLoginMethod = process.env.CODEX_LOGIN_METHOD (or computed provider) near that point, then replace the inline ternary in the recordLLMObservation call with the captured variable so recordLLMObservation(model, provider, ...) uses the stable value.packages/@ant/model-provider/src/providers/codex/callIds.ts (1)
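The capture-once pattern can be sketched as (the helper name is hypothetical; the real code would likely inline this next to `start`):

```typescript
// Hypothetical helper: resolve the Langfuse provider label once at request
// start, so later env mutations cannot change how the trace is attributed.
function captureCodexProvider(
  env: Record<string, string | undefined>,
): 'codex-chatgpt' | 'codex' {
  return env.CODEX_LOGIN_METHOD === 'chatgpt_subscription' ? 'codex-chatgpt' : 'codex'
}

// const start = Date.now()
// const provider = captureCodexProvider(process.env)  // capture here...
// ...
// recordLLMObservation(model, provider, /* ... */)    // ...reuse here
```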
packages/@ant/model-provider/src/providers/codex/callIds.ts (1)

10-15: Minor: trailing underscores can survive truncation.

The order `\s+ → _`, then forbidden → `_`, then `_+ → _`, then `slice(...)` is correct, but truncating after the collapse can still leave a trailing `_` at the boundary. Codex accepts it (it's in the allowed character class), so this is non-blocking; just consider a final `.replace(/^_+|_+$/g, '')` and a final empty-check if you'd prefer cleaner IDs.
Verify each finding against the current code and only fix it if needed. In `@packages/`@ant/model-provider/src/providers/codex/callIds.ts around lines 10 - 15, The sanitized ID creation can leave a trailing or leading underscore after truncation; in the expression that builds sanitized (the chain using MAX_CODEX_CALL_ID_LENGTH), append a final trim step to remove leading/trailing underscores (e.g., replace(/^_+|_+$/g, '') ) immediately after slice(...) and then perform an empty-check on the resulting value (in the same scope where sanitized is defined) to provide a fallback or throw if the ID becomes empty; update the code that uses sanitized to rely on this cleaned/fallback value.src/components/ConsoleOAuthFlow.tsx (1)
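A self-contained sketch of the suggested trim plus empty-check (the forbidden-character class, the 64-char limit, and the `'call'` fallback are assumptions, not the module's actual constants):

```typescript
// Sketch of the sanitation order described above, with the suggested final
// trim so truncation cannot leave edge underscores, plus an empty fallback.
const MAX_CODEX_CALL_ID_LENGTH = 64 // illustrative limit

function sanitizeCallId(raw: string): string {
  const sanitized = raw
    .replace(/\s+/g, '_')             // whitespace -> underscore
    .replace(/[^a-zA-Z0-9_-]/g, '_')  // forbidden chars -> underscore (assumed class)
    .replace(/_+/g, '_')              // collapse runs
    .slice(0, MAX_CODEX_CALL_ID_LENGTH)
    .replace(/^_+|_+$/g, '')          // final trim: no leading/trailing underscores
  return sanitized.length > 0 ? sanitized : 'call' // fallback for all-forbidden input
}
```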
src/components/ConsoleOAuthFlow.tsx (1)

359-403: `useCallback` dependency list is wrong for `startCodexOAuth`.

The dependency array is `[onDone]`, but `onDone` is never referenced inside the body. The function does close over `setOAuthStatus`, `setShowCodexPastePrompt`, and `codexManualCodeResolveRef` (all stable), and on `performOpenAICodexLogin`/`logError` (module-level). This is currently harmless because the unused dep doesn't change, but it's misleading and will silently break if any of the (real) closures stops being stable. Remove `onDone` from the deps (or replace with `[]`).
Verify each finding against the current code and only fix it if needed. In `@src/components/ConsoleOAuthFlow.tsx` around lines 359 - 403, The useCallback for startCodexOAuth incorrectly lists onDone in its dependency array even though onDone is unused; update the dependency array to remove onDone and instead supply only the actual dependencies (or an empty array if you confirm setOAuthStatus, setShowCodexPastePrompt, codexManualCodeResolveRef, performOpenAICodexLogin and logError are stable). Specifically, edit the startCodexOAuth useCallback dependency array so it no longer contains onDone (e.g., [] or [setOAuthStatus, setShowCodexPastePrompt, codexManualCodeResolveRef, performOpenAICodexLogin, logError] as appropriate).src/services/api/codex/streaming.ts (1)
src/services/api/codex/streaming.ts (1)

539-564: Compute the blocks once.

`responseToRawAssistantBlocks(candidate)` is called inside `Array.prototype.find` for every candidate, then again twice on lines 560-561 for the chosen response. For a Codex response with many output items this repeats the same work up to 3× per candidate. Compute once and reuse.

♻️ Suggested refactor

```diff
 function selectResponse(
   state: CodexStreamState,
   streamedResponse?: Response,
 ): CodexStreamResult {
-  const response =
-    [streamedResponse, state.finalResponse, state.incompleteResponse, state.failedResponse]
-      .find(
-        candidate =>
-          candidate !== undefined &&
-          responseToRawAssistantBlocks(candidate).length > 0,
-      ) ??
-    streamedResponse ??
-    state.finalResponse ??
-    state.incompleteResponse ??
-    state.failedResponse
-
-  return {
-    response,
-    incompleteResponse: state.incompleteResponse,
-    partialMessage: state.partialMessage,
-    assistantBlocks:
-      response !== undefined && responseToRawAssistantBlocks(response).length > 0
-        ? responseToRawAssistantBlocks(response)
-        : getCompletedAssistantBlocks(state.completedBlocks),
-  }
+  const candidates = [
+    streamedResponse,
+    state.finalResponse,
+    state.incompleteResponse,
+    state.failedResponse,
+  ]
+  let response: Response | undefined
+  let blocks: RawAssistantBlock[] = []
+  for (const candidate of candidates) {
+    if (!candidate) continue
+    const candidateBlocks = responseToRawAssistantBlocks(candidate)
+    if (candidateBlocks.length > 0) {
+      response = candidate
+      blocks = candidateBlocks
+      break
+    }
+  }
+  response ??=
+    streamedResponse ?? state.finalResponse ?? state.incompleteResponse ?? state.failedResponse
+
+  return {
+    response,
+    incompleteResponse: state.incompleteResponse,
+    partialMessage: state.partialMessage,
+    assistantBlocks:
+      blocks.length > 0 ? blocks : getCompletedAssistantBlocks(state.completedBlocks),
+  }
 }
```
Verify each finding against the current code and only fix it if needed. In `@src/services/api/codex/streaming.ts` around lines 539 - 564, In selectResponse, avoid calling responseToRawAssistantBlocks repeatedly: build an ordered list of candidates [streamedResponse, state.finalResponse, state.incompleteResponse, state.failedResponse], map each candidate to its computed blocks once (using responseToRawAssistantBlocks), pick the first candidate whose computed blocks.length > 0 as response (falling back to the original null-coalescing order if none match), and then reuse the cached blocks for assistantBlocks (otherwise call getCompletedAssistantBlocks(state.completedBlocks)); keep other return fields (incompleteResponse, partialMessage) unchanged and preserve the same candidate order and fallback behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `packages/@ant/model-provider/src/providers/codex/__tests__/modelMapping.test.ts`:
- Around line 5-21: The afterEach restores environment by calling
Object.assign(process.env, originalEnv) which turns undefined into the string
"undefined"; change afterEach to iterate the keys of originalEnv (the variables
captured in originalEnv: CODEX_MODEL, CODEX_DEFAULT_HAIKU_MODEL,
CODEX_DEFAULT_SONNET_MODEL, CODEX_DEFAULT_OPUS_MODEL) and for each key do: if
originalEnv[key] === undefined then delete process.env[key] else set
process.env[key] = originalEnv[key]; keep beforeEach and originalEnv as-is and
apply this logic inside the afterEach block so tests don’t get polluted.
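The restore logic described above can be sketched as (the standalone helper is illustrative; the real fix lives directly inside the `afterEach` block):

```typescript
// Sketch: restore an env snapshot where `undefined` means "was unset".
// Object.assign(process.env, originalEnv) would stringify undefined to
// "undefined"; deleting the key instead keeps the env truly unset.
function restoreEnv(
  env: Record<string, string | undefined>,
  snapshot: Record<string, string | undefined>,
): void {
  for (const key of Object.keys(snapshot)) {
    const value = snapshot[key]
    if (value === undefined) {
      delete env[key] // variable was unset before the test: remove it again
    } else {
      env[key] = value // variable had a value: put it back
    }
  }
}

// afterEach(() => restoreEnv(process.env, originalEnv))
```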
In `packages/@ant/model-provider/src/providers/codex/convertMessages.ts`:
- Around line 244-253: resolveToolResultCallId can return null for empty/invalid
tool_use.id because resolveAssistantCallId stores a hash fallback only when
originalId.length > 0, leaving byOriginalId without a mapping and causing
tool_result/function_call_output to be dropped; fix by ensuring assistant→callId
mapping is always recorded for the assistant call even when originalId is empty
(i.e., remove or adjust the originalId.length guard in resolveAssistantCallId so
it always sets state.byOriginalId with a stable surrogate key), and update
resolveToolResultCallId to also accept that surrogate (or fallback to looking up
by positional index) so tool_result entries can resolve to the same callId
rather than relying on normalizeCodexCallId returning null.
In `packages/@ant/model-provider/src/providers/codex/convertTools.ts`:
- Around line 20-25: The current guard in the tools.flatMap callback only
excludes two specific value.type strings and then falls back to
!isClientFunctionTool(tool), which lets other Anthropic server-side tools slip
through; update this to broadly detect and skip all Anthropic server-side tools
instead of hardcoding 'advisor_20260301' and 'computer_20250124' — e.g., replace
the explicit value.type checks with a single predicate that checks
Anthropic-specific metadata (such as value.provider === 'anthropic' or a
server-side flag like value.serverSide === true or similar schema fields
available) and short-circuits conversion for any tool matching that predicate
while still using isClientFunctionTool(tool) for client-side determination.
Ensure the change is made in the same callback where value is cast and
isClientFunctionTool is called so all Anthropic server tools are filtered out.
In `@src/components/ConsoleOAuthFlow.tsx`:
- Around line 1664-1693: The shortcut comment, the displayed hint and the actual
binding for the relogin shortcut are inconsistent: reconcile them by making the
useKeybinding call, the inline comment above it, and the footer hint string use
the same key (either Ctrl+D or Ctrl+R). Locate the useKeybinding invocation for
'oauth:codex-relogin' and either change its bound key to match the footer hint
or update the footer hint and the inline comment to match the current binding;
ensure the text in the footer (the "Ctrl+R to re-login" string) and the inline
comment above useKeybinding both reflect the chosen key so all three
(useKeybinding('oauth:codex-relogin', ...), the comment, and the footer hint)
are identical.
- Around line 230-242: The handleCodexPasteSubmit handler sets OAuth error state
using an unsafe cast and an incorrect retry target; replace the `toRetry:
oauthStatus as any` with a correctly typed OAuthStatus value (no `as any`) and
set the retry target to a state that will actually restart the flow (e.g.,
'codex_oauth_start' or 'idle') so retry triggers startCodexOAuth (which checks
pendingCodexOAuthRef) instead of leaving the UI stuck on 'codex_oauth_waiting';
update the setOAuthStatus call in handleCodexPasteSubmit and ensure the
OAuthStatus union type is used directly or narrowed to the chosen literal.
- Around line 1571-1606: The env object currently includes keys with undefined
values and is passed into updateSettingsForSource using `as any` (also
`modelType: 'codex' as any`); remove any `as any` assertions and drop keys with
undefined values before calling updateSettingsForSource. Concretely: build env
from codexResult but filter out entries where value is null/undefined (so
CODEX_API_KEY is omitted when absent), replace the `as any` usages by a safe
cast like `as unknown as <appropriate SettingsUpdate type>` or extend the
settings type to accept optional env keys and the 'codex' modelType, and then
call updateSettingsForSource('userSettings', { modelType: 'codex', env } ) using
that proper type; keep the existing process.env[k] = v loop unchanged.
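The undefined-dropping step can be sketched as (the helper name is illustrative; the actual `codexResult` fields are not shown here):

```typescript
// Sketch: build the env object to persist without undefined/null values, so
// e.g. CODEX_API_KEY is simply omitted when the key exchange did not happen.
function buildCodexEnv(
  entries: Record<string, string | null | undefined>,
): Record<string, string> {
  const env: Record<string, string> = {}
  for (const [key, value] of Object.entries(entries)) {
    if (typeof value === 'string' && value.length > 0) {
      env[key] = value // only concrete string values survive
    }
  }
  return env
}
```

With the result typed as `Record<string, string>`, the `updateSettingsForSource` call no longer needs an `as any` to paper over `undefined` values.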
In `@src/keybindings/defaultBindings.ts`:
- Around line 159-160: The new 'ctrl+r' default is scoped to FormField but the
keybinding schema doesn't include FormField, so users can't remap/unbind it;
update the keybinding schema (the KEYBINDING_CONTEXTS definition in
src/keybindings/schema.ts) to include 'FormField' (and any related type/enum
unions or exported types used by keybindings.json validation), adjust any
TypeScript types or JSON schema generation that reference KEYBINDING_CONTEXTS so
'FormField' is a valid context, and run/adjust tests that validate keybinding
parsing to ensure keybindings.json can now override or clear the
'oauth:codex-relogin' binding defined in defaultBindings.ts.
In `@src/services/api/claude.ts`:
- Around line 1350-1353: The Codex branch is calling queryModelCodex with
filteredTools which has already dropped undiscovered deferred tools, so Codex
never sees tools marked defer_loading; change the call in the getAPIProvider()
=== 'codex' branch to pass the full tools pool (the original tools variable used
earlier in this module) instead of filteredTools when invoking
queryModelCodex(messagesForAPI, systemPrompt, tools, signal, options) so the
Codex adapter can perform tool-search/dynamic-loading the same way as the OpenAI
path; ensure the symbol queryModelCodex is used and that convertTools.ts
deferred-loading behavior is preserved by passing the unfiltered tools list.
In `@src/services/api/codex/client.ts`:
- Around line 8-53: getCodexClient() caches an OpenAI instance which captures
API credentials at construction, so after the re-login flow in
ConsoleOAuthFlow.tsx clears environment credentials the cached client keeps
using stale tokens; call the existing clearCodexClientCache() from the re-login
handler (ConsoleOAuthFlow.tsx re-login block) immediately after clearing
process.env and settings to invalidate the cached OpenAI, or alternatively
modify getCodexClient() to key its cache by the credential tuple (e.g., apiKey +
baseURL) so a credential change automatically creates a new client; reference
getCodexClient, clearCodexClientCache, and the re-login handler in
ConsoleOAuthFlow.tsx when implementing the fix.
In `@src/services/api/codex/imageUpload.ts`:
- Around line 43-59: The function pickImgbbImageUrl currently prefers downscaled
variants (medium/thumb) before full-size URLs; either add a concise comment
above pickImgbbImageUrl documenting that choice and the tradeoff (e.g., to limit
token cost), or change the candidate preference to favor highest-fidelity images
by reordering the candidates array so payload.data.image.url, payload.data.url,
and payload.data.display_url are checked before medium and thumb; update the
function comment accordingly to reflect the chosen behavior.
In `@src/services/api/codex/index.ts`:
- Line 257: Replace the unsafe "as any" cast on assistantMessage.message.usage
by asserting the correct type: ensure totalUsage is converted to the expected
usage type (e.g., CodexUsage or the local UsageType) using a proper typed
assertion like "as unknown as <UsageType>" or by mapping/extending the local
type so you can use "as CodexUsage"; update the assignment to use that typed
assertion (references: assistantMessage.message.usage, totalUsage, CodexUsage).
- Around line 199-220: The loop that consumes attemptStream (using
attemptStream.next() and variables attemptResult/partialMessage/finalResponse)
currently does "if (!attemptResult?.response) { continue }" which causes the
same replayMessages payload to be resent repeatedly; change that continue to a
break so the outer continuation loop stops and error handling can surface
immediately when streamCodexAttempt returns a missing response (avoid burning
MAX_CODEX_CONTINUATIONS retries). Ensure the change is applied in the block that
checks attemptResult?.response and references attemptStream, attemptResult,
streamCodexAttempt, replayMessages, and MAX_CODEX_CONTINUATIONS.
In `@src/services/api/codex/streaming.ts`:
- Around line 86-87: The code uses pervasive `as any` casts (e.g., the `usage`
assignment using getCodexUsage(response) and multiple `event` assignments in
this file) which violates the guideline; replace each `as any` with a proper
typed solution by importing or declaring the raw stream-event union from the
Anthropic SDK (suggest name RawMessageStreamEvent) and casting via `as unknown
as RawMessageStreamEvent` for the `event` shapes, cast `usage` as `as unknown as
Anthropic.Usage` (or change getCodexUsage to return that type), and replace
literal `stop_reason` casts with `as unknown as Message['stop_reason']` or
adjust the Message type so the compiler infers it—apply these changes for the
identifiers referenced in this file (getCodexUsage, event assignments around the
noted lines, and any stop_reason usages) to remove all `as any` in production
code.
- Around line 47-57: Fix the null-pointer risk in getCodexUsage by guarding
access to usage.input_tokens_details before reading cached_tokens (use optional
chaining or a safe fallback so
response?.usage?.input_tokens_details?.cached_tokens ?? 0), and update
rawAssistantBlocksToAssistantMessage to reuse getCodexUsage instead of
duplicating the logic so both places compute cache_read_input_tokens
consistently and safely.
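The guarded extraction could look like this minimal sketch; the `CodexResponseLike` shape here covers only the fields named in the comment, while the real Responses API usage object has more.

```typescript
// Minimal sketch of the guarded usage extraction. Optional chaining on
// input_tokens_details prevents the null-pointer risk described above.
interface CodexResponseLike {
  usage?: {
    input_tokens?: number
    output_tokens?: number
    input_tokens_details?: { cached_tokens?: number }
  }
}

function getCodexUsage(response: CodexResponseLike | undefined) {
  return {
    input_tokens: response?.usage?.input_tokens ?? 0,
    output_tokens: response?.usage?.output_tokens ?? 0,
    // Safe even when input_tokens_details is absent entirely.
    cache_read_input_tokens: response?.usage?.input_tokens_details?.cached_tokens ?? 0,
  }
}
```

Having `rawAssistantBlocksToAssistantMessage` call this same helper keeps both code paths computing `cache_read_input_tokens` identically.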
In `@src/services/oauth/openai-codex.ts`:
- Around line 163-168: The callback currently processes and renders the OAuth
`error` (via ERROR_HTML) before verifying the OAuth `state` and injects
`error_description` directly into the HTML; update the handler to first validate
the incoming `state` against the expected value and only then handle any
`error`, and ensure ERROR_HTML (or the value passed into it) HTML-escapes the
message to prevent markup/script injection; apply the same order and escaping
changes to the other error-handling block referenced around the same callback
(the code handling `error_description` at the 197-210 area) so both error paths
validate state first and escape user-provided text.
- Around line 315-326: The manual fallback fails because openBrowser(authUrl) is
awaited before manualCode is consumed and manual inputs aren't parsed; change
the flow so openBrowser is launched but not awaited (start it non-blocking) and
start consuming manualCode immediately so the Promise.race between
server.waitForCode() and manual input can win even if openBrowser fails;
additionally, before calling exchangeCodeForTokens(), parse the manualCode
string to extract the actual authorization code (handle full URLs with
?code=..., fragment formats like `#code=...` or `/?code=...`, and inputs containing
code#state) and pass the extracted code to exchangeCodeForTokens(); update logic
around openBrowser, manualCode, server.waitForCode, and exchangeCodeForTokens
accordingly and handle/openBrowser errors without preventing manual fallback.
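A parsing helper along these lines would cover the input shapes listed above. This is a sketch, and `extractAuthCode` is a hypothetical name; the real implementation may accept additional formats.

```typescript
// Extract an authorization code from whatever the user pastes: a full
// redirect URL with ?code=..., a fragment form like #code=..., or a bare
// "code" / "code#state" string. Returns null when nothing usable is found.
function extractAuthCode(input: string): string | null {
  const trimmed = input.trim()
  if (trimmed.length === 0) return null

  // URL or fragment forms: ?code=..., #code=..., &code=...
  const match = trimmed.match(/[?#&]code=([^&#\s]+)/)
  if (match) return match[1] ?? null

  // Looks like a URL but carries no code parameter.
  if (trimmed.includes('://')) return null

  // Bare code, possibly suffixed with "#state".
  const bare = trimmed.split('#')[0]
  return bare !== undefined && bare.length > 0 ? bare : null
}
```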
In `@src/utils/managedEnvConstants.ts`:
- Around line 233-237: Remove CODEX_IMGBB_API_KEY from the SAFE_ENV_VARS
allowlist so project-scoped settings cannot inject it into process.env; keep it
(or add it) only in PROVIDER_MANAGED_ENV_VARS so the key must come from a
trusted source (env/policy) rather than project settings. Update the array that
defines SAFE_ENV_VARS to exclude 'CODEX_IMGBB_API_KEY' and verify
PROVIDER_MANAGED_ENV_VARS includes it; also consider checking
src/services/api/codex/imageUpload.ts to ensure it still reads the key from
process.env as intended.
In `@src/utils/settings/types.ts`:
- Around line 374-376: The settings description for the API provider (the string
passed to .describe in types.ts) omits the ChatGPT subscription OAuth token;
update that blurb to include CODEX_ACCESS_TOKEN (and optionally
CODEX_REFRESH_TOKEN) as valid env vars for the "codex" provider so it matches
the logic in provider.ts (which checks CODEX_API_KEY || CODEX_ACCESS_TOKEN) and
the entries in managedEnvConstants.ts; edit the .describe text to mention
CODEX_ACCESS_TOKEN as the ChatGPT subscription path alongside CODEX_API_KEY and
CODEX_BASE_URL.
---
Outside diff comments:
In `@src/components/ConsoleOAuthFlow.tsx`:
- Line 20: Remove the unused Zod locale import by deleting the stray line
importing "fi" from 'zod/v4/locales' in the ConsoleOAuthFlow module; locate the
import statement "import { fi } from 'zod/v4/locales'" at the top of the file
(near the ConsoleOAuthFlow component) and remove it so the unused identifier is
not present and the build/export map issues are avoided.
---
Nitpick comments:
In `@packages/@ant/model-provider/src/providers/codex/callIds.ts`:
- Around line 10-15: The sanitized ID creation can leave a trailing or leading
underscore after truncation; in the expression that builds sanitized (the chain
using MAX_CODEX_CALL_ID_LENGTH), append a final trim step to remove
leading/trailing underscores (e.g., replace(/^_+|_+$/g, '') ) immediately after
slice(...) and then perform an empty-check on the resulting value (in the same
scope where sanitized is defined) to provide a fallback or throw if the ID
becomes empty; update the code that uses sanitized to rely on this
cleaned/fallback value.
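The suggested trim-and-fallback step could be sketched as follows. The sanitation regex and the `MAX_CODEX_CALL_ID_LENGTH` value are assumptions here; only the added trailing trim and empty-check mirror the suggestion above.

```typescript
// Hypothetical constant; the real value lives in callIds.ts.
const MAX_CODEX_CALL_ID_LENGTH = 40

// Sanitize, truncate, then trim underscores that sanitation or truncation may
// have left at either end; fall back to a stable placeholder when empty.
function sanitizeCallId(raw: string): string {
  const sanitized = raw
    .replace(/[^a-zA-Z0-9_-]/g, '_')
    .slice(0, MAX_CODEX_CALL_ID_LENGTH)
    .replace(/^_+|_+$/g, '')
  return sanitized.length > 0 ? sanitized : 'call_id_fallback'
}
```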
In `@packages/@ant/model-provider/src/providers/codex/modelMapping.ts`:
- Around line 72-86: Add a JSDoc above resolveCodexMaxTokens that documents the
precedence and fallback behavior: maxOutputTokensOverride (explicit argument)
takes highest precedence, then the CODEX_MAX_TOKENS env var (parsed with
parseInt and using the current parseInt(...) || undefined idiom), then
CLAUDE_CODE_MAX_OUTPUT_TOKENS, and finally the upperLimit; explicitly note that
the existing parseInt(...) || undefined logic treats 0 or any non-numeric value
as "unset" (falls through to the next source), so callers or maintainers
understand why CODEX_MAX_TOKENS=0 or invalid values result in upperLimit being
used.
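The documented precedence chain can be sketched as a pure function. This is illustrative, not the actual `modelMapping.ts` implementation; `env` is passed explicitly here for testability.

```typescript
// Precedence: explicit override > CODEX_MAX_TOKENS > CLAUDE_CODE_MAX_OUTPUT_TOKENS
// > upperLimit. The parseInt(...) || undefined idiom treats 0 and any
// non-numeric value as "unset", falling through to the next source.
function resolveCodexMaxTokens(
  maxOutputTokensOverride: number | undefined,
  upperLimit: number,
  env: Record<string, string | undefined>,
): number {
  return (
    maxOutputTokensOverride ??
    (parseInt(env.CODEX_MAX_TOKENS ?? '', 10) || undefined) ??
    (parseInt(env.CLAUDE_CODE_MAX_OUTPUT_TOKENS ?? '', 10) || undefined) ??
    upperLimit
  )
}
```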
In `@src/commands/provider.ts`:
- Around line 137-140: The comment above the provider handling no longer matches
the condition in the if statement (if (arg === 'anthropic' || arg === 'openai'
|| arg === 'codex' || arg === 'gemini' || arg === 'grok')), so update that
comment in src/commands/provider.ts to list the exact providers persisted to
settings.json (anthropic, openai, codex, gemini, grok) and clarify that bedrock,
vertex, foundry remain env-only; adjust the comment text near the provider
handling block to reflect this precise persistent vs env-only split.
In `@src/components/ConsoleOAuthFlow.tsx`:
- Around line 359-403: The useCallback for startCodexOAuth incorrectly lists
onDone in its dependency array even though onDone is unused; update the
dependency array to remove onDone and instead supply only the actual
dependencies (or an empty array if you confirm setOAuthStatus,
setShowCodexPastePrompt, codexManualCodeResolveRef, performOpenAICodexLogin and
logError are stable). Specifically, edit the startCodexOAuth useCallback
dependency array so it no longer contains onDone (e.g., [] or [setOAuthStatus,
setShowCodexPastePrompt, codexManualCodeResolveRef, performOpenAICodexLogin,
logError] as appropriate).
In `@src/services/api/codex/client.ts`:
- Line 42: The timeout assignment for the OpenAI client (the timeout property
using parseInt on process.env.API_TIMEOUT_MS) can produce NaN for non-numeric
env values; change it to parse the env with Number.parseInt (or Number(...)) and
provide a fallback default by checking Number.isNaN — e.g., compute a local
const timeoutVal = Number.parseInt(process.env.API_TIMEOUT_MS ?? '600000', 10);
if Number.isNaN(timeoutVal) set timeoutVal = 600000, then use that timeoutVal
for the timeout property so the client never receives NaN.
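The guarded parse described above can be isolated into a small helper, shown here as a sketch with a hypothetical name:

```typescript
// Guarded timeout parsing: a non-numeric API_TIMEOUT_MS value falls back to
// the 600000 ms default instead of handing the OpenAI client NaN.
function resolveTimeoutMs(raw: string | undefined): number {
  const parsed = Number.parseInt(raw ?? '600000', 10)
  return Number.isNaN(parsed) ? 600000 : parsed
}
```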
In `@src/services/api/codex/errors.ts`:
- Around line 80-114: The error mapping currently treats 400 and 422 as unknown
errors; update the status-based branches in the same error-normalization
function in src/services/api/codex/errors.ts to classify 400 (Bad Request) and
422 (Unprocessable Entity) as invalid requests: add a branch (similar to the
404/429 checks) that returns error: 'invalid_request' with a clear content
message like "Codex invalid request (400/422). Check request payload and
parameters." Ensure this check runs before the generic 5xx and unknown branches
so 400/422 no longer fall through to 'unknown'.
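The added branch could be ordered like this sketch. The error-kind union and message strings are assumptions modeled on the 404/429 checks described above, not the actual `errors.ts` types.

```typescript
// Illustrative status classification; 400/422 are handled before the generic
// 5xx and unknown branches so they map to invalid_request.
type CodexErrorKind = 'invalid_request' | 'not_found' | 'rate_limit' | 'server_error' | 'unknown'

function classifyCodexStatus(status: number): { error: CodexErrorKind; content: string } {
  if (status === 400 || status === 422) {
    return {
      error: 'invalid_request',
      content: 'Codex invalid request (400/422). Check request payload and parameters.',
    }
  }
  if (status === 404) return { error: 'not_found', content: 'Codex endpoint not found (404).' }
  if (status === 429) return { error: 'rate_limit', content: 'Codex rate limited (429).' }
  if (status >= 500) return { error: 'server_error', content: `Codex server error (${status}).` }
  return { error: 'unknown', content: `Codex unknown error (${status}).` }
}
```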
In `@src/services/api/codex/imageUpload.ts`:
- Line 4: resolvedImageUrls is an unbounded module-level Map; replace it with a
bounded cache (LRU or FIFO) to avoid memory growth: create a small helper (e.g.,
setCache/getCache using an LRUMap or a Map with MAX_RESOLVED_IMAGE_URLS and
eviction policy) and replace all direct reads/writes to resolvedImageUrls
(notably the resolvedImageUrls.set(cacheKey, url) call) with getCache(cacheKey)
/ setCache(cacheKey, url); ensure the eviction removes oldest or
least-recently-used entries and expose a single MAX_RESOLVED_IMAGE_URLS constant
to control the bound.
- Around line 115-132: Add a short header comment at the top of this file that
warns maintainers that uploadCodexBase64Image sends base64 image data off‑device
to api.imgbb.com via uploadToImgbb (controlled only by the CODEX_IMGBB_API_KEY
feature flag), notes that user-pasted screenshots may be exfiltrated and to
avoid pasting sensitive data, and references the opt‑in nature of the API key
and store (resolvedImageUrls) caching; place the comment above the function
definitions so it’s immediately visible to future readers.
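The bounded-cache suggestion in the first point above could be sketched with a Map-backed LRU, relying on Map's insertion-order iteration. `MAX_RESOLVED_IMAGE_URLS` is a hypothetical value.

```typescript
// Bounded LRU built on Map's insertion order: reads refresh recency,
// writes evict the least-recently-used entry once past the cap.
const MAX_RESOLVED_IMAGE_URLS = 100
const resolvedImageUrls = new Map<string, string>()

function getCache(key: string): string | undefined {
  const value = resolvedImageUrls.get(key)
  if (value !== undefined) {
    // Re-insert so this key becomes the most recently used.
    resolvedImageUrls.delete(key)
    resolvedImageUrls.set(key, value)
  }
  return value
}

function setCache(key: string, url: string): void {
  if (resolvedImageUrls.has(key)) resolvedImageUrls.delete(key)
  resolvedImageUrls.set(key, url)
  if (resolvedImageUrls.size > MAX_RESOLVED_IMAGE_URLS) {
    const oldest = resolvedImageUrls.keys().next().value
    if (oldest !== undefined) resolvedImageUrls.delete(oldest)
  }
}
```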
In `@src/services/api/codex/index.ts`:
- Around line 261-274: The provider value from process.env.CODEX_LOGIN_METHOD
should be captured once at request start and reused when calling
recordLLMObservation to avoid PII/mis-attribution if env changes mid-request;
locate where start is set (start = Date.now()) and introduce a local const like
capturedCodexLoginMethod = process.env.CODEX_LOGIN_METHOD (or computed provider)
near that point, then replace the inline ternary in the recordLLMObservation
call with the captured variable so recordLLMObservation(model, provider, ...)
uses the stable value.
In `@src/services/api/codex/preflight.ts`:
- Around line 21-30: sanitizeMessageItem currently only checks role and that
content is an array but returns the original item cast to ResponseInputItem,
leaving each content part unchecked; update sanitizeMessageItem to iterate over
item.content, assert each element is a Record<string, unknown> and that it has a
string "type" property, then validate that type is one of the allowed message
part types (e.g., 'input_text', 'input_image', etc.) and optionally assert
presence of required fields per type (like "text" for input_text or
"url"/"bytes" for input_image) before returning the typed ResponseInputItem;
reference sanitizeMessageItem, ResponseInputItem and item.content when adding
these checks so malformed parts fail fast with a clear "Codex preflight" error.
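A per-part validation pass could look like the following sketch. The allowed type list and required fields are assumptions based on the Responses API part names mentioned above; the real preflight may accept more types.

```typescript
// Illustrative content-part validation: each part must be an object with a
// string "type" drawn from an allowlist, plus per-type required fields.
type MessagePart = { type: string } & Record<string, unknown>

const ALLOWED_PART_TYPES = new Set(['input_text', 'input_image'])

function sanitizeMessageParts(content: unknown): MessagePart[] {
  if (!Array.isArray(content)) {
    throw new Error('Codex preflight: message content must be an array')
  }
  return content.map((part, index) => {
    if (typeof part !== 'object' || part === null) {
      throw new Error(`Codex preflight: content part ${index} is not an object`)
    }
    const record = part as Record<string, unknown>
    if (typeof record.type !== 'string' || !ALLOWED_PART_TYPES.has(record.type)) {
      throw new Error(`Codex preflight: content part ${index} has unsupported type`)
    }
    if (record.type === 'input_text' && typeof record.text !== 'string') {
      throw new Error(`Codex preflight: input_text part ${index} missing text`)
    }
    return record as MessagePart
  })
}
```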
In `@src/services/api/codex/streaming.ts`:
- Around line 539-564: In selectResponse, avoid calling
responseToRawAssistantBlocks repeatedly: build an ordered list of candidates
[streamedResponse, state.finalResponse, state.incompleteResponse,
state.failedResponse], map each candidate to its computed blocks once (using
responseToRawAssistantBlocks), pick the first candidate whose computed
blocks.length > 0 as response (falling back to the original null-coalescing
order if none match), and then reuse the cached blocks for assistantBlocks
(otherwise call getCompletedAssistantBlocks(state.completedBlocks)); keep other
return fields (incompleteResponse, partialMessage) unchanged and preserve the
same candidate order and fallback behavior.
In `@src/utils/managedEnvConstants.ts`:
- Around line 95-108: Duplicate Codex env var entries exist in the Set
PROVIDER_MANAGED_ENV_VARS: remove the stray CODEX_BASE_URL and CODEX_API_KEY
occurrences from the earlier "Endpoint config" and "Auth" sections and keep only
the per-provider Codex block entries (CODEX_BASE_URL, CODEX_API_KEY,
CODEX_MODEL, CODEX_DEFAULT_HAIKU_MODEL, CODEX_DEFAULT_SONNET_MODEL,
CODEX_DEFAULT_OPUS_MODEL, CODEX_IMGBB_API_KEY, CODEX_LOGIN_METHOD,
CODEX_ACCESS_TOKEN, CODEX_REFRESH_TOKEN) so the Set remains readable and follows
the established per-provider grouping.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 68c57a2a-9c98-4bd8-92f2-2072c953b153
📒 Files selected for processing (26)
- `packages/@ant/model-provider/src/index.ts`
- `packages/@ant/model-provider/src/providers/codex/__tests__/modelMapping.test.ts`
- `packages/@ant/model-provider/src/providers/codex/callIds.ts`
- `packages/@ant/model-provider/src/providers/codex/convertMessages.ts`
- `packages/@ant/model-provider/src/providers/codex/convertTools.ts`
- `packages/@ant/model-provider/src/providers/codex/modelMapping.ts`
- `src/commands/provider.ts`
- `src/components/ConsoleOAuthFlow.tsx`
- `src/keybindings/defaultBindings.ts`
- `src/keybindings/schema.ts`
- `src/services/api/claude.ts`
- `src/services/api/codex/client.ts`
- `src/services/api/codex/errors.ts`
- `src/services/api/codex/imageUpload.ts`
- `src/services/api/codex/index.ts`
- `src/services/api/codex/preflight.ts`
- `src/services/api/codex/streaming.ts`
- `src/services/langfuse/tracing.ts`
- `src/services/oauth/__tests__/openai-codex.test.ts`
- `src/services/oauth/openai-codex.ts`
- `src/utils/managedEnvConstants.ts`
- `src/utils/model/configs.ts`
- `src/utils/model/modelOptions.ts`
- `src/utils/model/providers.ts`
- `src/utils/settings/types.ts`
- `src/utils/status.tsx`
```ts
const originalEnv = {
  CODEX_MODEL: process.env.CODEX_MODEL,
  CODEX_DEFAULT_HAIKU_MODEL: process.env.CODEX_DEFAULT_HAIKU_MODEL,
  CODEX_DEFAULT_SONNET_MODEL: process.env.CODEX_DEFAULT_SONNET_MODEL,
  CODEX_DEFAULT_OPUS_MODEL: process.env.CODEX_DEFAULT_OPUS_MODEL,
}

beforeEach(() => {
  delete process.env.CODEX_MODEL
  delete process.env.CODEX_DEFAULT_HAIKU_MODEL
  delete process.env.CODEX_DEFAULT_SONNET_MODEL
  delete process.env.CODEX_DEFAULT_OPUS_MODEL
})

afterEach(() => {
  Object.assign(process.env, originalEnv)
})
```
🧩 Analysis chain
🌐 Web query:
Does Node.js / Bun coerce undefined to the string "undefined" when assigning to process.env, or does it delete the key?
💡 Result:
In both Node.js and Bun, assigning undefined to a key in process.env coerces it to the string "undefined", rather than deleting the key. To delete a key, use delete process.env[key].

For Node.js:
- Official documentation explicitly states: "Assigning a property on process.env will implicitly convert the value to a string... env.test = undefined; console.log(env.test); // => 'undefined' Use delete to delete a property from process.env." This behavior is deprecated but maintained for compatibility.
- Confirmed in GitHub issue #48875 and multiple Stack Overflow answers.

For Bun:
- Bun aims for Node.js compatibility on process.env. Issues show historical bugs with assignment and delete (e.g., #4383 fixed in 1.0.15, #6762 fixed in 1.0.15), but no evidence of differing behavior on undefined coercion.
- Bun docs show programmatic assignment like `process.env.FOO = "hello";` and TypeScript typing as `string | undefined`, consistent with Node.js string coercion.
- Recent issues (up to 2026) focus on inlining/build behavior, not runtime assignment differing from Node.js.

Citations:
- 1: "Setting `undefined` to `process.env.*` sets a `"undefined"` string value" nodejs/node#48875
- 2: https://nodejs.org/docs/latest/api/process.html
- 3: https://bun.com/docs/runtime/env
- 4: "Assigning to `process.env` does not overwrite value" oven-sh/bun#6762
- 5: "`delete process.env.XXX` not work" oven-sh/bun#4383
Fix afterEach to properly delete undefined env vars instead of coercing to the string "undefined".
When restoring environment variables, Object.assign(process.env, originalEnv) coerces undefined values to the string "undefined" instead of deleting the keys. This pollutes subsequent tests: after this suite runs, any test importing this module will see process.env.CODEX_MODEL === "undefined", causing resolveCodexModel(...) to return the string "undefined" for every call.
Explicitly delete keys whose original value was undefined:
🔧 Proposed fix
```diff
 afterEach(() => {
-  Object.assign(process.env, originalEnv)
+  for (const [key, value] of Object.entries(originalEnv)) {
+    if (value === undefined) {
+      delete process.env[key]
+    } else {
+      process.env[key] = value
+    }
+  }
 })
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 const originalEnv = {
   CODEX_MODEL: process.env.CODEX_MODEL,
   CODEX_DEFAULT_HAIKU_MODEL: process.env.CODEX_DEFAULT_HAIKU_MODEL,
   CODEX_DEFAULT_SONNET_MODEL: process.env.CODEX_DEFAULT_SONNET_MODEL,
   CODEX_DEFAULT_OPUS_MODEL: process.env.CODEX_DEFAULT_OPUS_MODEL,
 }

 beforeEach(() => {
   delete process.env.CODEX_MODEL
   delete process.env.CODEX_DEFAULT_HAIKU_MODEL
   delete process.env.CODEX_DEFAULT_SONNET_MODEL
   delete process.env.CODEX_DEFAULT_OPUS_MODEL
 })

 afterEach(() => {
-  Object.assign(process.env, originalEnv)
+  for (const [key, value] of Object.entries(originalEnv)) {
+    if (value === undefined) {
+      delete process.env[key]
+    } else {
+      process.env[key] = value
+    }
+  }
 })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/@ant/model-provider/src/providers/codex/__tests__/modelMapping.test.ts`
around lines 5 - 21, The afterEach restores environment by calling
Object.assign(process.env, originalEnv) which turns undefined into the string
"undefined"; change afterEach to iterate the keys of originalEnv (the variables
captured in originalEnv: CODEX_MODEL, CODEX_DEFAULT_HAIKU_MODEL,
CODEX_DEFAULT_SONNET_MODEL, CODEX_DEFAULT_OPUS_MODEL) and for each key do: if
originalEnv[key] === undefined then delete process.env[key] else set
process.env[key] = originalEnv[key]; keep beforeEach and originalEnv as-is and
apply this logic inside the afterEach block so tests don’t get polluted.
```ts
function resolveToolResultCallId(
  toolUseId: unknown,
  state: CodexCallIdState,
): string | null {
  if (typeof toolUseId !== 'string') {
    return null
  }

  return state.byOriginalId.get(toolUseId) ?? normalizeCodexCallId(toolUseId)
}
```
Edge case: tool_result pairing breaks when original tool_use.id was empty.
In resolveAssistantCallId, when originalId.length === 0, the resolved callId is a hash fallback but is not stored in byOriginalId (the guard at line 236 skips the set). A later tool_result referencing the same empty / invalid id will:

- miss `byOriginalId.get(toolUseId)`,
- fall back to `normalizeCodexCallId(toolUseId)` — which returns `null` for empty/invalid input,

so the function_call_output is silently dropped (line 281 continue), leaving an orphan function_call in the request. Codex/Responses API will then reject the conversation for an unmatched call_id, or worse, produce inconsistent tool replay.
Consider also keying byOriginalId by a stable surrogate (e.g., positional index) or always recording the assistant→callId mapping so the result side can find it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/@ant/model-provider/src/providers/codex/convertMessages.ts` around
lines 244 - 253, resolveToolResultCallId can return null for empty/invalid
tool_use.id because resolveAssistantCallId stores a hash fallback only when
originalId.length > 0, leaving byOriginalId without a mapping and causing
tool_result/function_call_output to be dropped; fix by ensuring assistant→callId
mapping is always recorded for the assistant call even when originalId is empty
(i.e., remove or adjust the originalId.length guard in resolveAssistantCallId so
it always sets state.byOriginalId with a stable surrogate key), and update
resolveToolResultCallId to also accept that surrogate (or fallback to looking up
by positional index) so tool_result entries can resolve to the same callId
rather than relying on normalizeCodexCallId returning null.
```ts
return tools.flatMap(tool => {
  const value = tool as unknown as Record<string, unknown>
  if (
    value.type === 'advisor_20260301' ||
    value.type === 'computer_20250124' ||
    !isClientFunctionTool(tool)
```
Filter all server-side Anthropic tools here, not just two named variants.
This guard only drops advisor_20260301 and computer_20250124. Any other server-side tool that still has a name will be converted into a Codex function, which diverges from the existing OpenAI/Gemini converters and can send unsupported tool definitions downstream.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/@ant/model-provider/src/providers/codex/convertTools.ts` around
lines 20 - 25, The current guard in the tools.flatMap callback only excludes two
specific value.type strings and then falls back to !isClientFunctionTool(tool),
which lets other Anthropic server-side tools slip through; update this to
broadly detect and skip all Anthropic server-side tools instead of hardcoding
'advisor_20260301' and 'computer_20250124' — e.g., replace the explicit
value.type checks with a single predicate that checks Anthropic-specific
metadata (such as value.provider === 'anthropic' or a server-side flag like
value.serverSide === true or similar schema fields available) and short-circuits
conversion for any tool matching that predicate while still using
isClientFunctionTool(tool) for client-side determination. Ensure the change is
made in the same callback where value is cast and isClientFunctionTool is called
so all Anthropic server tools are filtered out.
```ts
const handleCodexPasteSubmit = useCallback((value: string) => {
  const code = parseManualCodeInput(value)
  if (!code) {
    setOAuthStatus({
      state: 'error',
      message: 'Invalid code. Paste the full redirect URL or just the authorization code.',
      toRetry: oauthStatus as any,
    })
    return
  }
  codexManualCodeResolveRef.current?.(code)
  codexManualCodeResolveRef.current = null
}, [oauthStatus])
```
`as any` cast and broken retry target.

Two issues here:

1. `toRetry: oauthStatus as any` violates the "no `as any` in production code" rule. `OAuthStatus` already covers `codex_oauth_waiting`, so a properly-typed assertion is enough.
2. Setting `toRetry` to the current `oauthStatus` (which may already be `codex_oauth_waiting`) means pressing Enter on the error reverts to the waiting screen, but `startCodexOAuth` is gated by `pendingCodexOAuthRef` and only re-runs on the `codex_oauth_start` transition (lines 421–431). The retry will therefore strand the user on a waiting screen with no live OAuth flow. Use `codex_oauth_start` (or `idle`) as the retry target.

Proposed fix

```diff
   const handleCodexPasteSubmit = useCallback((value: string) => {
     const code = parseManualCodeInput(value)
     if (!code) {
       setOAuthStatus({
         state: 'error',
         message: 'Invalid code. Paste the full redirect URL or just the authorization code.',
-        toRetry: oauthStatus as any,
+        toRetry: { state: 'codex_oauth_start' },
       })
       return
     }
     codexManualCodeResolveRef.current?.(code)
     codexManualCodeResolveRef.current = null
   }, [oauthStatus])
```

As per coding guidelines: "Prohibit as any type assertions in production code".
📝 Committable suggestion
```diff
 const handleCodexPasteSubmit = useCallback((value: string) => {
   const code = parseManualCodeInput(value)
   if (!code) {
     setOAuthStatus({
       state: 'error',
       message: 'Invalid code. Paste the full redirect URL or just the authorization code.',
-      toRetry: oauthStatus as any,
+      toRetry: { state: 'codex_oauth_start' },
     })
     return
   }
   codexManualCodeResolveRef.current?.(code)
   codexManualCodeResolveRef.current = null
 }, [oauthStatus])
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/ConsoleOAuthFlow.tsx` around lines 230 - 242, The
handleCodexPasteSubmit handler sets OAuth error state using an unsafe cast and
an incorrect retry target; replace the `toRetry: oauthStatus as any` with a
correctly typed OAuthStatus value (no `as any`) and set the retry target to a
state that will actually restart the flow (e.g., 'codex_oauth_start' or 'idle')
so retry triggers startCodexOAuth (which checks pendingCodexOAuthRef) instead of
leaving the UI stuck on 'codex_oauth_waiting'; update the setOAuthStatus call in
handleCodexPasteSubmit and ensure the OAuthStatus union type is used directly or
narrowed to the chosen literal.
```ts
const env: Record<string, string | undefined> = {
  CODEX_API_KEY: codexResult.apiKey ?? undefined,
  CODEX_ACCESS_TOKEN: codexResult.accessToken,
  CODEX_REFRESH_TOKEN: codexResult.refreshToken,
  CODEX_LOGIN_METHOD: 'chatgpt_subscription',
  CODEX_DEFAULT_HAIKU_MODEL: finalVals.haiku_model,
  CODEX_DEFAULT_SONNET_MODEL: finalVals.sonnet_model,
  CODEX_DEFAULT_OPUS_MODEL: finalVals.opus_model,
}
const { error } = updateSettingsForSource('userSettings', {
  modelType: 'codex' as any,
  env,
} as any)
if (error) {
  setOAuthStatus({
    state: 'error',
    message: 'Failed to save settings. Please try again.',
    toRetry: {
      state: 'codex_models',
      haikuModel: finalVals.haiku_model,
      sonnetModel: finalVals.sonnet_model,
      opusModel: finalVals.opus_model,
      activeField: 'haiku_model',
      codexResult,
    },
  })
} else {
  for (const [k, v] of Object.entries(env)) {
    if (v !== undefined) {
      process.env[k] = v
    }
  }
  setOAuthStatus({ state: 'success' })
  void onDone()
}
}, [activeField, codexModelInput, codexDisplayValues, codexResult, setOAuthStatus, onDone])
```
Don't write undefined env values via `as any`.

`env: Record<string, string | undefined>` is then passed through `as any` into `updateSettingsForSource`. Two concerns:

1. `as any` violates the project's "no `as any` in production" rule (same applies to `modelType: 'codex' as any`). Prefer `as unknown as <SettingsUpdate>` or extending the settings type to accept `'codex'` / optional env values.
2. When `codexResult.apiKey` is null, `CODEX_API_KEY` is set to `undefined` and persisted via the settings updater. Depending on settings serialization, `undefined` may be JSON-stripped or coerced to the string `"undefined"`; the `process.env[k] = v` loop on line 1600 already guards with `v !== undefined`, but the settings write does not. Drop the key entirely instead of writing `undefined`:

Proposed fix

```diff
-  const env: Record<string, string | undefined> = {
-    CODEX_API_KEY: codexResult.apiKey ?? undefined,
-    CODEX_ACCESS_TOKEN: codexResult.accessToken,
-    CODEX_REFRESH_TOKEN: codexResult.refreshToken,
-    CODEX_LOGIN_METHOD: 'chatgpt_subscription',
-    CODEX_DEFAULT_HAIKU_MODEL: finalVals.haiku_model,
-    CODEX_DEFAULT_SONNET_MODEL: finalVals.sonnet_model,
-    CODEX_DEFAULT_OPUS_MODEL: finalVals.opus_model,
-  }
+  const env: Record<string, string> = {
+    CODEX_ACCESS_TOKEN: codexResult.accessToken,
+    CODEX_REFRESH_TOKEN: codexResult.refreshToken,
+    CODEX_LOGIN_METHOD: 'chatgpt_subscription',
+    CODEX_DEFAULT_HAIKU_MODEL: finalVals.haiku_model,
+    CODEX_DEFAULT_SONNET_MODEL: finalVals.sonnet_model,
+    CODEX_DEFAULT_OPUS_MODEL: finalVals.opus_model,
+  }
+  if (codexResult.apiKey) env.CODEX_API_KEY = codexResult.apiKey
```

As per coding guidelines: "Prohibit as any type assertions in production code; in test files, as any is permitted for mock data. Use as unknown as SpecificType double assertion or interface supplementation when type mismatches occur".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/ConsoleOAuthFlow.tsx` around lines 1571 - 1606, The env object
currently includes keys with undefined values and is passed into
updateSettingsForSource using `as any` (also `modelType: 'codex' as any`);
remove any `as any` assertions and drop keys with undefined values before
calling updateSettingsForSource. Concretely: build env from codexResult but
filter out entries where value is null/undefined (so CODEX_API_KEY is omitted
when absent), replace the `as any` usages by a safe cast like `as unknown as
<appropriate SettingsUpdate type>` or extend the settings type to accept
optional env keys and the 'codex' modelType, and then call
updateSettingsForSource('userSettings', { modelType: 'codex', env } ) using that
proper type; keep the existing process.env[k] = v loop unchanged.
```ts
  usage: getCodexUsage(response) as any,
} as AssistantMessage['message']
```
Coding-guideline violation: pervasive `as any` casts in production code.

This file uses `as any` in many places — line 86 (`usage`), 164/192/313/332/360/376/408/426/513/519 (event shapes), 254/256/260/267/272 (content/tools/stop_reason). The coding guideline explicitly prohibits `as any` in production code and requires `as unknown as SpecificType` (or interface supplementation / type guards) for genuine type mismatches.

Most of these are casts onto the Anthropic stream-event type. Define (or import) the proper raw stream-event union from the Anthropic SDK / internal types and either:

- build the events with that type and let `event:` be inferred, or
- cast as `as unknown as RawMessageStreamEvent` (whatever the project's StreamEvent inner type is).

Same for `usage` (use `as unknown as Anthropic.Usage`) and the `stop_reason` literal (`as Message['stop_reason']`).
As per coding guidelines: "Prohibit as any type assertions in production code; in test files, as any is permitted for mock data. Use as unknown as SpecificType double assertion or interface supplementation when type mismatches occur".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/api/codex/streaming.ts` around lines 86 - 87, The code uses
pervasive `as any` casts (e.g., the `usage` assignment using
getCodexUsage(response) and multiple `event` assignments in this file) which
violates the guideline; replace each `as any` with a proper typed solution by
importing or declaring the raw stream-event union from the Anthropic SDK
(suggest name RawMessageStreamEvent) and casting via `as unknown as
RawMessageStreamEvent` for the `event` shapes, cast `usage` as `as unknown as
Anthropic.Usage` (or change getCodexUsage to return that type), and replace
literal `stop_reason` casts with `as unknown as Message['stop_reason']` or
adjust the Message type so the compiler infers it—apply these changes for the
identifiers referenced in this file (getCodexUsage, event assignments around the
noted lines, and any stop_reason usages) to remove all `as any` in production
code.
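A minimal sketch of the double-assertion pattern the guideline asks for; `RawMessageStreamEvent` below is a local stand-in, not the Anthropic SDK's actual export — the real fix should import the project's stream-event union:

```typescript
// Stand-in for the Anthropic SDK's stream-event union (assumption; import
// the real type in the actual codebase).
interface RawMessageStreamEvent {
  type: 'content_block_delta'
  index: number
  delta: { type: 'text_delta'; text: string }
}

// A Codex event translated field-by-field; its inferred type is wider than
// the target union's literal members.
const built = {
  type: 'content_block_delta',
  index: 0,
  delta: { type: 'text_delta', text: 'hello' },
}

// `as any` silences the compiler entirely; the double assertion keeps the
// target type explicit and greppable at the cast site.
const event = built as unknown as RawMessageStreamEvent
console.log(event.delta.text)
```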
| const ERROR_HTML = (msg: string) => `<!DOCTYPE html> | ||
| <html><head><meta charset="utf-8"><title>Login Error</title> | ||
| <style>body{font-family:system-ui,sans-serif;display:flex;justify-content:center;align-items:center;height:100vh;margin:0;background:#1a1a2e;color:#eee} | ||
| .card{text-align:center;padding:2rem;border-radius:12px;background:#16213e;box-shadow:0 4px 24px rgba(0,0,0,.3)} | ||
| h1{color:#f87171;font-size:1.5rem}p{color:#94a3b8;margin-top:.5rem}</style></head> | ||
| <body><div class="card"><h1>Authentication Failed</h1><p>${msg}</p></div></body></html>` |
There was a problem hiding this comment.
Validate state before honoring error, and escape the error text.
The callback handles `error` before checking `state`, and `error_description` is injected into `ERROR_HTML()` without escaping. A request to the localhost callback can therefore abort the login flow without a valid state, and can reflect arbitrary markup/script into the browser page.
Also applies to: 197-210
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/oauth/openai-codex.ts` around lines 163 - 168, The callback
currently processes and renders the OAuth `error` (via ERROR_HTML) before
verifying the OAuth `state` and injects `error_description` directly into the
HTML; update the handler to first validate the incoming `state` against the
expected value and only then handle any `error`, and ensure ERROR_HTML (or the
value passed into it) HTML-escapes the message to prevent markup/script
injection; apply the same order and escaping changes to the other error-handling
block referenced around the same callback (the code handling `error_description`
at the 197-210 area) so both error paths validate state first and escape
user-provided text.
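Both fixes fit in a few lines. The sketch below assumes a handler shape and an `expectedState` parameter for illustration; the real callback signature in `openai-codex.ts` may differ:

```typescript
// Escape the five characters that matter when reflecting text into HTML.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;')
}

// Hypothetical handler shape: validate state FIRST, then honor `error`,
// and escape anything user-controlled before rendering it.
function handleCallback(
  url: URL,
  expectedState: string,
): { status: number; body: string } {
  // An unauthenticated request can no longer abort the login flow.
  if (url.searchParams.get('state') !== expectedState) {
    return { status: 400, body: 'Invalid state' }
  }
  const error = url.searchParams.get('error')
  if (error) {
    const desc = url.searchParams.get('error_description') ?? error
    return { status: 400, body: escapeHtml(desc) } // escaped before reflection
  }
  return { status: 200, body: 'ok' }
}
```

In the real code the escaped string would be passed into `ERROR_HTML()` rather than returned directly.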
| // Step 4: Open browser | ||
| await openBrowser(authUrl) | ||
| // Step 5: Wait for code (from callback or manual input) | ||
| let code: string | ||
| if (manualCode) { | ||
| // Race between browser callback and manual input | ||
| const result = await Promise.race([ | ||
| server.waitForCode().then(c => ({ source: 'callback' as const, code: c })), | ||
| manualCode.then(c => ({ source: 'manual' as const, code: c })), | ||
| ]) |
There was a problem hiding this comment.
Manual fallback is broken in the exact cases it is supposed to cover.
`openBrowser(authUrl)` is awaited before `manualCode` is consumed, so headless sessions fail immediately on browser-launch errors. The manual branch also forwards the raw input straight to `exchangeCodeForTokens()`, so the documented full-URL / `code#state` formats are never parsed.
💡 Suggested fix
- await openBrowser(authUrl)
+ try {
+ await openBrowser(authUrl)
+ } catch {
+ if (!manualCode) {
+ throw new Error('Failed to open browser for Codex OAuth flow')
+ }
+ }
// Step 5: Wait for code (from callback or manual input)
let code: string
if (manualCode) {
// Race between browser callback and manual input
const result = await Promise.race([
server.waitForCode().then(c => ({ source: 'callback' as const, code: c })),
- manualCode.then(c => ({ source: 'manual' as const, code: c })),
+ manualCode.then(input => {
+ const code = parseManualCodeInput(input)
+ if (!code) {
+ throw new Error('Invalid manual authorization code')
+ }
+ return { source: 'manual' as const, code }
+ }),
])
code = result.code
} else {
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Step 4: Open browser | |
| await openBrowser(authUrl) | |
| // Step 5: Wait for code (from callback or manual input) | |
| let code: string | |
| if (manualCode) { | |
| // Race between browser callback and manual input | |
| const result = await Promise.race([ | |
| server.waitForCode().then(c => ({ source: 'callback' as const, code: c })), | |
| manualCode.then(c => ({ source: 'manual' as const, code: c })), | |
| ]) | |
| // Step 4: Open browser | |
| try { | |
| await openBrowser(authUrl) | |
| } catch { | |
| if (!manualCode) { | |
| throw new Error('Failed to open browser for Codex OAuth flow') | |
| } | |
| } | |
| // Step 5: Wait for code (from callback or manual input) | |
| let code: string | |
| if (manualCode) { | |
| // Race between browser callback and manual input | |
| const result = await Promise.race([ | |
| server.waitForCode().then(c => ({ source: 'callback' as const, code: c })), | |
| manualCode.then(input => { | |
| const code = parseManualCodeInput(input) | |
| if (!code) { | |
| throw new Error('Invalid manual authorization code') | |
| } | |
| return { source: 'manual' as const, code } | |
| }), | |
| ]) | |
| code = result.code | |
| } else { |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/oauth/openai-codex.ts` around lines 315 - 326, The manual
fallback fails because openBrowser(authUrl) is awaited before manualCode is
consumed and manual inputs aren't parsed; change the flow so openBrowser is
launched but not awaited (start it non-blocking) and start consuming manualCode
immediately so the Promise.race between server.waitForCode() and manual input
can win even if openBrowser fails; additionally, before calling
exchangeCodeForTokens(), parse the manualCode string to extract the actual
authorization code (handle full URLs with ?code=..., fragment formats like
`#code`=... or /?code=..., and inputs containing code#state) and pass the
extracted code to exchangeCodeForTokens(); update logic around openBrowser,
manualCode, server.waitForCode, and exchangeCodeForTokens accordingly and
handle/openBrowser errors without preventing manual fallback.
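A possible shape for the suggested parser. `parseManualCodeInput` is hypothetical — it is the name used in the suggested fix above, not an existing export — and the accepted formats follow the review text (full URL with `?code=...`, fragment `#code=...`, bare `code#state`):

```typescript
// Hypothetical helper: extract the authorization code from whatever the
// user pasted. Returns null when no code can be recovered.
function parseManualCodeInput(input: string): string | null {
  const trimmed = input.trim()
  if (!trimmed) return null

  // Full redirect URL: take ?code=... from the query, or code=... from the
  // fragment (#code=...&state=...).
  try {
    const url = new URL(trimmed)
    const fromQuery = url.searchParams.get('code')
    if (fromQuery) return fromQuery
    const fragment = new URLSearchParams(url.hash.replace(/^#/, ''))
    const fromFragment = fragment.get('code')
    if (fromFragment) return fromFragment
  } catch {
    // Not a URL; fall through to the bare-code forms.
  }

  // Bare `code` or `code#state`: keep only the part before the '#'.
  const code = trimmed.split('#')[0]
  return code.length > 0 ? code : null
}
```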
| // Codex provider specific | ||
| 'CODEX_DEFAULT_HAIKU_MODEL', | ||
| 'CODEX_DEFAULT_SONNET_MODEL', | ||
| 'CODEX_DEFAULT_OPUS_MODEL', | ||
| 'CODEX_IMGBB_API_KEY', |
There was a problem hiding this comment.
Security: CODEX_IMGBB_API_KEY in SAFE_ENV_VARS enables image exfiltration via project settings.
SAFE_ENV_VARS is the allowlist of env vars that can flow from project-scoped settings (projectSettings/localSettings) into process.env without the trust dialog. Adding CODEX_IMGBB_API_KEY here means a hostile checked-in .claude/settings.json can set this key, after which src/services/api/codex/imageUpload.ts will POST every base64 image the user pastes (screenshots, diagrams, possibly sensitive content) to api.imgbb.com under an attacker-controlled account.
This is the same threat class the file's own header calls out (REDIRECT TO ATTACKER-CONTROLLED SERVER). Recommend removing CODEX_IMGBB_API_KEY from SAFE_ENV_VARS so it must come from user/policy settings (or env var) — i.e. through a trusted source — and keeping it in PROVIDER_MANAGED_ENV_VARS is fine.
🛡️ Proposed change
// Codex provider specific
'CODEX_DEFAULT_HAIKU_MODEL',
'CODEX_DEFAULT_SONNET_MODEL',
'CODEX_DEFAULT_OPUS_MODEL',
- 'CODEX_IMGBB_API_KEY',
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Codex provider specific | |
| 'CODEX_DEFAULT_HAIKU_MODEL', | |
| 'CODEX_DEFAULT_SONNET_MODEL', | |
| 'CODEX_DEFAULT_OPUS_MODEL', | |
| 'CODEX_IMGBB_API_KEY', | |
| // Codex provider specific | |
| 'CODEX_DEFAULT_HAIKU_MODEL', | |
| 'CODEX_DEFAULT_SONNET_MODEL', | |
| 'CODEX_DEFAULT_OPUS_MODEL', |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/managedEnvConstants.ts` around lines 233 - 237, Remove
CODEX_IMGBB_API_KEY from the SAFE_ENV_VARS allowlist so project-scoped settings
cannot inject it into process.env; keep it (or add it) only in
PROVIDER_MANAGED_ENV_VARS so the key must come from a trusted source
(env/policy) rather than project settings. Update the array that defines
SAFE_ENV_VARS to exclude 'CODEX_IMGBB_API_KEY' and verify
PROVIDER_MANAGED_ENV_VARS includes it; also consider checking
src/services/api/codex/imageUpload.ts to ensure it still reads the key from
process.env as intended.
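To see why the allowlist placement matters, a minimal sketch of the gating; the list and filter below are illustrative (mirroring the proposed change), not the actual `managedEnvConstants.ts` code:

```typescript
// Illustrative allowlist after the proposed change: model-name overrides
// stay, but CODEX_IMGBB_API_KEY is no longer project-settable.
const SAFE_ENV_VARS = new Set([
  'CODEX_DEFAULT_HAIKU_MODEL',
  'CODEX_DEFAULT_SONNET_MODEL',
  'CODEX_DEFAULT_OPUS_MODEL',
])

// Only allowlisted keys from project-scoped settings reach process.env
// without the trust dialog.
function filterProjectEnv(
  env: Record<string, string>,
): Record<string, string> {
  return Object.fromEntries(
    Object.entries(env).filter(([k]) => SAFE_ENV_VARS.has(k)),
  )
}

// A hostile checked-in .claude/settings.json can no longer smuggle the
// imgbb key through:
const applied = filterProjectEnv({
  CODEX_DEFAULT_OPUS_MODEL: 'gpt-5.5',
  CODEX_IMGBB_API_KEY: 'attacker-key',
})
```

The key can still be supplied via user/policy settings or the environment, i.e. through a trusted source.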
| .describe( | ||
| 'API provider type. "anthropic" uses the Anthropic API (default), "openai" uses the OpenAI Chat Completions API, "gemini" uses the Gemini API, and "grok" uses the xAI Grok API (OpenAI-compatible). ' + | ||
| 'When set to "openai", configure OPENAI_API_KEY, OPENAI_BASE_URL, and OPENAI_MODEL. When set to "gemini", configure GEMINI_API_KEY and optional GEMINI_BASE_URL. When set to "grok", configure GROK_API_KEY (or XAI_API_KEY), optional GROK_BASE_URL, GROK_MODEL, and GROK_MODEL_MAP.', | ||
| 'API provider type. "anthropic" uses the Anthropic API (default), "openai" uses the OpenAI Chat Completions API, "gemini" uses the Gemini API, "grok" uses the xAI Grok API (OpenAI-compatible), and "codex" uses the OpenAI Responses API via ChatGPT subscription or API key. ' + | ||
| 'When set to "openai", configure OPENAI_API_KEY, OPENAI_BASE_URL, and OPENAI_MODEL. When set to "gemini", configure GEMINI_API_KEY and optional GEMINI_BASE_URL. When set to "grok", configure GROK_API_KEY (or XAI_API_KEY), optional GROK_BASE_URL, GROK_MODEL, and GROK_MODEL_MAP. When set to "codex", configure CODEX_API_KEY and optional CODEX_BASE_URL.', |
There was a problem hiding this comment.
Description omits CODEX_ACCESS_TOKEN (used by ChatGPT subscription path).
The blurb says "codex" uses the OpenAI Responses API via ChatGPT subscription or API key but the env-var hint only lists CODEX_API_KEY / CODEX_BASE_URL. The provider.ts codex check accepts CODEX_API_KEY || CODEX_ACCESS_TOKEN (lines 127, 132), and managedEnvConstants.ts adds both CODEX_ACCESS_TOKEN and CODEX_REFRESH_TOKEN to the provider-managed set. Worth surfacing the ChatGPT subscription token in this user-facing description so settings.json authors know the OAuth path exists.
📝 Proposed wording
- 'API provider type. "anthropic" uses the Anthropic API (default), "openai" uses the OpenAI Chat Completions API, "gemini" uses the Gemini API, "grok" uses the xAI Grok API (OpenAI-compatible), and "codex" uses the OpenAI Responses API via ChatGPT subscription or API key. ' +
- 'When set to "openai", configure OPENAI_API_KEY, OPENAI_BASE_URL, and OPENAI_MODEL. When set to "gemini", configure GEMINI_API_KEY and optional GEMINI_BASE_URL. When set to "grok", configure GROK_API_KEY (or XAI_API_KEY), optional GROK_BASE_URL, GROK_MODEL, and GROK_MODEL_MAP. When set to "codex", configure CODEX_API_KEY and optional CODEX_BASE_URL.',
+ 'API provider type. "anthropic" uses the Anthropic API (default), "openai" uses the OpenAI Chat Completions API, "gemini" uses the Gemini API, "grok" uses the xAI Grok API (OpenAI-compatible), and "codex" uses the OpenAI Responses API via ChatGPT subscription or API key. ' +
+ 'When set to "openai", configure OPENAI_API_KEY, OPENAI_BASE_URL, and OPENAI_MODEL. When set to "gemini", configure GEMINI_API_KEY and optional GEMINI_BASE_URL. When set to "grok", configure GROK_API_KEY (or XAI_API_KEY), optional GROK_BASE_URL, GROK_MODEL, and GROK_MODEL_MAP. When set to "codex", configure either CODEX_API_KEY (API key) or CODEX_ACCESS_TOKEN (ChatGPT subscription via /login), and optional CODEX_BASE_URL.',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/settings/types.ts` around lines 374 - 376, The settings description
for the API provider (the string passed to .describe in types.ts) omits the
ChatGPT subscription OAuth token; update that blurb to include
CODEX_ACCESS_TOKEN (and optionally CODEX_REFRESH_TOKEN) as valid env vars for
the "codex" provider so it matches the logic in provider.ts (which checks
CODEX_API_KEY || CODEX_ACCESS_TOKEN) and the entries in managedEnvConstants.ts;
edit the .describe text to mention CODEX_ACCESS_TOKEN as the ChatGPT
subscription path alongside CODEX_API_KEY and CODEX_BASE_URL.
| if (error) { | ||
| const desc = url.searchParams.get('error_description') ?? error | ||
| res.writeHead(400, { 'Content-Type': 'text/html; charset=utf-8' }) | ||
| res.end(ERROR_HTML(desc)) |
Summary by CodeRabbit
Release Notes