fix(validation): fall back to completions probe for custom providers returning 401/403 on /models (#894)
…that return 401/403 on /models (fixes ValueCell-ai#882)

Some custom provider implementations (e.g. opencode-go) return 401 or 403 on the OpenAI-compatible `GET /models` endpoint even when the API key is valid, because they simply don't implement that endpoint. Previously, ClawX would immediately report 'Invalid API key' without retrying, blocking users from saving their custom provider.

This change adds a secondary validation probe for the custom provider type: when `GET /models` returns 401 or 403, ClawX now also tries `POST /chat/completions` (or `/responses` for the openai-responses protocol) with the same key. If the server accepts the token and returns a non-auth error (e.g. 400 'unknown model: validation-probe'), the key is considered valid. If the fallback probe also returns an auth failure, the original error is returned unchanged.

Note: 400 responses with explicit auth error messages from /models are not retried; those indicate the server explicitly rejected the credentials.
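The fallback described in the commit message can be sketched roughly as follows. All names here (`shouldRunFallbackProbe`, `classifyFallbackProbe`, `ProbeResult`) are illustrative assumptions, not ClawX's actual API:

```typescript
// Illustrative sketch of the fallback logic; names are assumptions, not ClawX's API.

interface ProbeResult {
  valid: boolean;
  status: number;
}

// Only the custom provider type retries after an auth failure on GET /models.
function shouldRunFallbackProbe(providerType: string, modelsStatus: number): boolean {
  return providerType === 'custom' && (modelsStatus === 401 || modelsStatus === 403);
}

// A 401/403 from the probe is still an auth failure; any other error status
// (e.g. 400 "unknown model: validation-probe") means the server accepted the token.
function classifyFallbackProbe(status: number): ProbeResult {
  return { valid: status !== 401 && status !== 403, status };
}
```

The key idea is that the probe does not need to succeed, it only needs to fail with something other than an auth error to prove the credentials were accepted.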
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f75b18a385
```ts
const probeResult = apiProtocol === 'openai-responses'
  ? await performResponsesProbe(providerType, probeUrl, headers)
  : await performChatCompletionsProbe(providerType, probeUrl, headers);
if (probeResult.valid) return probeResult;
```
Exclude ambiguous 429 probes when overriding auth failures
In the new custom-provider fallback, any probeResult.valid immediately overrides the original /models 401/403. Because classifyProbeResponse treats HTTP 429 as valid, a provider that rate-limits before credential checks can cause invalid keys to be accepted and saved after a 401/403 from /models. This regression is specific to the new auth-fallback path for custom; consider not accepting 429 as proof of valid auth in this branch.
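The reviewer's suggestion could look roughly like this; `probeProvesAuthInFallback` is a hypothetical name, not code from the PR:

```typescript
// Sketch of the reviewer's suggestion. When the probe runs only because /models
// returned 401/403, an HTTP 429 is ambiguous: the server may rate-limit before
// it ever checks the credentials, so 429 should not count as proof of valid auth.
function probeProvesAuthInFallback(probeStatus: number): boolean {
  const isAuthFailure = probeStatus === 401 || probeStatus === 403;
  const isAmbiguousRateLimit = probeStatus === 429;
  return !isAuthFailure && !isAmbiguousRateLimit;
}
```

On 429 the validator could surface the original /models error (or ask the user to retry) instead of accepting the key.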
Fixes #882
Problem
Some custom AI provider implementations (e.g. opencode-go) return 401 or 403 on the `GET /models` endpoint even when the API key is valid, because they don't expose the standard OpenAI model listing endpoint. Previously, ClawX would immediately interpret these responses as "Invalid API key" and block the user from saving the custom provider, even though the key itself is correct.

Solution
For the `custom` provider type, when `GET /models` returns 401 or 403, ClawX now runs a secondary validation probe against `POST /chat/completions` (or `/responses` for the `openai-responses` protocol) using the same API key:

- If the probe returns a non-auth error (e.g. `400 "unknown model: validation-probe"`), it means the server accepted the auth token: the key is valid and the user's provider is saved.
- If the probe also returns an auth failure, the original error from `/models` is reported unchanged.

This fallback only applies to the `custom` provider type. Well-known providers (openai, anthropic, etc.) use dedicated validation profiles and are unaffected.

Testing
Added 4 new unit tests covering:

- `/models` returns 401 → fallback probe returns 400 (valid model error) → key accepted
- `/models` returns 403 → fallback to `/responses` → key accepted
- `/models` returns 401 → probe also returns 401 → key correctly rejected
- `/models` returns 401 → no secondary probe (single call, rejected immediately)

All 19 tests in `provider-validation.test.ts` pass.
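The test matrix above boils down to a small decision table. A hypothetical condensed version (`keyAccepted` and its status parameters are illustrative stand-ins for the mocked HTTP responses in the real tests):

```typescript
// Hypothetical decision table mirroring the test cases above.
// modelsStatus / probeStatus stand in for mocked HTTP responses.
function keyAccepted(
  providerType: string,
  modelsStatus: number,
  probeStatus?: number,
): boolean {
  const modelsAuthFailed = modelsStatus === 401 || modelsStatus === 403;
  if (!modelsAuthFailed) return modelsStatus < 400;
  // Only the custom provider type gets a secondary probe.
  if (providerType !== 'custom' || probeStatus === undefined) return false;
  return probeStatus !== 401 && probeStatus !== 403; // e.g. 400 "unknown model" => valid
}
```

Each of the four bullets then corresponds to one call: `keyAccepted('custom', 401, 400)` and `keyAccepted('custom', 403, 400)` accept, `keyAccepted('custom', 401, 401)` rejects, and a provider without the fallback rejects on the single `/models` call.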