Describe the bug
During first launch, the Copilot CLI setup wizard prompts the user to configure reasoning_effort (offering options like "high"). When the user selects this, it is saved to ~/.copilot/config.json. However, reasoning_effort is an OpenAI-specific parameter that Claude models do not support. When the CLI sends this parameter to the API for any Claude model (including the default claude-sonnet-4.6), the API returns 400 Bad Request immediately — before any tool is called or any response is returned.
The result: the tool is completely non-functional from first launch, because the setup wizard itself configures it into a broken state.
Affected version
GitHub Copilot CLI 1.0.9
Steps to reproduce
- Install Copilot CLI fresh
- On first launch, when prompted about reasoning_effort, select "high"
- The config is saved to ~/.copilot/config.json:
{
"reasoning_effort": "high",
"streamer_mode": true
}
- Type any prompt and press Enter
- Every single prompt fails immediately:
✗ Execution failed: CAPIError: 400 400 Bad Request
(Request ID: FB0A:105CE9:33A8B7:3D6D1E:69BB8BD2)
Root cause (from logs)
The error occurs in getCompletionWithTools — the API rejects the request before any AI response is produced. The reasoning_effort parameter is being forwarded to Claude models that don't support it.
From ~/.copilot/logs/process-*.log:
[ERROR] error (Request-ID FB0A:105CE9:33A8B7:3D6D1E:69BB8BD2)
[ERROR] {
"status": 400,
"name": "CAPIError",
"message": "400 400 Bad Request\n",
...
at wst.getCompletionWithTools
Workaround
Manually remove reasoning_effort from ~/.copilot/config.json:
{
"firstLaunchAt": "...",
"banner": "never",
"experimental": true
}
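For anyone who prefers to script the fix rather than hand-edit JSON, here is a minimal sketch. It assumes the standard ~/.copilot/config.json location; the helper name is mine, not part of the CLI:

```python
import json
from pathlib import Path

def strip_reasoning_effort(config_path: Path) -> bool:
    """Remove the unsupported key, keeping every other setting. Returns True if the file changed."""
    if not config_path.exists():
        return False
    config = json.loads(config_path.read_text())
    if "reasoning_effort" not in config:
        return False
    del config["reasoning_effort"]
    config_path.write_text(json.dumps(config, indent=2) + "\n")
    return True

if __name__ == "__main__":
    strip_reasoning_effort(Path.home() / ".copilot" / "config.json")
```

After running it, restart the CLI; prompts should go through again.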
Expected behavior
Either:
- The CLI should not offer reasoning_effort as a setup option when the default model is Claude, OR
- The CLI should only forward reasoning_effort to models that support it (OpenAI o-series / GPT models), and silently ignore it for Claude/Gemini models
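The second option could be as small as a model-family gate applied before the request is built. A hypothetical sketch — the prefix list and function name are assumptions for illustration, not the CLI's actual internals:

```python
# Assumption: only OpenAI o-series / GPT model names accept reasoning_effort.
REASONING_EFFORT_PREFIXES = ("o1", "o3", "o4", "gpt-")

def build_request_params(model: str, config: dict) -> dict:
    """Copy reasoning_effort into the request only for model families that support it."""
    params = {"model": model}
    effort = config.get("reasoning_effort")
    if effort and model.startswith(REASONING_EFFORT_PREFIXES):
        params["reasoning_effort"] = effort
    # Claude/Gemini models fall through: the key is silently dropped
    # instead of triggering a 400 from the API.
    return params
```

With a gate like this, a stale or mistaken config value degrades gracefully instead of making every request fail.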
Additional context
- OS: WSL2 (Ubuntu), also reproduced on Linux
- Default model at time of failure: claude-sonnet-4.6
- The error is opaque — there's no indication that the config is the cause; users see only a generic "400 Bad Request" with no actionable guidance