Error Details
- Model: deepseek-coder-6.7b
- Provider: ollama
- Status Code: 400
Error Output
"registry.ollama.ai/library/deepseek-coder:6.7b does not support thinking"
Additional Context
Environment
- Continue plugin version: 1.0.60
- JetBrains platform version: 2024.1+
- Continue build date: Feb 05, 2026
- IDE: IntelliJ IDEA
- OS: macOS
- Provider: Ollama
- Models tested:
  - llama3.1:8b
  - deepseek-coder:6.7b
  - phi3:mini
  - qwen2.5-coder:7b
Problem
I am trying to disable thinking/reasoning for local Ollama models in Continue, but Continue still sends requests in chat as if thinking were enabled.
As a result, Continue chat keeps failing with errors like:
- "registry.ollama.ai/library/llama3.1:8b does not support thinking"
- "registry.ollama.ai/library/deepseek-coder:6.7b does not support thinking"
- "registry.ollama.ai/library/phi3:mini does not support thinking"
Important detail
The problem appears to affect the chat flow specifically.
If I use Edit mode with Command+I, it works correctly.
If I use the normal chat, it fails with "does not support thinking".
Expected behavior
If I explicitly set reasoning: false in the model configuration, Continue should stop sending thinking/reasoning-related parameters or behavior for those models in chat as well as edit flows.
Actual behavior
Even with reasoning disabled in config.yaml, Continue chat still behaves as if thinking is enabled, and Ollama models that do not support it fail.
At the same time, Edit mode (Command+I) works correctly with the same setup, which suggests the issue may be specific to the chat path rather than the model configuration itself.
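This failure mode is consistent with the chat path including a thinking flag in its request while the edit path omits it. As a rough illustration only: the `think` field name below is an assumption about Ollama's chat API, and `chat_payload` is a hypothetical helper, not Continue's actual code.

```python
import json
from typing import Optional


def chat_payload(model: str, prompt: str, think: Optional[bool]) -> dict:
    """Build an Ollama-style /api/chat payload; think=None omits the field."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    if think is not None:
        # Non-thinking models are expected to reject think=True with
        # "<model> does not support thinking".
        payload["think"] = think
    return payload


# What chat appears to send (fails for these models):
failing = chat_payload("deepseek-coder:6.7b", "hello", think=True)
# What reasoning: false should effectively produce (flag false or absent):
working = chat_payload("deepseek-coder:6.7b", "hello", think=None)
print(json.dumps(working))
```

Comparing the two payloads against a local Ollama instance would confirm whether the error originates in the request Continue builds rather than in the model configuration.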
Current ~/.continue/config.yaml
name: Local Config
version: 1.0.0
schema: v1
models:
  - name: llama3.1-8b
    provider: ollama
    model: llama3.1:8b
    defaultCompletionOptions:
      reasoning: false
  - name: deepseek-coder-6.7b
    provider: ollama
    model: deepseek-coder:6.7b
    defaultCompletionOptions:
      reasoning: false
  - name: phi3-mini
    provider: ollama
    model: phi3:mini
    defaultCompletionOptions:
      reasoning: false
Notes
- ~/.continue/config.yaml exists and is being edited correctly.
- There is no ~/.continue/config.json.
- There is no ~/.continue/config.ts.
- In ~/.continue I only have config.yaml plus Continue internal folders/files.
- I searched for "thinking" under ~/.continue; the repeated errors appear in logs/core.log.
- The models are correctly installed in Ollama and work when run directly from the terminal.
Ollama models installed locally
- deepseek-coder:6.7b
- qwen2.5-coder:7b
- llama3.1:8b
- phi3:mini
Relevant log evidence from ~/.continue/logs/core.log
{"error":"registry.ollama.ai/library/deepseek-coder:6.7b does not support thinking"}
{"error":"registry.ollama.ai/library/llama3.1:8b does not support thinking"}
{"error":"registry.ollama.ai/library/phi3:mini does not support thinking"}
Additional context
At one point I also got this config parsing error:
Failed to parse config: models: Expected array, received null
But even after correcting the YAML structure and keeping models as a proper array, the main issue remains: Continue chat still appears to enable thinking internally for Ollama models that do not support it.
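For reference, that parser error usually means the models: key ended up with nothing under it: in YAML, a key with no value parses as null, not as an empty array. A minimal illustration (the two documents below are separated by ---):

```yaml
# Broken: a bare key parses as null, not an empty list
# -> "Failed to parse config: models: Expected array, received null"
models:
---
# Fixed: items indented under the key form an array
models:
  - name: llama3.1-8b
    provider: ollama
    model: llama3.1:8b
```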
Steps to reproduce
- Install Continue version 1.0.60 in IntelliJ IDEA (JetBrains 2024.1+).
- Use Ollama as provider.
- Configure local models in ~/.continue/config.yaml with:
    defaultCompletionOptions:
      reasoning: false
- Select one of these models in Continue.
- Send a prompt in chat.
- Observe Continue logs and failed requests mentioning that the model "does not support thinking".
- Then try Edit mode with Command+I using the same model.
- Observe that Edit mode works, while chat fails.
Impact
This makes the configured Ollama models unusable in Continue chat, even though:
- they work correctly in Ollama itself
- they work in Continue Edit mode via Command+I
Question
Is this a bug in how Continue chat handles reasoning: false for Ollama models, or is there another required setting to fully disable thinking in chat?