feat: add LM Studio provider for local Qwen model support (#340)
* feat: add LM Studio provider for local Qwen model support
Register `lmstudio` as an OpenAI-compatible provider in
`opencode.jsonc`, pointing at the default LM Studio local
server (`localhost:1234`).
* fix: correct LM Studio port and model IDs to match actual server
- Port: 11434 (not default 1234)
- Models: `gpt-oss:20b` and `deepseek-r1:70b` (actual loaded models)
* fix: address PR review — generic config, correct port, add docs
- Use LM Studio default port 1234 (not Ollama's 11434)
- Replace hardcoded model IDs with commented examples users fill in
- Add `LMSTUDIO_API_KEY` env var support
- Add LM Studio section to providers.md docs
- Add LM Studio tab to quickstart.md provider examples
`docs/docs/configure/providers.md` (42 additions, 0 deletions)
@@ -148,6 +148,48 @@ No API key needed. Runs entirely on your local machine.

!!! info
    Make sure Ollama is running before starting altimate. Install it from [ollama.com](https://ollama.com) and pull your desired model with `ollama pull llama3.1`.

## LM Studio (Local)

Run local models through [LM Studio](https://lmstudio.ai)'s OpenAI-compatible server:

```json
{
  "provider": {
    "lmstudio": {
      "name": "LM Studio",
      "npm": "@ai-sdk/openai-compatible",
      "env": ["LMSTUDIO_API_KEY"],
      "options": {
        "apiKey": "lm-studio",
        "baseURL": "http://localhost:1234/v1"
      },
      "models": {
        "qwen2.5-7b-instruct": {
          "name": "Qwen 2.5 7B Instruct",
          "tool_call": true,
          "limit": { "context": 131072, "output": 8192 }
        }
      }
    }
  },
  "model": "lmstudio/qwen2.5-7b-instruct"
}
```
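As a quick sanity check (a sketch, not part of altimate itself), the structure of the config above can be validated with a few lines of Python; the field names simply mirror the example:

```python
import json

# The LM Studio provider block from the example above.
config = json.loads("""
{
  "provider": {
    "lmstudio": {
      "name": "LM Studio",
      "npm": "@ai-sdk/openai-compatible",
      "env": ["LMSTUDIO_API_KEY"],
      "options": {
        "apiKey": "lm-studio",
        "baseURL": "http://localhost:1234/v1"
      },
      "models": {
        "qwen2.5-7b-instruct": {
          "name": "Qwen 2.5 7B Instruct",
          "tool_call": true,
          "limit": { "context": 131072, "output": 8192 }
        }
      }
    }
  },
  "model": "lmstudio/qwen2.5-7b-instruct"
}
""")

# The default model is written as "<provider>/<model-id>"; both halves
# must exist in the "provider" section for the reference to resolve.
provider, model_id = config["model"].split("/", 1)
assert provider in config["provider"], "unknown provider in default model"
assert model_id in config["provider"][provider]["models"], \
    "default model must be declared under the provider's models"
print("config OK:", config["model"])
```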
**Setup:**

1. Open LM Studio → **Developer** tab → **Start Server** (default port: 1234)
2. Load a model in LM Studio
3. Find your model ID: `curl http://localhost:1234/v1/models`
4. Add the model ID to the `models` section in your config
5. Use it: `altimate-code run -m lmstudio/<model-id>`

!!! tip
    The model key in your config must match the model ID returned by LM Studio's `/v1/models` endpoint. If you change models in LM Studio, update the config to match.

!!! note
    If you changed LM Studio's default port, update the `baseURL` accordingly. No real API key is needed; the `"lm-studio"` placeholder satisfies the SDK requirement.
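The tip above can be automated. LM Studio's `/v1/models` endpoint follows the OpenAI models-list shape (`{"data": [{"id": ...}, ...]}`), so a short hedged sketch can compare the model keys in your config against what the server actually reports; `served_model_ids` and `missing_models` are hypothetical helper names, not part of altimate:

```python
import json
from urllib.request import urlopen

def served_model_ids(base_url: str = "http://localhost:1234/v1") -> set:
    """Return model IDs reported by an OpenAI-compatible /models endpoint."""
    with urlopen(f"{base_url}/models") as resp:
        payload = json.load(resp)
    return {entry["id"] for entry in payload.get("data", [])}

def missing_models(config: dict, served: set) -> list:
    """Model keys declared in the config that the server does not list."""
    declared = config["provider"]["lmstudio"]["models"]
    return [m for m in declared if m not in served]

# Offline demo with a hypothetical server response; against a live server
# you would call served_model_ids() instead.
served = {"qwen2.5-7b-instruct"}
config = {"provider": {"lmstudio": {"models": {"qwen2.5-7b-instruct": {}}}}}
print(missing_models(config, served))  # → []
```

An empty list means every configured model is loadable by name; any entries it returns are the config keys you need to fix after swapping models in LM Studio.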