
Commit 3ce1b78

feat: add LM Studio provider for local Qwen model support (#340)
* feat: add LM Studio provider for local Qwen model support

  Register `lmstudio` as an OpenAI-compatible provider in `opencode.jsonc`, pointing at the default LM Studio local server (`localhost:1234`).

* fix: correct LM Studio port and model IDs to match actual server

  - Port: 11434 (not default 1234)
  - Models: `gpt-oss:20b` and `deepseek-r1:70b` (actual loaded models)

* fix: address PR review — generic config, correct port, add docs

  - Use LM Studio default port 1234 (not Ollama's 11434)
  - Replace hardcoded model IDs with commented examples users fill in
  - Add `LMSTUDIO_API_KEY` env var support
  - Add LM Studio section to providers.md docs
  - Add LM Studio tab to quickstart.md provider examples
1 parent 7a6fee0 commit 3ce1b78

File tree

3 files changed: +96 −0 lines changed


.opencode/opencode.jsonc

Lines changed: 29 additions & 0 deletions

```diff
@@ -4,6 +4,35 @@
   "opencode": {
     "options": {},
   },
+  // LM Studio — local inference via OpenAI-compatible API
+  // 1. Open LM Studio → Developer tab → Start Server (default port: 1234)
+  // 2. Load a model in LM Studio
+  // 3. Run: curl http://localhost:1234/v1/models to find the model ID
+  // 4. Add the model ID to the "models" section below
+  // 5. Use as: altimate-code run -m lmstudio/<model-id>
+  "lmstudio": {
+    "name": "LM Studio",
+    "npm": "@ai-sdk/openai-compatible",
+    "env": ["LMSTUDIO_API_KEY"],
+    "options": {
+      "apiKey": "lm-studio",
+      "baseURL": "http://localhost:1234/v1",
+    },
+    "models": {
+      // Add your loaded models here. The key must match the model ID from LM Studio.
+      // Examples:
+      // "qwen2.5-7b-instruct": {
+      //   "name": "Qwen 2.5 7B Instruct",
+      //   "tool_call": true,
+      //   "limit": { "context": 131072, "output": 8192 }
+      // },
+      // "deepseek-r1:70b": {
+      //   "name": "DeepSeek R1 70B",
+      //   "tool_call": true,
+      //   "limit": { "context": 65536, "output": 8192 }
+      // }
+    },
+  },
 },
 "permission": {
   "edit": {
```
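The comments in the config above direct users to curl `/v1/models` for the model ID. A minimal sketch of extracting those IDs from the response body, assuming the standard OpenAI-compatible list shape (`{"data": [{"id": ...}]}`); the sample payload here is illustrative, not real server output:

```python
import json

# Illustrative /v1/models payload in the OpenAI-compatible list shape;
# a real body would come from: curl http://localhost:1234/v1/models
sample_response = json.dumps({
    "object": "list",
    "data": [
        {"id": "qwen2.5-7b-instruct", "object": "model"},
        {"id": "deepseek-r1:70b", "object": "model"},
    ],
})

def model_ids(response_body: str) -> list[str]:
    """Extract model IDs from an OpenAI-compatible /v1/models response."""
    return [m["id"] for m in json.loads(response_body)["data"]]

print(model_ids(sample_response))  # ['qwen2.5-7b-instruct', 'deepseek-r1:70b']
```

Each returned ID is what goes into the `models` section key and the `lmstudio/<model-id>` argument.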

docs/docs/configure/providers.md

Lines changed: 42 additions & 0 deletions

````diff
@@ -148,6 +148,48 @@ No API key needed. Runs entirely on your local machine.
 !!! info
     Make sure Ollama is running before starting altimate. Install it from [ollama.com](https://ollama.com) and pull your desired model with `ollama pull llama3.1`.
 
+## LM Studio (Local)
+
+Run local models through [LM Studio](https://lmstudio.ai)'s OpenAI-compatible server:
+
+```json
+{
+  "provider": {
+    "lmstudio": {
+      "name": "LM Studio",
+      "npm": "@ai-sdk/openai-compatible",
+      "env": ["LMSTUDIO_API_KEY"],
+      "options": {
+        "apiKey": "lm-studio",
+        "baseURL": "http://localhost:1234/v1"
+      },
+      "models": {
+        "qwen2.5-7b-instruct": {
+          "name": "Qwen 2.5 7B Instruct",
+          "tool_call": true,
+          "limit": { "context": 131072, "output": 8192 }
+        }
+      }
+    }
+  },
+  "model": "lmstudio/qwen2.5-7b-instruct"
+}
+```
+
+**Setup:**
+
+1. Open LM Studio → **Developer** tab → **Start Server** (default port: 1234)
+2. Load a model in LM Studio
+3. Find your model ID: `curl http://localhost:1234/v1/models`
+4. Add the model ID to the `models` section in your config
+5. Use it: `altimate-code run -m lmstudio/<model-id>`
+
+!!! tip
+    The model key in your config must match the model ID returned by LM Studio's `/v1/models` endpoint. If you change models in LM Studio, update the config to match.
+
+!!! note
+    If you changed LM Studio's default port, update the `baseURL` accordingly. No real API key is needed — the `"lm-studio"` placeholder satisfies the SDK requirement.
+
 ## OpenRouter
 
 ```json
````
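The tip in the providers doc (config key must match the server's model ID) can be sketched as a quick sanity check. `config_models` and `served_ids` here are hypothetical stand-ins for the parsed `models` section and the IDs returned by `/v1/models`:

```python
# Hypothetical parsed fragments: the provider config's "models" keys and
# the IDs LM Studio reports from its /v1/models endpoint.
config_models = {"qwen2.5-7b-instruct": {"name": "Qwen 2.5 7B Instruct"}}
served_ids = {"qwen2.5-7b-instruct", "deepseek-r1:70b"}

# Every configured model key must be an ID the server actually serves;
# a mismatch is the most common cause of "model not found" errors.
missing = set(config_models) - served_ids
if missing:
    print(f"Config references models not loaded in LM Studio: {sorted(missing)}")
else:
    print("All configured model IDs match the server.")
```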

docs/docs/getting-started/quickstart.md

Lines changed: 25 additions & 0 deletions

````diff
@@ -130,6 +130,31 @@ Switch providers at any time by updating the `provider` and `model` fields in `a
     }
     ```
 
+=== "LM Studio (Local)"
+
+    ```json
+    {
+      "provider": {
+        "lmstudio": {
+          "name": "LM Studio",
+          "npm": "@ai-sdk/openai-compatible",
+          "options": {
+            "apiKey": "lm-studio",
+            "baseURL": "http://localhost:1234/v1"
+          },
+          "models": {
+            "qwen2.5-7b-instruct": {
+              "name": "Qwen 2.5 7B Instruct",
+              "tool_call": true,
+              "limit": { "context": 131072, "output": 8192 }
+            }
+          }
+        }
+      },
+      "model": "lmstudio/qwen2.5-7b-instruct"
+    }
+    ```
+
 === "OpenRouter"
 
     ```json
````
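The quickstart tab drops a full provider block into the config. A small sketch of stitching that same block into an existing config dict programmatically; the helper name and the in-memory merge are illustrative, not a documented API:

```python
import json

# LM Studio provider block, mirroring the quickstart tab above.
lmstudio_provider = {
    "name": "LM Studio",
    "npm": "@ai-sdk/openai-compatible",
    "options": {"apiKey": "lm-studio", "baseURL": "http://localhost:1234/v1"},
    "models": {
        "qwen2.5-7b-instruct": {
            "name": "Qwen 2.5 7B Instruct",
            "tool_call": True,
            "limit": {"context": 131072, "output": 8192},
        }
    },
}

def add_lmstudio(config: dict) -> dict:
    """Merge the LM Studio provider into a config dict and select its model."""
    config.setdefault("provider", {})["lmstudio"] = lmstudio_provider
    config["model"] = "lmstudio/qwen2.5-7b-instruct"
    return config

cfg = add_lmstudio({})
print(json.dumps(cfg, indent=2))
```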

0 commit comments