As an optimization, only providers referenced in your `llms.json` are saved locally, keeping your configuration lightweight.
### Extra Providers
Any additional providers you want to use that are not included in models.dev can be added to your `~/.llms/providers-extra.json`, which gets merged into your `providers.json` on every update. This keeps your local configuration file lightweight by only including the providers that are available for use.
This is used by the default [providers-extra.json](https://github.com/ServiceStack/llms/blob/main/llms/providers-extra.json) to add image generation providers that are not yet supported in models.dev, like [GLM-Image](https://z.ai/blog/glm-image), e.g:
```json
{
  "chutes": {
    "models": {
      "chutes-z-image-turbo": {
        "name": "Z Image Turbo",
        "modalities": {
          "input": [
            "text"
          ],
          "output": [
            "image"
          ]
        }
      }
    }
  },
  "zai": {
    "models": {
      "glm-image": {
        "name": "GLM-Image",
        "modalities": {
          "input": [
            "text"
          ],
          "output": [
            "image"
          ]
        },
        "cost": {
          "input": 0,
          "output": 0.015
        }
      }
    }
  }
}
```
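The update-time merge described above amounts to a recursive dictionary merge of `providers-extra.json` into `providers.json`. A minimal sketch, assuming extra entries win on conflicting keys (the actual merge semantics in llms.py may differ, and the file paths are illustrative):

```python
import json
from pathlib import Path

def deep_merge(base: dict, extra: dict) -> dict:
    """Recursively merge `extra` into `base`; `extra` wins on conflicting keys."""
    merged = dict(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def merge_extra_providers(home: Path) -> dict:
    # Illustrative paths matching the files described above
    providers = json.loads((home / "providers.json").read_text())
    extra = json.loads((home / "providers-extra.json").read_text())
    return deep_merge(providers, extra)
```

Because the merge is recursive, an extra provider entry can add new models to an existing provider without replacing the rest of its configuration.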
---
## Supported Providers
For providers not included in models.dev, add them to `~/.llms/providers-extra.json`:

```json
{
  "my_custom_api": {
    "id": "my_custom_api",
    "npm": "@ai-sdk/openai-compatible",
    "api": "https://my-api.example.com/v1",
    "env": [
      "MY_CUSTOM_API_KEY"
    ],
    "models": {
      "custom-model-1": "model-id-1",
      "custom-model-2": "model-id-2"
    }
  }
}
```

Then enable the provider in your `llms.json`.
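A hypothetical sketch of what enabling the provider in `llms.json` might look like, assuming providers are toggled by name with an `enabled` flag (check your generated `llms.json` for its exact shape):

```json
{
  "providers": {
    "my_custom_api": {
      "enabled": true
    }
  }
}
```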
llms.py providers are implemented as classes extending the base `OpenAiCompatible` class for OpenAI-compatible APIs, and include built-in implementations for several popular OpenAI Chat Completion providers:

| Provider | npm | Description |
|----------|-----|-------------|
|`OllamaProvider`| ollama | Access local models via Ollama |
|`LMStudioProvider`| lmstudio | Access local models via LM Studio |
|`OpenAiLocalProvider`| openai-local | Access generic OpenAI-compatible local endpoints |
Additional OpenAI-compatible providers are implemented and registered in the [providers](https://github.com/ServiceStack/llms/tree/main/llms/extensions/providers) folder using the `ctx.add_provider()` API with a custom `OpenAiCompatible` subclass:

| Provider | npm | Description |
|----------|-----|-------------|
|`GoogleProvider`| @ai-sdk/google | Google models using the Gemini API |
|`OpenAiProvider`| @ai-sdk/openai | Access OpenAI models using the Chat Completions API |
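The registration pattern can be pictured with the sketch below. Only `ctx.add_provider()` and the `OpenAiCompatible` base class are named in the docs; the constructor arguments and the minimal stand-in classes are assumptions for illustration, not llms.py's actual API:

```python
# Illustrative stand-ins for llms.py's own classes.
class OpenAiCompatible:
    def __init__(self, id, api, env=None):
        self.id = id          # provider id referenced from llms.json
        self.api = api        # base URL of the OpenAI-compatible endpoint
        self.env = env or []  # environment variables holding API keys

class Context:
    def __init__(self):
        self.providers = {}

    def add_provider(self, provider):
        # Register the provider under its id so it can be enabled by name
        self.providers[provider.id] = provider

ctx = Context()
# Register a custom OpenAI-compatible provider (hypothetical values)
ctx.add_provider(OpenAiCompatible(
    id="my_custom_api",
    api="https://my-api.example.com/v1",
    env=["MY_CUSTOM_API_KEY"],
))
```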
### Multi Modal Generation Providers
For providers that support multiple modalities (e.g. image generation), custom provider implementations should instead extend the `GeneratorBase` class, as [providers/openrouter.py](https://github.com/ServiceStack/llms/blob/main/llms/extensions/providers/openrouter.py) does to add image generation support for **OpenRouter**.
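As a rough sketch of that pattern (only the `GeneratorBase` name comes from the docs; the method signature and subclass below are assumptions, not llms.py's actual interface):

```python
class GeneratorBase:
    """Illustrative stand-in for llms.py's GeneratorBase; the real interface may differ."""
    def generate(self, request: dict) -> dict:
        raise NotImplementedError

class MyImageProvider(GeneratorBase):
    """Hypothetical multi-modal provider turning text prompts into images."""
    def generate(self, request: dict) -> dict:
        prompt = request["prompt"]
        # A real implementation would call the provider's image generation API
        # here and return the resulting image; this stub just echoes metadata.
        return {"modality": "image", "prompt": prompt, "image": b""}
```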