Commit 4919711

committed: update docs
1 parent 53459f9

2 files changed: +114 −55 lines changed

content/docs/extensions/server.mdx

Lines changed: 1 addition & 1 deletion

@@ -191,7 +191,7 @@ See [Tool Support Docs](/docs/extensions/tools) and [core_tools implementation](

## Custom Provider Implementation example

**Removed:**

[providers/openrouter.py](https://github.com/ServiceStack/llms/blob/main/llms/providers/openrouter.py)

**Added:**

[providers/openrouter.py](https://github.com/ServiceStack/llms/blob/main/llms/extensions/providers/openrouter.py)

Example of creating a custom provider that extends the `GeneratorBase` class to add support for image generation in **OpenRouter**.

content/docs/features/providers.mdx

Lines changed: 113 additions & 54 deletions

@@ -33,51 +33,6 @@ llms --update-providers

As an optimization, only providers referenced in your `llms.json` are saved locally, keeping your configuration lightweight.

**Removed:**

### Extra Providers

Any additional providers you want to use that are not included in models.dev can be added to your `~/.llms/providers-extra.json`, which gets merged into your `providers.json` on every update. This keeps your local configuration file lightweight by only including the providers that are available for use.

This is used in the default [providers-extra.json](https://github.com/ServiceStack/llms/blob/main/llms/providers-extra.json) for image generation providers which are not yet supported in models.dev, e.g:

```json
{
  "chutes": {
    "models": {
      "chutes-z-image-turbo": {
        "name": "Z Image Turbo",
        "modalities": {
          "input": [
            "text"
          ],
          "output": [
            "image"
          ]
        }
      }
    }
  },
  "zai": {
    "models": {
      "glm-image": {
        "name": "GLM-Image",
        "modalities": {
          "input": [
            "text"
          ],
          "output": [
            "image"
          ]
        },
        "cost": {
          "input": 0,
          "output": 0.015
        }
      }
    }
  }
}
```

---

## Supported Providers
@@ -319,9 +274,12 @@ For providers not included in models.dev, add them to `~/.llms/providers-extra.j

```diff
 {
   "my_custom_api": {
-    "type": "OpenAiProvider",
-    "base_url": "https://my-api.example.com",
-    "api_key": "$MY_API_KEY",
+    "id": "my_custom_api",
+    "npm": "@ai-sdk/openai-compatible",
+    "api": "https://my-api.example.com/v1",
+    "env": [
+      "MY_CUSTOM_API_KEY"
+    ],
     "models": {
       "custom-model-1": "model-id-1",
       "custom-model-2": "model-id-2"
```
@@ -340,14 +298,115 @@ Then enable in your `llms.json`:

**Added:**

### Extra Providers

This is used in the default [providers-extra.json](https://github.com/ServiceStack/llms/blob/main/llms/providers-extra.json) for image generation providers which are not yet supported in models.dev, e.g. [GLM-Image](https://z.ai/blog/glm-image):
```json
{
  "zai": {
    "models": {
      "glm-image": {
        "name": "GLM-Image",
        "modalities": {
          "input": [
            "text"
          ],
          "output": [
            "image"
          ]
        },
        "cost": {
          "input": 0,
          "output": 0.015
        }
      }
    }
  }
}
```
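Since the extras file is merged into the local `providers.json` on every update, the merge can be pictured as a recursive dict merge where the extras win on conflicts. A minimal sketch under that assumption (the `deep_merge` and `merge_extra_providers` helpers are hypothetical, not the llms.py implementation):

```python
import json
from pathlib import Path

def deep_merge(base: dict, extra: dict) -> dict:
    """Recursively merge extra into base; extra values win on conflicts."""
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

def merge_extra_providers(home: Path) -> dict:
    """Load providers.json and overlay providers-extra.json if present."""
    providers = json.loads((home / "providers.json").read_text())
    extra_path = home / "providers-extra.json"
    if extra_path.exists():
        deep_merge(providers, json.loads(extra_path.read_text()))
    return providers
```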
### Provider Types

**Removed:**

| Type | Used For |
|------|----------|
| `OpenAiProvider` | OpenAI-compatible APIs (most providers) |
| `GoogleProvider` | Native Google Gemini API |
| `AnthropicProvider` | Native Anthropic Messages API (interleaved thinking) |
| `OllamaProvider` | Local Ollama with auto-discovery |

**Added:**

llms.py providers are implemented as classes extending the base `OpenAiCompatible` class for OpenAI-compatible APIs. Built-in implementations are included for several popular OpenAI Chat Completion providers:

| Type | npm | Description |
|------|-----|-------------|
| `OpenAiCompatible` | @ai-sdk/openai-compatible | OpenAI-compatible APIs (most providers) |
| `MistralProvider` | @ai-sdk/mistral | Access Mistral models |
| `GroqProvider` | @ai-sdk/groq | Access models hosted on Groq |
| `XaiProvider` | @ai-sdk/xai | Access xAI models |
| `CodestralProvider` | codestral | Access Mistral's Codestral models |
| `OllamaProvider` | ollama | Access local models via Ollama |
| `LMStudioProvider` | lmstudio | Access local models via LM Studio |
| `OpenAiLocalProvider` | openai-local | Access generic OpenAI-compatible local endpoints |

Additional OpenAI-compatible providers are implemented and registered in the [providers](https://github.com/ServiceStack/llms/tree/main/llms/extensions/providers) folder using the `ctx.add_provider()` API with a custom `OpenAiCompatible` subclass, e.g:

```python
from llms.main import OpenAiCompatible

class AnthropicProvider(OpenAiCompatible):
    sdk = "@ai-sdk/anthropic"
    #...

ctx.add_provider(AnthropicProvider)
```
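A registry behind `ctx.add_provider()` could be pictured as mapping each provider class's `sdk` id to the class itself. A minimal self-contained sketch (the `ProviderContext` class and its `providers` dict are hypothetical; llms.py's actual context object differs):

```python
class ProviderContext:
    """Hypothetical registry sketch: maps a provider's sdk id to its class."""
    def __init__(self):
        self.providers = {}

    def add_provider(self, provider_cls):
        self.providers[provider_cls.sdk] = provider_cls

# Stand-in for llms.main.OpenAiCompatible, for illustration only
class OpenAiCompatible:
    sdk = "@ai-sdk/openai-compatible"

class AnthropicProvider(OpenAiCompatible):
    sdk = "@ai-sdk/anthropic"

ctx = ProviderContext()
ctx.add_provider(AnthropicProvider)
```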
### Custom Provider implementations

| Type | npm | Description |
|------|-----|-------------|
| `AnthropicProvider` | @ai-sdk/anthropic | Access Claude models using Anthropic's Messages API |
| `CerebrasProvider` | @ai-sdk/cerebras | Access Cerebras models |
| `GoogleProvider` | @ai-sdk/google | Access Google models using the Gemini API |
| `OpenAiProvider` | @ai-sdk/openai | Access OpenAI models using the Chat Completions API |

### Multi Modal Generation Providers

Providers that support multiple modalities (e.g. image generation) should instead extend the `GeneratorBase` class, as done in [providers/openrouter.py](https://github.com/ServiceStack/llms/blob/main/llms/extensions/providers/openrouter.py) to add support for image generation in **OpenRouter**:

```python
def install(ctx):
    from llms.main import GeneratorBase

    # https://openrouter.ai/docs/guides/overview/multimodal/image-generation
    class OpenRouterGenerator(GeneratorBase):
        sdk = "openrouter/image"
        #...

    ctx.add_provider(OpenRouterGenerator)


__install__ = install
```
This new implementation can be used by registering it as the **image** modality whose **npm** matches the provider's **sdk** in `llms.json`, e.g:

```json
{
  "openrouter": {
    "enabled": true,
    "id": "openrouter",
    "modalities": {
      "image": {
        "name": "OpenRouter Image",
        "npm": "openrouter/image"
      }
    }
  }
}
```
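The matching rule above amounts to a lookup: the modality's **npm** id selects the registered provider class whose **sdk** has the same value. A minimal sketch under that assumption (the `find_generator` helper and the dict shapes are hypothetical illustrations, not the llms.py API):

```python
def find_generator(registry: dict, provider_config: dict, modality: str):
    """Return the registered class whose sdk id matches the modality's npm id.

    registry maps sdk ids (e.g. "openrouter/image") to provider classes;
    provider_config is one provider entry from llms.json.
    """
    npm = provider_config.get("modalities", {}).get(modality, {}).get("npm")
    return registry.get(npm)
```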
Existing Image Generation Providers implemented this way include:

| Type | npm | Description |
|------|-----|-------------|
| `ChutesImage` | openai-local | Chutes image generation provider |
| `NvidiaGenAi` | nvidia/image | NVIDIA GenAI image generation provider |
| `OpenAiGenerator` | openai/image | OpenAI image generation provider |
| `OpenRouterGenerator` | openrouter/image | OpenRouter image generation provider |
| `ZaiGenerator` | zai/image | Zai image generation provider |

### Ollama (Local)
