
Commit b3e3fe2

Update providers.mdx
1 parent 4919711 commit b3e3fe2

File tree

1 file changed

+16
-13
lines changed

content/docs/features/providers.mdx

Lines changed: 16 additions & 13 deletions
````diff
@@ -410,29 +410,32 @@ Existing Image Generation Providers implemented this way include:
 
 ### Ollama (Local)
 
-Ollama supports automatic model discovery:
+If no `map_models` have been configured, Ollama will use automatic model discovery to populate its models:
 
 ```json
 {
   "ollama": {
-    "enabled": true,
-    "all_models": true
+    "enabled": false,
+    "npm": "ollama",
+    "api": "http://localhost:11434"
   }
 }
 ```
 
-- `all_models`: Auto-discover all installed models
-- Runs locally at `http://localhost:11434`
-
----
+### LM Studio (Local)
 
-## Non-OpenAI Compatible Providers
+Likewise for LM Studio, which can be enabled with minimal configuration:
 
-Providers that don't use the OpenAI-compatible API format are implemented as extensions in the [providers](https://github.com/ServiceStack/llms/tree/main/llms/extensions/providers) folder using the `ctx.add_provider()` API.
-
-These include specialized implementations for:
-- **Anthropic** - Interleaved thinking support for improved agentic performance
-- **Google** - Native Gemini API with tool calling and RAG features
+```json
+{
+  "lmstudio": {
+    "enabled": false,
+    "npm": "lmstudio",
+    "api": "http://127.0.0.1:1234/v1",
+    "models": {}
+  }
+}
+```
 
 ---
````
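The updated Ollama section says the provider populates its model list by automatic discovery when no `map_models` are configured. As a rough illustration of what that discovery could look like against Ollama's `/api/tags` endpoint, here is a minimal Python sketch; the `model_names` and `discover_models` helpers are hypothetical illustrations, not part of the llms codebase:

```python
# Sketch of automatic model discovery for a local Ollama instance.
# Ollama's /api/tags endpoint returns the locally installed models as
# {"models": [{"name": "..."}, ...]}; the helper names below are
# illustrative, not from the llms project itself.
import json
from urllib.request import urlopen


def model_names(tags_payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response payload."""
    return [m["name"] for m in tags_payload.get("models", [])]


def discover_models(api_base: str = "http://localhost:11434") -> list[str]:
    """Fetch and return the names of all models installed in a local Ollama."""
    with urlopen(f"{api_base}/api/tags") as resp:
        return model_names(json.load(resp))
```

A provider with `"enabled": true` and no explicit model mappings could call something like `discover_models()` at startup and register each returned name.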
