content/docs/features/providers.mdx (16 additions, 13 deletions)
Existing Image Generation Providers implemented this way include:

### Ollama (Local)

If no `map_models` have been configured, Ollama will use automatic model discovery to populate its models:
```json
{
  "ollama": {
    "enabled": false,
    "npm": "ollama",
    "api": "http://localhost:11434"
  }
}
```
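For comparison, a hedged sketch of an explicit `map_models` entry — only the `map_models` key name comes from the text above; the mapping shape (local model name to the name it is exposed as) and the model names themselves are illustrative assumptions:

```json
{
  "ollama": {
    "enabled": true,
    "api": "http://localhost:11434",
    "map_models": {
      "llama3.3:latest": "llama3.3:70b"
    }
  }
}
```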

---

### LM Studio (Local)
Likewise for LM Studio, which can be enabled with minimal configuration:
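A minimal sketch of what that configuration might look like, assuming the same shape as the Ollama entry above — the `lmstudio` key name and the server address below are assumptions, not taken from this page (LM Studio's local server defaults to port 1234):

```json
{
  "lmstudio": {
    "enabled": false,
    "api": "http://localhost:1234"
  }
}
```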

---

## Non-OpenAI Compatible Providers

Providers that don't use the OpenAI-compatible API format are implemented as extensions in the [providers](https://github.com/ServiceStack/llms/tree/main/llms/extensions/providers) folder using the `ctx.add_provider()` API.

These include specialized implementations for:

- **Anthropic** - Interleaved thinking support for improved agentic performance
- **Google** - Native Gemini API with tool calling and RAG features
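The registration flow described above can be sketched as follows. Only the `ctx.add_provider()` call is named in the docs; the `Provider` and `Context` classes below are illustrative stand-ins, not the project's real types:

```python
# Hypothetical sketch of registering a non-OpenAI provider extension.
# Only ctx.add_provider() is taken from the docs above; everything else
# is an illustrative stand-in.

class Provider:
    """Stand-in for a provider extension (illustrative)."""
    def __init__(self, name: str, api: str):
        self.name = name
        self.api = api

    def chat(self, messages):
        # A real extension would translate `messages` into the provider's
        # native request format and call its HTTP API here.
        raise NotImplementedError


class Context:
    """Stand-in for the extension context handed to provider modules."""
    def __init__(self):
        self.providers = {}

    def add_provider(self, provider: Provider):
        # Register the provider under its name so requests can route to it.
        self.providers[provider.name] = provider


ctx = Context()
ctx.add_provider(Provider("anthropic", "https://api.anthropic.com"))
```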