
Commit e67260c

docs: add OpenRouter provider documentation and update model examples
Parent: b057486

2 files changed: 25 additions & 4 deletions

docs/index.md (6 additions & 3 deletions)
@@ -27,7 +27,8 @@ Currently we support the following LLM providers:
 - ✔︎ OpenAI
 - ✔︎ Anthropic
 - ✔︎ Google Gemini
-- ✔︎ Ollama
+- ✔︎ Ollama (your local LLM server)
+- ✔︎ OpenRouter (almost any LLM including open-source models)
 - ⏳ more to come...
 
 ## Quick Start
@@ -41,14 +42,15 @@ Other keys depends on which LLM providers you use.
 GEMINI_API_KEY=XXXX
 OPENAI_API_KEY=sk-XXXX
 ANTHROPIC_API_KEY=sk-ant-XXXXX
+OPENROUTER_API_KEY=XXXXX
 HF_TOKEN=hf_XXXXX
 ```
 
 ### 2. Import Dependencies
 ```python
 from datafast.datasets import ClassificationDataset
 from datafast.schema.config import ClassificationDatasetConfig, PromptExpansionConfig
-from datafast.llms import OpenAIProvider, AnthropicProvider, GeminiProvider
+from datafast.llms import OpenAIProvider, AnthropicProvider, GeminiProvider, OpenRouterProvider
 from dotenv import load_dotenv
 
 # Load environment variables
@@ -93,7 +95,8 @@ config = ClassificationDatasetConfig(
 providers = [
     OpenAIProvider(model_id="gpt-4.1-mini-2025-04-14"),
     AnthropicProvider(model_id="claude-3-5-haiku-latest"),
-    GeminiProvider(model_id="gemini-2.0-flash")
+    GeminiProvider(model_id="gemini-2.5-flash"),
+    OpenRouterProvider(model_id="z-ai/glm-4.6")
 ]
 ```
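The quick start above adds `OPENROUTER_API_KEY` to the `.env` keys and `OpenRouterProvider` to the provider list. As a minimal, hypothetical sketch (the `missing_keys` helper below is ours, not part of datafast), one might verify that every key from the `.env` example is set before constructing the providers, so a missing key fails early rather than mid-generation:

```python
import os

# Key names taken from the .env example in the quick start above.
REQUIRED_KEYS = [
    "GEMINI_API_KEY",
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "OPENROUTER_API_KEY",
]

def missing_keys(required=REQUIRED_KEYS, env=os.environ):
    """Return the subset of required key names that are unset or empty."""
    return [name for name in required if not env.get(name)]

# Example with a fake environment that only defines two of the four keys:
fake_env = {"OPENAI_API_KEY": "sk-XXXX", "OPENROUTER_API_KEY": "XXXXX"}
print(missing_keys(env=fake_env))  # ['GEMINI_API_KEY', 'ANTHROPIC_API_KEY']
```

In real use the environment would be populated by `load_dotenv()` as shown in the import section of the diff.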

docs/llms.md (19 additions & 1 deletion)
@@ -41,7 +41,7 @@ openrouter_llm = OpenRouterProvider()
 
 ```python
 openai_llm = OpenAIProvider(
-    model_id="gpt-4o-mini",  # Custom model
+    model_id="gpt-5-mini-2025-08-07",  # Custom model
     temperature=0.2,  # Lower temperature for more deterministic outputs
     max_completion_tokens=100,  # Limit token generation
     top_p=0.9,  # Nucleus sampling parameter
@@ -53,6 +53,17 @@ ollama_llm = OllamaProvider(
     model_id="llama3.2:latest",
     api_base="http://localhost:11434"  # <--- this is the default url
 )
+
+# OpenRouter with different models
+openrouter_llm = OpenRouterProvider(
+    model_id="z-ai/glm-4.6",  # Access glm-4.6 via OpenRouter
+    temperature=0.7,
+    max_completion_tokens=500
+)
+
+# You can access many models through OpenRouter
+openrouter_deepseek = OpenRouterProvider(model_id="deepseek/deepseek-r1-0528")
+openrouter_qwen = OpenRouterProvider(model_id="qwen/qwen3-next-80b-a3b-instruct")
 ```
 
 ## API Keys
@@ -74,6 +85,13 @@ openrouter_llm = OpenRouterProvider(api_key="your-openrouter-key")
 
 **Note**: Ollama typically runs locally and doesn't require an API key. You can set `OLLAMA_API_BASE` to specify a custom endpoint (defaults to `http://localhost:11434`).
 
+!!! warning
+    Note that `gpt-oss:20b` and `gpt-oss:120b` do not work well with structured output, so we recommend not using them with datafast.
+
+## About OpenRouter
+
+[OpenRouter](https://openrouter.ai/) provides access to a wide variety of LLM models through a single API key. Model IDs follow the format `provider/model-name` (e.g., `deepseek/deepseek-r1-0528`, `qwen/qwen3-next-80b-a3b-instruct`). Visit [OpenRouter's models page](https://openrouter.ai/models) for the complete list.
+
 ## Generation Methods
 
 ### Simple Text Generation
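The "About OpenRouter" section added above says model IDs follow the `provider/model-name` format. As a small illustrative sketch (the `split_model_id` helper is hypothetical, not part of datafast or OpenRouter's API), splitting such an ID at its first slash recovers the two parts:

```python
# Hypothetical helper: split an OpenRouter model ID of the form
# "provider/model-name" into its (provider, model) parts.
def split_model_id(model_id: str) -> tuple[str, str]:
    provider, sep, model = model_id.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider/model-name', got {model_id!r}")
    return provider, model

print(split_model_id("deepseek/deepseek-r1-0528"))
# ('deepseek', 'deepseek-r1-0528')
print(split_model_id("z-ai/glm-4.6"))
# ('z-ai', 'glm-4.6')
```

Splitting at the first `/` only, via `str.partition`, keeps any remaining slashes inside the model-name part.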
