
Commit 5e458ad

docs: streamline models page and UI tweak (#2696)

1 parent 457d1a5 commit 5e458ad

7 files changed: +130, -173 lines

docs/llms-full.txt

Lines changed: 1 addition & 2 deletions

```diff
@@ -38,8 +38,7 @@ The Agents SDK delivers a focused set of Python primitives—agents, tools, guar
 - [Realtime guide](https://openai.github.io/openai-agents-python/realtime/guide/): Deep dive into realtime session lifecycle, structured input, approvals, interruptions, and low-level transport control.
 
 ## Models and Provider Integrations
-- [Model catalog](https://openai.github.io/openai-agents-python/models/): Lists supported OpenAI and partner models with guidance on selecting capabilities for different workloads.
-- [LiteLLM integration](https://openai.github.io/openai-agents-python/models/litellm/): Configure LiteLLM as a provider, map model aliases, and route requests across heterogeneous backends.
+- [Model catalog](https://openai.github.io/openai-agents-python/models/): Covers OpenAI model selection, non-OpenAI provider patterns, websocket transport, and the SDK's best-effort LiteLLM guidance in one place.
 
 ## API Reference – Agents SDK Core
 - [API index](https://openai.github.io/openai-agents-python/ref/index/): Directory of all documented modules, classes, and functions in the SDK.
```

docs/llms.txt

Lines changed: 1 addition & 2 deletions

```diff
@@ -52,8 +52,7 @@ The SDK focuses on a concise set of primitives so you can orchestrate multi-agen
 - [Extensions](https://openai.github.io/openai-agents-python/ref/extensions/handoff_filters/): Extend the SDK with custom handoff filters, prompts, LiteLLM integration, and SQLAlchemy session memory.
 
 ## Models and Providers
-- [Model catalog](https://openai.github.io/openai-agents-python/models/): Overview of supported model families and configuration guidance.
-- [LiteLLM integration](https://openai.github.io/openai-agents-python/models/litellm/): Configure LiteLLM as a provider to fan out across multiple model backends.
+- [Model catalog](https://openai.github.io/openai-agents-python/models/): Overview of OpenAI models, non-OpenAI provider patterns, websocket transport, and the SDK's best-effort LiteLLM guidance.
 
 ## Optional
 - [Release notes](https://openai.github.io/openai-agents-python/release/): Track SDK changes, migration notes, and deprecations.
```

docs/models/index.md

Lines changed: 84 additions & 59 deletions
Large diffs are not rendered by default.

docs/models/litellm.md

Lines changed: 6 additions & 97 deletions
````diff
@@ -1,100 +1,9 @@
-# Using any model via LiteLLM
+# LiteLLM
 
-!!! note
+<script>
+window.location.replace("../#litellm");
+</script>
 
-    The LiteLLM integration is in beta. You may run into issues with some model providers, especially smaller ones. Please report any issues via [Github issues](https://github.com/openai/openai-agents-python/issues) and we'll fix quickly.
+This page moved to the [LiteLLM section in Models](index.md#litellm).
 
-[LiteLLM](https://docs.litellm.ai/docs/) is a library that allows you to use 100+ models via a single interface. We've added a LiteLLM integration to allow you to use any AI model in the Agents SDK.
-
-## Setup
-
-You'll need to ensure `litellm` is available. You can do this by installing the optional `litellm` dependency group:
-
-```bash
-pip install "openai-agents[litellm]"
-```
-
-Once done, you can use [`LitellmModel`][agents.extensions.models.litellm_model.LitellmModel] in any agent.
-
-## Example
-
-This is a fully working example. When you run it, you'll be prompted for a model name and API key. For example, you could enter:
-
-- `openai/gpt-4.1` for the model, and your OpenAI API key
-- `anthropic/claude-3-5-sonnet-20240620` for the model, and your Anthropic API key
-- etc
-
-For a full list of models supported in LiteLLM, see the [litellm providers docs](https://docs.litellm.ai/docs/providers).
-
-```python
-from __future__ import annotations
-
-import asyncio
-
-from agents import Agent, Runner, function_tool, set_tracing_disabled
-from agents.extensions.models.litellm_model import LitellmModel
-
-@function_tool
-def get_weather(city: str):
-    print(f"[debug] getting weather for {city}")
-    return f"The weather in {city} is sunny."
-
-
-async def main(model: str, api_key: str):
-    agent = Agent(
-        name="Assistant",
-        instructions="You only respond in haikus.",
-        model=LitellmModel(model=model, api_key=api_key),
-        tools=[get_weather],
-    )
-
-    result = await Runner.run(agent, "What's the weather in Tokyo?")
-    print(result.final_output)
-
-
-if __name__ == "__main__":
-    # First try to get model/api key from args
-    import argparse
-
-    parser = argparse.ArgumentParser()
-    parser.add_argument("--model", type=str, required=False)
-    parser.add_argument("--api-key", type=str, required=False)
-    args = parser.parse_args()
-
-    model = args.model
-    if not model:
-        model = input("Enter a model name for Litellm: ")
-
-    api_key = args.api_key
-    if not api_key:
-        api_key = input("Enter an API key for Litellm: ")
-
-    asyncio.run(main(model, api_key))
-```
-
-## Tracking usage data
-
-If you want LiteLLM responses to populate the Agents SDK usage metrics, pass `ModelSettings(include_usage=True)` when creating your agent.
-
-```python
-from agents import Agent, ModelSettings
-from agents.extensions.models.litellm_model import LitellmModel
-
-agent = Agent(
-    name="Assistant",
-    model=LitellmModel(model="your/model", api_key="..."),
-    model_settings=ModelSettings(include_usage=True),
-)
-```
-
-With `include_usage=True`, LiteLLM requests report token and request counts through `result.context_wrapper.usage` just like the built-in OpenAI models.
-
-## Troubleshooting
-
-If you see Pydantic serializer warnings from LiteLLM responses, enable a small compatibility patch by setting:
-
-```bash
-export OPENAI_AGENTS_ENABLE_LITELLM_SERIALIZER_PATCH=true
-```
-
-This opt-in flag suppresses known LiteLLM serializer warnings while preserving normal behavior. Turn it off (unset or `false`) if you do not need it.
+If you are not redirected automatically, use the link above.
````

docs/stylesheets/extra.css

Lines changed: 33 additions & 0 deletions
```diff
@@ -170,6 +170,32 @@
   font-size: 14px;
 }
 
+.md-typeset .field-table {
+  overflow-x: auto;
+}
+
+.md-typeset .field-table table:not([class]) {
+  display: table;
+  table-layout: fixed;
+  width: 100%;
+}
+
+.md-typeset .field-table table:not([class]) th:first-child,
+.md-typeset .field-table table:not([class]) td:first-child {
+  width: 11rem;
+}
+
+.md-typeset .field-table table:not([class]) th:nth-child(2),
+.md-typeset .field-table table:not([class]) td:nth-child(2) {
+  width: 18rem;
+}
+
+.md-typeset .field-table table:not([class]) th:first-child code,
+.md-typeset .field-table table:not([class]) td:first-child code {
+  white-space: nowrap;
+  word-break: normal;
+}
+
 /* Custom link styling */
 .md-content a {
   text-decoration: none;
@@ -203,3 +229,10 @@
 .md-sidebar__scrollwrap {
   scrollbar-color: auto !important;
 }
+
+/* Let the docs layout use more of large viewports without becoming fully fluid. */
+@media screen and (min-width: 76.25em) {
+  .md-grid {
+    max-width: clamp(76rem, 92vw, 92rem);
+  }
+}
```
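The `clamp(76rem, 92vw, 92rem)` rule above pins the grid between a 76rem floor and a 92rem ceiling while tracking 92% of the viewport in between. A quick sketch of the effective width, assuming the browser default of 1rem = 16px:

```python
def md_grid_max_width_px(viewport_px: float, rem_px: float = 16.0) -> float:
    """Mirror CSS clamp(MIN, VAL, MAX) for clamp(76rem, 92vw, 92rem)."""
    lo, hi = 76 * rem_px, 92 * rem_px   # 1216px and 1472px at 16px/rem
    preferred = 0.92 * viewport_px      # 92vw
    return min(max(preferred, lo), hi)

# At the 76.25em (1220px) breakpoint the grid sits at the 76rem floor;
# on very wide screens it tops out at 92rem.
print(md_grid_max_width_px(1220))   # 1216.0
print(md_grid_max_width_px(2000))   # 1472.0
```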

docs/usage.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -31,7 +31,7 @@ Usage is aggregated across all model calls during the run (including tool calls
 
 ### Enabling usage with LiteLLM models
 
-LiteLLM providers do not report usage metrics by default. When you are using [`LitellmModel`](models/litellm.md), pass `ModelSettings(include_usage=True)` to your agent so that LiteLLM responses populate `result.context_wrapper.usage`.
+LiteLLM providers do not report usage metrics by default. When you are using [`LitellmModel`][agents.extensions.models.litellm_model.LitellmModel], pass `ModelSettings(include_usage=True)` to your agent so that LiteLLM responses populate `result.context_wrapper.usage`. See the [LiteLLM note](models/index.md#litellm) in the Models guide for setup guidance and examples.
 
 ```python
 from agents import Agent, ModelSettings, Runner
```

mkdocs.yml

Lines changed: 4 additions & 12 deletions
```diff
@@ -56,9 +56,7 @@ plugins:
       - Configuration: config.md
       - Documentation:
           - agents.md
-          - Models:
-              - models/index.md
-              - models/litellm.md
+          - Models: models/index.md
           - tools.md
           - guardrails.md
           - running_agents.md
@@ -177,9 +175,7 @@ plugins:
       - config.md
       - ドキュメント:
          - agents.md
-          - モデル:
-              - models/index.md
-              - models/litellm.md
+          - モデル: models/index.md
          - tools.md
          - guardrails.md
          - running_agents.md
@@ -217,9 +213,7 @@ plugins:
      - config.md
      - 문서:
          - agents.md
-          - 모델:
-              - models/index.md
-              - models/litellm.md
+          - 모델: models/index.md
          - tools.md
          - guardrails.md
          - running_agents.md
@@ -257,9 +251,7 @@ plugins:
      - config.md
      - 文档:
          - agents.md
-          - 模型:
-              - models/index.md
-              - models/litellm.md
+          - 模型: models/index.md
          - tools.md
          - guardrails.md
          - running_agents.md
```

(Indentation in this hunk is approximate; the scrape stripped the original YAML nesting.)
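Assembled from the hunk above, each locale's nav now points its Models entry at a single page instead of a two-item subsection. The English section reads roughly as follows (nesting is approximate, since the diff does not preserve the file's exact indentation):

```yaml
- Documentation:
    - agents.md
    - Models: models/index.md
    - tools.md
    - guardrails.md
    - running_agents.md
```

In MkDocs nav syntax, `Models: models/index.md` maps the nav title directly to one page, which is why the nested `models/litellm.md` entry could be dropped once that page became a redirect.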
