@@ -1,100 +1,9 @@
-# Using any model via LiteLLM
+# LiteLLM
 
-!!! note
+<script>
+  window.location.replace("../#litellm");
+</script>
 
-    The LiteLLM integration is in beta. You may run into issues with some model providers, especially smaller ones. Please report any issues via [Github issues](https://github.com/openai/openai-agents-python/issues) and we'll fix quickly.
+This page moved to the [LiteLLM section in Models](index.md#litellm).
 
-[LiteLLM](https://docs.litellm.ai/docs/) is a library that allows you to use 100+ models via a single interface. We've added a LiteLLM integration to allow you to use any AI model in the Agents SDK.
-
-## Setup
-
-You'll need to ensure `litellm` is available. You can do this by installing the optional `litellm` dependency group:
-
-```bash
-pip install "openai-agents[litellm]"
-```
-
-Once done, you can use [`LitellmModel`][agents.extensions.models.litellm_model.LitellmModel] in any agent.
-
-## Example
-
-This is a fully working example. When you run it, you'll be prompted for a model name and API key. For example, you could enter:
-
-- `openai/gpt-4.1` for the model, and your OpenAI API key
-- `anthropic/claude-3-5-sonnet-20240620` for the model, and your Anthropic API key
-- etc
-
-For a full list of models supported in LiteLLM, see the [litellm providers docs](https://docs.litellm.ai/docs/providers).
-
-```python
-from __future__ import annotations
-
-import asyncio
-
-from agents import Agent, Runner, function_tool, set_tracing_disabled
-from agents.extensions.models.litellm_model import LitellmModel
-
-@function_tool
-def get_weather(city: str):
-    print(f"[debug] getting weather for {city}")
-    return f"The weather in {city} is sunny."
-
-
-async def main(model: str, api_key: str):
-    agent = Agent(
-        name="Assistant",
-        instructions="You only respond in haikus.",
-        model=LitellmModel(model=model, api_key=api_key),
-        tools=[get_weather],
-    )
-
-    result = await Runner.run(agent, "What's the weather in Tokyo?")
-    print(result.final_output)
-
-
-if __name__ == "__main__":
-    # First try to get model/api key from args
-    import argparse
-
-    parser = argparse.ArgumentParser()
-    parser.add_argument("--model", type=str, required=False)
-    parser.add_argument("--api-key", type=str, required=False)
-    args = parser.parse_args()
-
-    model = args.model
-    if not model:
-        model = input("Enter a model name for Litellm: ")
-
-    api_key = args.api_key
-    if not api_key:
-        api_key = input("Enter an API key for Litellm: ")
-
-    asyncio.run(main(model, api_key))
-```
-
-## Tracking usage data
-
-If you want LiteLLM responses to populate the Agents SDK usage metrics, pass `ModelSettings(include_usage=True)` when creating your agent.
-
-```python
-from agents import Agent, ModelSettings
-from agents.extensions.models.litellm_model import LitellmModel
-
-agent = Agent(
-    name="Assistant",
-    model=LitellmModel(model="your/model", api_key="..."),
-    model_settings=ModelSettings(include_usage=True),
-)
-```
-
-With `include_usage=True`, LiteLLM requests report token and request counts through `result.context_wrapper.usage` just like the built-in OpenAI models.
-
-## Troubleshooting
-
-If you see Pydantic serializer warnings from LiteLLM responses, enable a small compatibility patch by setting:
-
-```bash
-export OPENAI_AGENTS_ENABLE_LITELLM_SERIALIZER_PATCH=true
-```
-
-This opt-in flag suppresses known LiteLLM serializer warnings while preserving normal behavior. Turn it off (unset or `false`) if you do not need it.
+If you are not redirected automatically, use the link above.