
Commit c88f339

Update examples and defaults to GPT-5.5 (#3016)
1 parent 5ffc1ec commit c88f339

66 files changed

Lines changed: 136 additions & 105 deletions

Some content is hidden: large commits hide part of the diff by default.

docs/agents.md

Lines changed: 1 addition & 1 deletion
@@ -301,7 +301,7 @@ By using the `clone()` method on an agent, you can duplicate an Agent, and optio
 pirate_agent = Agent(
     name="Pirate",
     instructions="Write like a pirate",
-    model="gpt-5.4",
+    model="gpt-5.5",
 )

 robot_agent = pirate_agent.clone(

docs/models/index.md

Lines changed: 14 additions & 14 deletions
@@ -22,16 +22,16 @@ Start with the simplest path that fits your setup:

 For most OpenAI-only apps, the recommended path is to use string model names with the default OpenAI provider and stay on the Responses model path.

-When you don't specify a model when initializing an `Agent`, the default model will be used. The default is currently [`gpt-4.1`](https://developers.openai.com/api/docs/models/gpt-4.1) for compatibility and low latency. If you have access, we recommend setting your agents to [`gpt-5.4`](https://developers.openai.com/api/docs/models/gpt-5.4) for higher quality while keeping explicit `model_settings`.
+When you don't specify a model when initializing an `Agent`, the default model will be used. The default is currently [`gpt-4.1`](https://developers.openai.com/api/docs/models/gpt-4.1) for compatibility and low latency. If you have access, we recommend setting your agents to [`gpt-5.5`](https://developers.openai.com/api/docs/models/gpt-5.5) for higher quality while keeping explicit `model_settings`.

-If you want to switch to other models like [`gpt-5.4`](https://developers.openai.com/api/docs/models/gpt-5.4), there are two ways to configure your agents.
+If you want to switch to other models like [`gpt-5.5`](https://developers.openai.com/api/docs/models/gpt-5.5), there are two ways to configure your agents.

 ### Default model

 First, if you want to consistently use a specific model for all agents that do not set a custom model, set the `OPENAI_DEFAULT_MODEL` environment variable before running your agents.

 ```bash
-export OPENAI_DEFAULT_MODEL=gpt-5.4
+export OPENAI_DEFAULT_MODEL=gpt-5.5
 python3 my_awesome_agent.py
 ```

@@ -48,13 +48,13 @@ agent = Agent(
 result = await Runner.run(
     agent,
     "Hello",
-    run_config=RunConfig(model="gpt-5.4"),
+    run_config=RunConfig(model="gpt-5.5"),
 )
 ```

 #### GPT-5 models

-When you use any GPT-5 model such as [`gpt-5.4`](https://developers.openai.com/api/docs/models/gpt-5.4) in this way, the SDK applies default `ModelSettings`. It sets the ones that work the best for most use cases. To adjust the reasoning effort for the default model, pass your own `ModelSettings`:
+When you use any GPT-5 model such as [`gpt-5.5`](https://developers.openai.com/api/docs/models/gpt-5.5) in this way, the SDK applies default `ModelSettings`. It sets the ones that work the best for most use cases. To adjust the reasoning effort for the default model, pass your own `ModelSettings`:

 ```python
 from openai.types.shared import Reasoning
@@ -63,20 +63,20 @@ from agents import Agent, ModelSettings
 my_agent = Agent(
     name="My Agent",
     instructions="You're a helpful agent.",
-    # If OPENAI_DEFAULT_MODEL=gpt-5.4 is set, passing only model_settings works.
+    # If OPENAI_DEFAULT_MODEL=gpt-5.5 is set, passing only model_settings works.
     # It's also fine to pass a GPT-5 model name explicitly:
-    model="gpt-5.4",
+    model="gpt-5.5",
     model_settings=ModelSettings(reasoning=Reasoning(effort="high"), verbosity="low")
 )
 ```

-For lower latency, using `reasoning.effort="none"` with `gpt-5.4` is recommended. The gpt-4.1 family (including mini and nano variants) also remains a solid choice for building interactive agent apps.
+For lower latency, using `reasoning.effort="none"` with `gpt-5.5` is recommended. The gpt-4.1 family (including mini and nano variants) also remains a solid choice for building interactive agent apps.

 #### ComputerTool model selection

-If an agent includes [`ComputerTool`][agents.tool.ComputerTool], the effective model on the actual Responses request determines which computer-tool payload the SDK sends. Explicit `gpt-5.4` requests use the GA built-in `computer` tool, while explicit `computer-use-preview` requests keep the older `computer_use_preview` payload.
+If an agent includes [`ComputerTool`][agents.tool.ComputerTool], the effective model on the actual Responses request determines which computer-tool payload the SDK sends. Explicit `gpt-5.5` requests use the GA built-in `computer` tool, while explicit `computer-use-preview` requests keep the older `computer_use_preview` payload.

-Prompt-managed calls are the main exception. If a prompt template owns the model and the SDK omits `model` from the request, the SDK defaults to the preview-compatible computer payload so it does not guess which model the prompt pins. To keep the GA path in that flow, either make `model="gpt-5.4"` explicit on the request or force the GA selector with `ModelSettings(tool_choice="computer")` or `ModelSettings(tool_choice="computer_use")`.
+Prompt-managed calls are the main exception. If a prompt template owns the model and the SDK omits `model` from the request, the SDK defaults to the preview-compatible computer payload so it does not guess which model the prompt pins. To keep the GA path in that flow, either make `model="gpt-5.5"` explicit on the request or force the GA selector with `ModelSettings(tool_choice="computer")` or `ModelSettings(tool_choice="computer_use")`.

 With a registered [`ComputerTool`][agents.tool.ComputerTool], `tool_choice="computer"`, `"computer_use"`, and `"computer_use_preview"` are normalized to the built-in selector that matches the effective request model. If no `ComputerTool` is registered, those strings continue to behave like ordinary function names.

@@ -108,7 +108,7 @@ from agents import set_default_openai_responses_transport
 set_default_openai_responses_transport("websocket")
 ```

-This affects OpenAI Responses models resolved by the default OpenAI provider (including string model names such as `"gpt-5.4"`).
+This affects OpenAI Responses models resolved by the default OpenAI provider (including string model names such as `"gpt-5.5"`).

 Transport selection happens when the SDK resolves a model name into a model instance. If you pass a concrete [`Model`][agents.models.interface.Model] object, its transport is already fixed: [`OpenAIResponsesWSModel`][agents.models.openai_responses.OpenAIResponsesWSModel] uses websocket, [`OpenAIResponsesModel`][agents.models.openai_responses.OpenAIResponsesModel] uses HTTP, and [`OpenAIChatCompletionsModel`][agents.models.openai_chatcompletions.OpenAIChatCompletionsModel] stays on Chat Completions. If you pass `RunConfig(model_provider=...)`, that provider controls transport selection instead of the global default.

@@ -275,7 +275,7 @@ triage_agent = Agent(
     name="Triage agent",
     instructions="Handoff to the appropriate agent based on the language of the request.",
     handoffs=[spanish_agent, english_agent],
-    model="gpt-5.4",
+    model="gpt-5.5",
 )

 async def main():
@@ -320,7 +320,7 @@ from agents import Agent, ModelSettings

 research_agent = Agent(
     name="Research agent",
-    model="gpt-5.4",
+    model="gpt-5.5",
     model_settings=ModelSettings(
         parallel_tool_calls=False,
         truncation="auto",
@@ -363,7 +363,7 @@ from agents import Agent, ModelRetrySettings, ModelSettings, retry_policies

 agent = Agent(
     name="Assistant",
-    model="gpt-5.4",
+    model="gpt-5.5",
     model_settings=ModelSettings(
         retry=ModelRetrySettings(
             max_retries=4,

docs/results.md

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ In practice:

 Unlike the JavaScript SDK, Python does not expose a separate `output` property for the model-shaped delta only. Use `new_items` when you need SDK metadata, or inspect `raw_responses` when you need the raw model payloads.

-Computer-tool replay follows the raw Responses payload shape. Preview-model `computer_call` items preserve a single `action`, while `gpt-5.4` computer calls can preserve batched `actions[]`. [`to_input_list()`][agents.result.RunResultBase.to_input_list] and [`RunState`][agents.run_state.RunState] keep whichever shape the model produced, so manual replay, pause/resume flows, and stored transcripts continue to work across both preview and GA computer-tool calls. Local execution results still appear as `computer_call_output` items in `new_items`.
+Computer-tool replay follows the raw Responses payload shape. Preview-model `computer_call` items preserve a single `action`, while `gpt-5.5` computer calls can preserve batched `actions[]`. [`to_input_list()`][agents.result.RunResultBase.to_input_list] and [`RunState`][agents.run_state.RunState] keep whichever shape the model produced, so manual replay, pause/resume flows, and stored transcripts continue to work across both preview and GA computer-tool calls. Local execution results still appear as `computer_call_output` items in `new_items`.

 ### New items

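The preview-vs-GA replay shapes that docs/results.md describes above (a single `action` versus batched `actions[]` on a `computer_call`) can be normalized in a few lines of plain Python. The dicts below are illustrative stand-ins for the raw Responses payloads, not SDK types:

```python
def extract_actions(computer_call: dict) -> list[dict]:
    """Hypothetical helper: flatten both computer_call shapes into one list."""
    if "actions" in computer_call:   # GA shape: batched actions[]
        return list(computer_call["actions"])
    if "action" in computer_call:    # preview shape: single action
        return [computer_call["action"]]
    return []

preview_call = {"type": "computer_call",
                "action": {"type": "click", "x": 10, "y": 20}}
ga_call = {"type": "computer_call",
           "actions": [{"type": "click", "x": 10, "y": 20},
                       {"type": "screenshot"}]}
print(len(extract_actions(preview_call)))  # -> 1
print(len(extract_actions(ga_call)))       # -> 2
```

Code that replays stored transcripts across both preview and GA calls can use a shim like this instead of branching on the model name.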

docs/sandbox/guide.md

Lines changed: 1 addition & 1 deletion
@@ -555,7 +555,7 @@ async def main(model: str, prompt: str) -> None:
 if __name__ == "__main__":
     asyncio.run(
         main(
-            model="gpt-5.4",
+            model="gpt-5.5",
             prompt=(
                 "Open `repo/task.md`, use the `$credit-note-fixer` skill, fix the bug, "
                 f"run `{TARGET_TEST_CMD}`, and summarize the change."

docs/sandbox_agents.md

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@ def build_agent(model: str) -> SandboxAgent[None]:

 async def main() -> None:
     result = await Runner.run(
-        build_agent("gpt-5.4"),
+        build_agent("gpt-5.5"),
         "Open `repo/task.md`, fix the issue, run the targeted test, and summarize the change.",
         run_config=RunConfig(
             sandbox=SandboxRunConfig(client=UnixLocalSandboxClient()),

docs/scripts/translate_docs.py

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@
 # logging.basicConfig(level=logging.INFO)
 # logging.getLogger("openai").setLevel(logging.DEBUG)

-OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-5.4")
+OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-5.5")

 ENABLE_CODE_SNIPPET_EXCLUSION = True
 # gpt-4.5 needed this for better quality

docs/streaming.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ Keep consuming `result.stream_events()` until the async iterator finishes. A str

 [`RawResponsesStreamEvent`][agents.stream_events.RawResponsesStreamEvent] are raw events passed directly from the LLM. They are in OpenAI Responses API format, which means each event has a type (like `response.created`, `response.output_text.delta`, etc) and data. These events are useful if you want to stream response messages to the user as soon as they are generated.

-Computer-tool raw events keep the same preview-vs-GA distinction as stored results. Preview flows stream `computer_call` items with one `action`, while `gpt-5.4` can stream `computer_call` items with batched `actions[]`. The higher-level [`RunItemStreamEvent`][agents.stream_events.RunItemStreamEvent] surface does not add a special computer-only event name for this: both shapes still surface as `tool_called`, and the screenshot result comes back as `tool_output` wrapping a `computer_call_output` item.
+Computer-tool raw events keep the same preview-vs-GA distinction as stored results. Preview flows stream `computer_call` items with one `action`, while `gpt-5.5` can stream `computer_call` items with batched `actions[]`. The higher-level [`RunItemStreamEvent`][agents.stream_events.RunItemStreamEvent] surface does not add a special computer-only event name for this: both shapes still surface as `tool_called`, and the screenshot result comes back as `tool_output` wrapping a `computer_call_output` item.

 For example, this will output the text generated by the LLM token-by-token.

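Reducing the raw event stream that docs/streaming.md describes above to visible text is just a filter on the event type. The event dicts here are simplified stand-ins for the Responses API event objects, and `collect_text` is a hypothetical helper:

```python
def collect_text(events: list[dict]) -> str:
    """Concatenate incremental text deltas, ignoring lifecycle events
    like response.created and response.completed."""
    return "".join(
        e["delta"]
        for e in events
        if e.get("type") == "response.output_text.delta"
    )

events = [
    {"type": "response.created"},
    {"type": "response.output_text.delta", "delta": "Hel"},
    {"type": "response.output_text.delta", "delta": "lo"},
    {"type": "response.completed"},
]
print(collect_text(events))  # -> Hello
```

In a real streaming loop the same filter would run per-event inside `async for event in result.stream_events():` rather than over a collected list.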

docs/tools.md

Lines changed: 7 additions & 7 deletions
@@ -93,7 +93,7 @@ crm_tools = tool_namespace(

 agent = Agent(
     name="Operations assistant",
-    model="gpt-5.4",
+    model="gpt-5.5",
     instructions="Load the crm namespace before using CRM tools.",
     tools=[*crm_tools, ToolSearchTool()],
 )
@@ -134,7 +134,7 @@ csv_skill: ShellToolSkillReference = {

 agent = Agent(
     name="Container shell agent",
-    model="gpt-5.4",
+    model="gpt-5.5",
     instructions="Use the mounted skill when helpful.",
     tools=[
         ShellTool(
@@ -186,20 +186,20 @@ Local runtime tools require you to supply implementations:

 `ComputerTool` is still a local harness: you provide a [`Computer`][agents.computer.Computer] or [`AsyncComputer`][agents.computer.AsyncComputer] implementation, and the SDK maps that harness onto the OpenAI Responses API computer surface.

-For explicit [`gpt-5.4`](https://developers.openai.com/api/docs/models/gpt-5.4) requests, the SDK sends the GA built-in tool payload `{"type": "computer"}`. The older `computer-use-preview` model keeps the preview payload `{"type": "computer_use_preview", "environment": ..., "display_width": ..., "display_height": ...}`. This mirrors the platform migration described in OpenAI's [Computer use guide](https://developers.openai.com/api/docs/guides/tools-computer-use/):
+For explicit [`gpt-5.5`](https://developers.openai.com/api/docs/models/gpt-5.5) requests, the SDK sends the GA built-in tool payload `{"type": "computer"}`. The older `computer-use-preview` model keeps the preview payload `{"type": "computer_use_preview", "environment": ..., "display_width": ..., "display_height": ...}`. This mirrors the platform migration described in OpenAI's [Computer use guide](https://developers.openai.com/api/docs/guides/tools-computer-use/):

-- Model: `computer-use-preview` -> `gpt-5.4`
+- Model: `computer-use-preview` -> `gpt-5.5`
 - Tool selector: `computer_use_preview` -> `computer`
 - Computer call shape: one `action` per `computer_call` -> batched `actions[]` on `computer_call`
 - Truncation: `ModelSettings(truncation="auto")` required on the preview path -> not required on the GA path

-The SDK chooses that wire shape from the effective model on the actual Responses request. If you use a prompt template and the request omits `model` because the prompt owns it, the SDK keeps the preview-compatible computer payload unless you either keep `model="gpt-5.4"` explicit or force the GA selector with `ModelSettings(tool_choice="computer")` or `ModelSettings(tool_choice="computer_use")`.
+The SDK chooses that wire shape from the effective model on the actual Responses request. If you use a prompt template and the request omits `model` because the prompt owns it, the SDK keeps the preview-compatible computer payload unless you either keep `model="gpt-5.5"` explicit or force the GA selector with `ModelSettings(tool_choice="computer")` or `ModelSettings(tool_choice="computer_use")`.

 When a [`ComputerTool`][agents.tool.ComputerTool] is present, `tool_choice="computer"`, `"computer_use"`, and `"computer_use_preview"` are all accepted and normalized to the built-in selector that matches the effective request model. Without a `ComputerTool`, those strings still behave like ordinary function names.

 This distinction matters when `ComputerTool` is backed by a [`ComputerProvider`][agents.tool.ComputerProvider] factory. The GA `computer` payload does not need `environment` or dimensions at serialization time, so unresolved factories are fine. Preview-compatible serialization still needs a resolved `Computer` or `AsyncComputer` instance so the SDK can send `environment`, `display_width`, and `display_height`.

-At runtime, both paths still use the same local harness. Preview responses emit `computer_call` items with a single `action`; `gpt-5.4` can emit batched `actions[]`, and the SDK executes them in order before producing a `computer_call_output` screenshot item. See `examples/tools/computer_use.py` for a runnable Playwright-based harness.
+At runtime, both paths still use the same local harness. Preview responses emit `computer_call` items with a single `action`; `gpt-5.5` can emit batched `actions[]`, and the SDK executes them in order before producing a `computer_call_output` screenshot item. See `examples/tools/computer_use.py` for a runnable Playwright-based harness.

 ```python
 from agents import Agent, ApplyPatchTool, ShellTool
@@ -784,7 +784,7 @@ agent = Agent(
     sandbox_mode="workspace-write",
     working_directory="/path/to/repo",
     default_thread_options=ThreadOptions(
-        model="gpt-5.4",
+        model="gpt-5.5",
         model_reasoning_effort="low",
         network_access_enabled=True,
         web_search_mode="disabled",
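The `tool_choice` normalization that docs/tools.md describes above can be sketched as a small lookup. `normalize_tool_choice` is a hypothetical helper mirroring the documented behavior (aliases collapse to the selector that matches the effective model, and behave like ordinary function names without a registered `ComputerTool`), not the SDK's actual implementation:

```python
# Aliases the docs say are accepted when a ComputerTool is registered.
COMPUTER_ALIASES = {"computer", "computer_use", "computer_use_preview"}

def normalize_tool_choice(tool_choice: str, model: str,
                          has_computer_tool: bool) -> str:
    """Hypothetical sketch of the documented normalization rules."""
    if tool_choice not in COMPUTER_ALIASES or not has_computer_tool:
        # Ordinary function name, or no ComputerTool registered.
        return tool_choice
    # Preview model keeps the preview selector; GA models use "computer".
    if model == "computer-use-preview":
        return "computer_use_preview"
    return "computer"

print(normalize_tool_choice("computer_use", "gpt-5.5", True))           # -> computer
print(normalize_tool_choice("computer", "computer-use-preview", True))  # -> computer_use_preview
print(normalize_tool_choice("computer_use", "gpt-5.5", False))          # -> computer_use
```

The third call shows the documented fallback: without a `ComputerTool`, the alias passes through unchanged and is treated as a plain function name.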

docs/voice/quickstart.md

Lines changed: 4 additions & 4 deletions
@@ -72,15 +72,15 @@ spanish_agent = Agent(
     instructions=prompt_with_handoff_instructions(
         "You're speaking to a human, so be polite and concise. Speak in Spanish.",
     ),
-    model="gpt-5.4",
+    model="gpt-5.5",
 )

 agent = Agent(
     name="Assistant",
     instructions=prompt_with_handoff_instructions(
         "You're speaking to a human, so be polite and concise. If the user speaks in Spanish, handoff to the spanish agent.",
     ),
-    model="gpt-5.4",
+    model="gpt-5.5",
     handoffs=[spanish_agent],
     tools=[get_weather],
 )
@@ -156,15 +156,15 @@ spanish_agent = Agent(
     instructions=prompt_with_handoff_instructions(
         "You're speaking to a human, so be polite and concise. Speak in Spanish.",
     ),
-    model="gpt-5.4",
+    model="gpt-5.5",
 )

 agent = Agent(
     name="Assistant",
     instructions=prompt_with_handoff_instructions(
         "You're speaking to a human, so be polite and concise. If the user speaks in Spanish, handoff to the spanish agent.",
     ),
-    model="gpt-5.4",
+    model="gpt-5.5",
     handoffs=[spanish_agent],
     tools=[get_weather],
 )

examples/basic/hello_world_gpt_5.py

Lines changed: 2 additions & 2 deletions
@@ -9,14 +9,14 @@
 # from openai import AsyncOpenAI
 # client = AsyncOpenAI()
 # from agents import OpenAIChatCompletionsModel
-# chat_completions_model = OpenAIChatCompletionsModel(model="gpt-5.4", openai_client=client)
+# chat_completions_model = OpenAIChatCompletionsModel(model="gpt-5.5", openai_client=client)


 async def main():
     agent = Agent(
         name="Knowledgable GPT-5 Assistant",
         instructions="You're a knowledgable assistant. You always provide an interesting answer.",
-        model="gpt-5.4",
+        model="gpt-5.5",
         model_settings=ModelSettings(
             reasoning=Reasoning(effort="low"), # "none", "low", "medium", "high", "xhigh"
             verbosity="low", # "low", "medium", "high"
