# Tools

Tools let agents take actions: things like fetching data, running code, calling external APIs, and even using a computer. The SDK supports five categories:

- Hosted OpenAI tools: run alongside the model on OpenAI servers.
- Local runtime tools: run in your environment (computer use, shell, apply patch).
- Function calling: wrap any Python function as a tool.
- Agents as tools: expose an agent as a callable tool without a full handoff.
- Experimental Codex tool: run workspace-scoped Codex tasks from a tool call.

## Hosted tools

Disabled tools are completely hidden from the LLM at runtime, making this useful for:

- A/B testing different tool configurations
- Dynamic tool filtering based on runtime state

## Experimental: Codex tool

The `codex_tool` wraps the Codex CLI so an agent can run workspace-scoped tasks (shell, file edits, MCP tools) during a tool call. This surface is experimental and may change.

```python
from agents import Agent
from agents.extensions.experimental.codex import ThreadOptions, codex_tool

agent = Agent(
    name="Codex Agent",
    instructions="Use the codex tool to inspect the workspace and answer the question.",
    tools=[
        codex_tool(
            sandbox_mode="workspace-write",
            working_directory="/path/to/repo",
            default_thread_options=ThreadOptions(
                model="gpt-5.2-codex",
                network_access_enabled=True,
                web_search_enabled=False,
            ),
            persist_session=True,
        )
    ],
)
```

What to know:

- Auth: set `CODEX_API_KEY` (preferred) or `OPENAI_API_KEY`, or pass `codex_options={"api_key": "..."}`.
- Inputs: tool calls must include at least one item in `inputs`, each shaped `{"type": "text", "text": ...}` or `{"type": "local_image", "path": ...}`.
- Safety: pair `sandbox_mode` with `working_directory`; set `skip_git_repo_check=True` outside Git repos.
- Behavior: `persist_session=True` reuses a single Codex thread and returns its `thread_id`.
- Streaming: `on_stream` receives Codex events (reasoning, command execution, MCP tool calls, file changes, web search).
- Outputs: results include `response`, `usage`, and `thread_id`; usage is added to `RunContextWrapper.usage`.
- Structure: `output_schema` enforces structured Codex responses when you need typed outputs.
- See `examples/tools/codex.py` for a complete runnable sample.

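The `inputs` contract above can be sketched in plain Python. The validator below is a hypothetical illustration of the rule (at least one item, each a `text` or `local_image` entry), not part of the SDK:

```python
# Hypothetical illustration of the `inputs` shape the codex tool expects:
# at least one item, each either {"type": "text", "text": ...}
# or {"type": "local_image", "path": ...}.
def validate_codex_inputs(inputs: list[dict]) -> bool:
    if not inputs:
        return False  # at least one item is required
    for item in inputs:
        if item.get("type") == "text" and isinstance(item.get("text"), str):
            continue
        if item.get("type") == "local_image" and isinstance(item.get("path"), str):
            continue
        return False
    return True

print(validate_codex_inputs([{"type": "text", "text": "List the top-level files."}]))  # True
print(validate_codex_inputs([]))  # False: empty input list is rejected
```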
## Handling errors in function tools

When you create a function tool via `@function_tool`, you can pass a `failure_error_function`. This is a function that provides an error response to the LLM in case the tool call crashes.
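In spirit, the mechanism works like the sketch below. `function_tool_sketch` is a hypothetical stand-in for the real decorator, shown only to illustrate how a `failure_error_function` turns a crash into an error string the LLM can read:

```python
from typing import Callable

def function_tool_sketch(failure_error_function: Callable[[Exception], str]):
    """Stand-in for @function_tool(failure_error_function=...): if the wrapped
    tool raises, return the error function's message instead of propagating."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                # The LLM sees this string as the tool's output.
                return failure_error_function(exc)
        return wrapper
    return decorator

@function_tool_sketch(failure_error_function=lambda exc: f"Tool failed: {exc}")
def divide(a: float, b: float) -> float:
    return a / b

print(divide(6, 3))  # 2.0
print(divide(1, 0))  # Tool failed: division by zero
```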