
feat(agent): recover from unknown tool calls via opt-in handler #3402

Closed

adityasingh2400 wants to merge 1 commit into openai:main from adityasingh2400:fix-tool-not-found-recover

Conversation

@adityasingh2400 (Contributor)

Summary

When the LLM hallucinates a tool name that the agent doesn't expose, turn_resolution.process_model_response raises ModelBehaviorError("Tool <name> not found in agent <agent>") and crashes the entire run. For long-running agents this is painful: a single misbehaving turn forces the caller to handle the exception, recover state manually, and re-run. The maintainer acknowledged this in #325 but no fix had shipped.

This PR adds an opt-in Agent.unknown_tool_behavior: Literal["raise", "respond"] = "raise". The default keeps the existing exception so nothing changes for current users. With "respond", the SDK appends a synthetic function_call_output (or custom_tool_call_output) containing a clear error message and the list of currently available tool names, then lets the loop continue. The model gets to retry with the right tool on the next turn instead of crashing the whole run. The realtime session already handles this case the same way (see src/agents/realtime/session.py:774), so the two paths now agree.

The new field is appended to the end of Agent, validated in __post_init__, and propagated through clone() automatically via dataclasses.replace. The recovery helpers (_append_unknown_function_tool_recovery, _append_unknown_custom_tool_recovery) mirror the existing function_rejection_item / append_approval_error_output patterns. tools_used keeps the unknown tool's name so the run-loop's NextStepRunAgain heuristic still triggers another model turn.

Test plan

  • make format, make lint, make typecheck all pass.
  • make tests -> 4571 passed, 2 skipped (parallel) + 27 passed, 5 skipped (serial), no regressions.
  • New unit tests in tests/test_run_step_processing.py:
    • test_unknown_function_tool_default_still_raises asserts the existing ModelBehaviorError path is preserved.
    • test_unknown_function_tool_respond_appends_recovery_output asserts the synthetic ToolCallItem + ToolCallOutputItem are emitted and functions/handoffs stay empty.
  • New end-to-end tests in tests/test_run.py:
    • test_unknown_tool_default_raises_model_behavior_error confirms Runner.run still raises by default.
    • test_unknown_tool_respond_lets_run_continue runs two fake-model turns and confirms the recovery function_call_output is fed back to the LLM and the run reaches a final output.
  • Docs (docs/agents.md) updated with a new "Recovering from unknown tool calls" section and an entry in the configuration table.
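The assertion pattern of the two unit tests above can be sketched in pytest. These are toy stand-ins to show the shape of the checks, not the SDK's real `process_model_response` or its test fixtures:

```python
import pytest


class ModelBehaviorError(Exception):
    pass


def process_unknown_tool(behavior: str, tool_name: str, available: list[str]) -> dict:
    # Stand-in for the unknown-tool branch of response processing.
    if behavior == "raise":
        raise ModelBehaviorError(f"Tool {tool_name} not found")
    return {
        "type": "function_call_output",
        "output": (
            f"Tool '{tool_name}' is not available. "
            f"Available tools: {', '.join(available)}."
        ),
    }


def test_unknown_function_tool_default_still_raises():
    # Default behavior: the existing exception path is preserved.
    with pytest.raises(ModelBehaviorError):
        process_unknown_tool("raise", "bogus_tool", ["search"])


def test_unknown_function_tool_respond_appends_recovery_output():
    # Opt-in behavior: a synthetic output naming the valid tools is emitted.
    item = process_unknown_tool("respond", "bogus_tool", ["search"])
    assert item["type"] == "function_call_output"
    assert "search" in item["output"]
```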

Issue number

Refs #325

When the LLM hallucinates a tool name not registered on the agent,
turn_resolution previously raised ModelBehaviorError and crashed the
entire run. Add an opt-in Agent.unknown_tool_behavior field with
"raise" (default, preserves existing behavior) and "respond" (append a
synthetic tool-call output naming the available tools and let the run
continue so the model can recover). Refs openai#325.
@seratch (Member) commented May 14, 2026

Thanks for suggesting this, but this is not a customization option we'd like to add, at least for now.

@seratch closed this May 14, 2026
@chatgpt-codex-connector (Bot) left a comment
💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c62ff7f1b6


Comment on lines +1490 to +1494
```python
available = _available_tool_names_for_recovery(all_tools)
if available:
    return (
        f"Tool '{tool_name}' is not available on agent '{agent_name}'. "
        f"Available tools: {', '.join(available)}."
```

P2: Include handoff tools in unknown-tool recovery hints

When unknown_tool_behavior="respond" is enabled, the recovery message is built from all_tools only, but valid model-callable names also include handoff tool names from handoff_map. In runs that expose only handoffs (or mostly handoffs), an unknown tool call will produce a misleading message like “No tools are currently available” (or omit valid handoff names), which can prevent the model from self-correcting on the next turn and defeats the purpose of this feature.
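The fix Codex suggests can be sketched as merging handoff names into the hint. The `handoff_map` shape (tool name → handoff) is assumed here for illustration, not taken verbatim from the SDK:

```python
def available_tool_names_for_recovery(
    all_tools: list[str], handoff_map: dict[str, object]
) -> list[str]:
    # Valid model-callable names include regular tools AND handoff tools.
    # dict.fromkeys deduplicates while preserving order.
    return list(dict.fromkeys([*all_tools, *handoff_map]))


def recovery_message(
    tool_name: str,
    agent_name: str,
    all_tools: list[str],
    handoff_map: dict[str, object],
) -> str:
    available = available_tool_names_for_recovery(all_tools, handoff_map)
    if available:
        return (
            f"Tool '{tool_name}' is not available on agent '{agent_name}'. "
            f"Available tools: {', '.join(available)}."
        )
    return (
        f"Tool '{tool_name}' is not available on agent '{agent_name}'. "
        "No tools are currently available."
    )
```

With this change, a handoff-only agent no longer reports "No tools are currently available", so the model can still self-correct by calling a valid handoff on the next turn.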


