Feature: Add retry tool call in LLM Agent #302
Conversation
⚠️ Review skipped: auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. (Review profile: CHILL, Plan: Pro)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0be78593c8
```python
tool_call_resp = self.tool_manager.call_tools(
    agent=self, llm_response=current_plan.llm_plan
)
```
Merge successful first-pass tool results before returning
When a tool batch has mixed outcomes, this retry loop replaces `tool_call_resp` with only the retry attempt, and the caller later stores/returns that reduced list. This drops successful tool outputs from the first execution even though those tools already ran, so downstream memory/reasoning can miss completed side effects and may issue duplicate actions. (The async path has the same overwrite pattern.)
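A minimal sketch of the fix this comment suggests: overlay the retry outcomes on top of the first-pass results instead of replacing the whole list. The result-dict shape (`tool_call_id` / `success` keys) and the function name are assumptions for illustration, not the repository's actual API.

```python
def merge_tool_results(first_pass, retry_results):
    """Return first-pass results with failed entries replaced by their retries.

    Successful first-pass outputs are preserved, so downstream memory still
    sees every tool that actually ran.
    """
    retried = {r["tool_call_id"]: r for r in retry_results}
    return [retried.get(r["tool_call_id"], r) for r in first_pass]
```

Keyed merging by call id (rather than list position) also stays correct if the retry executes only the failed subset.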
I have solved this.
```python
selected_tools=current_plan.selected_tools,
ttl=current_plan.ttl,
```
Preserve selected_tools restrictions during retry
Retry planning relies on `current_plan.selected_tools`, but this value is often `None` for plans produced by reasoners that still build `Plan` without setting `selected_tools` (for example, the CoT/ReWOO paths). In those cases a failed call retries with unrestricted schemas, allowing tools outside the original `selected_tools` constraint to be called after an error.
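One way to preserve the restriction, sketched under the assumption that tool schemas are dicts with a `name` key (the function name is hypothetical):

```python
def schemas_for_retry(selected_tools, all_schemas):
    """Restrict the retry to the originally selected tools, when the plan set any."""
    if selected_tools is None:
        # The plan never restricted tools (e.g. CoT/ReWOO paths), so the
        # retry may legitimately see the full schema set.
        return all_schemas
    allowed = set(selected_tools)
    return [s for s in all_schemas if s["name"] in allowed]
```

The key point is that the filter must be applied on the retry path too, not only when the plan is first built.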
I have solved this issue.
```diff
@@ -0,0 +1,23 @@
+from examples.retry_tool_model import run_demo
```
Reference an existing module in retry integration test
This test imports `examples.retry_tool_model`, but there is no such module in the repository tree under `examples/`. As written, test collection will raise `ModuleNotFoundError` before the test can run, so the new integration coverage for retry behavior is effectively broken.
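Besides adding the missing module, a defensive option is to probe for it before importing, so collection degrades to a skip instead of an error. A small helper sketch (the helper name is hypothetical; `importlib.util.find_spec` is standard library):

```python
import importlib.util


def module_available(name: str) -> bool:
    """True if `name` can be imported without raising ModuleNotFoundError."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # find_spec itself raises if a parent package (e.g. `examples`) is missing.
        return False
```

In a pytest suite the equivalent one-liner is `pytest.importorskip("examples.retry_tool_model")`, which skips the test when the module is absent.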
🚀 Overview
This PR introduces a plan-level retry mechanism for tool execution in `LLMAgent` when tool calls fail.
Previously, failed tool calls were returned as-is and the agent continued execution without recovery. This often led to degraded behavior or wasted simulation steps.
With this change, the agent performs a bounded retry at the executor level, allowing it to recover from tool failures.
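The bounded executor-level retry described above can be sketched as follows. This is a minimal illustration assuming a callable that returns a dict with a `success` flag; the names and result shape are not the PR's actual API.

```python
def execute_with_retry(execute, max_retries=2):
    """Invoke `execute` until it succeeds or the retry budget is spent.

    One initial attempt plus at most `max_retries` re-invocations, so the
    reasoning loop never spins indefinitely on a persistently failing tool.
    """
    result = execute()
    for _ in range(max_retries):
        if result.get("success"):
            break
        result = execute()
    return result
```

If the budget is exhausted, the last (failed) result is returned so the caller can still record the error rather than losing it.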
✨ Key Features
🧠 Why This Matters
This improves agent reliability in scenarios where:
The result is more stable and realistic agent behavior in multi-step simulations.
📌 Scope
This PR focuses on hard tool failures, including:
⚙️ Behavior & Compatibility
This enhancement makes tool execution more fault-tolerant while keeping the agent reasoning loop efficient and controlled.