Feature: Add retry tool call in LLM Agent #302

Open
niteshver wants to merge 8 commits into mesa:main from niteshver:new_tool

Conversation

@niteshver (Contributor) commented Apr 19, 2026

🚀 Overview

This PR introduces a plan-level retry mechanism for tool execution in `LLMAgent` when tool calls fail.

Previously, failed tool calls were returned as-is and the agent continued execution without recovery. This often led to degraded behavior or wasted simulation steps.

With this change, the agent performs a bounded retry at the executor level, allowing it to recover from tool failures.


✨ Key Features

  • Detects tool execution failures during `apply_plan` and `aapply_plan`
  • Retries failed tool calls once by default (`max_tool_retries=1`); see the sketch after this list
  • Reuses the same selected tools during retry
  • Injects failure context into the retry prompt for better correction
  • Stores retry count and history in agent memory for transparency and debugging
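
A minimal, self-contained sketch of this flow (the helper names, callbacks, and result-dict shape here are illustrative assumptions, not the PR's actual code):

```python
from typing import Any, Callable

ToolResult = dict[str, Any]


def call_tools_with_retry(
    call_tools: Callable[[], list[ToolResult]],
    retry_failed: Callable[[list[ToolResult]], list[ToolResult]],
    is_failure: Callable[[ToolResult], bool],
    max_tool_retries: int = 1,
) -> tuple[list[ToolResult], int]:
    """Run a batch of tool calls, retrying only the failed ones a bounded number of times."""
    results = call_tools()
    retries = 0
    while retries < max_tool_retries and any(is_failure(r) for r in results):
        retries += 1
        failed = [r for r in results if is_failure(r)]
        # Keep successful first-pass results; only the failed entries are replaced.
        results = [r for r in results if not is_failure(r)] + retry_failed(failed)
    # The final results and retry count would then be written to agent memory.
    return results, retries
```

In the change itself, the first batch corresponds to `self.tool_manager.call_tools(...)` inside `apply_plan`/`aapply_plan`, and `retry_failed` stands in for re-prompting the LLM with the failure context before re-executing the corrected calls.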

🧠 Why This Matters

This improves agent reliability in scenarios where:

  • Tool calls fail due to invalid arguments or temporary issues
  • A single failure would otherwise disrupt the entire step
  • Recovery is possible without full re-planning

The result is more stable and realistic agent behavior in multi-step simulations.


📌 Scope

This PR focuses on hard tool failures (a rough detection sketch follows this list), including:

  • Tool execution errors / exceptions
  • Invalid tool arguments
  • Tool lookup failures
  • Malformed tool responses
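
As an illustration of how these cases might be detected, here is a hypothetical check over a per-tool result dict (the field names are assumptions, not mesa-llm's actual result format):

```python
def is_hard_failure(response: dict) -> bool:
    """Classify a single tool result as a hard failure (illustrative only)."""
    # Execution errors / exceptions surfaced by the tool runner.
    if response.get("error"):
        return True
    # Lookup failures: the requested tool name was never resolved.
    if response.get("tool") is None:
        return True
    # Invalid arguments or malformed responses: no usable content came back.
    return not response.get("content")
```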

⚙️ Behavior & Compatibility

  • Default behavior remains unchanged aside from retry on failure
  • Retry is bounded and configurable via `max_tool_retries` (usage sketch below)
  • No impact on agents when tool calls succeed

This enhancement makes tool execution more fault-tolerant while keeping the agent reasoning loop efficient and controlled.
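
A hedged usage sketch: `max_tool_retries` is the only name taken from this PR; whether it is passed to the constructor or set as an attribute is an assumption here.

```python
def configure_retries(agent, budget: int) -> None:
    """Set the per-step tool-retry budget on an LLMAgent-like object (illustrative)."""
    # 0 disables retries entirely, 1 keeps the default, and larger values trade
    # extra LLM calls per step for more chances to correct a failing tool call.
    agent.max_tool_retries = budget
```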

@coderabbitai (bot) commented Apr 19, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e2b44ef5-0ff4-425b-b31b-8b5e5c817de6

You can disable this status message by setting `reviews.review_status` to `false` in the CodeRabbit configuration file.


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0be78593c8


Comment thread: mesa_llm/llm_agent.py (outdated), lines +229 to +231

```python
tool_call_resp = self.tool_manager.call_tools(
    agent=self, llm_response=current_plan.llm_plan
)
```

P1: Merge successful first-pass tool results before returning

When a tool batch has mixed outcomes, this retry loop replaces `tool_call_resp` with only the retry attempt, and the caller later stores/returns that reduced list. That drops successful tool outputs from the first execution even though those tools already ran, so downstream memory/reasoning can miss completed side effects and may issue duplicate actions (the async path has the same overwrite pattern).
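
One way to address this, sketched as a hypothetical helper rather than the PR's code: preserve the successful first-pass entries and substitute only the ones that failed.

```python
def merge_tool_results(first_pass: list[dict], retried: list[dict], is_failure) -> list[dict]:
    """Merge retry results into the first-pass batch (illustrative sketch)."""
    # Keyed by tool name for simplicity; a real merge would key by call id.
    retried_by_tool = {r.get("tool"): r for r in retried}
    merged = []
    for result in first_pass:
        if is_failure(result) and result.get("tool") in retried_by_tool:
            # A failed entry is replaced only when the retry produced a result for it.
            merged.append(retried_by_tool[result["tool"]])
        else:
            # Successful first-pass outputs are preserved as-is.
            merged.append(result)
    return merged
```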

niteshver (Contributor, Author) replied:

I have solved this.

Comment thread: mesa_llm/llm_agent.py (outdated), lines +226 to +227

```python
selected_tools=current_plan.selected_tools,
ttl=current_plan.ttl,
```

P2: Preserve `selected_tools` restrictions during retry

Retry planning relies on `current_plan.selected_tools`, but this value is often `None` for plans produced by reasoners that still build `Plan` without setting `selected_tools` (for example, CoT/ReWOO paths). In those cases a failed call retries with unrestricted schemas, allowing tools outside the original `selected_tools` constraint to be called after an error.
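
A defensive fallback along these lines could look like the following hypothetical helper (the names and the failed-call shape are assumptions):

```python
def tools_for_retry(plan_selected_tools, failed_calls):
    """Choose the tool restriction for a retry (illustrative only).

    If the original plan recorded selected_tools, reuse it; otherwise restrict
    the retry to the tools named in the failed calls instead of exposing every
    registered tool schema after an error.
    """
    if plan_selected_tools is not None:
        return plan_selected_tools
    return sorted({call["tool"] for call in failed_calls if call.get("tool")})
```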

niteshver (Contributor, Author) replied:

I have solved this issue.

Comment thread: new integration test file, lines +1 to +23

```
@@ -0,0 +1,23 @@
from examples.retry_tool_model import run_demo
```

P2: Reference an existing module in the retry integration test

This test imports `examples.retry_tool_model`, but there is no such module in the repository tree under `examples/`. As written, test collection will raise `ModuleNotFoundError` before the test can run, so the new integration coverage for retry behavior is effectively broken.
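
The real fix is to add the missing `examples/retry_tool_model.py` (or point the import at an existing module); as a stopgap, the import could also be guarded so collection degrades to a skip rather than an error, for example:

```python
import pytest

# Skip this test module cleanly if the demo module is absent, instead of failing
# test collection with ModuleNotFoundError.
retry_tool_model = pytest.importorskip("examples.retry_tool_model")


def test_retry_demo_runs():
    # run_demo is the entry point imported in the PR's test file.
    retry_tool_model.run_demo()
```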
