Increment version to 0.0.84 in `pyproject.toml` and `uv.lock`, update agent functionality in `mcp-sse.py` and `openai-mcp.py` for improved task handling, and enhance timeout management in MCP tools for better performance. #485

MervinPraison merged 3 commits into main.
✅ Deploy Preview for praisonai ready!
> Caution: Review failed. The pull request is closed.

Walkthrough

The changes introduce configurable timeout parameters across MCP-related classes, replacing hardcoded timeouts with flexible values. Agent instantiations and invocations are updated, including renaming and reconfiguring agents, modifying prompts, and updating endpoint URLs. The project version is incremented, and some agent instruction strings are simplified.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Agent
    participant MCP
    participant SSEMCPClient
    participant SSEMCPTool
    User->>Agent: Start agent with prompt
    Agent->>MCP: Initialize with timeout
    MCP->>SSEMCPClient: Initialize (timeout configurable)
    SSEMCPClient-->>MCP: Initialization complete
    Agent->>MCP: Call tool (with timeout)
    MCP->>SSEMCPTool: Execute tool (timeout configurable)
    SSEMCPTool-->>MCP: Tool result
    MCP-->>Agent: Tool response
    Agent-->>User: Formatted output
```
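The flow in the diagram can be sketched as a toy synchronous driver. The class names mirror the diagram; the bodies are illustrative stand-ins, not the project's actual implementation:

```python
class SSEMCPTool:
    """Stand-in for an SSE-backed MCP tool; records its configured timeout."""
    def __init__(self, name, timeout=60):
        self.name, self.timeout = name, timeout

    def execute(self, **kwargs):
        return f"{self.name} result (timeout={self.timeout}s)"

class SSEMCPClient:
    """Stand-in client: the timeout is propagated to every tool it creates."""
    def __init__(self, url, timeout=60):
        self.tools = [SSEMCPTool("tweet_tool", timeout=timeout)]

class MCP:
    """Stand-in MCP wrapper: passes its timeout down to the SSE client."""
    def __init__(self, url, timeout=60):
        self.client = SSEMCPClient(url, timeout=timeout)

    def call_tool(self, name, **kwargs):
        tool = next(t for t in self.client.tools if t.name == name)
        return tool.execute(**kwargs)

class Agent:
    """Stand-in agent: forwards the prompt to a tool and formats the result."""
    def __init__(self, mcp):
        self.mcp = mcp

    def start(self, prompt):
        result = self.mcp.call_tool("tweet_tool", prompt=prompt)
        return f"Formatted output: {result}"

agent = Agent(MCP("http://localhost:8080/sse", timeout=60))
print(agent.start("AI in Healthcare"))
```

The point of the sketch is only the plumbing: a single `timeout` value set at the top is threaded down through `MCP` and `SSEMCPClient` to each tool, which is the shape of the change this PR makes.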
⛔ Files ignored due to path filters (2)

📒 Files selected for processing (11)
Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team, gemini-code-assist here with a summary of this pull request from @MervinPraison. This PR focuses on updating the project version, refining the functionality of example agents using MCP (Model Context Protocol), and significantly enhancing timeout management within the MCP tool integration to improve robustness and performance.
Highlights
- Version Increment: The project version has been updated from `0.0.83` to `0.0.84` in both `pyproject.toml` and `uv.lock`.
- Example Agent Updates: The example agents in `mcp-sse.py`, `openai-mcp.py`, and `simple-mcp-server.py` have been updated with revised instructions and starting tasks. The explicit `llm` parameter has been removed from the agent definitions in `mcp-sse.py` and `openai-mcp.py`.
- Enhanced Timeout Management for MCP Tools: Timeout handling has been made configurable and more robust within the `MCPToolRunner` and `SSEMCPClient`. A `timeout` parameter (defaulting to 60 seconds) has been added to the relevant classes and methods, replacing hardcoded 30-second timeouts for initialization waits and tool call results. Error messages now include the specific timeout duration.
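The configurable-timeout pattern described above can be sketched roughly as follows. The class name follows the PR; the body is a simplified stand-in, not the project's actual implementation:

```python
import threading

class MCPToolRunner:
    """Simplified sketch: wait for background initialization with a
    configurable timeout instead of a hardcoded 30 seconds."""

    def __init__(self, timeout=60):
        self.timeout = timeout                # defaulting to 60, per the PR
        self.initialized = threading.Event()  # set once the MCP session is ready

    def call_tool(self, name, arguments):
        # Block until the background client signals readiness, or give up.
        if not self.initialized.wait(timeout=self.timeout):
            # The error message now reports the configured duration.
            return f"Error: MCP initialization timed out after {self.timeout} seconds"
        return f"called {name} with {arguments}"

runner = MCPToolRunner(timeout=0.1)   # short timeout just for demonstration
print(runner.call_tool("tweet", {}))  # initialization never happens, so this times out
runner.initialized.set()              # simulate the client finishing initialization
print(runner.call_tool("tweet", {"topic": "AI"}))
```

`threading.Event.wait` returns `False` when the timeout elapses before the event is set, which is what makes the single `if not …wait(…)` check sufficient here.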
Changelog
- `src/praisonai-agents/mcp-sse.py`
  - Renamed `qa_agent` to `tweet_agent`.
  - Updated agent instructions to 'You are a Tweet Formatter Agent.'.
  - Changed the MCP server URL from `http://localhost:8080/agents/sse` to `http://localhost:8080/sse`.
  - Updated the starting task from 'AI in 2025' to 'AI in Healthcare'.
  - Removed the explicit `llm` parameter (`openai/gpt-4o-mini`) from the agent definition.
- `src/praisonai-agents/openai-mcp.py`
  - Updated agent instructions from 'You help book apartments on Airbnb.' to 'Search apartment in Paris for 2 nights. 07/28 - 07/30 for 2 adults'.
  - Changed the starting task from 'I want to book an apartment in Paris for 2 nights. 03/28 - 03/30 for 2 adults' to 'Search apartment in Paris for 2 nights. 07/28 - 07/30 for 2 adults'.
  - Removed the explicit `llm` parameter (`openai/gpt-4o-mini`) from the agent definition.
- `src/praisonai-agents/praisonaiagents/mcp/mcp.py`
  - Added a `timeout` parameter (defaulting to 60) to the `MCPToolRunner.__init__` method (L19).
  - Used `self.timeout` instead of a hardcoded 30 seconds for the `initialized.wait` timeout in the `call_tool` method (L78).
  - Updated the timeout error message in `call_tool` to include the specific timeout duration (L80).
  - Passed the `timeout` parameter to the `SSEMCPClient` initialization within the `MCP.__init__` method (L193).
  - Passed the `timeout` parameter to the `MCPToolRunner` initialization within the `MCP.__init__` method (L220).
  - Used `self.timeout` instead of a hardcoded 30 seconds for the `initialized.wait` timeout in the `MCP.__init__` method (L223).
  - Updated the timeout warning message in `MCP.__init__` to include the specific timeout duration (L224).
- `src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py`
  - Added a `timeout` parameter (defaulting to 60) to the `SSEMCPTool.__init__` method (L34).
  - Used `self.timeout` instead of a hardcoded 30 seconds for the `future.result` timeout in the `__call__` method (L70).
  - Added a `timeout` parameter (defaulting to 60) to the `SSEMCPClient.__init__` method (L106).
  - Updated the docstring for `SSEMCPClient.__init__` to include the new `timeout` parameter (L113).
  - Used `self.timeout` instead of a hardcoded 30 seconds for the `future.result` timeout in the `run_event_loop` method (L145).
  - Passed the `timeout` parameter to the `SSEMCPTool` initialization within the `_async_initialize` method (L176).
- `src/praisonai-agents/pyproject.toml`
  - Incremented the project version from `0.0.83` to `0.0.84` (L7).
- `src/praisonai-agents/simple-mcp-server.py`
  - Removed the explicit `name` parameter (`TweetAgent`) from the agent definition.
  - Updated agent instructions from 'Create a Tweet based on the topic provided' to 'Create a Tweet'.
- `src/praisonai-agents/uv.lock`
  - Incremented the locked version for `praisonaiagents` from `0.0.83` to `0.0.84` (L1460).
Agent starts its task,
Waiting for the tool's reply,
Timeout, sixty ticks.
Code Review
This pull request primarily focuses on enhancing timeout management within the MCP components, making it more configurable and robust. It also includes a version bump to 0.0.84. Additionally, example agent scripts (`mcp-sse.py` and `openai-mcp.py`) have been updated, notably by removing explicit LLM configurations. While the timeout enhancements are solid improvements, the changes to LLM specification in the examples warrant clarification to ensure users understand the resulting behavior and configuration.
Summary of Findings
- Change in LLM Configuration and Interaction in Examples: Explicit LLM configurations (`llm="openai/gpt-4o-mini"`) were removed from agent examples in `mcp-sse.py` and `openai-mcp.py`. This causes the examples to switch from using the `praisonaiagents.llm.LLM` class (a LiteLLM wrapper) to a direct `openai` client call with a default model (`gpt-4o` or `OPENAI_MODEL_NAME`). This change could significantly impact example behavior, cost, provider compatibility, and clarity for users. This was commented on with high severity.
- Timeout Management Enhancement: Timeout handling for MCP operations has been made configurable (defaulting to 60 seconds) and more robust across `MCPToolRunner`, `MCP`, and `SSEMCPClient`. Error messages for timeouts are now more informative. This is a positive improvement for reliability. (Not commented on, as it is an improvement rather than an issue meeting the severity criteria for comments.)
- Version Update: Project version was correctly incremented to 0.0.84 in `pyproject.toml` and `uv.lock`. (Not commented on, as it is a standard change.)
Merge Readiness
The enhancements to timeout management are well-implemented and improve the robustness of MCP interactions. However, the removal of explicit LLM configurations in the example agent files (mcp-sse.py and openai-mcp.py)—and the resulting shift in LLM interaction mechanism—raises significant concerns regarding clarity, potential changes in behavior, and cost implications for users running these examples. These points have been flagged with high severity and should be addressed before merging to ensure the examples remain clear, representative, and do not lead to unexpected outcomes for users. I am unable to approve the pull request directly; please ensure these concerns are resolved and consider further review before merging.
```python
tweet_agent = Agent(
    instructions="""You are a Tweet Formatter Agent.""",
    tools=MCP("http://localhost:8080/sse")
)
```
The `llm="openai/gpt-4o-mini"` parameter has been removed, and the agent name has changed. Previously, specifying the LLM this way (with a `/`) would engage the `praisonaiagents.llm.LLM` class, which serves as a wrapper around LiteLLM, for LLM interactions. With this parameter's removal, the `Agent` now defaults to using `os.getenv('OPENAI_MODEL_NAME', 'gpt-4o')` and interacts with the OpenAI API directly via the `openai` client, bypassing the `LLM` wrapper.
Could you clarify whether this change in the example's LLM interaction mechanism (from the `LLM` wrapper/LiteLLM to a direct `openai` client) and model (from `gpt-4o-mini` to `gpt-4o` or an environment variable) is intentional? Also, why was the agent name changed from `qa_agent` to `tweet_agent`?
This change could lead to:
- Different LLM provider behavior or compatibility if users were relying on LiteLLM's broader support through the previous setup.
- Altered behavior or performance due to the change in model.
- Different cost implications for users running this example.
- Reduced clarity for users seeking to understand how to use the `LLM` wrapper for non-OpenAI models or advanced LiteLLM configurations through these examples.
If the intention is to simplify examples to use a direct OpenAI default, perhaps adding a comment in the example to explain this and to guide users on how to use the `LLM` class for other providers would be beneficial. Alternatively, if `gpt-4o-mini` via the `LLM` wrapper was specifically chosen for this example previously, it might be worth considering whether that configuration should be retained or updated to reflect the new preferred method of specification.
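The dispatch heuristic the reviewer describes can be illustrated with a toy sketch. The real routing lives inside `Agent`; this simplified, hypothetical function only mirrors the `/`-in-model-string behavior:

```python
import os

def resolve_llm(llm=None):
    """Toy sketch of the dispatch described above: a 'provider/model' string
    routes to the LiteLLM-based wrapper, otherwise the OpenAI client is used
    directly with a default model. Both return tags are illustrative."""
    if llm and "/" in llm:
        # e.g. "openai/gpt-4o-mini" -> praisonaiagents.llm.LLM (LiteLLM wrapper)
        return ("litellm-wrapper", llm)
    # No llm (or no provider prefix) -> direct openai client, default model
    model = llm or os.getenv("OPENAI_MODEL_NAME", "gpt-4o")
    return ("openai-client", model)

print(resolve_llm("openai/gpt-4o-mini"))  # routed through the LLM wrapper
print(resolve_llm())                      # direct OpenAI client, default model
```

This makes the reviewer's concern concrete: dropping the `llm` argument does not just change the model name, it changes which client path the example exercises.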
```python
search_agent = Agent(
    instructions="""You help book apartments on Airbnb.""",
    llm="openai/gpt-4o-mini",
    tools=MCP("npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt")
)
```
Similar to the change in `mcp-sse.py`, the `llm="openai/gpt-4o-mini"` parameter has been removed. This means the agent, which previously would have used the `praisonaiagents.llm.LLM` class (wrapping LiteLLM) due to the `"provider/model"` format, will now default to using `os.getenv('OPENAI_MODEL_NAME', 'gpt-4o')` and interact directly with the OpenAI API via the `openai` client.
Is this shift in the example's LLM interaction pathway and the specific model used intentional?
This modification could impact:
- The example's behavior and performance characteristics.
- Cost for users running the example.
- The example's utility in demonstrating how to configure diverse LLMs or leverage LiteLLM features via the `LLM` wrapper.
Clarifying the rationale behind this change would be helpful. If the goal is to showcase a simpler default OpenAI setup, consider adding comments to guide users on more advanced configurations using the `LLM` class.
```diff
 if not self.initialized.is_set():
-    return "Error: MCP initialization timed out"
+    return f"Error: MCP initialization timed out after {self.timeout} seconds"
```
- …files, and update `praisonaiagents` version to 0.0.85 across relevant files for improved functionality and compatibility.
- Increment version to 0.0.84 in `pyproject.toml` and `uv.lock`, update agent functionality in `mcp-sse.py` and `openai-mcp.py` for improved task handling, and enhance timeout management in MCP tools for better performance.