
Increment version to 0.0.84 in pyproject.toml and uv.lock, update…#485

Merged
MervinPraison merged 3 commits into main from develop on May 23, 2025

Conversation

MervinPraison (Owner) commented May 22, 2025

… agent functionality in mcp-sse.py and openai-mcp.py for improved task handling, and enhance timeout management in MCP tools for better performance.

Summary by CodeRabbit

  • New Features
    • Added support for configurable timeout settings (default 60 seconds) for agent tool calls and server initialization, allowing for more flexible operation and improved reliability.
  • Improvements
    • Updated agent configurations and prompts for more relevant use cases and streamlined instructions.
    • Updated Dockerfiles and documentation to use the latest PraisonAI package version 2.2.1.
  • Chores
    • Incremented the project version to 0.0.85.

netlify bot commented May 22, 2025

Deploy Preview for praisonai ready!

Name: praisonai
🔨 Latest commit: 468e36f
🔍 Latest deploy log: https://app.netlify.com/projects/praisonai/deploys/6830eff08c5ea100089a2ec8
😎 Deploy Preview: https://deploy-preview-485--praisonai.netlify.app

coderabbitai bot (Contributor) commented May 22, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

The changes introduce configurable timeout parameters across MCP-related classes, replacing hardcoded timeouts with flexible values. Agent instantiations and invocations are updated, including renaming and reconfiguring agents, modifying prompts, and updating endpoint URLs. The project version is incremented, and some agent instruction strings are simplified.

Changes

File(s) Change Summary
src/praisonai-agents/mcp-sse.py Replaces qa_agent ("Question Answering Agent" using GPT-4o-mini at /agents/sse) with tweet_agent ("Tweet Formatter Agent" at /sse), updates agent configuration, prompt, and endpoint.
src/praisonai-agents/openai-mcp.py Removes explicit LLM parameter from Agent, updates booking prompt dates and phrasing.
src/praisonai-agents/praisonaiagents/mcp/mcp.py Adds configurable timeout parameter to MCPToolRunner and MCP classes (default 60s), updates method signatures, passes timeout to dependent classes, and replaces hardcoded timeouts.
src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py Adds configurable timeout parameter (default 60s) to SSEMCPTool and SSEMCPClient, updates constructors and usage to replace hardcoded timeouts.
src/praisonai-agents/pyproject.toml Increments project version from 0.0.83 to 0.0.85.
src/praisonai-agents/simple-mcp-server.py Removes name parameter from Agent instantiation and simplifies the instructions string.
docker/Dockerfile, docker/Dockerfile.chat, docker/Dockerfile.dev, docker/Dockerfile.ui Updates praisonai package version from 2.2.0 to 2.2.1 in installation commands.
docs/api/praisonai/deploy.html Updates praisonai package version from 2.2.0 to 2.2.1 in Dockerfile creation method.
docs/developers/local-development.mdx, docs/ui/chat.mdx, docs/ui/code.mdx Updates praisonai package version from 2.2.0 to 2.2.1 in Dockerfile development setup instructions.
praisonai/deploy.py Updates praisonai package version from 2.2.0 to 2.2.1 in Dockerfile creation method.
pyproject.toml Increments project version from 2.2.0 to 2.2.1 and updates dependency on praisonaiagents from >=0.0.83 to >=0.0.85.
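The configurable-timeout change described for MCPToolRunner in mcp.py can be illustrated with a minimal sketch. Names and structure here are illustrative assumptions for clarity, not the actual praisonaiagents source:

```python
import threading

class ToolRunner:
    """Sketch of the pattern: a caller-configurable timeout (default 60s)
    replaces the previously hardcoded 30-second wait on initialization.
    Hypothetical class; the real MCPToolRunner has more responsibilities."""

    def __init__(self, timeout=60):
        self.timeout = timeout
        self.initialized = threading.Event()  # set once the MCP server is ready

    def call_tool(self, tool_name):
        # Event.wait returns False if the timeout elapses before set() is called.
        if not self.initialized.wait(timeout=self.timeout):
            # The error message now reports the configured duration.
            return f"Error: MCP initialization timed out after {self.timeout} seconds"
        return f"{tool_name} executed"
```

A runner built with `ToolRunner(timeout=120)` would wait twice the default before giving up, which is the flexibility the PR adds.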

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Agent
    participant MCP
    participant SSEMCPClient
    participant SSEMCPTool

    User->>Agent: Start agent with prompt
    Agent->>MCP: Initialize with timeout
    MCP->>SSEMCPClient: Initialize (timeout configurable)
    SSEMCPClient-->>MCP: Initialization complete
    Agent->>MCP: Call tool (with timeout)
    MCP->>SSEMCPTool: Execute tool (timeout configurable)
    SSEMCPTool-->>MCP: Tool result
    MCP-->>Agent: Tool response
    Agent-->>User: Formatted output

Suggested labels

Review effort 2/5

Poem

In the warren, time ticks slow or fast,
Now our agents’ patience will last!
With timeouts set and prompts anew,
Version bumped—oh what a view!
A tweet or two, a booking made,
🐰 Cheers for every change we’ve laid!



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2bec46 and 468e36f.

⛔ Files ignored due to path filters (2)
  • src/praisonai-agents/uv.lock is excluded by !**/*.lock
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
  • docker/Dockerfile (1 hunks)
  • docker/Dockerfile.chat (1 hunks)
  • docker/Dockerfile.dev (1 hunks)
  • docker/Dockerfile.ui (1 hunks)
  • docs/api/praisonai/deploy.html (1 hunks)
  • docs/developers/local-development.mdx (1 hunks)
  • docs/ui/chat.mdx (1 hunks)
  • docs/ui/code.mdx (1 hunks)
  • praisonai/deploy.py (1 hunks)
  • pyproject.toml (4 hunks)
  • src/praisonai-agents/pyproject.toml (1 hunks)


gemini-code-assist bot (Contributor) left a comment


Hello @MervinPraison, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team, gemini-code-assist here with a summary of this pull request from @MervinPraison. This PR focuses on updating the project version, refining the functionality of example agents using MCP (Model Context Protocol), and significantly enhancing timeout management within the MCP tool integration to improve robustness and performance.

Highlights

  • Version Increment: The project version has been updated from 0.0.83 to 0.0.84 in both pyproject.toml and uv.lock.
  • Example Agent Updates: The example agents in mcp-sse.py, openai-mcp.py, and simple-mcp-server.py have been updated with revised instructions and starting tasks. The explicit llm parameter has been removed from the agent definitions in mcp-sse.py and openai-mcp.py.
  • Enhanced Timeout Management for MCP Tools: Timeout handling has been made configurable and more robust within the MCPToolRunner and SSEMCPClient. A timeout parameter (defaulting to 60 seconds) has been added to the relevant classes and methods, replacing hardcoded 30-second timeouts for initialization waits and tool call results. Error messages now include the specific timeout duration.

Changelog

Click here to see the changelog
  • src/praisonai-agents/mcp-sse.py
    • Renamed qa_agent to tweet_agent.
    • Updated agent instructions to 'You are a Tweet Formatter Agent.'.
    • Changed the MCP server URL from http://localhost:8080/agents/sse to http://localhost:8080/sse.
    • Updated the starting task from 'AI in 2025' to 'AI in Healthcare'.
    • Removed the explicit llm parameter (openai/gpt-4o-mini) from the agent definition.
  • src/praisonai-agents/openai-mcp.py
    • Updated agent instructions from 'You help book apartments on Airbnb.' to 'Search apartment in Paris for 2 nights. 07/28 - 07/30 for 2 adults'.
    • Changed the starting task from 'I want to book an apartment in Paris for 2 nights. 03/28 - 03/30 for 2 adults' to 'Search apartment in Paris for 2 nights. 07/28 - 07/30 for 2 adults'.
    • Removed the explicit llm parameter (openai/gpt-4o-mini) from the agent definition.
  • src/praisonai-agents/praisonaiagents/mcp/mcp.py
    • Added a timeout parameter (defaulting to 60) to the MCPToolRunner.__init__ method (L19).
    • Used self.timeout instead of a hardcoded 30 seconds for the initialized.wait timeout in the call_tool method (L78).
    • Updated the timeout error message in call_tool to include the specific timeout duration (L80).
    • Passed the timeout parameter to the SSEMCPClient initialization within the MCP.__init__ method (L193).
    • Passed the timeout parameter to the MCPToolRunner initialization within the MCP.__init__ method (L220).
    • Used self.timeout instead of a hardcoded 30 seconds for the initialized.wait timeout in the MCP.__init__ method (L223).
    • Updated the timeout warning message in MCP.__init__ to include the specific timeout duration (L224).
  • src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py
    • Added a timeout parameter (defaulting to 60) to the SSEMCPTool.__init__ method (L34).
    • Used self.timeout instead of a hardcoded 30 seconds for the future.result timeout in the __call__ method (L70).
    • Added a timeout parameter (defaulting to 60) to the SSEMCPClient.__init__ method (L106).
    • Updated the docstring for SSEMCPClient.__init__ to include the new timeout parameter (L113).
    • Used self.timeout instead of a hardcoded 30 seconds for the future.result timeout in the run_event_loop method (L145).
    • Passed the timeout parameter to the SSEMCPTool initialization within the _async_initialize method (L176).
  • src/praisonai-agents/pyproject.toml
    • Incremented the project version from 0.0.83 to 0.0.84 (L7).
  • src/praisonai-agents/simple-mcp-server.py
    • Removed the explicit name parameter (TweetAgent) from the agent definition.
    • Updated agent instructions from 'Create a Tweet based on the topic provided' to 'Create a Tweet'.
  • src/praisonai-agents/uv.lock
    • Incremented the locked version for praisonaiagents from 0.0.83 to 0.0.84 (L1460).
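The `future.result` changes in mcp_sse.py described above follow the same shape. A minimal, self-contained sketch of that pattern (assuming a thread-pool execution model for illustration; the actual code runs an asyncio event loop):

```python
import concurrent.futures

def call_with_timeout(fn, timeout=60):
    """Hypothetical helper mirroring the change: bound the wait on a tool
    call with a configurable timeout instead of a hardcoded 30 seconds."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            # Equivalent to the future.result(timeout=self.timeout) replacement.
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            # Error message includes the configured duration, as in the PR.
            return f"Error: tool call timed out after {timeout} seconds"
```

Callers that need longer-running tools can raise the bound per call, while the default stays at 60 seconds.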


Agent starts its task,
Waiting for the tool's reply,
Timeout, sixty ticks.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request primarily focuses on enhancing timeout management within the MCP components, making it more configurable and robust. It also includes a version bump to 0.0.84. Additionally, example agent scripts (mcp-sse.py and openai-mcp.py) have been updated, notably by removing explicit LLM configurations. While the timeout enhancements are solid improvements, the changes to LLM specification in the examples warrant clarification to ensure users understand the resulting behavior and configuration.

Summary of Findings

  • Change in LLM Configuration and Interaction in Examples: Explicit LLM configurations (llm="openai/gpt-4o-mini") were removed from agent examples in mcp-sse.py and openai-mcp.py. This causes the examples to switch from using the praisonaiagents.llm.LLM (LiteLLM wrapper) to a direct openai client call with a default model (gpt-4o or OPENAI_MODEL_NAME). This change could significantly impact example behavior, cost, provider compatibility, and clarity for users. This was commented on with high severity.
  • Timeout Management Enhancement: Timeout handling for MCP operations has been made configurable (defaulting to 60 seconds) and more robust across MCPToolRunner, MCP, and SSEMCPClient. Error messages for timeouts are now more informative. This is a positive improvement for reliability. (Not commented on as it's an improvement and not an issue meeting severity criteria for comments).
  • Version Update: Project version was correctly incremented to 0.0.84 in pyproject.toml and uv.lock. (Not commented on as it's a standard change).

Merge Readiness

The enhancements to timeout management are well-implemented and improve the robustness of MCP interactions. However, the removal of explicit LLM configurations in the example agent files (mcp-sse.py and openai-mcp.py)—and the resulting shift in LLM interaction mechanism—raises significant concerns regarding clarity, potential changes in behavior, and cost implications for users running these examples. These points have been flagged with high severity and should be addressed before merging to ensure the examples remain clear, representative, and do not lead to unexpected outcomes for users. I am unable to approve the pull request directly; please ensure these concerns are resolved and consider further review before merging.

Comment on lines +3 to 6 of src/praisonai-agents/mcp-sse.py:

    tweet_agent = Agent(
        instructions="""You are a Tweet Formatter Agent.""",
        tools=MCP("http://localhost:8080/sse")
    )
Severity: high

The llm="openai/gpt-4o-mini" parameter has been removed, and the agent name has changed. Previously, specifying the LLM this way (with a /) would engage the praisonaiagents.llm.LLM class, which serves as a wrapper around LiteLLM, for LLM interactions. With this parameter's removal, the Agent now defaults to using os.getenv('OPENAI_MODEL_NAME', 'gpt-4o') and interacts with the OpenAI API directly via the openai client, bypassing the LLM wrapper.

Could you clarify if this change in the example's LLM interaction mechanism (from LLM wrapper/LiteLLM to direct openai client) and model (from gpt-4o-mini to gpt-4o or an environment variable) is intentional? Also, why was the agent name changed from qa_agent to tweet_agent?

This change could lead to:

  1. Different LLM provider behavior or compatibility if users were relying on LiteLLM's broader support through the previous setup.
  2. Altered behavior or performance due to the change in model.
  3. Different cost implications for users running this example.
  4. Reduced clarity for users seeking to understand how to use the LLM wrapper for non-OpenAI models or advanced LiteLLM configurations through these examples.

If the intention is to simplify examples to use a direct OpenAI default, perhaps adding a comment in the example to explain this and to guide users on how to use the LLM class for other providers would be beneficial. Alternatively, if gpt-4o-mini via the LLM wrapper was specifically chosen for this example previously, it might be worth considering if that configuration should be retained or updated to reflect the new preferred method of specification.
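The model-resolution behavior the reviewer describes can be sketched as follows. This is an assumption reconstructed from the review comment, not the actual Agent source:

```python
import os

def resolve_llm(llm=None):
    """Hypothetical sketch of the routing described in the review: a
    "provider/model" string engages the LiteLLM-backed LLM wrapper, while
    omitting llm falls back to the direct openai client using
    OPENAI_MODEL_NAME or gpt-4o."""
    if llm and "/" in llm:
        # e.g. "openai/gpt-4o-mini" -> praisonaiagents.llm.LLM (LiteLLM) path
        return ("llm_wrapper", llm)
    # no llm given -> direct openai client with the environment default
    return ("openai_client", llm or os.getenv("OPENAI_MODEL_NAME", "gpt-4o"))
```

Under this reading, dropping `llm="openai/gpt-4o-mini"` switches both the interaction pathway and the default model, which is why the reviewer flags the change as high severity.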

Comment on lines 3 to 6 of src/praisonai-agents/openai-mcp.py:

    search_agent = Agent(
        instructions="""You help book apartments on Airbnb.""",
        llm="openai/gpt-4o-mini",
        tools=MCP("npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt")
    )
Severity: high

Similar to the change in mcp-sse.py, the llm="openai/gpt-4o-mini" parameter has been removed. This means the agent, which previously would have used the praisonaiagents.llm.LLM class (wrapping LiteLLM) due to the "provider/model" format, will now default to using os.getenv('OPENAI_MODEL_NAME', 'gpt-4o') and interact directly with the OpenAI API via the openai client.

Is this shift in the example's LLM interaction pathway and the specific model used intentional?

This modification could impact:

  1. The example's behavior and performance characteristics.
  2. Cost for users running the example.
  3. The example's utility in demonstrating how to configure diverse LLMs or leverage LiteLLM features via the LLM wrapper.

Clarifying the rationale behind this change would be helpful. If the goal is to showcase a simpler default OpenAI setup, consider adding comments to guide users on more advanced configurations using the LLM class.

Comment on lines 79 to +80 of src/praisonai-agents/praisonaiagents/mcp/mcp.py:

        if not self.initialized.is_set():
    -       return "Error: MCP initialization timed out"
    +       return f"Error: MCP initialization timed out after {self.timeout} seconds"
Severity: medium

Consider adding a more descriptive error message that includes the specific tool that timed out, to aid in debugging:

    return f"Error: MCP initialization timed out after {self.timeout} seconds for tool {tool_name}"


…files, and update `praisonaiagents` version to 0.0.85 across relevant files for improved functionality and compatibility.
@MervinPraison MervinPraison merged commit e9a497f into main May 23, 2025
3 of 7 checks passed
shaneholloman pushed a commit to shaneholloman/praisonai that referenced this pull request Feb 4, 2026
Increment version to 0.0.84 in `pyproject.toml` and `uv.lock`, update…
