
Increment version to 0.0.76 in pyproject.toml and update `markitdow… #456

Merged
MervinPraison merged 5 commits into main from develop on May 13, 2025
Conversation

@MervinPraison
Owner

@MervinPraison MervinPraison commented Apr 8, 2025

User description

…n` dependency to include all extras in `uv.lock` for enhanced functionality.


PR Type

enhancement, dependencies


Description

  • Incremented project version to 0.0.76 in pyproject.toml.

  • Updated markitdown dependency to include all extras (markitdown[all]).

  • Improved dependency management for enhanced functionality.
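
For reference, the extras syntax this change uses looks like the following in pyproject.toml (a minimal sketch with the surrounding dependency list abbreviated, not the full file):

```toml
[project]
name = "praisonaiagents"
version = "0.0.76"
dependencies = [
    # "markitdown[all]" pulls in every optional extra of markitdown,
    # rather than only its core converters.
    "markitdown[all]",
]
```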


Changes walkthrough 📝

Relevant files
Dependencies
pyproject.toml
Increment version and enhance `markitdown` dependency       

src/praisonai-agents/pyproject.toml

  • Updated project version from 0.0.75 to 0.0.76.
  • Changed markitdown dependency to markitdown[all] for extended
    functionality.
  • +2/-2     

    Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.
  • Summary by CodeRabbit

    • New Features
      • Introduced a new Server-Sent Events (SSE) based MCP server and client for real-time tool interaction.
      • Added a weather information agent utilizing the SSE MCP server.
      • Enabled multiple MCP client types support including SSE and stdio clients.
    • Bug Fixes
      • Improved handling of multiple tools returned from MCP clients for smoother integration.
    • Documentation
      • Added comprehensive README detailing the SSE MCP server and client setup and usage.
      • Added detailed MCP SSE integration guide with examples for server, clients, alternative LLMs, and Gradio UI.
    • Chores
      • Updated the project version to reflect the latest release.
      • Modified a dependency to now include all optional features for enhanced functionality.
      • Updated ignore patterns to exclude .qodo files.

    Contributor

    @gemini-code-assist gemini-code-assist bot left a comment


Hello @MervinPraison, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

    Summary of Changes

    This pull request increments the version of the praisonaiagents package from 0.0.75 to 0.0.76 in pyproject.toml and uv.lock. It also updates the markitdown dependency in pyproject.toml to include all extras (markitdown[all]) and reflects this change in uv.lock to ensure all markitdown features are available.

    Highlights

    • Version Increment: The package version has been bumped from 0.0.75 to 0.0.76 in both pyproject.toml and uv.lock.
    • Dependency Update: The markitdown dependency is updated to include all extras (markitdown[all]) in pyproject.toml and uv.lock.

    Changelog

    • src/praisonai-agents/pyproject.toml
      • Incremented package version from 0.0.75 to 0.0.76 (line 7).
      • Updated markitdown dependency to include all extras: markitdown[all] (line 31).
    • src/praisonai-agents/uv.lock
      • Incremented package version from 0.0.75 to 0.0.76 (line 1886).
      • Updated markitdown dependency to include all extras (line 1928).
    Using Gemini Code Assist

    The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

    Invoking Gemini

    You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

    Feature Command Description
    Code Review /gemini review Performs a code review for the current pull request in its current state.
    Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
    Comment @gemini-code-assist Responds in comments when explicitly tagged, both in issue comments and review comments.
    Help /gemini help Displays a list of available commands.

    Customization

    To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

    Limitations & Feedback

    Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

    You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


    A version ascends,
    Features bloom, dependencies mend,
    Code's garden thrives.

    Footnotes

    1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

    @qodo-code-review

    qodo-code-review bot commented Apr 8, 2025

    PR Reviewer Guide 🔍

    (Review updated until commit c332346)

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ No major issues detected

    @qodo-code-review

    qodo-code-review bot commented Apr 8, 2025

    PR Code Suggestions ✨

    No code suggestions found for the PR.

    @netlify

    netlify bot commented Apr 8, 2025

    Deploy Preview for praisonai ready!

    Name Link
    🔨 Latest commit dacaa7f
    🔍 Latest deploy log https://app.netlify.com/sites/praisonai/deploys/68234d283175110008fc121c
    😎 Deploy Preview https://deploy-preview-456--praisonai.netlify.app
    To edit notification comments on pull requests, go to your Netlify site configuration.

    @coderabbitai
    Contributor

    coderabbitai bot commented Apr 8, 2025

    Caution

    Review failed

    The pull request is closed.

    Walkthrough

    This update introduces a new MCP SSE server and client implementation with asynchronous SSE communication, including new server, client, and agent scripts. The MCP class and agent execution logic were enhanced to support SSE-based MCP clients alongside stdio clients. Documentation, configuration, and dependency files were updated accordingly.

    Changes

    File Change Summary
    src/praisonai-agents/pyproject.toml Version bumped from "0.0.75" to "0.0.76"; dependency changed from "markitdown" to "markitdown[all]"
    src/praisonai-agents/.gitignore Added .qodo entry to ignore list
    src/praisonai-agents/README.md Added new README documenting MCP SSE Server and Client implementation, usage, and extension instructions
    src/praisonai-agents/mcp-sse-direct-server.py Added MCP SSE server implementation with greeting and weather tools, Starlette app, and Uvicorn server setup
    src/praisonai-agents/mcp-sse-direct-client.py Added asynchronous MCP SSE client with interactive chat loop handling greeting and weather queries
    src/praisonai-agents/mcp-sse-weather.py Added weather information agent using MCP SSE client and OpenAI GPT-4o-mini model
    src/praisonai-agents/mcp-sse-direct-agent.py Added MCP SSE client agent implementation with SSEMCPClient and SSEMCPTool classes, async event loop, and example usage
    src/praisonai-agents/praisonaiagents/mcp/mcp.py Extended MCP class to support SSE URL initialization, SSE client integration, and multi-tool OpenAI tool conversion
    src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py Added SSE MCP client module implementing SSEMCPClient and SSEMCPTool classes with async SSE connection and tool wrapping
    src/praisonai-agents/praisonaiagents/agent/agent.py Enhanced execute_tool to support SSE MCP clients; improved tool conversion logic in chat method to handle multiple tools
    src/praisonai-agents/praisonaiagents/llm/llm.py Extended get_response to handle lists of tools for OpenAI formatting
    src/praisonai-agents/praisonaiagents/mcp/__init__.py Expanded module docstring describing MCP package purpose and transport support
    docs/mcp/sse.mdx Added comprehensive documentation for MCP SSE integration, including examples for server, client, agent, alternative LLMs, and Gradio UI
    docs/mint.json Updated navigation to include new MCP SSE documentation page
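
As background on the transport the new files use: Server-Sent Events is a plain-text protocol in which each event is a block of `data:` fields (optionally preceded by an `event:` field) terminated by a blank line. A small, hypothetical encoder, written for illustration rather than taken from this PR, shows the wire format:

```python
from typing import Optional

def encode_sse_event(data: str, event: Optional[str] = None) -> str:
    """Frame a payload as a Server-Sent Events message.

    Each line of the payload gets its own "data:" field; a blank
    line terminates the event, per the SSE wire format.
    """
    lines = []
    if event is not None:
        lines.append(f"event: {event}")
    for part in data.splitlines() or [""]:
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"
```

For example, `encode_sse_event('{"temp": 21}', event="weather")` yields an `event: weather` line followed by one `data:` line and the terminating blank line.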

    Poem

    🐇 Hopping through code, so fresh and new,
    SSE streams and tools in view.
    Greetings and weather, clients that chat,
    Async calls where the servers are at.
    Version up, dependencies tall,
    In this code warren, we leap and sprawl!
    🌿✨🐰

    Tip

    ⚡️ Faster reviews with caching
    • CodeRabbit now supports caching for code and dependencies, helping speed up reviews. This means quicker feedback, reduced wait times, and a smoother review experience overall. Cached data is encrypted and stored securely. This feature will be automatically enabled for all accounts on May 16th. To opt out, configure Review - Disable Cache at either the organization or repository level. If you prefer to disable all data retention across your organization, simply turn off the Data Retention setting under your Organization Settings.

    Enjoy the performance boost—your workflow just got faster.


    📜 Recent review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between b91a689 and dacaa7f.

    📒 Files selected for processing (2)
    • docs/mcp/sse.mdx (1 hunks)
    • docs/mint.json (1 hunks)

    Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

    🪧 Tips

    Chat

    There are 3 ways to chat with CodeRabbit:

    • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
      • I pushed a fix in commit <commit_id>, please review it.
      • Explain this complex logic.
      • Open a follow-up GitHub issue for this discussion.
    • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
      • @coderabbitai explain this code block.
      • @coderabbitai modularize this function.
    • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
      • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
      • @coderabbitai read src/utils.ts and explain its main purpose.
      • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
      • @coderabbitai help me debug CodeRabbit configuration file.

    Support

    Need help? Create a ticket on our support page for assistance with any issues or questions.

    Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

    CodeRabbit Commands (Invoked using PR comments)

    • @coderabbitai pause to pause the reviews on a PR.
    • @coderabbitai resume to resume the paused reviews.
    • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
    • @coderabbitai full review to do a full review from scratch and review all the files again.
    • @coderabbitai summary to regenerate the summary of the PR.
    • @coderabbitai generate docstrings to generate docstrings for this PR.
    • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
    • @coderabbitai resolve resolve all the CodeRabbit review comments.
    • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
    • @coderabbitai help to get help.

    Other keywords and placeholders

    • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
    • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
    • Add @coderabbitai anywhere in the PR title to generate the title automatically.

    CodeRabbit Configuration File (.coderabbit.yaml)

    • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
    • Please see the configuration documentation for more information.
    • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

    Documentation and Community

    • Visit our Documentation for detailed information on how to use CodeRabbit.
    • Join our Discord Community to get help, request features, and share feedback.
    • Follow us on X/Twitter for updates and announcements.

    @qodo-code-review

    qodo-code-review bot commented Apr 8, 2025

    CI Feedback 🧐

    (Feedback updated until commit dacaa7f)

    A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

    Action: test

    Failed stage: Run specific unittest [❌]

    Failed test name: TestExamples.test_advanced_example, TestExamples.test_auto_example, TestExamples.test_basic_example

    Failure summary:

    The action failed because all three tests in the TestExamples class failed:

    1. test_advanced_example (line 64): Failed assertion because result was unexpectedly None
    2. test_auto_example (line 69): Failed assertion because result was unexpectedly None
    3. test_basic_example (line 59): Failed assertion because result was unexpectedly None

    The root cause appears to be an invalid API key issue. Multiple errors show "Incorrect API key
    provided" (lines 880-884, 897-901, 913-917) with error code 401, which prevented the API calls from
    returning valid responses, resulting in the 'NoneType' object has no attribute 'choices' errors.
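
The failure chain above (401 → response of None → `'NoneType' object has no attribute 'choices'`) suggests a defensive pattern: fail fast with a descriptive error instead of letting attribute access blow up later. A hypothetical sketch — the function and error names are illustrative, not from the codebase:

```python
from types import SimpleNamespace

class CompletionError(RuntimeError):
    """Raised when the LLM backend returns no usable response."""

def first_choice_text(response):
    # An upstream 401 (bad API key) can surface here as response=None;
    # raising a descriptive error beats the opaque
    # "'NoneType' object has no attribute 'choices'" seen in the CI log.
    if response is None:
        raise CompletionError("LLM call returned no response; check the API key")
    if not getattr(response, "choices", None):
        raise CompletionError("LLM response contained no choices")
    return response.choices[0].message.content

# Minimal stand-in for an OpenAI-style response object:
ok = SimpleNamespace(choices=[SimpleNamespace(message=SimpleNamespace(content="hi"))])
```

With a guard like this, the test failures would have pointed directly at the misconfigured key rather than at an unrelated attribute error.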

    Relevant error logs:
    1:  ##[group]Operating System
    2:  Ubuntu
    ...
    
    864:  INFO     [13:47:06] config.py:368 INFO Created default  config.py:368
    865:  translation file at                                         
    866:  /home/runner/.praison/.chainlit/translations/m              
    867:  r.json                                                      
    868:  [13:47:08] INFO     [13:47:08] _client.py:1025 INFO HTTP         _client.py:1025
    869:  Request: GET                                                
    870:  https://api.gradio.app/gradio-messaging/en                  
    871:  "HTTP/1.1 200 OK"                                           
    872:  <praisonai.cli.PraisonAI object at 0x7fd6e0b344d0>
    873:  ╭─ Agent Info ─────────────────────────────────────────────────────────────────╮
    874:  │                                                                              │
    875:  │  👤 Agent: DirectAgent                                                       │
    876:  │  Role: Assistant                                                             │
    877:  │                                                                              │
    878:  ╰──────────────────────────────────────────────────────────────────────────────╯
    879:  ╭─────────────────────────────────── Error ────────────────────────────────────╮
    880:  │ Error in stream processing: Error code: 401 - {'error': {'message':          │
    881:  │ 'Incorrect API key provided:                                                 │
    882:  │ gsk_PYPR********************************************QBu2. You can find your  │
    883:  │ API key at https://platform.openai.com/account/api-keys.', 'type':           │
    884:  │ 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}          │
    885:  ╰──────────────────────────────────────────────────────────────────────────────╯
    886:  ╭──────────────────────────────── Error ─────────────────────────────────╮
    887:  │ Error in chat completion: 'NoneType' object has no attribute 'choices' │
    888:  ╰────────────────────────────────────────────────────────────────────────╯
    889:  autogen
    890:  ╭─ Agent Info ─────────────────────────────────────────────────────────────────╮
    891:  │                                                                              │
    892:  │  👤 Agent: DirectAgent                                                       │
    893:  │  Role: Assistant                                                             │
    894:  │                                                                              │
    895:  ╰──────────────────────────────────────────────────────────────────────────────╯
    896:  ╭─────────────────────────────────── Error ────────────────────────────────────╮
    897:  │ Error in stream processing: Error code: 401 - {'error': {'message':          │
    898:  │ 'Incorrect API key provided:                                                 │
    899:  │ gsk_PYPR********************************************QBu2. You can find your  │
    900:  │ API key at https://platform.openai.com/account/api-keys.', 'type':           │
    901:  │ 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}          │
    902:  ╰──────────────────────────────────────────────────────────────────────────────╯
    903:  ╭──────────────────────────────── Error ─────────────────────────────────╮
    904:  │ Error in chat completion: 'NoneType' object has no attribute 'choices' │
    905:  ╰────────────────────────────────────────────────────────────────────────╯
    906:  ╭─ Agent Info ─────────────────────────────────────────────────────────────────╮
    907:  │                                                                              │
    908:  │  👤 Agent: DirectAgent                                                       │
    909:  │  Role: Assistant                                                             │
    910:  │                                                                              │
    911:  ╰──────────────────────────────────────────────────────────────────────────────╯
    912:  ╭─────────────────────────────────── Error ────────────────────────────────────╮
    913:  │ Error in stream processing: Error code: 401 - {'error': {'message':          │
    914:  │ 'Incorrect API key provided:                                                 │
    915:  │ gsk_PYPR********************************************QBu2. You can find your  │
    916:  │ API key at https://platform.openai.com/account/api-keys.', 'type':           │
    917:  │ 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}          │
    918:  ╰──────────────────────────────────────────────────────────────────────────────╯
    919:  ╭──────────────────────────────── Error ─────────────────────────────────╮
    920:  │ Error in chat completion: 'NoneType' object has no attribute 'choices' │
    921:  ╰────────────────────────────────────────────────────────────────────────╯
    922:  FFF
    923:  ======================================================================
    924:  FAIL: test_advanced_example (tests.test.TestExamples.test_advanced_example)
    925:  ----------------------------------------------------------------------
    926:  Traceback (most recent call last):
    927:  File "/home/runner/work/PraisonAI/PraisonAI/tests/test.py", line 64, in test_advanced_example
    928:  self.assertIsNotNone(result)  # Adjust this assertion as needed
    929:  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    930:  AssertionError: unexpectedly None
    931:  ======================================================================
    932:  FAIL: test_auto_example (tests.test.TestExamples.test_auto_example)
    933:  ----------------------------------------------------------------------
    934:  Traceback (most recent call last):
    935:  File "/home/runner/work/PraisonAI/PraisonAI/tests/test.py", line 69, in test_auto_example
    936:  self.assertIsNotNone(result)  # Adjust this assertion according to what auto() is expected to do
    937:  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    938:  AssertionError: unexpectedly None
    939:  ======================================================================
    940:  FAIL: test_basic_example (tests.test.TestExamples.test_basic_example)
    941:  ----------------------------------------------------------------------
    942:  Traceback (most recent call last):
    943:  File "/home/runner/work/PraisonAI/PraisonAI/tests/test.py", line 59, in test_basic_example
    944:  self.assertIsNotNone(result)  # Adjust this assertion based on the expected outcome of main()
    945:  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    946:  AssertionError: unexpectedly None
    947:  ----------------------------------------------------------------------
    948:  Ran 3 tests in 0.278s
    949:  FAILED (failures=3)
    950:  ##[error]Process completed with exit code 1.
    951:  Post job cleanup.
    

    Contributor

    @gemini-code-assist gemini-code-assist bot left a comment


    Code Review

    The pull request increments the project version and updates a dependency to include all extras. The changes seem straightforward and well-documented in the PR description.

    Summary of Findings

    • Dependency Update: The markitdown dependency was updated to include all extras (markitdown[all]). This enhancement should be verified to ensure it doesn't introduce any unexpected side effects or conflicts with other dependencies.
    • Version Increment: The project version was incremented from 0.0.75 to 0.0.76. Ensure that this version increment aligns with the project's versioning strategy and release process.

    Merge Readiness

    The pull request appears to be in good shape for merging, but it's crucial to verify that the dependency update doesn't introduce any regressions or conflicts. I am unable to approve this pull request, and recommend that others review and approve this code before merging. At a minimum, the high severity issue should be addressed before merging.

         { name = "chromadb", marker = "extra == 'memory'", specifier = ">=0.5.23" },
         { name = "litellm", marker = "extra == 'llm'", specifier = ">=1.50.0" },
    -    { name = "markitdown", marker = "extra == 'knowledge'" },
    +    { name = "markitdown", extras = ["all"], marker = "extra == 'knowledge'" },
    Contributor


    high

    It's good to see the uv.lock file was updated, but it's important to verify that all the necessary dependencies and sub-dependencies were correctly resolved and locked. Consider running tests to ensure that the application functions as expected with the updated lockfile.

        { name = "markitdown", extras = ["all"], marker = "extra == 'knowledge'" },
    
    

    "mem0ai>=0.1.0",
    "chromadb==0.5.23",
    "markitdown",
    "markitdown[all]",
    Contributor


    medium

    Consider adding a comment explaining why all extras are needed for the markitdown dependency. This will help future developers understand the purpose of this change.

        "markitdown[all]",  # Includes all extras for enhanced markdown functionality
    
    

    @MervinPraison
    Owner Author

    /ask what are the key changes

    @qodo-code-review

    Ask

    what are the key changes

    Answer:

    The key changes in this PR are:

    1. Version increment: The project version in pyproject.toml has been updated from 0.0.75 to 0.0.76.

    2. Dependency enhancement: The markitdown dependency has been upgraded to markitdown[all], which includes all extras/optional features of the markitdown package.

    These changes appear to be focused on version management and enhancing functionality by including all available features of the markitdown dependency.

    @MervinPraison
    Owner Author

    /review pyproject.toml and requirements.txt file

    @qodo-code-review

    Persistent review updated to latest commit c332346

    Contributor

    @coderabbitai coderabbitai bot left a comment


    Actionable comments posted: 4

    🧹 Nitpick comments (6)
    src/praisonai-agents/mcp-sse-direct-server.py (3)

    2-3: Remove unused imports to keep the module lean

    typing.Any, httpx, and inspect are all imported but never used. Eliminating them avoids unnecessary byte-code generation and quiets Ruff F401 warnings.

    -from typing import Any
    -import httpx
    -...
    -import inspect

    Also applies to: 14-14

    🧰 Tools
    🪛 Ruff (0.8.2)

    2-2: typing.Any imported but unused

    Remove unused import: typing.Any

    (F401)


    3-3: httpx imported but unused

    Remove unused import: httpx

    (F401)


    79-80: Avoid accessing the private attribute mcp._mcp_server

    Underscore-prefixed attributes are considered implementation details and may vanish without notice.
    Expose the server via a public property or helper in the FastMCP library, or contribute such an accessor upstream.


    88-90: Derive the tool list dynamically instead of hard-coding

    Hard-coding tool names risks the list drifting from the actual exported tools.

    -# Hardcode the tool names since we know what they are
    -tool_names = ["get_greeting", "get_weather"]
    +tool_names = [t.__name__ for t in mcp.tools]  # type: ignore[attr-defined]

    This guarantees the CLI always prints the real set of registered tools.

    src/praisonai-agents/praisonaiagents/mcp/mcp.py (2)

    169-177: SSE endpoint detection could be too permissive

    re.match(r'^https?://', …) treats any HTTP URL as an SSE endpoint.
    Accidentally passing a non-SSE REST URL will silently route through the SSE client and fail later. Consider:

    if isinstance(command_or_string, str) and command_or_string.endswith("/sse"):
        ...

    or a stricter regex like r'^https?://.*?/sse$'.
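
The stricter rule suggested above can be checked in isolation. A quick sketch (the helper name is illustrative; the real check lives inline in the MCP constructor):

```python
import re

def looks_like_sse_url(target: str) -> bool:
    # The MCP constructor accepts either a command string or a URL.
    # Requiring an HTTP(S) URL whose path ends in /sse avoids routing
    # ordinary REST URLs through the SSE client by accident.
    return bool(re.match(r"^https?://.*?/sse$", target))
```

Under this rule, `http://localhost:8080/sse` is treated as an SSE endpoint, while `https://api.example.com/v1` and a command string like `npx mcp-server` both fall through to the other code paths.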


    340-347: Update the docstring to reflect the actual return type

    to_openai_tool now returns list for both stdio and SSE paths, yet the docstring still mentions dict or list. Harmonising the return contract avoids confusion for downstream users.

    -        """Convert the MCP tool to an OpenAI-compatible tool definition.
    +        """Convert the MCP tools to OpenAI-compatible definitions.
    
    ...
    -            dict or list: OpenAI-compatible tool definition(s)
    +            list: List of OpenAI-compatible tool definitions
    src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (1)

    11-12: Prune unused imports to silence Ruff F401

    json, List, Callable, and Iterable are imported but not referenced.

    -import json
    -from typing import List, Dict, Any, Optional, Callable, Iterable
    +from typing import Dict, Any, Optional
    🧰 Tools
    🪛 Ruff (0.8.2)

    11-11: json imported but unused

    Remove unused import: json

    (F401)


    12-12: typing.List imported but unused

    Remove unused import

    (F401)


    12-12: typing.Callable imported but unused

    Remove unused import

    (F401)


    12-12: typing.Iterable imported but unused

    Remove unused import

    (F401)

    📜 Review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between c332346 and 55f43be.

    📒 Files selected for processing (11)
    • src/praisonai-agents/.gitignore (1 hunks)
    • src/praisonai-agents/README.md (1 hunks)
    • src/praisonai-agents/mcp-sse-direct-agent.py (1 hunks)
    • src/praisonai-agents/mcp-sse-direct-client.py (1 hunks)
    • src/praisonai-agents/mcp-sse-direct-server.py (1 hunks)
    • src/praisonai-agents/mcp-sse-weather.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/agent/agent.py (2 hunks)
    • src/praisonai-agents/praisonaiagents/llm/llm.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/mcp/__init__.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/mcp/mcp.py (7 hunks)
    • src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (1 hunks)
    ✅ Files skipped from review due to trivial changes (5)
    • src/praisonai-agents/.gitignore
    • src/praisonai-agents/README.md
    • src/praisonai-agents/mcp-sse-direct-agent.py
    • src/praisonai-agents/praisonaiagents/mcp/__init__.py
    • src/praisonai-agents/mcp-sse-weather.py
    🧰 Additional context used
    🪛 Ruff (0.8.2)
    src/praisonai-agents/mcp-sse-direct-server.py

    2-2: typing.Any imported but unused

    Remove unused import: typing.Any

    (F401)


    3-3: httpx imported but unused

    Remove unused import: httpx

    (F401)


    14-14: inspect imported but unused

    Remove unused import: inspect

    (F401)

    src/praisonai-agents/mcp-sse-direct-client.py

    4-4: os imported but unused

    Remove unused import: os

    (F401)

    src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py

    11-11: json imported but unused

    Remove unused import: json

    (F401)


    12-12: typing.List imported but unused

    Remove unused import

    (F401)


    12-12: typing.Callable imported but unused

    Remove unused import

    (F401)


    12-12: typing.Iterable imported but unused

    Remove unused import

    (F401)

    🔇 Additional comments (5)
    src/praisonai-agents/praisonaiagents/llm/llm.py (1)

    296-301: Good enhancement for handling tools provided as lists

    This addition properly extends the get_response method to handle tools that are provided as lists (e.g., those returned by MCP.to_openai_tool()). The code correctly iterates through each subtool, checks if it's properly formatted as an OpenAI function tool, and adds it to the formatted_tools list with appropriate debug logging.
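
The behaviour described can be pictured with a small, hypothetical normaliser — the real logic lives inside `get_response`, and the function name here is made up for illustration:

```python
def flatten_openai_tools(tools):
    """Accept a single tool dict or a list of them (as
    MCP.to_openai_tool() may return) and produce a flat list of
    well-formed OpenAI function-tool definitions."""
    items = tools if isinstance(tools, list) else [tools]
    formatted = []
    for tool in items:
        # Keep only entries shaped like {"type": "function", "function": {...}}
        if isinstance(tool, dict) and tool.get("type") == "function":
            formatted.append(tool)
    return formatted
```

Normalising to a flat list up front means the downstream completion call never needs to care whether one tool or several came back from the MCP client.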

    src/praisonai-agents/mcp-sse-direct-client.py (1)

    1-120: Well-structured SSE client implementation

    The MCP SSE client implementation is well-organized and follows good practices:

    • Proper use of async context managers for resource management
    • Clear error handling with try/except blocks
    • Good separation of concerns between connection, tool discovery, and query processing

    The client successfully demonstrates how to connect to an MCP server over SSE, discover available tools, and process different types of queries by calling the appropriate tools.

    🧰 Tools
    🪛 Ruff (0.8.2)

    4-4: os imported but unused

    Remove unused import: os

    (F401)

    src/praisonai-agents/praisonaiagents/agent/agent.py (2)

    534-547: Good enhancement to support multiple MCP client types

    The code now properly handles both SSE MCP clients and stdio MCP clients by checking:

    1. If self.tools.is_sse is True, it looks for tools in self.tools.sse_client.tools
    2. For traditional stdio MCP clients, it looks in self.tools.runner.tools

    This change ensures that the Agent class can work seamlessly with both client types, improving backward compatibility.
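    A minimal sketch of that dispatch, using `SimpleNamespace` stand-ins rather than the real wrapper classes (the attributes `is_sse`, `sse_client.tools`, and `runner.tools` are the ones the review describes):

    ```python
    from types import SimpleNamespace

    def discover_tools(mcp):
        """Pick the tool list based on the MCP client transport."""
        if getattr(mcp, "is_sse", False):
            return list(mcp.sse_client.tools)   # SSE MCP client
        return list(mcp.runner.tools)           # stdio MCP client

    sse = SimpleNamespace(is_sse=True, sse_client=SimpleNamespace(tools=["get_weather"]))
    stdio = SimpleNamespace(is_sse=False, runner=SimpleNamespace(tools=["read_file"]))

    print(discover_tools(sse), discover_tools(stdio))
    ```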


    828-833: Good handling of multiple tool formats

    The code now correctly handles both single tools and lists of tools returned by to_openai_tool(). This is a necessary enhancement to support the new SSE MCP client implementation that can return multiple OpenAI-compatible tools.
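    A hedged sketch of the normalization implied here: accept whatever `to_openai_tool()` returns, one dict or a list of dicts, and always hand the chat API a flat list. The dict shapes are illustrative only.

    ```python
    def normalize_openai_tools(result):
        """Return a flat list of tool definitions from a single dict or a list."""
        return list(result) if isinstance(result, list) else [result]

    single = {"type": "function", "function": {"name": "get_weather"}}
    many = [{"type": "function", "function": {"name": "a"}},
            {"type": "function", "function": {"name": "b"}}]

    print(normalize_openai_tools(single))
    print(len(normalize_openai_tools(many)))
    ```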

    src/praisonai-agents/mcp-sse-direct-server.py (1)

    57-64:

    Details

    ❌ Incorrect review comment

    Relying on Starlette’s private attribute _send is brittle

    request._send is an internal detail; Starlette makes no stability guarantees for it.
    If the implementation changes, the server will break at runtime. Prefer the officially-documented pattern of delegating the ASGI send callable that’s already available in the Request’s scope:

    -        async with sse.connect_sse(
    -                request.scope,
    -                request.receive,
    -                request._send,  # noqa: SLF001
    +        async with sse.connect_sse(
    +                request.scope,
    +                request.receive,
    +                request.scope["send"],
             ) as (read_stream, write_stream):

    Verify that request.scope["send"] (or the helper exposed by your Starlette version) is supported; otherwise inject the send callable via a small ASGI middleware instead of touching private attributes.


    🏁 Script executed:

    #!/bin/bash
    # Locate the handle_sse definition to inspect imports and context
    rg "async def handle_sse" -n src

    Length of output: 135


    🏁 Script executed:

    #!/bin/bash
    # Display the import section and relevant context to identify sse and Request usage
    sed -n '1,100p' src/praisonai-agents/mcp-sse-direct-server.py

    Length of output: 3325


    Incorrect suggestion: request.scope["send"] is not available

    Starlette’s ASGI scope does not include a "send" key—so request.scope["send"] will always raise a KeyError. The only way to get the ASGI send callable from a Request is via its private request._send, and Starlette does not expose a public equivalent.

    If you want to avoid directly accessing a private attribute, consider one of the following approaches:

    • Define your SSE handler as a raw ASGI application instead of a Request-based endpoint:
      async def handle_sse(scope, receive, send):
          logger.debug(f"SSE connection from {scope['client']}")
          async with sse.connect_sse(scope, receive, send) as (read_stream, write_stream):
              await mcp_server.run(
                  read_stream,
                  write_stream,
                  mcp_server.create_initialization_options(),
              )
      and mount it via
      Route(
          "/sse",
          endpoint=handle_sse,
          methods=["GET"],
          include_in_schema=False,
      )
    • Continue using request._send but track Starlette’s changelogs to catch any breaking changes early.

    Likely an incorrect or invalid review comment.

    # python mcp-sse-direct-client.py http://0.0.0.0:8080/sse
    import asyncio
    import json
    import os

    💡 Verification agent

    🧩 Analysis chain

    Remove unused import

    The 'os' module is imported but never used in the code.

    -import os

    🏁 Script executed:

    #!/bin/bash
    # Check if the os import is actually used
    grep -n "os\." src/praisonai-agents/mcp-sse-direct-client.py

    Length of output: 62


    🏁 Script executed:

    #!/bin/bash
    python3 - <<'EOF'
    import ast, sys
    path = 'src/praisonai-agents/mcp-sse-direct-client.py'
    try:
        source = open(path).read()
    except FileNotFoundError:
        print("FILE_NOT_FOUND")
        sys.exit(1)
    tree = ast.parse(source)
    class OsVisitor(ast.NodeVisitor):
        def __init__(self):
            self.used = False
        def visit_Name(self, node):
            if node.id == 'os':
                self.used = True
            self.generic_visit(node)
    visitor = OsVisitor()
    visitor.visit(tree)
    print(visitor.used)
    EOF

    Length of output: 17


    Remove unused os import

    The os module is imported in src/praisonai-agents/mcp-sse-direct-client.py but never referenced. It can be safely removed.

    • File: src/praisonai-agents/mcp-sse-direct-client.py

    - import os
    📝 Committable suggestion

    ‼️ IMPORTANT
    Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    Suggested change
    import os
    🧰 Tools
    🪛 Ruff (0.8.2)

    4-4: os imported but unused

    Remove unused import: os

    (F401)

    Comment on lines +145 to +174
    # Create SSE client
    self._streams_context = sse_client(url=self.server_url)
    streams = await self._streams_context.__aenter__()

    self._session_context = ClientSession(*streams)
    self.session = await self._session_context.__aenter__()

    # Initialize
    await self.session.initialize()

    # List available tools
    logger.debug("Listing tools...")
    response = await self.session.list_tools()
    tools_data = response.tools
    logger.debug(f"Found {len(tools_data)} tools: {[tool.name for tool in tools_data]}")

    # Create tool wrappers
    tools = []
    for tool in tools_data:
        input_schema = tool.inputSchema if hasattr(tool, 'inputSchema') else None
        wrapper = SSEMCPTool(
            name=tool.name,
            description=tool.description if hasattr(tool, 'description') else f"Call the {tool.name} tool",
            session=self.session,
            input_schema=input_schema
        )
        tools.append(wrapper)

    return tools


    🛠️ Refactor suggestion

    Missing clean-up of async contexts leaks network resources

    _streams_context.__aenter__() and _session_context.__aenter__() are called, but their corresponding __aexit__() are never invoked. Provide an explicit close() (and call it from __del__) to shut down the SSE connection gracefully.

    +    async def _async_close(self):
    +        if self.session:
    +            await self.session.__aexit__(None, None, None)
    +        if hasattr(self, "_streams_context"):
    +            await self._streams_context.__aexit__(None, None, None)
    +
    +    def close(self):
    +        loop = get_event_loop()
    +        asyncio.run_coroutine_threadsafe(self._async_close(), loop).result(timeout=10)
    +
    +    def __del__(self):
    +        try:
    +            self.close()
    +        except Exception:  # noqa: BLE001
    +            pass

    Neglecting to close can leave sockets open and stall interpreter shutdown.

    📝 Committable suggestion


    Suggested change
    # Create SSE client
    self._streams_context = sse_client(url=self.server_url)
    streams = await self._streams_context.__aenter__()

    self._session_context = ClientSession(*streams)
    self.session = await self._session_context.__aenter__()

    # Initialize
    await self.session.initialize()

    # List available tools
    logger.debug("Listing tools...")
    response = await self.session.list_tools()
    tools_data = response.tools
    logger.debug(f"Found {len(tools_data)} tools: {[tool.name for tool in tools_data]}")

    # Create tool wrappers
    tools = []
    for tool in tools_data:
        input_schema = tool.inputSchema if hasattr(tool, 'inputSchema') else None
        wrapper = SSEMCPTool(
            name=tool.name,
            description=tool.description if hasattr(tool, 'description') else f"Call the {tool.name} tool",
            session=self.session,
            input_schema=input_schema
        )
        tools.append(wrapper)

    return tools

    async def _async_close(self):
        if self.session:
            await self.session.__aexit__(None, None, None)
        if hasattr(self, "_streams_context"):
            await self._streams_context.__aexit__(None, None, None)

    def close(self):
        loop = get_event_loop()
        asyncio.run_coroutine_threadsafe(self._async_close(), loop).result(timeout=10)

    def __del__(self):
        try:
            self.close()
        except Exception:  # noqa: BLE001
            pass

    Comment on lines +58 to +70
    def __call__(self, **kwargs):
        """Synchronous wrapper for the async call."""
        logger.debug(f"Tool {self.name} called with args: {kwargs}")

        # Use the global event loop
        loop = get_event_loop()

        # Run the async call in the event loop
        future = asyncio.run_coroutine_threadsafe(self._async_call(**kwargs), loop)
        try:
            # Wait for the result with a timeout
            return future.result(timeout=30)
        except Exception as e:

    ⚠️ Potential issue

    Support positional arguments in SSEMCPTool.__call__

    __call__ only accepts **kwargs; any positional use (tool("paris")) will raise a TypeError. Map positional arguments to parameter names, mirroring the stdio wrapper.

    -    def __call__(self, **kwargs):
    +    def __call__(self, *args, **kwargs):
             """Synchronous wrapper for the async call."""
             logger.debug(f"Tool {self.name} called with args: {kwargs}")
     
    +        # Map positional args to the schema-defined names
    +        if args:
    +            positional_map = dict(zip(self.__signature__.parameters, args))
    +            kwargs = {**positional_map, **kwargs}
    +
             # Use the global event loop
    📝 Committable suggestion


    Suggested change
    def __call__(self, *args, **kwargs):
        """Synchronous wrapper for the async call."""
        logger.debug(f"Tool {self.name} called with args: {kwargs}")

        # Map positional args to the schema-defined names
        if args:
            positional_map = dict(zip(self.__signature__.parameters, args))
            kwargs = {**positional_map, **kwargs}

        # Use the global event loop
        loop = get_event_loop()

        # Run the async call in the event loop
        future = asyncio.run_coroutine_threadsafe(self._async_call(**kwargs), loop)
        try:
            # Wait for the result with a timeout
            return future.result(timeout=30)
        except Exception as e:
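    The positional-to-keyword mapping in the suggestion can be exercised on its own. This standalone demonstration uses a hypothetical tool signature built with `inspect.Signature`, mirroring what the wrapper's `__signature__` would hold:

    ```python
    import inspect

    # Hypothetical parameter names standing in for a tool's input schema
    params = [inspect.Parameter("city", inspect.Parameter.POSITIONAL_OR_KEYWORD),
              inspect.Parameter("units", inspect.Parameter.POSITIONAL_OR_KEYWORD)]
    signature = inspect.Signature(params)

    def merge_args(signature, args, kwargs):
        """Map positional args onto parameter names, then merge keyword args."""
        positional_map = dict(zip(signature.parameters, args))
        return {**positional_map, **kwargs}

    print(merge_args(signature, ("paris",), {"units": "metric"}))
    ```

    With this in place, both `tool("paris")` and `tool(city="paris")` resolve to the same keyword dictionary.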

    Comment on lines +120 to +138
            logger.setLevel(logging.DEBUG)

        self._initialize()

    def _initialize(self):
        """Initialize the connection and tools."""
        # Use the global event loop
        loop = get_event_loop()

        # Start a background thread to run the event loop
        def run_event_loop():
            asyncio.set_event_loop(loop)
            loop.run_forever()

        self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
        self.loop_thread.start()

        # Run the initialization in the event loop
        future = asyncio.run_coroutine_threadsafe(self._async_initialize(), loop)

    ⚠️ Potential issue

    Starting a new thread per client with a shared event loop causes runtime errors

    get_event_loop() returns a singleton loop. Creating multiple SSEMCPClient instances will attempt to start another thread running the same loop, triggering RuntimeError: This event loop is already running.

    Guard the thread start:

    -        self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
    -        self.loop_thread.start()
    +        if not loop.is_running():
    +            self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
    +            self.loop_thread.start()

    Alternatively, create a dedicated event loop per client instead of sharing a global one.
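    A self-contained sketch of that dedicated-loop-per-client alternative (class and method names are illustrative): each client owns its own event loop running in a daemon thread, so no two clients ever contend for a shared loop.

    ```python
    import asyncio
    import threading

    class LoopOwner:
        """Owns a private event loop running in a background daemon thread."""

        def __init__(self):
            self.loop = asyncio.new_event_loop()
            self._thread = threading.Thread(target=self.loop.run_forever, daemon=True)
            self._thread.start()

        def run(self, coro, timeout=10):
            """Run a coroutine on the private loop from synchronous code."""
            return asyncio.run_coroutine_threadsafe(coro, self.loop).result(timeout)

        def close(self):
            """Stop the loop and join the background thread."""
            self.loop.call_soon_threadsafe(self.loop.stop)
            self._thread.join(timeout=5)

    async def ping():
        return "pong"

    owner = LoopOwner()
    print(owner.run(ping()))
    owner.close()
    ```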

    📝 Committable suggestion


    Suggested change
    def _initialize(self):
        """Initialize the connection and tools."""
        # Use the global event loop
        loop = get_event_loop()

        # Start a background thread to run the event loop
        def run_event_loop():
            asyncio.set_event_loop(loop)
            loop.run_forever()

        # Only start the thread if the loop is not already running
        if not loop.is_running():
            self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
            self.loop_thread.start()

        # Run the initialization in the event loop
        future = asyncio.run_coroutine_threadsafe(self._async_initialize(), loop)

    …ance logging configuration, and increment version to 0.0.77 in `pyproject.toml` and `uv.lock` for improved functionality.
    @coderabbitai coderabbitai bot left a comment

    Actionable comments posted: 1

    ♻️ Duplicate comments (6)
    src/praisonai-agents/mcp-sse-direct-agent.py (3)

    70-84: 🛠️ Refactor suggestion

    Support positional arguments in __call__.

    Currently, the method only accepts keyword arguments. To improve usability, support positional arguments by mapping them to parameter names based on the signature.

    -    def __call__(self, **kwargs):
    +    def __call__(self, *args, **kwargs):
             """Synchronous wrapper for the async call."""
             logger.debug(f"Tool {self.name} called with args: {kwargs}")
             
    +        # Map positional args to the schema-defined names
    +        if args:
    +            positional_map = dict(zip(self.__signature__.parameters, args))
    +            kwargs = {**positional_map, **kwargs}
    +
             # Use the global event loop
             loop = get_event_loop()

    114-122: ⚠️ Potential issue

    Add a proper cleanup mechanism to prevent resource leaks.

    The client creates asynchronous context managers but never calls their __aexit__ methods, which can lead to resource leaks.

     class SSEMCPClient:
         """A client for connecting to an MCP server over SSE."""
         
         def __init__(self, server_url: str):
             self.server_url = server_url
             self.session = None
             self.tools = []
             self._initialize()
    +        
    +    def __del__(self):
    +        try:
    +            self.close()
    +        except Exception:
    +            pass
    +        
    +    def close(self):
    +        """Close the connection and clean up resources."""
    +        loop = get_event_loop()
    +        future = asyncio.run_coroutine_threadsafe(self._async_close(), loop)
    +        future.result(timeout=10)
    +        
    +    async def _async_close(self):
    +        """Asynchronously close the connection and clean up resources."""
    +        if self.session:
    +            await self._session_context.__aexit__(None, None, None)
    +        if hasattr(self, "_streams_context"):
    +            await self._streams_context.__aexit__(None, None, None)

    123-139: ⚠️ Potential issue

    Guard against runtime errors when creating multiple clients.

    Starting a new thread per client with a shared event loop can cause RuntimeError: This event loop is already running when multiple clients are created.

         def _initialize(self):
             """Initialize the connection and tools."""
             # Use the global event loop
             loop = get_event_loop()
             
             # Start a background thread to run the event loop
             def run_event_loop():
                 asyncio.set_event_loop(loop)
                 loop.run_forever()
             
             import threading
    -        self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
    -        self.loop_thread.start()
    +        # Only start the thread if the loop is not already running
    +        if not loop.is_running():
    +            self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
    +            self.loop_thread.start()

    Additionally, import threading at the module level rather than inside this method.

    src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (3)

    58-73: 🛠️ Refactor suggestion

    Support positional arguments in __call__.

    Currently, the method only accepts keyword arguments. To improve usability, support positional arguments by mapping them to parameter names based on the signature.

    -    def __call__(self, **kwargs):
    +    def __call__(self, *args, **kwargs):
             """Synchronous wrapper for the async call."""
             logger.debug(f"Tool {self.name} called with args: {kwargs}")
             
    +        # Map positional args to the schema-defined names
    +        if args:
    +            positional_map = dict(zip(self.__signature__.parameters, args))
    +            kwargs = {**positional_map, **kwargs}
    +
             # Use the global event loop
             loop = get_event_loop()

    127-143: ⚠️ Potential issue

    Guard against runtime errors when creating multiple clients.

    Starting a new thread per client with a shared event loop can cause RuntimeError: This event loop is already running when multiple clients are created.

         def _initialize(self):
             """Initialize the connection and tools."""
             # Use the global event loop
             loop = get_event_loop()
             
             # Start a background thread to run the event loop
             def run_event_loop():
                 asyncio.set_event_loop(loop)
                 loop.run_forever()
             
    -        self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
    -        self.loop_thread.start()
    +        # Only start the thread if the loop is not already running
    +        if not loop.is_running():
    +            self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
    +            self.loop_thread.start()

    144-177: ⚠️ Potential issue

    Missing clean-up of async contexts leaks network resources.

    _streams_context.__aenter__() and _session_context.__aenter__() are called, but their corresponding __aexit__() are never invoked. Provide an explicit close() (and call it from __del__) to shut down the SSE connection gracefully.

    +    async def _async_close(self):
    +        """Asynchronously close the connection and clean up resources."""
    +        if self.session:
    +            await self._session_context.__aexit__(None, None, None)
    +        if hasattr(self, "_streams_context"):
    +            await self._streams_context.__aexit__(None, None, None)
    +
    +    def close(self):
    +        """Close the connection and clean up resources."""
    +        loop = get_event_loop()
    +        future = asyncio.run_coroutine_threadsafe(self._async_close(), loop)
    +        try:
    +            future.result(timeout=10)
    +        except Exception as e:
    +            logger.error(f"Error closing SSE connection: {e}")
    +
    +    def __del__(self):
    +        """Clean up resources when the object is garbage collected."""
    +        try:
    +            self.close()
    +        except Exception:
    +            pass

    Neglecting to close can leave sockets open and stall interpreter shutdown.

    🧹 Nitpick comments (3)
    src/praisonai-agents/mcp-sse-direct-agent.py (2)

    1-20: Remove unused imports.

    The following imports are not used in the file:

     import os
     import logging
     import asyncio
    -import time
    -import sys
     import inspect
     import json
    -from typing import List, Dict, Any, Optional, Callable
    -from contextlib import AsyncExitStack
    +from typing import Dict, Any, Optional
    🧰 Tools
    🪛 Ruff (0.8.2)

    4-4: time imported but unused

    Remove unused import: time

    (F401)


    5-5: sys imported but unused

    Remove unused import: sys

    (F401)


    8-8: typing.List imported but unused

    Remove unused import

    (F401)


    8-8: typing.Callable imported but unused

    Remove unused import

    (F401)


    9-9: contextlib.AsyncExitStack imported but unused

    Remove unused import: contextlib.AsyncExitStack

    (F401)


    215-219: Import traceback at the module level instead of inside the exception handler.

    +import traceback
    +
     def main():
    @@ ... @@
         except Exception as e:
             logger.error(f"Error: {e}")
    -        import traceback
             traceback.print_exc()
    src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (1)

    1-17: Remove unused imports.

    Several imported modules are not used in the code.

     import asyncio
     import logging
     import threading
     import inspect
    -import json
    -from typing import List, Dict, Any, Optional, Callable, Iterable
    +from typing import Dict, Any, Optional
    🧰 Tools
    🪛 Ruff (0.8.2)

    11-11: json imported but unused

    Remove unused import: json

    (F401)


    12-12: typing.List imported but unused

    Remove unused import

    (F401)


    12-12: typing.Callable imported but unused

    Remove unused import

    (F401)


    12-12: typing.Iterable imported but unused

    Remove unused import

    (F401)

    📜 Review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between 55f43be and b91a689.

    ⛔ Files ignored due to path filters (1)
    • src/praisonai-agents/uv.lock is excluded by !**/*.lock
    📒 Files selected for processing (4)
    • src/praisonai-agents/mcp-sse-direct-agent.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/mcp/mcp.py (7 hunks)
    • src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (1 hunks)
    • src/praisonai-agents/pyproject.toml (2 hunks)
    🚧 Files skipped from review as they are similar to previous changes (2)
    • src/praisonai-agents/pyproject.toml
    • src/praisonai-agents/praisonaiagents/mcp/mcp.py
    🧰 Additional context used
    🧬 Code Graph Analysis (2)
    src/praisonai-agents/mcp-sse-direct-agent.py (2)
    src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (8)
    • get_event_loop (22-28)
    • SSEMCPTool (31-99)
    • _async_call (74-88)
    • to_openai_tool (90-99)
    • SSEMCPClient (102-184)
    • _initialize (127-142)
    • run_event_loop (133-135)
    • _async_initialize (144-176)
    src/praisonai-agents/praisonaiagents/mcp/mcp.py (3)
    • call_tool (74-94)
    • to_openai_tool (358-401)
    • wrapper (313-324)
    src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (2)
    src/praisonai-agents/mcp-sse-direct-agent.py (8)
    • get_event_loop (35-41)
    • SSEMCPTool (43-111)
    • _async_call (86-100)
    • to_openai_tool (102-111)
    • SSEMCPClient (114-177)
    • _initialize (123-139)
    • run_event_loop (129-131)
    • _async_initialize (141-173)
    src/praisonai-agents/praisonaiagents/mcp/mcp.py (3)
    • call_tool (74-94)
    • to_openai_tool (358-401)
    • wrapper (313-324)
    🪛 Ruff (0.8.2)
    src/praisonai-agents/mcp-sse-direct-agent.py

    4-4: time imported but unused

    Remove unused import: time

    (F401)


    5-5: sys imported but unused

    Remove unused import: sys

    (F401)


    8-8: typing.List imported but unused

    Remove unused import

    (F401)


    8-8: typing.Callable imported but unused

    Remove unused import

    (F401)


    9-9: contextlib.AsyncExitStack imported but unused

    Remove unused import: contextlib.AsyncExitStack

    (F401)

    src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py

    11-11: json imported but unused

    Remove unused import: json

    (F401)


    12-12: typing.List imported but unused

    Remove unused import

    (F401)


    12-12: typing.Callable imported but unused

    Remove unused import

    (F401)


    12-12: typing.Iterable imported but unused

    Remove unused import

    (F401)

    🔇 Additional comments (8)
    src/praisonai-agents/mcp-sse-direct-agent.py (3)

    35-41: LGTM! Well-implemented event loop management.

    The function correctly handles the case when the event loop is None or closed, providing a stable way to access the global event loop.
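    A plausible reconstruction of that helper (the real implementation lives in `praisonaiagents/mcp/mcp_sse.py`, so treat this as an assumption): return the module-level loop, creating a fresh one whenever it is missing or closed.

    ```python
    import asyncio

    _event_loop = None  # module-level singleton

    def get_event_loop():
        """Return the shared loop, replacing it if it is None or closed."""
        global _event_loop
        if _event_loop is None or _event_loop.is_closed():
            _event_loop = asyncio.new_event_loop()
        return _event_loop

    loop = get_event_loop()
    loop.close()
    print(get_event_loop() is loop)  # False: a closed loop gets replaced
    ```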


    43-69: The class signature setup looks good.

    The implementation properly sets up special attributes needed for tool integration with the Agent framework and builds a dynamic signature based on the input schema.


    86-101: LGTM! Well-structured asynchronous tool call implementation.

    The asynchronous implementation handles exceptions appropriately and includes proper result extraction.

    src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (5)

    22-28: LGTM! Well-implemented event loop management.

    The function correctly handles the case when the event loop is None or closed, providing a stable way to access the global event loop.


    31-57: The class signature setup looks good.

    The implementation properly sets up special attributes needed for tool integration with the Agent framework and builds a dynamic signature based on the input schema.


    74-99: LGTM! Well-structured asynchronous tool call implementation.

    The asynchronous implementation handles exceptions appropriately and includes proper result extraction.


    105-126: Good improvements with debug parameter and logging levels!

    The addition of a debug parameter to control logging levels is a nice improvement over the previous version.


    178-184: Well-implemented iterability and OpenAI tools conversion.

    The methods to support iteration over tools and conversion to OpenAI format are well-implemented.

    Comment on lines +180 to +213
    def main():
        # Server URL
        server_url = "http://0.0.0.0:8080/sse"

        try:
            # Connect to the MCP server
            client = SSEMCPClient(server_url)

            if not client.tools:
                logger.error("No tools found on the server")
                return

            logger.info(f"Connected to server with {len(client.tools)} tools: {[tool.name for tool in client.tools]}")

            # Create OpenAI-compatible tool definitions
            openai_tools = [tool.to_openai_tool() for tool in client.tools]
            logger.debug(f"OpenAI tools: {json.dumps(openai_tools, indent=2)}")

            # Create an agent with the tools
            assistant_agent = Agent(
                instructions=system_prompt,
                llm="openai/gpt-4o-mini",
                tools=client.tools,
                verbose=True
            )

            # Start the agent with a query
            logger.info("Starting agent with query about weather in Paris")
            result = assistant_agent.chat(
                "Hello! Can you tell me what the weather is like in Paris today?",
                tools=openai_tools
            )

            logger.info(f"Agent response: {result}")

    🛠️ Refactor suggestion

    Avoid hardcoded server URL and improve error handling.

    The hardcoded server URL may cause issues in different environments. Consider extracting it to a configuration variable or command-line argument.

     def main():
    -    # Server URL
    -    server_url = "http://0.0.0.0:8080/sse"
    +    # Get server URL from environment variable or use default
    +    server_url = os.environ.get("MCP_SERVER_URL", "http://0.0.0.0:8080/sse")

    Also, consider adding a proper cleanup in a finally block:

         try:
             # Connect to the MCP server
             client = SSEMCPClient(server_url)
    -        
    +        try:
                 if not client.tools:
                     logger.error("No tools found on the server")
                     return
                     
    @@ ... @@
     
                 logger.info(f"Agent response: {result}")
    -        
    +        finally:
    +            # Clean up resources
    +            client.close()
         except Exception as e:
    📝 Committable suggestion


Suggested change

```python
def main():
    # Get server URL from environment variable or use default
    server_url = os.environ.get("MCP_SERVER_URL", "http://0.0.0.0:8080/sse")

    try:
        # Connect to the MCP server
        client = SSEMCPClient(server_url)
        try:
            if not client.tools:
                logger.error("No tools found on the server")
                return

            logger.info(f"Connected to server with {len(client.tools)} tools: {[tool.name for tool in client.tools]}")

            # Create OpenAI-compatible tool definitions
            openai_tools = [tool.to_openai_tool() for tool in client.tools]
            logger.debug(f"OpenAI tools: {json.dumps(openai_tools, indent=2)}")

            # Create an agent with the tools
            assistant_agent = Agent(
                instructions=system_prompt,
                llm="openai/gpt-4o-mini",
                tools=client.tools,
                verbose=True
            )

            # Start the agent with a query
            logger.info("Starting agent with query about weather in Paris")
            result = assistant_agent.chat(
                "Hello! Can you tell me what the weather is like in Paris today?",
                tools=openai_tools
            )

            logger.info(f"Agent response: {result}")
        finally:
            # Clean up resources
            client.close()
    except Exception as e:
        logger.exception("Unhandled exception in main")
```

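The env-var override plus try/finally cleanup recommended above generalizes to any client exposing a `close()` method: `contextlib.closing` packages the same guarantee in one line. A minimal sketch of the pattern (the `DummyClient` stands in for `SSEMCPClient`, which is not available here; the `MCP_SERVER_URL` variable name and default URL are taken from the suggestion above):

```python
import os
from contextlib import closing

class DummyClient:
    """Stand-in for SSEMCPClient: records whether close() was called."""
    def __init__(self, url):
        self.url = url
        self.closed = False

    def close(self):
        self.closed = True

def main():
    # Read the server URL from the environment, falling back to a default
    server_url = os.environ.get("MCP_SERVER_URL", "http://0.0.0.0:8080/sse")
    # closing() calls client.close() on exit, even if the body raises
    with closing(DummyClient(server_url)) as client:
        return client

client = main()
print(client.closed)  # close() has run once the with-block exits
```

Because `closing()` invokes `close()` in its `__exit__`, the explicit inner `try`/`finally` from the suggestion becomes unnecessary while preserving the same cleanup guarantee.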
    @MervinPraison MervinPraison merged commit 1898937 into main May 13, 2025
    7 of 11 checks passed
    shaneholloman pushed a commit to shaneholloman/praisonai that referenced this pull request Feb 4, 2026
    Increment version to 0.0.77 in `pyproject.toml` and update `markitdow…