Increment version to 0.0.76 in pyproject.toml and update `markitdown` dependency to include all extras in `uv.lock` for enhanced functionality #456

MervinPraison merged 5 commits into main

Conversation
Hello @MervinPraison, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
This pull request increments the version of the praisonaiagents package from 0.0.75 to 0.0.76 in pyproject.toml and uv.lock. It also updates the markitdown dependency in pyproject.toml to include all extras (markitdown[all]) and reflects this change in uv.lock to ensure all markitdown features are available.
Highlights
- Version Increment: The package version has been bumped from 0.0.75 to 0.0.76 in both `pyproject.toml` and `uv.lock`.
- Dependency Update: The `markitdown` dependency is updated to include all extras (`markitdown[all]`) in `pyproject.toml` and `uv.lock`.
Changelog
- src/praisonai-agents/pyproject.toml
  - Incremented package version from 0.0.75 to 0.0.76 (line 7).
  - Updated `markitdown` dependency to include all extras: `markitdown[all]` (line 31).
- src/praisonai-agents/uv.lock
  - Incremented package version from 0.0.75 to 0.0.76 (line 1886).
  - Updated `markitdown` dependency to include all extras (line 1928).
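For reference, an extras-qualified dependency entry in `pyproject.toml` looks roughly like this (an illustrative fragment reconstructed from the changelog above, not the project's exact file):

```toml
[project]
name = "praisonaiagents"
version = "0.0.76"
dependencies = [
    "markitdown[all]",
]
```

The `[all]` extra asks the resolver to pull in every optional dependency group that markitdown declares, which is why `uv.lock` changes as well.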
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
A version ascends,
Features bloom, dependencies mend,
Code's garden thrives.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩
PR Reviewer Guide 🔍 (Review updated until commit c332346)

Here are some key observations to aid the review process:

PR Code Suggestions ✨

No code suggestions found for the PR.
✅ Deploy Preview for praisonai ready!

To edit notification comments on pull requests, go to your Netlify site configuration.
Caution: Review failed. The pull request is closed.

Walkthrough

This update introduces a new MCP SSE server and client implementation with asynchronous SSE communication, including new server, client, and agent scripts. The MCP class and agent execution logic were enhanced to support SSE-based MCP clients alongside stdio clients. Documentation, configuration, and dependency files were updated accordingly.
📜 Recent review details

Configuration used: CodeRabbit UI

📒 Files selected for processing (2)
CI Feedback 🧐 (Feedback updated until commit dacaa7f)

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:
Code Review
The pull request increments the project version and updates a dependency to include all extras. The changes seem straightforward and well-documented in the PR description.
Summary of Findings
- Dependency Update: The `markitdown` dependency was updated to include all extras (`markitdown[all]`). This enhancement should be verified to ensure it doesn't introduce any unexpected side effects or conflicts with other dependencies.
- Version Increment: The project version was incremented from `0.0.75` to `0.0.76`. Ensure that this version increment aligns with the project's versioning strategy and release process.
Merge Readiness
The pull request appears to be in good shape for merging, but it's crucial to verify that the dependency update doesn't introduce any regressions or conflicts. I am unable to approve this pull request, and recommend that others review and approve this code before merging. At a minimum, the high-severity issue should be addressed before merging.
```diff
 { name = "chromadb", marker = "extra == 'memory'", specifier = ">=0.5.23" },
 { name = "litellm", marker = "extra == 'llm'", specifier = ">=1.50.0" },
-{ name = "markitdown", marker = "extra == 'knowledge'" },
+{ name = "markitdown", extras = ["all"], marker = "extra == 'knowledge'" },
```
It's good to see the uv.lock file was updated, but it's important to verify that all the necessary dependencies and sub-dependencies were correctly resolved and locked. Consider running tests to ensure that the application functions as expected with the updated lockfile.
```toml
{ name = "markitdown", extras = ["all"], marker = "extra == 'knowledge'" },
```
```diff
 "mem0ai>=0.1.0",
 "chromadb==0.5.23",
-"markitdown",
+"markitdown[all]",
```
/ask what are the key changes |
Ask ❓ what are the key changes

Answer: The key changes in this PR are:

These changes appear to be focused on version management and enhancing functionality by including all available features of the markitdown dependency.
/review pyproject.toml and requirements.txt file |
Persistent review updated to latest commit c332346 |
Actionable comments posted: 4
🧹 Nitpick comments (6)
src/praisonai-agents/mcp-sse-direct-server.py (3)
2-3: Remove unused imports to keep the module lean

`typing.Any`, `httpx`, and `inspect` are all imported but never used. Eliminating them avoids unnecessary bytecode generation and quiets Ruff F401 warnings.

```diff
-from typing import Any
-import httpx
-...
-import inspect
```

Also applies to: 14-14
🧰 Tools
🪛 Ruff (0.8.2)
2-2: `typing.Any` imported but unused

Remove unused import: `typing.Any` (F401)

3-3: `httpx` imported but unused

Remove unused import: `httpx` (F401)
79-80: Avoid accessing the private attribute `mcp._mcp_server`

Underscore-prefixed attributes are considered implementation details and may vanish without notice. Expose the server via a public property or helper in the FastMCP library, or contribute such an accessor upstream.
88-90: Derive the tool list dynamically instead of hard-coding

Hard-coding tool names risks the list drifting from the actual exported tools.

```diff
-# Hardcode the tool names since we know what they are
-tool_names = ["get_greeting", "get_weather"]
+tool_names = [t.__name__ for t in mcp.tools]  # type: ignore[attr-defined]
```

This guarantees the CLI always prints the real set of registered tools.
src/praisonai-agents/praisonaiagents/mcp/mcp.py (2)
169-177: SSE endpoint detection could be too permissive

`re.match(r'^https?://', …)` treats any HTTP URL as an SSE endpoint. Accidentally passing a non-SSE REST URL will silently route through the SSE client and fail later. Consider:

```python
if isinstance(command_or_string, str) and command_or_string.endswith("/sse"):
    ...
```

or a stricter regex like `r'^https?://.*?/sse$'`.
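As a quick illustration of the stricter check the reviewer suggests, the two variants behave differently on non-SSE URLs (the helper name here is ours, for illustration only):

```python
import re

def looks_like_sse_url(value: str) -> bool:
    # Require an explicit /sse path suffix rather than accepting any HTTP(S) URL
    return bool(re.match(r'^https?://.*?/sse$', value))

print(looks_like_sse_url("http://0.0.0.0:8080/sse"))     # True
print(looks_like_sse_url("https://api.example.com/v1"))  # False
```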
340-347: Update the docstring to reflect the actual return type

`to_openai_tool` now returns a list for both stdio and SSE paths, yet the docstring still mentions `dict or list`. Harmonising the return contract avoids confusion for downstream users.

```diff
-    """Convert the MCP tool to an OpenAI-compatible tool definition.
+    """Convert the MCP tools to OpenAI-compatible definitions.
     ...
-        dict or list: OpenAI-compatible tool definition(s)
+        list: List of OpenAI-compatible tool definitions
```

src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (1)
11-12: Prune unused imports to silence Ruff F401

`json`, `List`, `Callable`, and `Iterable` are imported but not referenced.

```diff
-import json
-from typing import List, Dict, Any, Optional, Callable, Iterable
+from typing import Dict, Any, Optional
```
🪛 Ruff (0.8.2)
11-11: `json` imported but unused

Remove unused import: `json` (F401)

12-12: `typing.List` imported but unused

Remove unused import (F401)

12-12: `typing.Callable` imported but unused

Remove unused import (F401)

12-12: `typing.Iterable` imported but unused

Remove unused import (F401)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- `src/praisonai-agents/.gitignore` (1 hunks)
- `src/praisonai-agents/README.md` (1 hunks)
- `src/praisonai-agents/mcp-sse-direct-agent.py` (1 hunks)
- `src/praisonai-agents/mcp-sse-direct-client.py` (1 hunks)
- `src/praisonai-agents/mcp-sse-direct-server.py` (1 hunks)
- `src/praisonai-agents/mcp-sse-weather.py` (1 hunks)
- `src/praisonai-agents/praisonaiagents/agent/agent.py` (2 hunks)
- `src/praisonai-agents/praisonaiagents/llm/llm.py` (1 hunks)
- `src/praisonai-agents/praisonaiagents/mcp/__init__.py` (1 hunks)
- `src/praisonai-agents/praisonaiagents/mcp/mcp.py` (7 hunks)
- `src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py` (1 hunks)
✅ Files skipped from review due to trivial changes (5)
- src/praisonai-agents/.gitignore
- src/praisonai-agents/README.md
- src/praisonai-agents/mcp-sse-direct-agent.py
- `src/praisonai-agents/praisonaiagents/mcp/__init__.py`
- src/praisonai-agents/mcp-sse-weather.py
🧰 Additional context used
🪛 Ruff (0.8.2)
src/praisonai-agents/mcp-sse-direct-server.py
2-2: typing.Any imported but unused
Remove unused import: typing.Any
(F401)
3-3: httpx imported but unused
Remove unused import: httpx
(F401)
14-14: inspect imported but unused
Remove unused import: inspect
(F401)
src/praisonai-agents/mcp-sse-direct-client.py
4-4: os imported but unused
Remove unused import: os
(F401)
src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py
11-11: json imported but unused
Remove unused import: json
(F401)
12-12: typing.List imported but unused
Remove unused import
(F401)
12-12: typing.Callable imported but unused
Remove unused import
(F401)
12-12: typing.Iterable imported but unused
Remove unused import
(F401)
🔇 Additional comments (5)
src/praisonai-agents/praisonaiagents/llm/llm.py (1)
296-301: Good enhancement for handling tools provided as lists

This addition properly extends the `get_response` method to handle tools that are provided as lists (e.g., those returned by `MCP.to_openai_tool()`). The code correctly iterates through each subtool, checks if it's properly formatted as an OpenAI function tool, and adds it to the `formatted_tools` list with appropriate debug logging.

src/praisonai-agents/mcp-sse-direct-client.py (1)
1-120: Well-structured SSE client implementation

The MCP SSE client implementation is well-organized and follows good practices:
- Proper use of async context managers for resource management
- Clear error handling with try/except blocks
- Good separation of concerns between connection, tool discovery, and query processing
The client successfully demonstrates how to connect to an MCP server over SSE, discover available tools, and process different types of queries by calling the appropriate tools.
🧰 Tools
🪛 Ruff (0.8.2)
4-4: `os` imported but unused

Remove unused import: `os` (F401)
src/praisonai-agents/praisonaiagents/agent/agent.py (2)
534-547: Good enhancement to support multiple MCP client types

The code now properly handles both SSE MCP clients and stdio MCP clients by checking:

- If `self.tools.is_sse` is True, it looks for tools in `self.tools.sse_client.tools`
- For traditional stdio MCP clients, it looks in `self.tools.runner.tools`

This change ensures that the Agent class can work seamlessly with both client types, improving backward compatibility.
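The branching described above can be sketched with stand-in classes (all names here are illustrative, mirroring only the attributes the review mentions, not the real PraisonAI classes):

```python
# Hypothetical stand-ins for the two MCP client shapes described above
class SSEClient:
    def __init__(self):
        self.tools = ["get_weather"]

class SSEMCP:
    is_sse = True
    def __init__(self):
        self.sse_client = SSEClient()

class StdioRunner:
    def __init__(self):
        self.tools = ["read_file"]

class StdioMCP:
    is_sse = False
    def __init__(self):
        self.runner = StdioRunner()

def collect_tools(mcp):
    # Mirror of the branching the review describes: SSE clients expose tools
    # on .sse_client, stdio clients on .runner
    if getattr(mcp, "is_sse", False):
        return mcp.sse_client.tools
    return mcp.runner.tools

print(collect_tools(SSEMCP()))   # ['get_weather']
print(collect_tools(StdioMCP())) # ['read_file']
```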
828-833: Good handling of multiple tool formats

The code now correctly handles both single tools and lists of tools returned by `to_openai_tool()`. This is a necessary enhancement to support the new SSE MCP client implementation that can return multiple OpenAI-compatible tools.

src/praisonai-agents/mcp-sse-direct-server.py (1)
57-64: Details

❌ Incorrect review comment

Relying on Starlette's private attribute `_send` is brittle

`request._send` is an internal detail; Starlette makes no stability guarantees for it. If the implementation changes, the server will break at runtime. Prefer the officially-documented pattern of delegating the ASGI `send` callable that's already available in the `Request`'s scope:

```diff
-    async with sse.connect_sse(
-        request.scope,
-        request.receive,
-        request._send,  # noqa: SLF001
+    async with sse.connect_sse(
+        request.scope,
+        request.receive,
+        request.scope["send"],
     ) as (read_stream, write_stream):
```

Verify that `request.scope["send"]` (or the helper exposed by your Starlette version) is supported; otherwise inject the `send` callable via a small ASGI middleware instead of touching private attributes.
🏁 Script executed:
```shell
#!/bin/bash
# Locate the handle_sse definition to inspect imports and context
rg "async def handle_sse" -n src
```

Length of output: 135
🏁 Script executed:
```shell
#!/bin/bash
# Display the import section and relevant context to identify sse and Request usage
sed -n '1,100p' src/praisonai-agents/mcp-sse-direct-server.py
```

Length of output: 3325
Incorrect suggestion: `request.scope["send"]` is not available

Starlette's ASGI scope does not include a `"send"` key, so `request.scope["send"]` will always raise a KeyError. The only way to get the ASGI send callable from a `Request` is via its private `request._send`, and Starlette does not expose a public equivalent.

If you want to avoid directly accessing a private attribute, consider one of the following approaches:

- Define your SSE handler as a raw ASGI application instead of a `Request`-based endpoint:

```python
async def handle_sse(scope, receive, send):
    logger.debug(f"SSE connection from {scope['client']}")
    async with sse.connect_sse(scope, receive, send) as (read_stream, write_stream):
        await mcp_server.run(
            read_stream,
            write_stream,
            mcp_server.create_initialization_options(),
        )
```

  and mount it via

```python
Route(
    "/sse",
    endpoint=handle_sse,
    methods=["GET"],
    include_in_schema=False,
)
```

- Continue using `request._send` but track Starlette's changelogs to catch any breaking changes early.

Likely an incorrect or invalid review comment.
```python
# python mcp-sse-direct-client.py http://0.0.0.0:8080/sse
import asyncio
import json
import os
```
💡 Verification agent
🧩 Analysis chain
Remove unused import
The 'os' module is imported but never used in the code.
```diff
-import os
```

🏁 Script executed:
```shell
#!/bin/bash
# Check if the os import is actually used
grep -n "os\." src/praisonai-agents/mcp-sse-direct-client.py
```

Length of output: 62
🏁 Script executed:
```shell
#!/bin/bash
python3 - <<'EOF'
import ast, sys
path = 'src/praisonai-agents/mcp-sse-direct-client.py'
try:
    source = open(path).read()
except FileNotFoundError:
    print("FILE_NOT_FOUND")
    sys.exit(1)
tree = ast.parse(source)
class OsVisitor(ast.NodeVisitor):
    def __init__(self):
        self.used = False
    def visit_Name(self, node):
        if node.id == 'os':
            self.used = True
        self.generic_visit(node)
visitor = OsVisitor()
visitor.visit(tree)
print(visitor.used)
EOF
```

Length of output: 17
Remove unused os import
The os module is imported in src/praisonai-agents/mcp-sse-direct-client.py but never referenced. It can be safely removed.
• File: src/praisonai-agents/mcp-sse-direct-client.py
```diff
-import os
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
import os
```
🧰 Tools
🪛 Ruff (0.8.2)
4-4: os imported but unused
Remove unused import: os
(F401)
```python
# Create SSE client
self._streams_context = sse_client(url=self.server_url)
streams = await self._streams_context.__aenter__()

self._session_context = ClientSession(*streams)
self.session = await self._session_context.__aenter__()

# Initialize
await self.session.initialize()

# List available tools
logger.debug("Listing tools...")
response = await self.session.list_tools()
tools_data = response.tools
logger.debug(f"Found {len(tools_data)} tools: {[tool.name for tool in tools_data]}")

# Create tool wrappers
tools = []
for tool in tools_data:
    input_schema = tool.inputSchema if hasattr(tool, 'inputSchema') else None
    wrapper = SSEMCPTool(
        name=tool.name,
        description=tool.description if hasattr(tool, 'description') else f"Call the {tool.name} tool",
        session=self.session,
        input_schema=input_schema
    )
    tools.append(wrapper)

return tools
```
🛠️ Refactor suggestion
Missing clean-up of async contexts leaks network resources
_streams_context.__aenter__() and _session_context.__aenter__() are called, but their corresponding __aexit__() are never invoked. Provide an explicit close() (and call it from __del__) to shut down the SSE connection gracefully.
```diff
+    async def _async_close(self):
+        if self.session:
+            await self.session.__aexit__(None, None, None)
+        if hasattr(self, "_streams_context"):
+            await self._streams_context.__aexit__(None, None, None)
+
+    def close(self):
+        loop = get_event_loop()
+        asyncio.run_coroutine_threadsafe(self._async_close(), loop).result(timeout=10)
+
+    def __del__(self):
+        try:
+            self.close()
+        except Exception:  # noqa: BLE001
+            pass
```
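An alternative to pairing manual `__aenter__`/`__aexit__` calls is `contextlib.AsyncExitStack`, which unwinds every entered context automatically. A minimal, self-contained sketch (the `fake_stream` context manager here is a stand-in for `sse_client`/`ClientSession`, not the real MCP API):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

events = []

@asynccontextmanager
async def fake_stream(name):
    # Stand-in for sse_client()/ClientSession(); records open/close order
    events.append(f"open:{name}")
    try:
        yield name
    finally:
        events.append(f"close:{name}")

async def main():
    # AsyncExitStack guarantees __aexit__ runs for every entered context,
    # even if a later step raises, avoiding the leak described above
    async with AsyncExitStack() as stack:
        await stack.enter_async_context(fake_stream("streams"))
        await stack.enter_async_context(fake_stream("session"))

asyncio.run(main())
print(events)  # contexts are closed in LIFO order
```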
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
# Create SSE client
self._streams_context = sse_client(url=self.server_url)
streams = await self._streams_context.__aenter__()

self._session_context = ClientSession(*streams)
self.session = await self._session_context.__aenter__()

# Initialize
await self.session.initialize()

# List available tools
logger.debug("Listing tools...")
response = await self.session.list_tools()
tools_data = response.tools
logger.debug(f"Found {len(tools_data)} tools: {[tool.name for tool in tools_data]}")

# Create tool wrappers
tools = []
for tool in tools_data:
    input_schema = tool.inputSchema if hasattr(tool, 'inputSchema') else None
    wrapper = SSEMCPTool(
        name=tool.name,
        description=tool.description if hasattr(tool, 'description') else f"Call the {tool.name} tool",
        session=self.session,
        input_schema=input_schema
    )
    tools.append(wrapper)

return tools

async def _async_close(self):
    if self.session:
        await self.session.__aexit__(None, None, None)
    if hasattr(self, "_streams_context"):
        await self._streams_context.__aexit__(None, None, None)

def close(self):
    loop = get_event_loop()
    asyncio.run_coroutine_threadsafe(self._async_close(), loop).result(timeout=10)

def __del__(self):
    try:
        self.close()
    except Exception:  # noqa: BLE001
        pass
```
```python
def __call__(self, **kwargs):
    """Synchronous wrapper for the async call."""
    logger.debug(f"Tool {self.name} called with args: {kwargs}")

    # Use the global event loop
    loop = get_event_loop()

    # Run the async call in the event loop
    future = asyncio.run_coroutine_threadsafe(self._async_call(**kwargs), loop)
    try:
        # Wait for the result with a timeout
        return future.result(timeout=30)
    except Exception as e:
```
Support positional arguments in SSEMCPTool.__call__
__call__ only accepts **kwargs; any positional use (tool("paris")) will raise a TypeError. Map positional arguments to parameter names, mirroring the stdio wrapper.
```diff
-    def __call__(self, **kwargs):
+    def __call__(self, *args, **kwargs):
         """Synchronous wrapper for the async call."""
         logger.debug(f"Tool {self.name} called with args: {kwargs}")
+        # Map positional args to the schema-defined names
+        if args:
+            positional_map = dict(zip(self.__signature__.parameters, args))
+            kwargs = {**positional_map, **kwargs}
+
         # Use the global event loop
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def __call__(self, *args, **kwargs):
    """Synchronous wrapper for the async call."""
    logger.debug(f"Tool {self.name} called with args: {kwargs}")
    # Map positional args to the schema-defined names
    if args:
        positional_map = dict(zip(self.__signature__.parameters, args))
        kwargs = {**positional_map, **kwargs}

    # Use the global event loop
    loop = get_event_loop()

    # Run the async call in the event loop
    future = asyncio.run_coroutine_threadsafe(self._async_call(**kwargs), loop)
    try:
        # Wait for the result with a timeout
        return future.result(timeout=30)
    except Exception as e:
```
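The mapping trick in the suggestion relies on the fact that iterating `Signature.parameters` yields parameter names in declaration order, so `zip` pairs them with positional arguments. A standalone sketch (all names are illustrative, not from the PR):

```python
import inspect

# Build a Signature like the SSE tool wrapper might expose (illustrative)
sig = inspect.Signature([
    inspect.Parameter(n, inspect.Parameter.POSITIONAL_OR_KEYWORD)
    for n in ("city", "units")
])

def call(*args, **kwargs):
    # Map positional args onto the schema-defined parameter names
    if args:
        positional_map = dict(zip(sig.parameters, args))
        kwargs = {**positional_map, **kwargs}
    return kwargs

print(call("paris"))             # {'city': 'paris'}
print(call("paris", units="C"))  # {'city': 'paris', 'units': 'C'}
```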
```python
logger.setLevel(logging.DEBUG)

self._initialize()

def _initialize(self):
    """Initialize the connection and tools."""
    # Use the global event loop
    loop = get_event_loop()

    # Start a background thread to run the event loop
    def run_event_loop():
        asyncio.set_event_loop(loop)
        loop.run_forever()

    self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
    self.loop_thread.start()

    # Run the initialization in the event loop
    future = asyncio.run_coroutine_threadsafe(self._async_initialize(), loop)
```
Starting a new thread per client with a shared event loop causes runtime errors
get_event_loop() returns a singleton loop. Creating multiple SSEMCPClient instances will attempt to start another thread running the same loop, triggering RuntimeError: This event loop is already running.
Guard the thread start:
```diff
-        self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
-        self.loop_thread.start()
+        if not loop.is_running():
+            self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
+            self.loop_thread.start()
```

Alternatively, create a dedicated event loop per client instead of sharing a global one.
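The "dedicated event loop per client" alternative can be sketched as follows (class and helper names are ours, not from the PR):

```python
import asyncio
import threading

class LoopOwner:
    """Each instance owns its own event loop in a dedicated thread,
    sidestepping the shared-loop RuntimeError discussed above (illustrative)."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        asyncio.set_event_loop(self._loop)
        self._loop.run_forever()

    def run(self, coro, timeout=10):
        # Submit a coroutine from the calling thread and wait for its result
        return asyncio.run_coroutine_threadsafe(coro, self._loop).result(timeout)

    def close(self):
        self._loop.call_soon_threadsafe(self._loop.stop)
        self._thread.join(timeout=5)

async def greet(name):
    await asyncio.sleep(0)
    return f"hello {name}"

a, b = LoopOwner(), LoopOwner()  # two clients, two loops: no clash
print(a.run(greet("alice")), b.run(greet("bob")))
a.close()
b.close()
```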
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
logger.setLevel(logging.DEBUG)

self._initialize()

def _initialize(self):
    """Initialize the connection and tools."""
    # Use the global event loop
    loop = get_event_loop()

    # Start a background thread to run the event loop
    def run_event_loop():
        asyncio.set_event_loop(loop)
        loop.run_forever()

    if not loop.is_running():
        self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
        self.loop_thread.start()

    # Run the initialization in the event loop
    future = asyncio.run_coroutine_threadsafe(self._async_initialize(), loop)
```
…ance logging configuration, and increment version to 0.0.77 in `pyproject.toml` and `uv.lock` for improved functionality.
Actionable comments posted: 1
♻️ Duplicate comments (6)
src/praisonai-agents/mcp-sse-direct-agent.py (3)
70-84: 🛠️ Refactor suggestion

Support positional arguments in `__call__`.

Currently, the method only accepts keyword arguments. To improve usability, support positional arguments by mapping them to parameter names based on the signature.

```diff
-    def __call__(self, **kwargs):
+    def __call__(self, *args, **kwargs):
         """Synchronous wrapper for the async call."""
         logger.debug(f"Tool {self.name} called with args: {kwargs}")
+        # Map positional args to the schema-defined names
+        if args:
+            positional_map = dict(zip(self.__signature__.parameters, args))
+            kwargs = {**positional_map, **kwargs}
+
         # Use the global event loop
         loop = get_event_loop()
```
114-122: ⚠️ Potential issue

Add a proper cleanup mechanism to prevent resource leaks.

The client creates asynchronous context managers but never calls their `__aexit__` methods, which can lead to resource leaks.

```diff
 class SSEMCPClient:
     """A client for connecting to an MCP server over SSE."""

     def __init__(self, server_url: str):
         self.server_url = server_url
         self.session = None
         self.tools = []
         self._initialize()
+
+    def __del__(self):
+        try:
+            self.close()
+        except Exception:
+            pass
+
+    def close(self):
+        """Close the connection and clean up resources."""
+        loop = get_event_loop()
+        future = asyncio.run_coroutine_threadsafe(self._async_close(), loop)
+        future.result(timeout=10)
+
+    async def _async_close(self):
+        """Asynchronously close the connection and clean up resources."""
+        if self.session:
+            await self._session_context.__aexit__(None, None, None)
+        if hasattr(self, "_streams_context"):
+            await self._streams_context.__aexit__(None, None, None)
```
123-139: ⚠️ Potential issue

Guard against runtime errors when creating multiple clients.

Starting a new thread per client with a shared event loop can cause `RuntimeError: This event loop is already running` when multiple clients are created.

```diff
 def _initialize(self):
     """Initialize the connection and tools."""
     # Use the global event loop
     loop = get_event_loop()

     # Start a background thread to run the event loop
     def run_event_loop():
         asyncio.set_event_loop(loop)
         loop.run_forever()

     import threading
-    self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
-    self.loop_thread.start()
+    # Only start the thread if the loop is not already running
+    if not loop.is_running():
+        self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
+        self.loop_thread.start()
```

Additionally, import threading at the module level rather than inside this method.
src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (3)
58-73: 🛠️ Refactor suggestion

Support positional arguments in `__call__`.

Currently, the method only accepts keyword arguments. To improve usability, support positional arguments by mapping them to parameter names based on the signature.

```diff
-    def __call__(self, **kwargs):
+    def __call__(self, *args, **kwargs):
         """Synchronous wrapper for the async call."""
         logger.debug(f"Tool {self.name} called with args: {kwargs}")
+        # Map positional args to the schema-defined names
+        if args:
+            positional_map = dict(zip(self.__signature__.parameters, args))
+            kwargs = {**positional_map, **kwargs}
+
         # Use the global event loop
         loop = get_event_loop()
```
127-143: ⚠️ Potential issue

Guard against runtime errors when creating multiple clients.

Starting a new thread per client with a shared event loop can cause `RuntimeError: This event loop is already running` when multiple clients are created.

```diff
 def _initialize(self):
     """Initialize the connection and tools."""
     # Use the global event loop
     loop = get_event_loop()

     # Start a background thread to run the event loop
     def run_event_loop():
         asyncio.set_event_loop(loop)
         loop.run_forever()

-    self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
-    self.loop_thread.start()
+    # Only start the thread if the loop is not already running
+    if not loop.is_running():
+        self.loop_thread = threading.Thread(target=run_event_loop, daemon=True)
+        self.loop_thread.start()
```
144-177: ⚠️ Potential issue

Missing clean-up of async contexts leaks network resources.

`_streams_context.__aenter__()` and `_session_context.__aenter__()` are called, but their corresponding `__aexit__()` are never invoked. Provide an explicit `close()` (and call it from `__del__`) to shut down the SSE connection gracefully.

```diff
+    async def _async_close(self):
+        """Asynchronously close the connection and clean up resources."""
+        if self.session:
+            await self._session_context.__aexit__(None, None, None)
+        if hasattr(self, "_streams_context"):
+            await self._streams_context.__aexit__(None, None, None)
+
+    def close(self):
+        """Close the connection and clean up resources."""
+        loop = get_event_loop()
+        future = asyncio.run_coroutine_threadsafe(self._async_close(), loop)
+        try:
+            future.result(timeout=10)
+        except Exception as e:
+            logger.error(f"Error closing SSE connection: {e}")
+
+    def __del__(self):
+        """Clean up resources when the object is garbage collected."""
+        try:
+            self.close()
+        except Exception:
+            pass
```

Neglecting to close can leave sockets open and stall interpreter shutdown.
🧹 Nitpick comments (3)
src/praisonai-agents/mcp-sse-direct-agent.py (2)
1-20: Remove unused imports.

The following imports are not used in the file:

```diff
 import os
 import logging
 import asyncio
-import time
-import sys
 import inspect
 import json
-from typing import List, Dict, Any, Optional, Callable
-from contextlib import AsyncExitStack
+from typing import Dict, Any, Optional
```

🧰 Tools
🪛 Ruff (0.8.2)

- 4-4: `time` imported but unused. Remove unused import: `time` (F401)
- 5-5: `sys` imported but unused. Remove unused import: `sys` (F401)
- 8-8: `typing.List` imported but unused. Remove unused import (F401)
- 8-8: `typing.Callable` imported but unused. Remove unused import (F401)
- 9-9: `contextlib.AsyncExitStack` imported but unused. Remove unused import: `contextlib.AsyncExitStack` (F401)
215-219: Import `traceback` at the module level instead of inside the exception handler.

```diff
+import traceback
+
 def main():
@@ ... @@
     except Exception as e:
         logger.error(f"Error: {e}")
-        import traceback
         traceback.print_exc()
```

src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (1)
1-17: Remove unused imports.

Several imported modules are not used in the code.

```diff
 import asyncio
 import logging
 import threading
 import inspect
-import json
-from typing import List, Dict, Any, Optional, Callable, Iterable
+from typing import Dict, Any, Optional
```

🧰 Tools
🪛 Ruff (0.8.2)

- 11-11: `json` imported but unused. Remove unused import: `json` (F401)
- 12-12: `typing.List` imported but unused. Remove unused import (F401)
- 12-12: `typing.Callable` imported but unused. Remove unused import (F401)
- 12-12: `typing.Iterable` imported but unused. Remove unused import (F401)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`src/praisonai-agents/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
- `src/praisonai-agents/mcp-sse-direct-agent.py` (1 hunks)
- `src/praisonai-agents/praisonaiagents/mcp/mcp.py` (7 hunks)
- `src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py` (1 hunks)
- `src/praisonai-agents/pyproject.toml` (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- src/praisonai-agents/pyproject.toml
- src/praisonai-agents/praisonaiagents/mcp/mcp.py
🧰 Additional context used
🧬 Code Graph Analysis (2)
src/praisonai-agents/mcp-sse-direct-agent.py (2)

- src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (8): `get_event_loop` (22-28), `SSEMCPTool` (31-99), `_async_call` (74-88), `to_openai_tool` (90-99), `SSEMCPClient` (102-184), `_initialize` (127-142), `run_event_loop` (133-135), `_async_initialize` (144-176)
- src/praisonai-agents/praisonaiagents/mcp/mcp.py (3): `call_tool` (74-94), `to_openai_tool` (358-401), `wrapper` (313-324)
src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (2)

- src/praisonai-agents/mcp-sse-direct-agent.py (8): `get_event_loop` (35-41), `SSEMCPTool` (43-111), `_async_call` (86-100), `to_openai_tool` (102-111), `SSEMCPClient` (114-177), `_initialize` (123-139), `run_event_loop` (129-131), `_async_initialize` (141-173)
- src/praisonai-agents/praisonaiagents/mcp/mcp.py (3): `call_tool` (74-94), `to_openai_tool` (358-401), `wrapper` (313-324)
🔇 Additional comments (8)
src/praisonai-agents/mcp-sse-direct-agent.py (3)
35-41: LGTM! Well-implemented event loop management.

The function correctly handles the case when the event loop is None or closed, providing a stable way to access the global event loop.
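For reference, the pattern being praised can be sketched as follows. This is an illustrative reconstruction, not the exact helper from the file:

```python
import asyncio

_event_loop = None

def get_event_loop():
    """Return the module-level loop, recreating it when absent or closed.

    Sketch of the None-or-closed guard; the real helper lives in
    mcp-sse-direct-agent.py / mcp_sse.py.
    """
    global _event_loop
    if _event_loop is None or _event_loop.is_closed():
        _event_loop = asyncio.new_event_loop()
    return _event_loop
```

Repeated calls return the same loop until it is closed, at which point a fresh one is created transparently.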
43-69: The class signature setup looks good.

The implementation properly sets up special attributes needed for tool integration with the Agent framework and builds a dynamic signature based on the input schema.
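The schema-to-signature step can be sketched independently. The helper below is hypothetical and assumes a JSON-Schema-style dict with `properties` and `required` keys, the shape MCP tools typically advertise:

```python
import inspect

def signature_from_schema(schema):
    """Build an inspect.Signature from a JSON-Schema-style input schema.

    Illustrative sketch only; the real tool class attaches the result
    as __signature__ so introspection and help() work.
    """
    required = set(schema.get("required", []))
    # Required parameters must precede optional ones in a Signature.
    names = sorted(schema.get("properties", {}), key=lambda n: n not in required)
    params = [
        inspect.Parameter(
            name,
            inspect.Parameter.POSITIONAL_OR_KEYWORD,
            default=inspect.Parameter.empty if name in required else None,
        )
        for name in names
    ]
    return inspect.Signature(params)
```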
86-101: LGTM! Well-structured asynchronous tool call implementation.

The asynchronous implementation handles exceptions appropriately and includes proper result extraction.
src/praisonai-agents/praisonaiagents/mcp/mcp_sse.py (5)
22-28: LGTM! Well-implemented event loop management.

The function correctly handles the case when the event loop is None or closed, providing a stable way to access the global event loop.
31-57: The class signature setup looks good.

The implementation properly sets up special attributes needed for tool integration with the Agent framework and builds a dynamic signature based on the input schema.
74-99: LGTM! Well-structured asynchronous tool call implementation.

The asynchronous implementation handles exceptions appropriately and includes proper result extraction.
105-126: Good improvements with debug parameter and logging levels!

The addition of a debug parameter to control logging levels is a nice improvement over the previous version.
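The debug-flag pattern is simple to reproduce. The `configure_logger` helper below is illustrative rather than the client's actual wiring:

```python
import logging

def configure_logger(name, debug=False):
    """Return a logger whose verbosity follows a debug flag.

    Sketch of the debug-parameter pattern the review notes; the real
    client takes debug in SSEMCPClient.__init__ and adjusts its logger.
    """
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG if debug else logging.INFO)
    return logger
```

Callers that pass `debug=True` see per-call tool traces; everyone else gets info-level output only.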
178-184: Well-implemented iterability and OpenAI tools conversion.

The methods to support iteration over tools and conversion to OpenAI format are well-implemented.
```python
def main():
    # Server URL
    server_url = "http://0.0.0.0:8080/sse"

    try:
        # Connect to the MCP server
        client = SSEMCPClient(server_url)

        if not client.tools:
            logger.error("No tools found on the server")
            return

        logger.info(f"Connected to server with {len(client.tools)} tools: {[tool.name for tool in client.tools]}")

        # Create OpenAI-compatible tool definitions
        openai_tools = [tool.to_openai_tool() for tool in client.tools]
        logger.debug(f"OpenAI tools: {json.dumps(openai_tools, indent=2)}")

        # Create an agent with the tools
        assistant_agent = Agent(
            instructions=system_prompt,
            llm="openai/gpt-4o-mini",
            tools=client.tools,
            verbose=True
        )

        # Start the agent with a query
        logger.info("Starting agent with query about weather in Paris")
        result = assistant_agent.chat(
            "Hello! Can you tell me what the weather is like in Paris today?",
            tools=openai_tools
        )

        logger.info(f"Agent response: {result}")
```
🛠️ Refactor suggestion
Avoid hardcoded server URL and improve error handling.
The hardcoded server URL may cause issues in different environments. Consider extracting it to a configuration variable or command-line argument.
```diff
 def main():
-    # Server URL
-    server_url = "http://0.0.0.0:8080/sse"
+    # Get server URL from environment variable or use default
+    server_url = os.environ.get("MCP_SERVER_URL", "http://0.0.0.0:8080/sse")
```

Also, consider adding a proper cleanup in a finally block:
```diff
     try:
         # Connect to the MCP server
         client = SSEMCPClient(server_url)
-
+        try:
         if not client.tools:
             logger.error("No tools found on the server")
             return
@@ ... @@
         logger.info(f"Agent response: {result}")
-
+        finally:
+            # Clean up resources
+            client.close()
     except Exception as e:
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def main():
    # Get server URL from environment variable or use default
    server_url = os.environ.get("MCP_SERVER_URL", "http://0.0.0.0:8080/sse")

    try:
        # Connect to the MCP server
        client = SSEMCPClient(server_url)
        try:
            if not client.tools:
                logger.error("No tools found on the server")
                return

            logger.info(f"Connected to server with {len(client.tools)} tools: {[tool.name for tool in client.tools]}")

            # Create OpenAI-compatible tool definitions
            openai_tools = [tool.to_openai_tool() for tool in client.tools]
            logger.debug(f"OpenAI tools: {json.dumps(openai_tools, indent=2)}")

            # Create an agent with the tools
            assistant_agent = Agent(
                instructions=system_prompt,
                llm="openai/gpt-4o-mini",
                tools=client.tools,
                verbose=True
            )

            # Start the agent with a query
            logger.info("Starting agent with query about weather in Paris")
            result = assistant_agent.chat(
                "Hello! Can you tell me what the weather is like in Paris today?",
                tools=openai_tools
            )

            logger.info(f"Agent response: {result}")
        finally:
            # Clean up resources
            client.close()
    except Exception as e:
        logger.exception("Unhandled exception in main")
```
Increment version to 0.0.77 in `pyproject.toml` and update `markitdow…

User description

…n` dependency to include all extras in `uv.lock` for enhanced functionality.

PR Type
enhancement, dependencies
Description

- Incremented project version to `0.0.76` in `pyproject.toml`.
- Updated `markitdown` dependency to include all extras (`markitdown[all]`).
- Improved dependency management for enhanced functionality.
Changes walkthrough 📝

pyproject.toml

Increment version and enhance `markitdown` dependency

`src/praisonai-agents/pyproject.toml`

- Version bumped from `0.0.75` to `0.0.76`.
- `markitdown` dependency changed to `markitdown[all]` for extended functionality.
Summary by CodeRabbit
- `.qodo` files.