Increment version to 0.0.84 in pyproject.toml and uv.lock, update… (#485)
Changes from all commits: `a2bec46`, `42d7fb4`, `468e36f`
```diff
@@ -1,6 +1,6 @@
 FROM python:3.11-slim
 WORKDIR /app
 COPY . .
-RUN pip install flask praisonai==2.2.0 gunicorn markdown
+RUN pip install flask praisonai==2.2.1 gunicorn markdown
 EXPOSE 8080
 CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]
```
```diff
@@ -1,9 +1,8 @@
 from praisonaiagents import Agent, MCP

-qa_agent = Agent(
-    instructions="""You are a Question Answering Agent.""",
-    llm="openai/gpt-4o-mini",
-    tools=MCP("http://localhost:8080/agents/sse")
+tweet_agent = Agent(
+    instructions="""You are a Tweet Formatter Agent.""",
+    tools=MCP("http://localhost:8080/sse")
 )

-qa_agent.start("AI in 2025")
+tweet_agent.start("AI in Healthcare")
```
```diff
@@ -2,8 +2,7 @@

 search_agent = Agent(
     instructions="""You help book apartments on Airbnb.""",
-    llm="openai/gpt-4o-mini",
     tools=MCP("npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt")
 )
```
**Contributor comment on lines 3 to 6:** Similar to the change in the other example, is this shift in the example's LLM interaction pathway and the specific model used intentional? This modification could impact the example's behavior; clarifying the rationale behind this change would be helpful. If the goal is to showcase a simpler default OpenAI setup, consider adding comments to guide users on more advanced configurations using the `LLM` class.
```diff
-search_agent.start("I want to book an apartment in Paris for 2 nights. 03/28 - 03/30 for 2 adults")
+search_agent.start("Search apartment in Paris for 2 nights. 07/28 - 07/30 for 2 adults")
```
```diff
@@ -16,13 +16,14 @@
 class MCPToolRunner(threading.Thread):
     """A dedicated thread for running MCP operations."""

-    def __init__(self, server_params):
+    def __init__(self, server_params, timeout=60):
         super().__init__(daemon=True)
         self.server_params = server_params
         self.queue = queue.Queue()
         self.result_queue = queue.Queue()
         self.initialized = threading.Event()
         self.tools = []
+        self.timeout = timeout
         self.start()

     def run(self):
@@ -74,9 +75,9 @@ async def _run_async(self):
     def call_tool(self, tool_name, arguments):
         """Call an MCP tool and wait for the result."""
         if not self.initialized.is_set():
-            self.initialized.wait(timeout=30)
+            self.initialized.wait(timeout=self.timeout)
             if not self.initialized.is_set():
-                return "Error: MCP initialization timed out"
+                return f"Error: MCP initialization timed out after {self.timeout} seconds"
```
```diff

         # Put request in queue
         self.queue.put((tool_name, arguments))
```
@@ -189,7 +190,7 @@ def __init__(self, command_or_string=None, args=None, *, command=None, timeout=6 | |
| if isinstance(command_or_string, str) and re.match(r'^https?://', command_or_string): | ||
| # Import the SSE client implementation | ||
| from .mcp_sse import SSEMCPClient | ||
| self.sse_client = SSEMCPClient(command_or_string, debug=debug) | ||
| self.sse_client = SSEMCPClient(command_or_string, debug=debug, timeout=timeout) | ||
| self._tools = list(self.sse_client.tools) | ||
| self.is_sse = True | ||
| self.is_npx = False | ||
|
|
@@ -216,11 +217,11 @@ def __init__(self, command_or_string=None, args=None, *, command=None, timeout=6 | |
| args=arguments, | ||
| **kwargs | ||
| ) | ||
| self.runner = MCPToolRunner(self.server_params) | ||
| self.runner = MCPToolRunner(self.server_params, timeout) | ||
|
|
||
| # Wait for initialization | ||
| if not self.runner.initialized.wait(timeout=30): | ||
| print("Warning: MCP initialization timed out") | ||
| if not self.runner.initialized.wait(timeout=self.timeout): | ||
| print(f"Warning: MCP initialization timed out after {self.timeout} seconds") | ||
|
|
||
| # Automatically detect if this is an NPX command | ||
| self.is_npx = cmd == 'npx' or (isinstance(cmd, str) and os.path.basename(cmd) == 'npx') | ||
|
|
||
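The core pattern in this diff — a worker thread that signals readiness via a `threading.Event`, with the caller-supplied `timeout` replacing the hard-coded 30 seconds — can be sketched in isolation. This is a minimal illustration of the pattern, not the PR's actual class (the `Runner` name and the sleep stand-in for MCP session setup are assumptions):

```python
import threading
import time

class Runner(threading.Thread):
    """Minimal sketch: a daemon worker thread whose readiness is
    awaited with a caller-configurable timeout."""

    def __init__(self, timeout=60):
        super().__init__(daemon=True)
        self.timeout = timeout
        self.initialized = threading.Event()
        self.start()

    def run(self):
        time.sleep(0.1)          # stand-in for slow initialization work
        self.initialized.set()   # signal that setup finished

    def call(self):
        # Event.wait returns False if the timeout elapsed before set()
        if not self.initialized.wait(timeout=self.timeout):
            return f"Error: initialization timed out after {self.timeout} seconds"
        return "ok"

print(Runner(timeout=5).call())  # prints "ok"
```

Because `Event.wait` returns whether the event was set, the second `is_set()` re-check in the original code can be folded into the return value, as above.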
```diff
@@ -1,4 +1,4 @@
 from praisonaiagents import Agent

-agent = Agent(name="TweetAgent", instructions="Create a Tweet based on the topic provided")
+agent = Agent(instructions="Create a Tweet")
 agent.launch(port=8080, protocol="mcp")
```
Some generated files are not rendered by default.
**Review comment:**
The `llm="openai/gpt-4o-mini"` parameter has been removed, and the agent name has changed. Previously, specifying the LLM this way (with a `/`) would engage the `praisonaiagents.llm.LLM` class, which serves as a wrapper around LiteLLM, for LLM interactions. With this parameter's removal, the `Agent` now defaults to `os.getenv('OPENAI_MODEL_NAME', 'gpt-4o')` and interacts with the OpenAI API directly via the `openai` client, bypassing the `LLM` wrapper.

Could you clarify whether this change in the example's LLM interaction mechanism (from the `LLM` wrapper/LiteLLM to the direct `openai` client) and model (from `gpt-4o-mini` to `gpt-4o` or an environment variable) is intentional? Also, why was the agent name changed from `qa_agent` to `tweet_agent`?

This change could mean users no longer see how to use the `LLM` wrapper for non-OpenAI models or advanced LiteLLM configurations through these examples.

If the intention is to simplify the examples to use a direct OpenAI default, adding a comment in the example to explain this and to guide users on how to use the `LLM` class for other providers would be beneficial. Alternatively, if `gpt-4o-mini` via the `LLM` wrapper was specifically chosen for this example previously, it might be worth considering whether that configuration should be retained or updated to reflect the new preferred method of specification.
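The selection behavior the reviewer describes can be illustrated in isolation. This is a hypothetical helper (`resolve_model` does not exist in the library) that mimics the routing rule as stated in the comment: a provider-prefixed string (containing `/`) routes through the LiteLLM wrapper, while omitting `llm` falls back to `OPENAI_MODEL_NAME` or `gpt-4o` via the direct OpenAI client:

```python
import os

def resolve_model(llm=None):
    """Hypothetical sketch of the routing described in the review comment:
    explicit provider-prefixed strings use the LLM/LiteLLM wrapper,
    otherwise fall back to the environment or the gpt-4o default."""
    if llm and "/" in llm:
        return ("litellm-wrapper", llm)
    return ("openai-client", os.getenv("OPENAI_MODEL_NAME", "gpt-4o"))

print(resolve_model("openai/gpt-4o-mini"))  # wrapper path, explicit model
print(resolve_model())                      # direct client path
```

Under this reading, dropping `llm="openai/gpt-4o-mini"` from the examples changes both the code path and, absent the environment variable, the model actually invoked.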