
Update to version 2.1.1 #442

Merged
MervinPraison merged 1 commit into main from develop
Apr 1, 2025

Conversation

@MervinPraison (Owner) commented Apr 1, 2025

  • Incremented version number to 2.1.1 in pyproject.toml, praisonai.rb, and uv.lock to reflect recent changes.
  • Updated dependency version for praisonaiagents to 0.0.71 in pyproject.toml and uv.lock.
  • Modified Dockerfile and deployment scripts to install the updated version of praisonai.
  • Added new documentation files for various MCP integrations, including Anthropic, Gemini, Groq, Mistral, Ollama, OpenAI, and XAI.
  • Introduced example scripts for each MCP integration to facilitate user understanding and implementation.
  • Enhanced the LLM class to process tool results based on the original user query for improved response accuracy.
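The LLM-class change in the last bullet can be sketched in isolation. `build_follow_up_prompt` is a hypothetical standalone helper (in the real code this logic lives inline in `llm.py`), reconstructed from the diff excerpts quoted in the review comments further down:

```python
import json

def build_follow_up_prompt(messages, tool_result):
    # Extract the original user query from the message history
    original_query = ""
    for msg in messages:
        if msg.get("role") == "user":
            original_query = msg.get("content", "")
            break
    # Shorter follow-up prompt anchored on the original question
    return (
        f"Results:\n{json.dumps(tool_result, indent=2)}\n"
        f"Provide Answer to this Original Question based on the above results: "
        f"'{original_query}'"
    )

messages = [{"role": "user", "content": "What is the Stock Price of Apple?"}]
prompt = build_follow_up_prompt(messages, {"symbol": "AAPL", "price": "217.90 USD"})
print(prompt)
```

Tying the follow-up prompt to the original query (rather than only to the tool output) is what the PR description means by "improved response accuracy".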

Summary by CodeRabbit

  • New Features

    • Introduced comprehensive integration guides for several AI models (e.g., Anthropic, Gemini, Groq, Mistral, Ollama, OpenAI, OpenRouter, xAI) with quick start instructions and visual flowcharts.
    • Added new example scripts demonstrating both apartment search automation and a stock price assistant.
  • Documentation

    • Expanded user resources with updated guides and pages, offering step-by-step setups for model integrations.
  • Chores

    • Updated project and dependency versions (from 2.1.0 to 2.1.1) along with refined deployment configurations.

MervinPraison merged commit 54fdf5a into main on Apr 1, 2025
3 of 6 checks passed
coderabbitai bot (Contributor) commented Apr 1, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

This pull request primarily upgrades the PraisonAI package version from 2.1.0 to 2.1.1 across various files. In addition to updating Dockerfile and deployment scripts, the change introduces several new MCP integration documentation files covering multiple AI models (Anthropic, Gemini, Groq, Mistral, Ollama, OpenAI, OpenRouter, and xAI). New example scripts and a stock price assistant agent implementation are also added. Minor adjustments in internal prompt handling and project configuration updates are included throughout the codebase.

Changes

| File(s) | Change Summary |
| --- | --- |
| `docker/Dockerfile`, `docs/api/praisonai/deploy.html`, `praisonai/deploy.py` | Updated package installation commands to upgrade `praisonai` from 2.1.0 to 2.1.1. |
| `docs/mcp/anthropic.mdx`, `docs/mcp/gemini.mdx`, `docs/mcp/groq.mdx`, `docs/mcp/mistral.mdx`, `docs/mcp/ollama-python.mdx`, `docs/mcp/ollama.mdx`, `docs/mcp/openai.mdx`, `docs/mcp/openrouter.mdx`, `docs/mcp/xai.mdx` | Added new documentation files for MCP integrations with various AI models, including setup instructions, quick starts, and flowcharts. |
| `docs/mint.json` | Added new pages under the "MCP" group to index the new integration guides. |
| `examples/mcp/ollama-python.py`, `examples/mcp/openrouter-mcp.py`, `examples/mcp/xai-mcp.py` | Introduced example scripts: renamed a variable in `ollama-python.py` (from `search_agent` to `stock_agent`) and added new examples demonstrating MCP agent configurations. |
| `praisonai.rb` | Updated the package URL to reference version 2.1.1 of PraisonAI. |
| `pyproject.toml`, `src/praisonai-agents/pyproject.toml` | Bumped the project version and updated the dependency version for `praisonaiagents` (from >=0.0.70 to >=0.0.71). |
| `src/praisonai-agents/mcp-ollama-python.py` | Added a new file that initializes a stock price assistant agent using MCP and the Ollama/llama3.2 language model. |
| `src/praisonai-agents/llm/llm.py` | Revised follow-up prompt construction in LLM response methods to directly use the original user query alongside the tool call results. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant U as User
    participant A as Agent
    participant L as LLM
    participant T as MCP Tool

    U->>A: Submit query (e.g. "What is the Stock Price of Apple?")
    A->>L: Send initial prompt with query
    L-->>A: Return response with tool call info
    A->>T: Execute tool command based on response
    T-->>A: Return execution result
    A->>L: Request detailed answer with result context
    L-->>A: Return final answer
    A->>U: Deliver answer
```
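The same flow can be walked through with plain Python stubs. This is an illustrative simulation only: `llm`, `mcp_tool`, and `agent` below are stand-ins for the real Agent/LLM/MCP classes, and the stock price is a made-up value.

```python
def llm(prompt, tool_result=None):
    # Stub LLM: the first call returns a tool-call request,
    # the follow-up call (with the tool result attached) answers.
    if tool_result is None:
        return {"tool": "get_stock_price", "args": {"symbol": "AAPL"}}
    return f"Apple's stock price is {tool_result['price']}"

def mcp_tool(name, args):
    # Stub MCP tool execution (a real MCP server would be called here).
    return {"symbol": args["symbol"], "price": "217.90 USD"}

def agent(query):
    response = llm(query)                                   # initial prompt with query
    result = mcp_tool(response["tool"], response["args"])   # execute tool command
    return llm(query, tool_result=result)                   # follow-up with result context

print(agent("What is the Stock Price of Apple?"))
# → Apple's stock price is 217.90 USD
```

Note that the follow-up call passes the original query again, mirroring the `llm.py` change in this PR.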


Poem

I'm a rabbit, hopping with delight,
Upgrading versions and coding through the night,
New guides and agents make the code shine bright,
With each commit, I munch on carrots in flight,
Celebrating changes with leaps so light!
🐇💻 Hop on, code friends, into the digital light!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9ba17c6 and e3d4b78.

⛔ Files ignored due to path filters (3)
  • poetry.lock is excluded by !**/*.lock
  • src/praisonai-agents/uv.lock is excluded by !**/*.lock
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (21)
  • docker/Dockerfile (1 hunks)
  • docs/api/praisonai/deploy.html (1 hunks)
  • docs/mcp/anthropic.mdx (1 hunks)
  • docs/mcp/gemini.mdx (1 hunks)
  • docs/mcp/groq.mdx (1 hunks)
  • docs/mcp/mistral.mdx (1 hunks)
  • docs/mcp/ollama-python.mdx (1 hunks)
  • docs/mcp/ollama.mdx (1 hunks)
  • docs/mcp/openai.mdx (1 hunks)
  • docs/mcp/openrouter.mdx (1 hunks)
  • docs/mcp/xai.mdx (1 hunks)
  • docs/mint.json (1 hunks)
  • examples/mcp/ollama-python.py (1 hunks)
  • examples/mcp/openrouter-mcp.py (1 hunks)
  • examples/mcp/xai-mcp.py (1 hunks)
  • praisonai.rb (1 hunks)
  • praisonai/deploy.py (1 hunks)
  • pyproject.toml (4 hunks)
  • src/praisonai-agents/mcp-ollama-python.py (1 hunks)
  • src/praisonai-agents/praisonaiagents/llm/llm.py (2 hunks)
  • src/praisonai-agents/pyproject.toml (1 hunks)

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai plan to trigger planning for file edits and PR creation.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
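As a minimal illustration of the points above (the keys shown are drawn from CodeRabbit's documented options, but should be validated against the schema referenced in the last bullet), a `.coderabbit.yaml` might look like:

```yaml
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
# Illustrative sketch only -- validate keys against the schema above.
language: "en-US"
reviews:
  profile: "chill"          # matches the "CHILL" review profile shown in this PR
  high_level_summary: true  # the summary placed in the PR description
  poem: true                # the rabbit poem in the walkthrough comment
```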

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

gemini-code-assist bot (Contributor) left a comment


Hello @MervinPraison, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

This pull request updates the PraisonAI framework to version 2.1.1. The changes include incrementing the version number in multiple files, updating the praisonaiagents dependency, modifying the Dockerfile and deployment scripts, adding new documentation files and example scripts for various MCP integrations (Anthropic, Gemini, Groq, Mistral, Ollama, OpenAI, and XAI), and enhancing the LLM class to process tool results based on the original user query. The goal is to reflect recent changes, improve user understanding, and enhance response accuracy.

Highlights

  • Version Update: The core PraisonAI library and related files are updated to version 2.1.1.
  • Dependency Update: The praisonaiagents dependency is updated to version 0.0.71.
  • MCP Integrations: New documentation and example scripts are added for Anthropic, Gemini, Groq, Mistral, Ollama, OpenAI, and XAI MCP integrations, providing users with quick start guides and feature overviews.
  • LLM Enhancement: The LLM class is enhanced to process tool results based on the original user query, improving response accuracy.

Changelog

Click here to see the changelog
  • docker/Dockerfile
    • Updated praisonai version to 2.1.1 in the Dockerfile.
  • docs/api/praisonai/deploy.html
    • Updated praisonai version to 2.1.1 in the deployment documentation.
  • docs/mcp/anthropic.mdx
    • Added documentation for Anthropic MCP integration, including a quick start guide and features overview.
  • docs/mcp/gemini.mdx
    • Added documentation for Gemini MCP integration, including a quick start guide and features overview.
  • docs/mcp/groq.mdx
    • Added documentation for Groq MCP integration, including a quick start guide and features overview.
  • docs/mcp/mistral.mdx
    • Added documentation for Mistral MCP integration, including a quick start guide and features overview.
  • docs/mcp/ollama-python.mdx
    • Added documentation for Ollama Python MCP integration, including a quick start guide and features overview.
  • docs/mcp/ollama.mdx
    • Added documentation for Ollama MCP integration, including a quick start guide and features overview.
  • docs/mcp/openai.mdx
    • Added documentation for OpenAI MCP integration, including a quick start guide and features overview.
  • docs/mcp/openrouter.mdx
    • Added documentation for OpenRouter MCP integration, including a quick start guide and features overview.
  • docs/mcp/xai.mdx
    • Added documentation for xAI MCP integration, including a quick start guide and features overview.
  • docs/mint.json
    • Added new MCP documentation files to the documentation index.
  • examples/mcp/ollama-python.py
    • Updated example script for Ollama Python MCP integration to use stock price assistant.
  • examples/mcp/openrouter-mcp.py
    • Added example script for OpenRouter MCP integration.
  • examples/mcp/xai-mcp.py
    • Added example script for xAI MCP integration.
  • praisonai.rb
    • Updated Praisonai formula version to 2.1.1.
  • praisonai/deploy.py
    • Updated praisonai version to 2.1.1 in the deployment script.
  • pyproject.toml
    • Updated PraisonAI version to 2.1.1.
    • Updated praisonaiagents dependency to version 0.0.71.
  • src/praisonai-agents/mcp-ollama-python.py
    • Added example script for Ollama Python MCP integration to use stock price assistant.
  • src/praisonai-agents/praisonaiagents/llm/llm.py
    • Enhanced LLM class to process tool results based on the original user query for improved response accuracy.
  • src/praisonai-agents/pyproject.toml
    • Updated praisonaiagents version to 0.0.71.
  • src/praisonai-agents/uv.lock
    • Updated praisonaiagents version to 0.0.71.
  • uv.lock
    • Updated praisonai version to 2.1.1.
    • Updated praisonaiagents version to 0.0.71.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


A version ascends,
New docs and agents blend,
LLM's insight grows,
As accuracy shows,
Progress without end.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

netlify bot commented Apr 1, 2025

Deploy Preview for praisonai ready!

| Name | Link |
| --- | --- |
| 🔨 Latest commit | e3d4b78 |
| 🔍 Latest deploy log | https://app.netlify.com/sites/praisonai/deploys/67ec58fbc2a9ef0008466453 |
| 😎 Deploy Preview | https://deploy-preview-442--praisonai.netlify.app |

gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request updates the project to version 2.1.1, updates a dependency, modifies Dockerfile and deployment scripts, adds new documentation files for various MCP integrations, introduces example scripts, and enhances the LLM class. Overall, the changes seem well-organized and contribute to the project's functionality and usability.

Summary of Findings

  • Missing API Key Environment Variables in Groq and XAI Integrations: The Groq and XAI MCP integration guides lack explicit instructions on setting the API key as an environment variable, unlike other integrations. This inconsistency could lead to confusion for users trying to set up these integrations.
  • Inconsistent Dependency Installation Instructions: The OpenAI and OpenRouter MCP integration guides use pip install praisonaiagents, while other guides use pip install "praisonaiagents[llm]". This inconsistency could lead to confusion for users installing dependencies.
  • Hardcoded Paths in Ollama Python Example: The Ollama Python example script contains hardcoded file paths, which need to be replaced by the user. While there is a comment noting this, it's a potential point of failure if users miss the comment or don't understand how to modify the paths correctly.

Merge Readiness

The pull request includes several enhancements and updates, including documentation and dependency updates. However, there are a few inconsistencies and potential issues that should be addressed before merging to ensure a smooth user experience. Specifically, the missing API key instructions for Groq and XAI, the inconsistent dependency installation instructions, and the hardcoded paths in the Ollama Python example should be reviewed and corrected. I am unable to directly approve this pull request, and other reviewers should review and approve this code before merging.

Comment on lines +39 to +42

```python
instructions="""You help book apartments on Airbnb.""",
llm="groq/llama-3.2-90b-vision-preview",
tools=MCP("npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt")
)
```
Severity: high

The Groq API key is not being passed to the MCP server. This is inconsistent with other integrations. You should pass the Groq API key as an environment variable to the MCP server.

```python
tools=MCP(
    command="npx",
    args=["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"],
    env={"GROQ_API_KEY": os.environ.get("GROQ_API_KEY")}
)
```

Comment on lines +41 to +42

```python
tools=MCP("npx -y @openbnb/mcp-server-airbnb --ignore-robots-txt")
)
```
Severity: high

The XAI API key is not being passed to the MCP server. This is inconsistent with other integrations. You should pass the XAI API key as an environment variable to the MCP server.

```python
tools=MCP(
    command="npx",
    args=["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"],
    env={"XAI_API_KEY": os.environ.get("XAI_API_KEY")}
)
```

Comment on lines +523 to +524

```python
# Create a prompt that asks the model to process the tool results based on original context
# Extract the original user query from messages
```
Severity: medium

Consider extracting the prompt construction logic into a separate function for better readability and maintainability. This would also allow for easier testing and modification of the prompt.

Suggested change

```python
# Extract the original user query from messages
original_query = ""
for msg in messages:
    if msg.get("role") == "user":
        original_query = msg.get("content", "")
        break
```

Comment on lines +531 to +532

```python
# Create a shorter follow-up prompt
follow_up_prompt = f"Results:\n{json.dumps(tool_result, indent=2)}\nProvide Answer to this Original Question based on the above results: '{original_query}'"
```
Severity: medium

The follow-up prompt could be improved by including more context about the original task. This might help the model generate a more relevant and accurate response. Consider adding a brief summary of the user's goal or instructions to the prompt.

```python
follow_up_prompt = f"Original Question: {original_query}\nResults:\n{json.dumps(tool_result, indent=2)}\nProvide Answer to this Original Question based on the above results: '{original_query}'"
```

Comment on lines +1102 to +1103

```python
# Create a prompt that asks the model to process the tool results based on original context
# Extract the original user query from messages
```
Severity: medium

Consider extracting the prompt construction logic into a separate function for better readability and maintainability. This would also allow for easier testing and modification of the prompt.

```python
# Extract the original user query from messages
original_query = ""
for msg in messages:
    if msg.get("role") == "user":
        original_query = msg.get("content", "")
        break
```

Comment on lines +1110 to +1111

```python
# Create a shorter follow-up prompt
follow_up_prompt = f"Results:\n{json.dumps(tool_result, indent=2)}\nProvide Answer to this Original Question based on the above results: '{original_query}'"
```
Severity: medium

The follow-up prompt could be improved by including more context about the original task. This might help the model generate a more relevant and accurate response. Consider adding a brief summary of the user's goal or instructions to the prompt.

```python
follow_up_prompt = f"Original Question: {original_query}\nResults:\n{json.dumps(tool_result, indent=2)}\nProvide Answer to this Original Question based on the above results: '{original_query}'"
```

Comment on lines +9 to +10

```python
# NOTE: Python Path replace with yours: /Users/praison/miniconda3/envs/mcp/bin/python
# NOTE: app.py file path, replace it with yours: /Users/praison/stockprice/app.py
```
Severity: medium

While the note is helpful, consider using os.path.expanduser('~') to make the path more portable across different user environments. Also, consider using os.path.join for constructing the path to ensure cross-platform compatibility.

Suggested change

```python
import os
# NOTE: Python Path replace with yours: os.path.join(os.path.expanduser('~'), 'miniconda3', 'envs', 'mcp', 'bin', 'python')
# NOTE: app.py file path, replace it with yours: os.path.join(os.path.expanduser('~'), 'stockprice', 'app.py')
```

Comment on lines +6 to +7

```python
tools=MCP("/Users/praison/miniconda3/envs/mcp/bin/python /Users/praison/stockprice/app.py")
)
```
Severity: medium

Consider using os.path.expanduser('~') to make the path more portable across different user environments. Also, consider using os.path.join for constructing the path to ensure cross-platform compatibility.

```python
import os

tools=MCP(os.path.join(os.path.expanduser('~'), 'miniconda3', 'envs', 'mcp', 'bin', 'python'), os.path.join(os.path.expanduser('~'), 'stockprice', 'app.py'))
```

Comment on lines +58 to +59

```zsh
pip install praisonaiagents
```
Severity: medium

The installation instructions are inconsistent with other MCP integration guides. Most guides use pip install "praisonaiagents[llm]", while this one uses pip install praisonaiagents. Consider standardizing the installation instructions across all MCP guides.

```zsh
pip install "praisonaiagents[llm]"
```

Comment on lines +51 to +52
pip install praisonaiagents
```
Severity: medium

The installation instructions are inconsistent with other MCP integration guides. Most guides use pip install "praisonaiagents[llm]", while this one uses pip install praisonaiagents. Consider standardizing the installation instructions across all MCP guides.

```zsh
pip install "praisonaiagents[llm]"
```

@coderabbitai coderabbitai bot mentioned this pull request May 14, 2025
shaneholloman pushed a commit to shaneholloman/praisonai that referenced this pull request Feb 4, 2026
