Update PraisonAI version to 2.2.28 across Dockerfiles and related files #604

Merged: MervinPraison merged 1 commit into main from develop on Jun 4, 2025
Conversation

MervinPraison (Owner) commented on Jun 4, 2025

User description

  • Updated the version of PraisonAI from 2.2.27 to 2.2.28 in Dockerfiles (Dockerfile, Dockerfile.chat, Dockerfile.dev, Dockerfile.ui).
  • Modified the version in the README.md and pyproject.toml files to reflect the new version.
  • Adjusted the deploy.py script to install the updated version of PraisonAI.
  • Ensured consistency across all relevant files for seamless integration.

PR Type

Enhancement, Tests


Description

  • Add comprehensive tests for LLM and agent tool-calling logic

    • New scripts compare agent.py and llm.py tool-calling paths
    • Tests for sequential tool calling and self-reflection
  • Refactor and enhance sequential tool-calling logic in LLM

    • Implements robust iteration and tool call handling
    • Aligns LLM tool-calling with agent.py logic
  • Update PraisonAI and praisonaiagents versions to latest (2.2.28, 0.0.99)

    • Update Dockerfiles, pyproject.toml, and documentation
  • Improve output formatting and verbose/debug display in LLM


Changes walkthrough 📝

Enhancement (10 files)
  • llm.py: Refactor sequential tool-calling and self-reflection logic in LLM (+248/−148)
  • sequence.py: Change LLM model to gpt-4o-mini for agent test (+1/−1)
  • Dockerfile: Update PraisonAI version to 2.2.28 (+1/−1)
  • Dockerfile.chat: Update PraisonAI version to 2.2.28 (+1/−1)
  • Dockerfile.dev: Update PraisonAI version to 2.2.28 (+1/−1)
  • Dockerfile.ui: Update PraisonAI version to 2.2.28 (+1/−1)
  • pyproject.toml: Bump PraisonAI and praisonaiagents dependency versions (+4/−4)
  • pyproject.toml: Bump praisonaiagents version to 0.0.99 (+1/−1)
  • praisonai.rb: Update Homebrew formula to PraisonAI 2.2.28 (+2/−2)
  • deploy.py: Update Dockerfile generation to PraisonAI 2.2.28 (+1/−1)

Tests (5 files)
  • debug_comparison.py: Add script to compare agent.py and llm.py tool-calling (+96/−0)
  • test_both_formats.py: Add test for both agent.py and llm.py LLM formats (+55/−0)
  • test_sequential_fix.py: Add test for sequential tool-calling in both LLM paths (+56/−0)
  • test_self_reflection.py: Add test for self-reflection in Agent and LLM (+61/−0)
  • sequence_llm.py: Add sequence test using openai/gpt-4o-mini LLM path (+11/−0)

Documentation (1 file)
  • README.md: Update PraisonAI version references to 2.2.28 (+2/−2)

Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.
  • Summary by CodeRabbit

    • New Features

      • Added new test scripts to validate agent initialization, tool integration, sequential reasoning, and self-reflection features.
    • Bug Fixes

      • Improved agent logic to prevent infinite loops and enhance tool calling and self-reflection reliability.
    • Chores

      • Updated PraisonAI and PraisonAIAgents package versions and related dependencies across configuration files and documentation.
      • Updated Docker and deployment files to use the latest package versions.

    MervinPraison merged commit ec18d4a into main on Jun 4, 2025
    8 of 13 checks passed
    coderabbitai bot (Contributor) commented on Jun 4, 2025

    Caution

    Review failed

    The pull request is closed.

    Walkthrough

    This update increments the PraisonAI and praisonaiagents package versions across Dockerfiles, deployment scripts, and metadata files. It introduces new test scripts for agent and LLM usage patterns, self-reflection, and sequential tool calling. The LLM response generation logic is refactored for improved control flow, bounded iteration, and error handling.

    Changes

    • docker/Dockerfile, docker/Dockerfile.chat, docker/Dockerfile.dev, docker/Dockerfile.ui: Updated the praisonai package version from 2.2.27 to 2.2.28 in pip install commands.
    • docker/README.md: Updated documentation to reflect the praisonai version bump from 2.2.27 to 2.2.28.
    • src/praisonai-agents/praisonaiagents/llm/llm.py: Refactored the get_response method: introduced bounded iteration and improved tool call and self-reflection handling.
    • src/praisonai-agents/sequence.py: Changed the LLM identifier from "openai/gpt-4o-mini" to "gpt-4o-mini".
    • src/praisonai-agents/sequence_llm.py: Added a script to define and run a sequential agent using "openai/gpt-4o-mini" and MCP tool integration.
    • src/praisonai-agents/debug_comparison.py, src/praisonai-agents/test_both_formats.py, src/praisonai-agents/test_sequential_fix.py: Added scripts to test agent usage with different LLM specification formats and MCP tool integration.
    • src/praisonai-agents/test_self_reflection.py: Added a script to test self-reflection in the Agent and LLM classes with and without the reflection flag.
    • src/praisonai-agents/pyproject.toml: Bumped the praisonaiagents version from 0.0.98 to 0.0.99.
    • src/praisonai/pyproject.toml: Bumped the PraisonAI version from 2.2.27 to 2.2.28; updated the praisonaiagents dependency to >=0.0.99.
    • src/praisonai/praisonai.rb: Updated the Homebrew formula to reference praisonai version 2.2.28 and the corresponding SHA256.
    • src/praisonai/praisonai/deploy.py: Updated Dockerfile creation to pin praisonai version 2.2.28.

    Sequence Diagram(s)

    sequenceDiagram
        participant User
        participant Agent
        participant LLM
        participant Tool (MCP)
        User->>Agent: Submit prompt
        Agent->>LLM: Generate response (with/without tool calls)
        alt Tool call needed
            LLM->>Tool (MCP): Execute tool call
            Tool (MCP)-->>LLM: Return tool result
            LLM->>Agent: Append tool result, possibly continue
        end
        alt Self-reflection enabled
            Agent->>LLM: Request reflection
            LLM-->>Agent: Return reflection result
            Agent->>LLM: Regenerate response if needed
        end
        Agent-->>User: Return final response
    


    Poem

    🐇
    A hop, a skip, a version bump,
    New numbers thump-thump-thump!
    Agents reflect and tools now think,
    With Docker fresh and in the pink.
    Tests abound, all systems go—
    The code’s in step, now watch it flow!


    📜 Recent review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between 3588f1c and 7b2e2b6.

    ⛔ Files ignored due to path filters (2)
    • src/praisonai-agents/uv.lock is excluded by !**/*.lock
    • src/praisonai/uv.lock is excluded by !**/*.lock
    📒 Files selected for processing (16)
    • docker/Dockerfile (1 hunks)
    • docker/Dockerfile.chat (1 hunks)
    • docker/Dockerfile.dev (1 hunks)
    • docker/Dockerfile.ui (1 hunks)
    • docker/README.md (2 hunks)
    • src/praisonai-agents/debug_comparison.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/llm/llm.py (8 hunks)
    • src/praisonai-agents/pyproject.toml (1 hunks)
    • src/praisonai-agents/sequence.py (1 hunks)
    • src/praisonai-agents/sequence_llm.py (1 hunks)
    • src/praisonai-agents/test_both_formats.py (1 hunks)
    • src/praisonai-agents/test_self_reflection.py (1 hunks)
    • src/praisonai-agents/test_sequential_fix.py (1 hunks)
    • src/praisonai/praisonai.rb (1 hunks)
    • src/praisonai/praisonai/deploy.py (1 hunks)
    • src/praisonai/pyproject.toml (4 hunks)

    gemini-code-assist bot (Contributor) left a comment

    Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

    Summary of Changes

    Hello team, Gemini here with a summary of this pull request. This PR primarily focuses on updating the PraisonAI version across the project to 2.2.28. This includes updating the version specified in various Dockerfiles (Dockerfile, Dockerfile.chat, Dockerfile.dev, Dockerfile.ui), the README.md documentation, the Homebrew formula (praisonai.rb), and the internal deployment script (deploy.py). Additionally, the praisonaiagents dependency within the main PraisonAI project has been updated to >=0.0.99, corresponding to a version bump in the praisonai-agents sub-project from 0.0.98 to 0.0.99. The praisonai-agents changes also include significant refactoring in the LLM class's get_response method to improve sequential tool calling logic and the addition of several new test scripts (debug_comparison.py, test_both_formats.py, test_self_reflection.py, test_sequential_fix.py, sequence_llm.py) to verify different LLM configurations, sequential tool execution, and self-reflection functionality.

    Highlights

    • PraisonAI Agents Update: Updated the praisonaiagents dependency to >=0.0.99 and the internal praisonai-agents version to 0.0.99.
    • LLM Tool Calling Refactor: Refactored the get_response method in praisonaiagents/llm/llm.py to implement a sequential tool calling loop, improving handling of tools like sequentialthinking.
    • New Test Scripts: Added multiple new test scripts (debug_comparison.py, test_both_formats.py, test_self_reflection.py, test_sequential_fix.py, sequence_llm.py) to validate different LLM configurations, sequential tool use, and self-reflection.

    Changelog

    • docker/Dockerfile
      • Updated praisonai dependency version to >=2.2.28.
    • docker/Dockerfile.chat
      • Updated praisonai dependency version to >=2.2.28.
    • docker/Dockerfile.dev
      • Updated praisonai dependency version to >=2.2.28.
    • docker/Dockerfile.ui
      • Updated praisonai dependency version to >=2.2.28.
    • docker/README.md
      • Updated PraisonAI version mention in 'Package Versions' section to >=2.2.28.
      • Updated pinned version example in 'Version Pinning' section to ==2.2.28.
    • src/praisonai-agents/debug_comparison.py
      • Added new script to compare Agent behavior using 'gpt-4o-mini' (agent.py path) vs 'openai/gpt-4o-mini' (llm.py path) with sequential thinking tool.
    • src/praisonai-agents/llm/llm.py
      • Refactored get_response method to use a sequential tool calling loop with a maximum iteration limit (10).
      • Adjusted display logic to show initial prompt processing once.
      • Updated time tracking variables for display updates.
      • Ensured tool calls are checked after the initial response generation.
      • Implemented logic to continue the tool calling loop if the 'sequentialthinking' tool indicates nextThoughtNeeded.
      • Added a step to generate a final response after tool calls are completed.
      • Separated the self-reflection logic into its own distinct loop.
    • src/praisonai-agents/pyproject.toml
      • Updated praisonaiagents version from 0.0.98 to 0.0.99.
    • src/praisonai-agents/sequence.py
      • Changed the llm parameter for the Agent from 'openai/gpt-4o-mini' to 'gpt-4o-mini'.
    • src/praisonai-agents/sequence_llm.py
      • Added new script demonstrating Agent use with 'openai/gpt-4o-mini' and sequential thinking tool.
    • src/praisonai-agents/test_both_formats.py
      • Added new script to test Agent behavior with both 'gpt-4o-mini' and 'openai/gpt-4o-mini' configurations.
    • src/praisonai-agents/test_self_reflection.py
      • Added new script to test self-reflection functionality in both the Agent and LLM classes.
    • src/praisonai-agents/test_sequential_fix.py
      • Added new script to test sequential tool calling fix with both 'gpt-4o-mini' and 'openai/gpt-4o-mini' configurations.
    • src/praisonai/praisonai.rb
      • Updated Homebrew formula URL and sha256 hash to use v2.2.28.
    • src/praisonai/praisonai/deploy.py
      • Updated pinned praisonai version in the generated Dockerfile from ==2.2.27 to ==2.2.28.
    • src/praisonai/pyproject.toml
      • Updated PraisonAI project version from 2.2.27 to 2.2.28.
      • Updated praisonaiagents dependency version from >=0.0.97 to >=0.0.99.
      • Updated Poetry project version from 2.2.27 to 2.2.28.
      • Updated Poetry praisonaiagents dependency version from >=0.0.97 to >=0.0.99.


    Versions increment, a steady flow,
    Through Dockerfiles and code they go.
    Agents reflect, and tools align,
    In Python's world, a new design.


    @qodo-code-review

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
    🧪 PR contains tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Iteration Limit

    The sequential tool calling loop has a hardcoded max_iterations value of 10. Consider making this configurable through parameters to allow for more complex tool calling sequences when needed.

    max_iterations = 10  # Prevent infinite loops
    iteration_count = 0
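The suggested change can be sketched as a loop where the cap is a parameter rather than a constant. This is an illustration only: `call_llm` and `execute_tool` are hypothetical stand-ins for the real completion and tool-dispatch calls in llm.py, and the `nextThoughtNeeded` key mirrors the sequentialthinking tool's continuation signal described in this PR.

```python
def run_tool_loop(call_llm, execute_tool, max_iterations=10):
    """Bounded sequential tool-calling loop (sketch; names are illustrative).

    call_llm() returns (text, tool_call_or_None); execute_tool(tool_call)
    returns the tool's result as a dict.
    """
    iteration_count = 0
    final_response_text = ""
    while iteration_count < max_iterations:  # prevent infinite loops
        text, tool_call = call_llm()
        if tool_call is None:
            final_response_text = text  # no more tool calls needed
            break
        result = execute_tool(tool_call)
        # Continue only while the tool signals more steps are needed
        if not result.get("nextThoughtNeeded", False):
            final_response_text = text
            break
        iteration_count += 1
    return final_response_text
```

Callers that need longer sequences could then pass a larger `max_iterations` instead of editing the constant.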
    Time Tracking

    The code uses both start_time and current_time for timing operations. In some places, generation_time is calculated using current_time while in others it uses start_time, which may lead to inconsistent timing reports.

            generation_time=time.time() - current_time,
            console=console
        )
    else:
        display_interaction(
            original_prompt,
            response_text,
            markdown=markdown,
            generation_time=time.time() - current_time,
    Error Handling

    The script catches exceptions but doesn't provide a mechanism to exit with a non-zero status code when tests fail, which could be important for automated testing environments.

    if success1 and success2:
        print("\n🎉 BOTH FORMATS WORK CORRECTLY!")
        print("📝 The issue mentioned might be resolved or was a different problem.")
    elif success1 and not success2:
        print("\n⚠️  CONFIRMED: LLM class path has issues")
        print("📝 Need to debug the LLM class implementation")
    elif success2 and not success1:
        print("\n⚠️  CONFIRMED: Agent direct path has issues")
        print("📝 Need to debug the agent direct implementation")
    else:
        print("\n💥 BOTH PATHS FAILED - Something is fundamentally wrong")

    @qodo-code-review

    PR Code Suggestions ✨

    No code suggestions found for the PR.

    gemini-code-assist bot (Contributor) left a comment

    Code Review

    This pull request successfully updates the PraisonAI version across various Dockerfiles, README, pyproject.toml, and a deployment script. It also introduces new test scripts and refactors the LLM class's get_response method to align its tool-calling and self-reflection logic with the Agent class. The addition of comprehensive tests for LLM and agent tool-calling is a valuable contribution.

    Summary of Findings

    • LLM get_response Logic Flow (High): The main loop termination condition and the placement of the self-reflection loop in llm.py::get_response may not correctly implement the intended sequential tool calling and reflection flow. (Commented)
    • LLM get_response_async Logic Flow (High): Similar logic flow issues regarding sequential tool calling and self-reflection placement appear to be present in the asynchronous get_response_async method. (Commented)
    • LLM Parameter Handling Inconsistency (High): The way **kwargs are handled and passed to litellm.completion alongside parameters from _build_completion_params can lead to conflicts or duplication. This issue exists in both get_response and get_response_async. (Commented)
    • Time Calculation Consistency (Medium): Inconsistent use of start_time and current_time for calculating generation_time in llm.py::get_response. (Commented)
    • Variable Clarity (response_text, final_response_text) (Medium): The purpose and lifecycle of response_text and final_response_text in llm.py::get_response could be made clearer. (Commented)
    • Ollama Workaround Extraction (Medium): The specific logic for handling Ollama tool results in llm.py::get_response could be extracted into a helper method for better readability. (Commented)
    • Version Bumps (Low): Updated PraisonAI and praisonaiagents versions across Dockerfiles, README, pyproject.toml, deploy.py, and praisonai.rb. (Not commented)
    • Test Scripts Use Print (Low): New test/debug scripts (debug_comparison.py, test_both_formats.py, test_self_reflection.py, test_sequential_fix.py) use print statements instead of logging. (Not commented)
    • Test Scripts Lack Docstrings/Type Hints (Low): New test/debug scripts could benefit from docstrings and type hints for improved maintainability. (Not commented)

    Merge Readiness

    The pull request includes necessary version updates and valuable new test coverage. However, the refactoring of the llm.py methods introduces potential logic flow and parameter handling issues of high severity. These issues should be addressed to ensure the correct behavior of sequential tool calling and self-reflection, and to improve the robustness and maintainability of the code. Therefore, I recommend requesting changes before merging. Please note that I am unable to directly approve the pull request; other reviewers should review and approve this code before merging.

    Comment on lines 413 to 414
    start_time = time.time()
    reflection_count = 0
    Severity: medium

    There seems to be an inconsistency in how generation_time is calculated and displayed. start_time is set once at the beginning (line 413), while current_time is set at the start of each loop iteration (line 439). The display functions then use a mix of time.time() - start_time (e.g., lines 726, 749, 761) and time.time() - current_time (e.g., lines 462, 470, 491). Using current_time seems more appropriate for showing the duration of the current LLM call within the sequential tool loop, while start_time would be for the total time of the entire get_response call. Could we standardize this based on the intended meaning (per-step time vs. total time) for clarity?
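    The per-step versus total distinction can be shown in a small standalone sketch (variable names mirror the review comment; the `time.sleep` is a stand-in for one LLM call):

    ```python
    import time

    start_time = time.time()        # total time for the whole get_response call
    per_step_times = []

    for step in range(3):
        current_time = time.time()  # reset at the start of each loop iteration
        time.sleep(0.01)            # stand-in for a single LLM completion call
        # Elapsed since current_time = duration of *this* call only
        per_step_times.append(time.time() - current_time)

    # Elapsed since start_time = duration of the entire sequential loop
    total_time = time.time() - start_time
    ```

    Under this reading, per-step displays inside the tool loop would use `current_time`, and a final summary would use `start_time`.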

    # Sequential tool calling loop - similar to agent.py
    max_iterations = 10 # Prevent infinite loops
    iteration_count = 0
    final_response_text = ""
    Severity: medium

    The variables response_text (e.g., lines 453, 478, 505, 747, 753) and final_response_text (e.g., lines 434, 690, 705, 716, 718, 740, 741) are used to store response content. Their assignment and usage across the different branches (reasoning_steps, streaming, non-streaming, before/after tool calls) seem a bit complex and could potentially lead to confusion about which variable holds the final output at different stages. Could we add comments or potentially refactor to make the purpose and lifecycle of these variables clearer?

    # If we reach here, no more tool calls needed - get final response
    # Make one more call to get the final summary response
    # Special handling for Ollama models that don't automatically process tool results
    if self.model and self.model.startswith("ollama/") and tool_result:
    Severity: medium

    The special handling logic for Ollama models after tool calls (lines 598-653) is quite detailed and involves checking JSON structure and making a follow-up call. While necessary, this block adds significant complexity to the main get_response function. To improve readability and potentially make this logic reusable, could it be extracted into a dedicated helper method (e.g., _process_ollama_tool_results)?
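    The suggested extraction could look roughly like this. It is a sketch only: `_process_ollama_tool_results` is the hypothetical name proposed in the review, and the defensive JSON parsing stands in for the real follow-up-call logic in llm.py.

    ```python
    import json

    class LLM:
        def __init__(self, model: str):
            self.model = model

        def _is_ollama(self) -> bool:
            return bool(self.model) and self.model.startswith("ollama/")

        def _process_ollama_tool_results(self, tool_result):
            """Hypothetical helper: normalize a raw Ollama tool result.

            Ollama models may hand back tool output as a JSON string rather
            than processing it automatically, so parse it defensively and
            return None when there is nothing usable.
            """
            if not (self._is_ollama() and tool_result):
                return None
            if isinstance(tool_result, str):
                try:
                    tool_result = json.loads(tool_result)
                except json.JSONDecodeError:
                    return None
            # In the real code this point would trigger a follow-up
            # completion call to summarize the tool result.
            return tool_result
    ```

    Extracting the block this way keeps the main get_response flow readable and makes the Ollama path independently testable.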

    cursor bot left a comment

    Bug: Dynamic Hash Calculation in Homebrew Formula

    The formula dynamically calculates the SHA256 hash at build time using curl and shasum. This approach is insecure and prevents reproducible builds; Homebrew formulas require static, pre-calculated hashes.

    src/praisonai/praisonai.rb#L5-L7

    homepage "https://github.com/MervinPraison/PraisonAI"
    url "https://github.com/MervinPraison/PraisonAI/archive/refs/tags/v2.2.28.tar.gz"
    sha256 `curl -sL https://github.com/MervinPraison/PraisonAI/archive/refs/tags/v2.2.28.tar.gz | shasum -a 256`.split.first
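    The usual fix is to compute the digest once, offline, and paste the literal 64-character hex string into the formula's `sha256` stanza. A small sketch of computing such a digest locally (`hashlib` is Python stdlib; the input bytes here are illustrative, not the real tarball):

    ```python
    import hashlib

    def sha256_hex(data: bytes) -> str:
        """Return the hex SHA-256 digest, the form Homebrew expects in sha256 "..."."""
        return hashlib.sha256(data).hexdigest()

    # Run once against the downloaded release tarball, then hard-code the
    # resulting digest as a static string in praisonai.rb.
    digest = sha256_hex(b"example release bytes")
    assert len(digest) == 64
    ```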


    This was referenced Jun 5, 2025
    shaneholloman pushed a commit to shaneholloman/praisonai that referenced this pull request Feb 4, 2026
    Update PraisonAI version to 2.2.28 across Dockerfiles and related files
    @coderabbitai coderabbitai bot mentioned this pull request Apr 7, 2026
