Update PraisonAI version to 2.2.28 across Dockerfiles and related files (#604)
Merged by MervinPraison (1 commit into main).
Conversation
- Updated the version of PraisonAI from 2.2.27 to 2.2.28 in Dockerfiles (Dockerfile, Dockerfile.chat, Dockerfile.dev, Dockerfile.ui).
- Updated the version in README.md and pyproject.toml to reflect the new version.
- Adjusted the deploy.py script to install the updated version of PraisonAI.
- Ensured consistency across all relevant files for seamless integration.
Caution: Review failed — the pull request is closed.

Walkthrough

This update increments the PraisonAI and praisonaiagents package versions across Dockerfiles, deployment scripts, and metadata files. It introduces new test scripts for agent and LLM usage patterns, self-reflection, and sequential tool calling. The LLM response generation logic is refactored for improved control flow, bounded iteration, and error handling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Agent
    participant LLM
    participant Tool as Tool (MCP)
    User->>Agent: Submit prompt
    Agent->>LLM: Generate response (with/without tool calls)
    alt Tool call needed
        LLM->>Tool: Execute tool call
        Tool-->>LLM: Return tool result
        LLM->>Agent: Append tool result, possibly continue
    end
    alt Self-reflection enabled
        Agent->>LLM: Request reflection
        LLM-->>Agent: Return reflection result
        Agent->>LLM: Regenerate response if needed
    end
    Agent-->>User: Return final response
```
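The tool-calling branch of the diagram can be sketched as a bounded loop. This is only an illustration of the control flow described in this PR, not the real PraisonAI implementation; `llm_call`, `execute_tool`, and the message shapes are hypothetical stand-ins, while `nextThoughtNeeded` and the iteration cap of 10 come from the PR itself.

```python
import json

MAX_ITERATIONS = 10  # bound the loop so a misbehaving tool cannot spin forever


def get_response(llm_call, execute_tool, messages):
    """Sketch of a sequential tool-calling loop with bounded iteration.

    llm_call(messages) -> (text, tool_calls); execute_tool(call) -> result dict.
    All names are illustrative, not the real PraisonAI API.
    """
    for _ in range(MAX_ITERATIONS):
        text, tool_calls = llm_call(messages)
        if not tool_calls:
            return text  # no more tools needed: this is the final answer
        for call in tool_calls:
            result = execute_tool(call)
            # Feed the tool output back so the next LLM call can use it
            messages.append({"role": "tool", "content": json.dumps(result)})
            # The sequentialthinking tool signals whether another step is needed;
            # if not, the next llm_call produces the final summary response
    # Iteration budget exhausted: ask for a final answer with whatever we have
    text, _ = llm_call(messages)
    return text
```

With stub callables, the loop runs one tool pass and then returns the model's follow-up text as the final answer.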
Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team, Gemini here with a summary of this pull request. This PR primarily focuses on updating the PraisonAI version across the project to 2.2.28. This includes updating the version specified in various Dockerfiles (Dockerfile, Dockerfile.chat, Dockerfile.dev, Dockerfile.ui), the README.md documentation, the Homebrew formula (praisonai.rb), and the internal deployment script (deploy.py). Additionally, the praisonaiagents dependency within the main PraisonAI project has been updated to >=0.0.99, corresponding to a version bump in the praisonai-agents sub-project from 0.0.98 to 0.0.99. The praisonai-agents changes also include significant refactoring in the LLM class's get_response method to improve sequential tool calling logic and the addition of several new test scripts (debug_comparison.py, test_both_formats.py, test_self_reflection.py, test_sequential_fix.py, sequence_llm.py) to verify different LLM configurations, sequential tool execution, and self-reflection functionality.
Highlights
- PraisonAI Agents Update: Updated the `praisonaiagents` dependency to `>=0.0.99` and the internal `praisonai-agents` version to `0.0.99`.
- LLM Tool Calling Refactor: Refactored the `get_response` method in `praisonaiagents/llm/llm.py` to implement a sequential tool calling loop, improving handling of tools like `sequentialthinking`.
- New Test Scripts: Added multiple new test scripts (`debug_comparison.py`, `test_both_formats.py`, `test_self_reflection.py`, `test_sequential_fix.py`, `sequence_llm.py`) to validate different LLM configurations, sequential tool use, and self-reflection.
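The separated self-reflection loop mentioned above could look roughly like the following. This is a hedged sketch: `generate`, `reflect`, and the `max_reflect`/`min_reflect` bounds are assumptions for illustration, not the actual PraisonAI signatures.

```python
def reflect_and_retry(generate, reflect, prompt, max_reflect=3, min_reflect=1):
    """Illustrative self-reflection loop, kept distinct from tool calling.

    generate(prompt) -> response text.
    reflect(response) -> (satisfactory: bool, critique: str).
    """
    response = generate(prompt)
    for count in range(max_reflect):
        satisfactory, critique = reflect(response)
        if satisfactory and count + 1 >= min_reflect:
            break  # good enough, and the minimum number of reflections is done
        # Regenerate, feeding the critique back into the prompt
        response = generate(f"{prompt}\n\nPrevious attempt critique: {critique}")
    return response
```

Keeping this loop separate from the tool-calling loop (as the PR does) means reflection bounds and tool-iteration bounds can be tuned independently.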
Changelog
- docker/Dockerfile
  - Updated praisonai dependency version to `>=2.2.28`.
- docker/Dockerfile.chat
  - Updated praisonai dependency version to `>=2.2.28`.
- docker/Dockerfile.dev
  - Updated praisonai dependency version to `>=2.2.28`.
- docker/Dockerfile.ui
  - Updated praisonai dependency version to `>=2.2.28`.
- docker/README.md
  - Updated PraisonAI version mention in 'Package Versions' section to `>=2.2.28`.
  - Updated pinned version example in 'Version Pinning' section to `==2.2.28`.
- src/praisonai-agents/debug_comparison.py
  - Added new script to compare Agent behavior using 'gpt-4o-mini' (agent.py path) vs 'openai/gpt-4o-mini' (llm.py path) with the sequential thinking tool.
- src/praisonai-agents/llm/llm.py
  - Refactored `get_response` method to use a sequential tool calling loop with a maximum iteration limit (10).
  - Adjusted display logic to show initial prompt processing once.
  - Updated time tracking variables for display updates.
  - Ensured tool calls are checked after the initial response generation.
  - Implemented logic to continue the tool calling loop if the 'sequentialthinking' tool indicates `nextThoughtNeeded`.
  - Added a step to generate a final response after tool calls are completed.
  - Separated the self-reflection logic into its own distinct loop.
- src/praisonai-agents/pyproject.toml
  - Updated praisonaiagents version from `0.0.98` to `0.0.99`.
- src/praisonai-agents/sequence.py
  - Changed the `llm` parameter for the Agent from 'openai/gpt-4o-mini' to 'gpt-4o-mini'.
- src/praisonai-agents/sequence_llm.py
  - Added new script demonstrating Agent use with 'openai/gpt-4o-mini' and the sequential thinking tool.
- src/praisonai-agents/test_both_formats.py
  - Added new script to test Agent behavior with both 'gpt-4o-mini' and 'openai/gpt-4o-mini' configurations.
- src/praisonai-agents/test_self_reflection.py
  - Added new script to test self-reflection functionality in both the Agent and LLM classes.
- src/praisonai-agents/test_sequential_fix.py
  - Added new script to test the sequential tool calling fix with both 'gpt-4o-mini' and 'openai/gpt-4o-mini' configurations.
- src/praisonai/praisonai.rb
  - Updated Homebrew formula URL and sha256 hash to use `v2.2.28`.
- src/praisonai/praisonai/deploy.py
  - Updated pinned praisonai version in the generated Dockerfile from `==2.2.27` to `==2.2.28`.
- src/praisonai/pyproject.toml
  - Updated PraisonAI project version from `2.2.27` to `2.2.28`.
  - Updated praisonaiagents dependency version from `>=0.0.97` to `>=0.0.99`.
  - Updated Poetry project version from `2.2.27` to `2.2.28`.
  - Updated Poetry praisonaiagents dependency version from `>=0.0.97` to `>=0.0.99`.
Versions increment, a steady flow,
Through Dockerfiles and code they go.
Agents reflect, and tools align,
In Python's world, a new design.
PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

PR Code Suggestions ✨

No code suggestions found for the PR.
Code Review
This pull request successfully updates the PraisonAI version across various Dockerfiles, README, pyproject.toml, and a deployment script. It also introduces new test scripts and refactors the LLM class's get_response method to align its tool-calling and self-reflection logic with the Agent class. The addition of comprehensive tests for LLM and agent tool-calling is a valuable contribution.
Summary of Findings
- LLM `get_response` Logic Flow (High): The main loop termination condition and the placement of the self-reflection loop in `llm.py::get_response` may not correctly implement the intended sequential tool calling and reflection flow. (Commented)
- LLM `get_response_async` Logic Flow (High): Similar logic flow issues regarding sequential tool calling and self-reflection placement appear to be present in the asynchronous `get_response_async` method. (Commented)
- LLM Parameter Handling Inconsistency (High): The way `**kwargs` are handled and passed to `litellm.completion` alongside parameters from `_build_completion_params` can lead to conflicts or duplication. This issue exists in both `get_response` and `get_response_async`. (Commented)
- Time Calculation Consistency (Medium): Inconsistent use of `start_time` and `current_time` for calculating `generation_time` in `llm.py::get_response`. (Commented)
- Variable Clarity (`response_text`, `final_response_text`) (Medium): The purpose and lifecycle of `response_text` and `final_response_text` in `llm.py::get_response` could be made clearer. (Commented)
- Ollama Workaround Extraction (Medium): The specific logic for handling Ollama tool results in `llm.py::get_response` could be extracted into a helper method for better readability. (Commented)
- Version Bumps (Low): Updated PraisonAI and praisonaiagents versions across Dockerfiles, README, pyproject.toml, deploy.py, and praisonai.rb. (Not commented)
- Test Scripts Use Print (Low): New test/debug scripts (`debug_comparison.py`, `test_both_formats.py`, `test_self_reflection.py`, `test_sequential_fix.py`) use `print` statements instead of logging. (Not commented)
- Test Scripts Lack Docstrings/Type Hints (Low): New test/debug scripts could benefit from docstrings and type hints for improved maintainability. (Not commented)
Merge Readiness
The pull request includes necessary version updates and valuable new test coverage. However, the refactoring of the llm.py methods introduces potential logic flow and parameter handling issues of high severity. These issues should be addressed to ensure the correct behavior of sequential tool calling and self-reflection, and to improve the robustness and maintainability of the code. Therefore, I recommend requesting changes before merging. Please note that I am unable to directly approve the pull request; other reviewers should review and approve this code before merging.
```python
start_time = time.time()
reflection_count = 0
```
There seems to be an inconsistency in how generation_time is calculated and displayed. start_time is set once at the beginning (line 413), while current_time is set at the start of each loop iteration (line 439). The display functions then use a mix of time.time() - start_time (e.g., lines 726, 749, 761) and time.time() - current_time (e.g., lines 462, 470, 491). Using current_time seems more appropriate for showing the duration of the current LLM call within the sequential tool loop, while start_time would be for the total time of the entire get_response call. Could we standardize this based on the intended meaning (per-step time vs. total time) for clarity?
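One way to standardize the two meanings is to track them explicitly, one clock for the whole `get_response` call and one reset at the top of each loop iteration. A minimal sketch (the class and method names are assumptions, not PraisonAI code):

```python
import time


class StepTimer:
    """Track total elapsed time and per-iteration time separately,
    mirroring the start_time / current_time distinction in the review."""

    def __init__(self):
        self.start_time = time.time()    # whole get_response duration
        self.current_time = self.start_time

    def begin_iteration(self):
        self.current_time = time.time()  # reset at the top of each loop pass

    def step_elapsed(self):
        # Duration of the current LLM call within the sequential tool loop
        return time.time() - self.current_time

    def total_elapsed(self):
        # Duration of the entire get_response call
        return time.time() - self.start_time
```

Display code would then call `step_elapsed()` for per-step generation time and `total_elapsed()` for the overall figure, rather than mixing the two.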
```python
# Sequential tool calling loop - similar to agent.py
max_iterations = 10  # Prevent infinite loops
iteration_count = 0
final_response_text = ""
```
The variables response_text (e.g., lines 453, 478, 505, 747, 753) and final_response_text (e.g., lines 434, 690, 705, 716, 718, 740, 741) are used to store response content. Their assignment and usage across the different branches (reasoning_steps, streaming, non-streaming, before/after tool calls) seem a bit complex and could potentially lead to confusion about which variable holds the final output at different stages. Could we add comments or potentially refactor to make the purpose and lifecycle of these variables clearer?
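One way to make that lifecycle explicit is a small state object with a named promotion step, so readers can see exactly when the latest chunk becomes the final answer. This is purely an illustrative refactoring idea; the class and method names are invented here.

```python
from dataclasses import dataclass


@dataclass
class ResponseState:
    """Make the two response buffers' roles explicit (illustrative only).

    chunk_text holds the text of the most recent LLM call (streamed or not);
    final_text holds the answer that will be returned to the caller."""

    chunk_text: str = ""
    final_text: str = ""

    def record_chunk(self, text):
        self.chunk_text = text

    def promote(self):
        # Called once no more tool calls are pending: the last chunk is the answer
        self.final_text = self.chunk_text
        return self.final_text
```

Every branch then writes through `record_chunk`, and exactly one place calls `promote`, removing ambiguity about which variable holds the output.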
```python
# If we reach here, no more tool calls needed - get final response
# Make one more call to get the final summary response
# Special handling for Ollama models that don't automatically process tool results
if self.model and self.model.startswith("ollama/") and tool_result:
```
The special handling logic for Ollama models after tool calls (lines 598-653) is quite detailed and involves checking JSON structure and making a follow-up call. While necessary, this block adds significant complexity to the main get_response function. To improve readability and potentially make this logic reusable, could it be extracted into a dedicated helper method (e.g., _process_ollama_tool_results)?
Bug: Dynamic Hash Calculation in Homebrew Formula
The formula dynamically calculates the SHA256 hash at build time using curl and shasum. This approach is insecure and prevents reproducible builds; Homebrew formulas require static, pre-calculated hashes.
See src/praisonai/praisonai.rb, lines 5 to 7 (commit 7b2e2b6).
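Since Homebrew requires a static, pre-calculated hash, the digest should be computed once at release time and pasted into the formula. A small sketch of that one-off computation (the streaming helper here is an illustration, not part of the repository):

```python
import hashlib


def sha256_of_stream(stream, chunk_size=8192):
    """Compute the static sha256 digest a Homebrew formula requires.

    Run once at release time against the downloaded release tarball and
    paste the hex digest into the formula's sha256 field; never compute
    it at build time inside the formula."""
    digest = hashlib.sha256()
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        digest.update(chunk)
    return digest.hexdigest()
```

At release time this would be fed the tarball bytes (e.g. a file object opened on the downloaded v2.2.28 archive), yielding the fixed digest for `praisonai.rb` and keeping builds reproducible.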
User description
PR Type
Enhancement, Tests
Description
Add comprehensive tests for LLM and agent tool-calling logic
Refactor and enhance sequential tool-calling logic in LLM
Update PraisonAI and praisonaiagents versions to latest (2.2.28, 0.0.99)
Improve output formatting and verbose/debug display in LLM
Changes walkthrough 📝
10 files:
- Refactor sequential tool-calling and self-reflection logic in LLM
- Change LLM model to gpt-4o-mini for agent test
- Update PraisonAI version to 2.2.28 (Dockerfile, Dockerfile.chat, Dockerfile.dev, Dockerfile.ui)
- Bump PraisonAI and praisonaiagents dependency versions
- Bump praisonaiagents version to 0.0.99
- Update Homebrew formula to PraisonAI 2.2.28
- Update Dockerfile generation to PraisonAI 2.2.28

5 files:
- Add script to compare agent.py and llm.py tool-calling
- Add test for both agent.py and llm.py LLM formats
- Add test for sequential tool-calling in both LLM paths
- Add test for self-reflection in Agent and LLM
- Add sequence test using openai/gpt-4o-mini LLM path

1 file:
- Update PraisonAI version references to 2.2.28