Update version to 2.2.12 across project files#511
Conversation
- Incremented PraisonAI version from 2.2.11 to 2.2.12 in `pyproject.toml`, `uv.lock`, and all relevant Dockerfiles for consistency. - Ensured minimal changes to existing code while maintaining versioning accuracy across the project.
|
Caution Review failedThe pull request is closed. WalkthroughThis change updates the pinned version of the Changes
Sequence Diagram(s)sequenceDiagram
participant User
participant AgentsGenerator
participant PraisonAIModel
User->>AgentsGenerator: Call _run_crewai(config_list)
AgentsGenerator->>PraisonAIModel: Instantiate LLM model with config_list
Note right of PraisonAIModel: API key automatically resolved (no manual override)
AgentsGenerator->>AgentsGenerator: Create agents and tasks
AgentsGenerator->>AgentsGenerator: Run CrewAI crew
AgentsGenerator-->>User: Return results
Possibly related PRs
Suggested labels
Poem
📜 Recent review detailsConfiguration used: CodeRabbit UI ⛔ Files ignored due to path filters (1)
📒 Files selected for processing (12)
✨ Finishing Touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. 🪧 TipsChatThere are 3 ways to chat with CodeRabbit:
SupportNeed help? Create a ticket on our support page for assistance with any issues or questions. Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments. CodeRabbit Commands (Invoked using PR comments)
Other keywords and placeholders
CodeRabbit Configuration File (
|
There was a problem hiding this comment.
Hello @MervinPraison, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team, gemini-code-assist here to provide a summary of this pull request. The primary intent of this PR, as stated in the title and description, is to synchronize the project's version number across various configuration and documentation files, incrementing it from 2.2.11 to 2.2.12. This ensures consistency in dependencies and project metadata. While the description mentions minimal changes, I've observed an additional functional change in praisonai/agents_generator.py related to LLM API key handling.
Highlights
- Version Update: The project version has been consistently updated to
2.2.12inpyproject.toml,uv.lock, and several Dockerfile examples/templates. - LLM API Key Handling Adjustment: Removed explicit logic in
praisonai/agents_generator.pythat was overriding LLM API keys based on theconfig_listwithin the_run_crewaimethod.
Changelog
Click here to see the changelog
- docker/Dockerfile
- Updated the
praisonaipackage version from2.2.11to2.2.12in thepip installcommand.
- Updated the
- docker/Dockerfile.chat
- Updated the
praisonaipackage version from2.2.11to2.2.12in thepip installcommand.
- Updated the
- docker/Dockerfile.dev
- Updated the
praisonaipackage version from2.2.11to2.2.12in thepip installcommand.
- Updated the
- docker/Dockerfile.ui
- Updated the
praisonaipackage version from2.2.11to2.2.12in thepip installcommand.
- Updated the
- docs/api/praisonai/deploy.html
- Updated the
praisonaipackage version from2.2.11to2.2.12within the embedded Dockerfile example string.
- Updated the
- docs/developers/local-development.mdx
- Updated the
praisonaipackage version from2.2.11to2.2.12in the example Dockerfile content.
- Updated the
- docs/ui/chat.mdx
- Updated the
praisonaipackage version from2.2.11to2.2.12in the example Dockerfile content.
- Updated the
- docs/ui/code.mdx
- Updated the
praisonaipackage version from2.2.11to2.2.12in the example Dockerfile content.
- Updated the
- praisonai/agents_generator.py
- Removed the
api_key_var=Noneparameter fromPraisonAIModelinstantiation. - Removed the conditional logic that explicitly set
llm.api_keyandfunction_calling_llm.api_keyfromself.config_list.
- Removed the
- praisonai/deploy.py
- Updated the
praisonaipackage version from2.2.11to2.2.12within the generated Dockerfile string.
- Updated the
- pyproject.toml
- Updated the project
versionfrom2.2.11to2.2.12in the[project]section. - Updated the project
versionfrom2.2.11to2.2.12in the[tool.poetry]section.
- Updated the project
- uv.lock
- Updated the locked
praisonaipackageversionfrom2.2.11to2.2.12.
- Updated the locked
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review |
Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary |
Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help |
Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Version bumps appear,
Small numbers, big release near,
Code flows, bugs disappear.
Footnotes
-
Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩
There was a problem hiding this comment.
Code Review
The pull request successfully updates the PraisonAI version to 2.2.12 across pyproject.toml, uv.lock, and the various Dockerfiles and documentation files, ensuring version consistency as intended. This is a straightforward and necessary change for a version bump.
However, I've identified a significant change in praisonai/agents_generator.py related to how API keys are handled for CrewAI models, which might impact existing user configurations. Additionally, the current test suite doesn't fully cover this specific change.
Summary of Findings
- Change in API Key Loading for CrewAI: The explicit override of the LLM's API key using the value from
config_listin the YAML file has been removed inpraisonai/agents_generator.py. This means API keys for CrewAI models are now strictly loaded via environment variables as handled by thePraisonAIModelclass. This is a breaking change for users who previously specified API keys directly in theirconfig_listYAML entries. - Incomplete Test Coverage: The existing integration tests (
tests/integration/test_base_url_api_base_fix.py) primarily focus onbase_urlmapping and do not specifically test the scenario where anapi_keyis provided in theconfig_listYAML entry. Adding a test case for this scenario would help ensure the intended behavior (whether it's to now ignore the YAML key or to handle it differently) is correctly implemented and prevent regressions.
Merge Readiness
The core purpose of the PR (version update) is achieved, but the change in API key loading logic for CrewAI agents in praisonai/agents_generator.py is a high-severity issue as it might break existing user setups. I recommend addressing this potential breaking change (either by restoring compatibility or clearly documenting the new requirement) and adding a test case for it before merging. I am unable to approve this pull request; please ensure other reviewers approve it before merging.
| if llm_model: | ||
| llm = PraisonAIModel( | ||
| model=llm_model.get("model") or os.environ.get("MODEL_NAME") or "openai/gpt-4o", | ||
| api_key_var=None, # Don't rely on env var lookup | ||
| base_url=self.config_list[0].get('base_url') if self.config_list else None | ||
| ).get_model() | ||
| # Override with explicit API key from config_list | ||
| if self.config_list and self.config_list[0].get('api_key'): | ||
| llm.api_key = self.config_list[0]['api_key'] |
There was a problem hiding this comment.
It looks like the logic to explicitly override the LLM's API key using self.config_list[0].get('api_key') after creating the PraisonAIModel instance has been removed here and in subsequent blocks for function_calling_llm.
Previously, if a user provided an api_key directly in the config_list entry within their agents.yaml (or equivalent config), that key would be used to set the llm.api_key attribute. With this change, the PraisonAIModel will now rely solely on its internal logic to determine the API key, which primarily involves looking up environment variables based on the model prefix (e.g., OPENAI_API_KEY, GROQ_API_KEY, etc.).
This change simplifies the code but potentially breaks backward compatibility for users who were providing API keys directly in their YAML configuration config_list entries instead of environment variables. Should we document this change in behavior or consider restoring the ability to load the API key from config_list for backward compatibility?
✅ Deploy Preview for praisonai ready!
To edit notification comments on pull requests, go to your Netlify project configuration. |
Update version to 2.2.12 across project files
pyproject.toml,uv.lock, and all relevant Dockerfiles for consistency.Summary by CodeRabbit