
Update version to 2.2.13 across project files #512

Merged
MervinPraison merged 1 commit into main from develop
May 24, 2025
Conversation

Owner

@MervinPraison MervinPraison commented May 24, 2025

  • Incremented PraisonAI version from 2.2.12 to 2.2.13 in pyproject.toml, uv.lock, and all relevant Dockerfiles for consistency.
  • Ensured minimal changes to existing code while maintaining versioning accuracy across the project.

Summary by CodeRabbit

  • Bug Fixes
    • Improved test suite behavior to only skip tests for authentication errors when a valid OpenAI API key is not present, ensuring more accurate test results.
  • Chores
    • Upgraded the praisonai package version from 2.2.12 to 2.2.13 across all Docker environments and documentation.
    • Updated project version metadata to 2.2.13.
  • Documentation
    • Refreshed setup and deployment instructions to reference the latest praisonai package version.
  • Refactor
    • Enhanced internal API key handling for better compatibility with AutoGen.

@MervinPraison MervinPraison merged commit bbbf7de into main May 24, 2025
7 of 10 checks passed
Contributor

coderabbitai bot commented May 24, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

This update increments the praisonai package version from 2.2.12 to 2.2.13 across all Dockerfiles, deployment scripts, and documentation. It also refines API key handling in the CLI for better AutoGen compatibility and improves test logic for skipping authentication-related failures.

Changes

  • docker/Dockerfile, docker/Dockerfile.chat, docker/Dockerfile.dev, docker/Dockerfile.ui: Bump praisonai package version from 2.2.12 to 2.2.13.
  • docs/api/praisonai/deploy.html, docs/developers/local-development.mdx, docs/ui/chat.mdx, docs/ui/code.mdx: Update documentation and examples to reference praisonai 2.2.13.
  • praisonai/deploy.py: Update Dockerfile creation to install praisonai 2.2.13.
  • pyproject.toml: Update project version to 2.2.13 in metadata sections.
  • praisonai/cli.py: Enhance API key handling for AutoGen: add openai_api_key and api_type to the config dictionary.
  • tests/test.py: Refine test-skipping logic for API authentication errors based on the environment API key.
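The praisonai/cli.py change described above can be sketched as follows. This is a hypothetical reconstruction based on the PR summary, not the actual source: the field names api_key, openai_api_key, and api_type come from the walkthrough, while the function name and surrounding structure are assumptions.

```python
import os

def build_config_list(model: str = "gpt-4o") -> list[dict]:
    """Sketch of the config_list construction described in this PR.

    The same key is exposed under both 'api_key' and 'openai_api_key',
    and 'api_type' is set to 'openai', so that AutoGen's expected
    configuration format is satisfied alongside PraisonAI's own.
    """
    api_key = os.environ.get("OPENAI_API_KEY", "")
    return [{
        "model": model,
        "api_key": api_key,          # used by PraisonAI itself
        "openai_api_key": api_key,   # duplicated for AutoGen compatibility
        "api_type": "openai",        # tells AutoGen which provider to use
    }]
```

Duplicating the key under both names avoids branching on which framework consumes the config, at the cost of a slightly redundant dictionary.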

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant PraisonAI CLI
    participant Env
    participant AutoGen

    User->>PraisonAI CLI: Initialize PraisonAI
    PraisonAI CLI->>Env: Get OPENAI_API_KEY
    PraisonAI CLI->>PraisonAI CLI: Build config_list with api_key, openai_api_key, api_type
    PraisonAI CLI->>AutoGen: Provide config_list for compatibility

Possibly related PRs

Suggested labels

Review effort 1/5

Poem

In the warren, code hops anew,
PraisonAI’s version grew!
Dockerfiles and docs in line,
API keys now intertwine.
Tests skip with wisdom, not with fright—
All is ready for a versioned flight!
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d42e76f and c69350b.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (12)
  • docker/Dockerfile (1 hunks)
  • docker/Dockerfile.chat (1 hunks)
  • docker/Dockerfile.dev (1 hunks)
  • docker/Dockerfile.ui (1 hunks)
  • docs/api/praisonai/deploy.html (1 hunks)
  • docs/developers/local-development.mdx (1 hunks)
  • docs/ui/chat.mdx (1 hunks)
  • docs/ui/code.mdx (1 hunks)
  • praisonai/cli.py (1 hunks)
  • praisonai/deploy.py (1 hunks)
  • pyproject.toml (2 hunks)
  • tests/test.py (9 hunks)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


netlify bot commented May 24, 2025

Deploy Preview for praisonai ready!

  • 🔨 Latest commit: c69350b
  • 🔍 Latest deploy log: https://app.netlify.com/projects/praisonai/deploys/683245ba8a089c000809b387
  • 😎 Deploy Preview: https://deploy-preview-512--praisonai.netlify.app

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Hello @MervinPraison, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team, Gemini here with a summary of this pull request. This PR primarily focuses on updating the project version to 2.2.13 across various configuration files, Dockerfiles, and documentation snippets to ensure consistency. Additionally, it includes a functional change to improve AutoGen compatibility in the CLI and refines the test suite's handling of API key authentication errors.

Highlights

  • Version Update: The project version has been incremented from 2.2.12 to 2.2.13 in the main pyproject.toml file and the uv.lock dependency lock file.
  • Dockerfile & Documentation Sync: The praisonai package version specified in all Dockerfiles (docker/Dockerfile, docker/Dockerfile.chat, docker/Dockerfile.dev, docker/Dockerfile.ui) and various documentation code snippets (docs/api/praisonai/deploy.html, docs/developers/local-development.mdx, docs/ui/chat.mdx, docs/ui/code.mdx, praisonai/deploy.py) has been updated to 2.2.13.
  • AutoGen Compatibility: Added openai_api_key and api_type: 'openai' fields to the config_list in praisonai/cli.py to improve compatibility with the AutoGen framework's expected configuration format.
  • Improved Test Skipping Logic: Modified the test suite (tests/test.py) to more precisely control when tests are skipped due to API authentication issues. Tests will now only skip if no API key is provided or if a known test/fallback key is used, allowing tests to fail correctly for invalid real keys.
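The refined skip condition described in the last highlight can be expressed as a small predicate. This is a sketch only: the key patterns (empty, sk-test- prefix, the literal nokey, or a key containing fallback) are taken from the PR diff quoted later in this review, while the helper name is hypothetical.

```python
import os

def should_skip_for_api_key() -> bool:
    """Return True when an API-dependent test should be skipped.

    Tests skip only when no key is set or a known test/fallback key is
    in use; a real key that fails authentication should make the test
    fail rather than skip.
    """
    api_key = os.environ.get("OPENAI_API_KEY", "")
    return (not api_key
            or api_key.startswith("sk-test-")
            or api_key == "nokey"
            or "fallback" in api_key)
```

A test method would call this in its except handler and re-raise the exception when the predicate is False, so genuine authentication failures surface as test failures.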

Changelog

Click here to see the changelog
  • docker/Dockerfile
    • Updated praisonai version from 2.2.12 to 2.2.13 in the pip install command.
  • docker/Dockerfile.chat
    • Updated praisonai version from 2.2.12 to 2.2.13 in the pip install command.
  • docker/Dockerfile.dev
    • Updated praisonai version from 2.2.12 to 2.2.13 in the pip install command.
  • docker/Dockerfile.ui
    • Updated praisonai version from 2.2.12 to 2.2.13 in the pip install command.
  • docs/api/praisonai/deploy.html
    • Updated praisonai version from 2.2.12 to 2.2.13 in the embedded Dockerfile code snippet.
  • docs/developers/local-development.mdx
    • Updated praisonai version from 2.2.12 to 2.2.13 in the embedded Dockerfile code snippet.
  • docs/ui/chat.mdx
    • Updated praisonai version from 2.2.12 to 2.2.13 in the embedded Dockerfile code snippet.
  • docs/ui/code.mdx
    • Updated praisonai version from 2.2.12 to 2.2.13 in the embedded Dockerfile code snippet.
  • praisonai/cli.py
    • Added openai_api_key and api_type: 'openai' to the config_list for better AutoGen compatibility.
  • praisonai/deploy.py
    • Updated praisonai version from 2.2.12 to 2.2.13 in the generated Dockerfile content.
  • pyproject.toml
    • Updated the project version from 2.2.12 to 2.2.13 in both [project] and [tool.poetry] sections.
  • tests/test.py
    • Refined the logic for skipping API-dependent tests to only skip when no API key is provided or a test/fallback key is detected, ensuring tests fail correctly for invalid real keys.
  • uv.lock
    • Updated the locked version of praisonai from 2.2.12 to 2.2.13.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review: /gemini review. Performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary. Provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist. Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help: /gemini help. Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


Version bumps abound,
Across files, the number's found,
Tests check the key,
AutoGen now agrees,
Code consistency is crowned.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request primarily updates the PraisonAI version to 2.2.13 across various project files, which has been done consistently. Additionally, it introduces compatibility changes for AutoGen in praisonai/cli.py and significantly improves the test skipping logic in tests/test.py related to API key handling.

The changes in praisonai/cli.py are clear and well-commented. The updates to the test skipping logic in tests/test.py are a good improvement for test reliability, ensuring tests fail appropriately if a supposedly valid API key causes authentication issues. However, this new skip logic is duplicated across multiple test methods, which could be refactored for better maintainability.

Summary of Findings

  • Code Duplication in Test Skip Logic: The logic for determining whether to skip a test due to API key issues is duplicated across multiple test methods in tests/test.py. This should be refactored into a helper method to improve maintainability.
  • Handling of None API Key in praisonai/cli.py (Low Severity - Not Commented): In praisonai/cli.py, if OPENAI_API_KEY is not set, api_key (and subsequently openai_api_key) will be None. While this is often standard, it's worth ensuring that downstream consumers like AutoGen correctly handle a None value for API keys if one is expected. This is a low-severity observation and was not added as a direct comment due to review settings.
  • Test Skip Message Verbosity in Command-Line Tests (Low Severity - Not Commented): In tests/test.py (Diff 6, test_praisonai_command and test_praisonai_init_command), the skip message f"Skipping due to no valid API key provided" does not include the original error or result string. Including this could make debugging easier. This is a low-severity observation and was not added as a direct comment due to review settings.

Merge Readiness

The pull request effectively updates the project version and improves test reliability. However, there's a medium-severity issue regarding duplicated code in tests/test.py that should be addressed to enhance maintainability. I recommend refactoring the test skip logic before merging. As an AI, I am not authorized to approve pull requests; please ensure further review and approval by project maintainers.

Comment on lines +21 to 30
# Only skip if no API key provided or using test/fallback key
api_key = os.environ.get('OPENAI_API_KEY', '')
if (not api_key or
        api_key.startswith('sk-test-') or
        api_key == 'nokey' or
        'fallback' in api_key):
    self.skipTest(f"Skipping due to no valid API key provided: {e}")
else:
    # Real API key provided - test should fail, not skip
    raise
Contributor


Severity: medium

The logic for skipping tests based on the API key status (lines 22-30) is a great improvement for test reliability. However, this exact block of code is repeated in several test methods within this file (e.g., test_main_with_autogen_framework, test_main_with_custom_framework, test_main_with_internet_search_tool, test_main_with_built_in_tool, and similar logic in TestExamples and TestCommandLine).

Could we refactor this duplicated logic into a helper method within the test classes (or a standalone utility function)? This would improve maintainability, as any future changes to this skip logic would only need to be made in one place.

For example, you could introduce a helper method like _handle_api_exception_skip(self, e):

# In TestMainFunctionality class (and similar for other classes)
def _handle_api_exception_skip(self, e: Exception):
    """Checks if the test should be skipped due to API key status and the nature of the exception."""
    api_key = os.environ.get('OPENAI_API_KEY', '')
    is_fallback_or_missing_key = (
        not api_key or 
        api_key.startswith('sk-test-') or 
        api_key == 'nokey' or
        'fallback' in api_key
    )

    if is_fallback_or_missing_key:
        # Current logic skips for any exception if key is fallback/missing.
        # You might want to add checks for specific auth-related error strings from 'e' here
        # if you want to be more precise about *why* it's skipping with a fallback key.
        self.skipTest(f"Skipping due to no valid API key provided or fallback key in use: {e}")
        return True # Indicates test was skipped
    return False # Indicates test should not be skipped, and exception should be raised

# Then, in your test methods:
try:
    # ... test code ...
except Exception as e:
    if self._handle_api_exception_skip(e):
        return # Test was skipped
    # Real API key provided or not an API-related skip - test should fail
    raise

This would make the test methods cleaner and the skip logic centralized. A similar helper could be adapted for the command-line tests that check result strings instead of exceptions.

        except Exception as e:
            if self._handle_api_exception_skip(e):
                return # Test was skipped by the helper
            # Real API key provided - test should fail, not skip
            raise
