
feat(ag2): add AG2 framework backend integration#1156

Merged
MervinPraison merged 1 commit into main from feat/ag2-to-main
Mar 26, 2026

Conversation

Owner

@MervinPraison MervinPraison commented Mar 26, 2026

Port of PR #1143 (by @faridun-ag2) to main branch.

Original PR targeted develop (1713 commits behind main). This PR applies the same purely additive AG2 changes to the current main codebase.

Changes

  • agents_generator.py: AG2 detection via importlib.metadata + _run_ag2() with LLMConfig, GroupChat orchestration, Bedrock support
  • auto.py: AG2 lazy availability check + framework validation
  • pyproject.toml: [ag2] optional dependency extra (ag2>=0.11.0)
  • examples/ag2/: 3 YAML examples (basic, multi-agent, Bedrock)
  • tests/: 16 unit + 9 integration tests

Verification

  • All changes are purely additive: +1428/-2 lines
  • No existing code paths affected
  • Follows lazy-loading pattern consistent with existing autogen/crewai integration
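The lazy-loading availability check described above can be sketched roughly as follows. This is a hypothetical minimal version for illustration; the actual check in agents_generator.py additionally verifies an AG2-exclusive import:

```python
import importlib.metadata

def check_ag2_available() -> bool:
    """Hypothetical sketch: detect AG2 by its installed distribution."""
    try:
        # AG2 imports under the `autogen` namespace, so checking the
        # distribution name avoids confusing it with pyautogen.
        importlib.metadata.distribution("ag2")
        return True
    except importlib.metadata.PackageNotFoundError:
        return False

AG2_AVAILABLE = check_ag2_available()
```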

Co-authored-by: Faridun Mirzoev <faridun@ag2.ai>

Summary by CodeRabbit

Release Notes

  • New Features

    • Added AG2 framework support with multi-agent orchestration capabilities.
    • Added AWS Bedrock integration for LLM inference.
    • Added tool registration support for AG2 agents.
  • Documentation

    • Added example configurations demonstrating basic agent setup, multi-agent workflows, and Bedrock integration.
    • Added example showing tool registration and usage with AG2 agents.

Port of PR #1143 (by @faridun-ag2) to main branch.

AG2 (community fork of AutoGen, PyPI: ag2) added as a new framework
option alongside praisonai, crewai, and autogen.

Changes:
- agents_generator.py: AG2 detection + _run_ag2() with LLMConfig,
  GroupChat orchestration, Bedrock support, TERMINATE cleanup
- auto.py: AG2 lazy availability check + validation
- pyproject.toml: [ag2] optional dependency extra
- examples/ag2/: basic, multi-agent, and Bedrock YAML examples
- tests: 16 unit tests + 9 integration tests

Co-authored-by: Faridun Mirzoev <faridun@ag2.ai>
@MervinPraison MervinPraison merged commit 1d4721d into main Mar 26, 2026
9 of 14 checks passed
Contributor

coderabbitai bot commented Mar 26, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c797dcb1-2e65-4b64-bd30-8de64ca89c1d

📥 Commits

Reviewing files that changed from the base of the PR and between 29facc8 and 90915fe.

📒 Files selected for processing (10)
  • examples/ag2/ag2_basic.yaml
  • examples/ag2/ag2_bedrock.yaml
  • examples/ag2/ag2_multi_agent.yaml
  • src/praisonai/praisonai/agents_generator.py
  • src/praisonai/praisonai/auto.py
  • src/praisonai/pyproject.toml
  • src/praisonai/tests/integration/ag2/__init__.py
  • src/praisonai/tests/integration/ag2/test_ag2_integration.py
  • src/praisonai/tests/source/ag2_function_tools.py
  • src/praisonai/tests/unit/test_ag2_adapter.py

📝 Walkthrough

Walkthrough

This PR adds comprehensive AG2 framework support to PraisonAI, including three example YAML configurations, runtime detection and framework validation, a new _run_ag2 orchestration method, dependency declarations, and corresponding integration and unit tests.

Changes

Cohort / File(s): Summary

  • AG2 Example Configurations
    examples/ag2/ag2_basic.yaml, examples/ag2/ag2_bedrock.yaml, examples/ag2/ag2_multi_agent.yaml
    New example YAML files demonstrating single-agent, multi-agent, and AWS Bedrock-based AG2 deployments with templated task descriptions and expected output formats.
  • Core Framework Integration
    src/praisonai/praisonai/agents_generator.py
    Added AG2 availability detection (AG2_AVAILABLE), framework routing in generate_crew_and_kickoff, and new _run_ag2(...) method implementing AG2 orchestration: LLMConfig resolution, tool registration, agent/role construction, GroupChat assembly, execution via UserProxyAgent.initiate_chat, and output extraction with error handling.
  • Framework Validation
    src/praisonai/praisonai/auto.py
    Added AG2 availability check (_check_ag2_available) and constructor validation to raise ImportError with an installation hint when framework == "ag2" but AG2 is unavailable.
  • Dependency Declaration
    src/praisonai/pyproject.toml
    Added optional extra ag2 specifying ag2>=0.11.0 and praisonai-tools>=0.1.0.
  • Test Suite
    src/praisonai/tests/integration/ag2/test_ag2_integration.py, src/praisonai/tests/unit/test_ag2_adapter.py
    Integration tests validating AG2 YAML parsing, agent initialization, group chat execution, and backward compatibility; unit tests covering LLMConfig construction, agent/chat assembly, message composition, output extraction, and error handling with mocked AG2 dependencies.
  • Example Source
    src/praisonai/tests/source/ag2_function_tools.py
    Demonstration script showing AG2 tool registration with a calculator function, AssistantAgent setup, UserProxyAgent configuration, and chat initiation.

Sequence Diagram

```mermaid
sequenceDiagram
    participant Config as YAML Config
    participant AG as AgentsGenerator
    participant LLM as autogen.LLMConfig
    participant Assist as AssistantAgent(s)
    participant User as UserProxyAgent
    participant Chat as GroupChat
    participant Manager as GroupChatManager

    Config->>AG: Parse roles & tasks
    AG->>LLM: Resolve model config (OpenAI/Bedrock)
    AG->>Assist: Create one per role
    AG->>Assist: Register tools (LLM + execution)
    AG->>User: Create with tool handlers
    AG->>Chat: Assemble agents & max_rounds
    AG->>Manager: Initialize with GroupChat
    User->>Manager: initiate_chat(initial_message)
    Manager->>Assist: Exchange messages in loop
    Assist->>User: Tool calls & responses
    User->>Manager: Termination condition met
    Manager-->>AG: Return summary or messages
    AG-->>AG: Extract & format output
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • Develop #501: Modifies agents_generator.py to adjust verbosity arguments on existing framework-specific run methods; overlaps with the same module modified here for AG2 dispatch.
  • feat: Add AutoGen v0.4 support with backward compatibility #936: Adds AutoGen v4 framework support via similar pattern—framework availability flags, framework dispatch, and a new _run_autogen_v4 method alongside existing framework handlers in agents_generator.py and auto.py.

Suggested labels

Review effort 4/5

Poem

🐰 AG2 agents hop and play,
Building teams the PraisonAI way,
With Bedrock whispers and GroupChat cheer,
AutoGen teamwork is finally here! 🤖✨


@qodo-code-review

Review Summary by Qodo

Add AG2 framework backend integration to PraisonAI

✨ Enhancement


Walkthroughs

Description
• Add AG2 framework backend integration as new orchestration option
• Implement _run_ag2() method with LLMConfig, GroupChat, and Bedrock support
• Add AG2 availability detection and framework validation in initialization
• Include 3 YAML examples (basic, multi-agent, Bedrock) and comprehensive tests
• Update pyproject.toml with [ag2] optional dependency extra
Diagram
```mermaid
flowchart LR
  A["PraisonAI Config"] -->|"framework: ag2"| B["AgentsGenerator"]
  B -->|"AG2_AVAILABLE check"| C["_run_ag2()"]
  C -->|"LLMConfig"| D["AssistantAgent + UserProxyAgent"]
  D -->|"GroupChat orchestration"| E["GroupChatManager"]
  E -->|"initiate_chat()"| F["AG2 Output"]
  C -->|"Bedrock support"| G["AWS Bedrock LLM"]
```


File Changes

1. src/praisonai/praisonai/agents_generator.py ✨ Enhancement +162/-2

Add AG2 framework orchestration and LLMConfig support

src/praisonai/praisonai/agents_generator.py


2. src/praisonai/praisonai/auto.py ✨ Enhancement +22/-0

Add AG2 lazy loading and availability check

src/praisonai/praisonai/auto.py


3. src/praisonai/tests/integration/ag2/test_ag2_integration.py 🧪 Tests +477/-0

Add comprehensive AG2 integration tests with mocks

src/praisonai/tests/integration/ag2/test_ag2_integration.py


View more (7)
4. src/praisonai/tests/source/ag2_function_tools.py 📝 Documentation +98/-0

Add AG2 tool registration example with calculator

src/praisonai/tests/source/ag2_function_tools.py


5. src/praisonai/tests/unit/test_ag2_adapter.py 🧪 Tests +542/-0

Add unit tests for AG2 adapter and LLMConfig construction

src/praisonai/tests/unit/test_ag2_adapter.py


6. examples/ag2/ag2_basic.yaml 📝 Documentation +30/-0

Add basic AG2 single-agent YAML example

examples/ag2/ag2_basic.yaml


7. examples/ag2/ag2_bedrock.yaml 📝 Documentation +42/-0

Add AG2 Bedrock AWS deployment YAML example

examples/ag2/ag2_bedrock.yaml


8. examples/ag2/ag2_multi_agent.yaml 📝 Documentation +54/-0

Add AG2 multi-agent GroupChat YAML example

examples/ag2/ag2_multi_agent.yaml


9. src/praisonai/pyproject.toml ⚙️ Configuration changes +1/-0

Add AG2 optional dependency extra configuration

src/praisonai/pyproject.toml


10. src/praisonai/tests/integration/ag2/__init__.py Additional files +0/-0

...

src/praisonai/tests/integration/ag2/__init__.py



@qodo-code-review

qodo-code-review bot commented Mar 26, 2026

Code Review by Qodo

🐞 Bugs (5) 📘 Rule violations (0) 📎 Requirement gaps (0) 📐 Spec deviations (0)



Action required

1. CLI rejects ag2 framework 🐞 Bug ✓ Correctness
Description
The repo’s CLI --framework argument does not include ag2 in its allowed choices, so the newly
added AG2 path is unreachable via the documented praisonai --framework ag2 ... commands. Users
following the new examples will get an argparse validation error before any AG2 dispatch runs.
Code

examples/ag2/ag2_basic.yaml[R4-6]

+# Install: pip install "praisonai[ag2]"
+# Run:     praisonai --framework ag2 examples/ag2/ag2_basic.yaml
+#       or praisonai run examples/ag2/ag2_basic.yaml --framework ag2
Evidence
The new examples instruct running with --framework ag2, but the CLI parser restricts --framework
to crewai|autogen|praisonai, which will reject ag2 at argument parsing time.

examples/ag2/ag2_basic.yaml[1-6]
src/praisonai/praisonai/cli/main.py[877-879]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The CLI rejects `--framework ag2` because it is not included in argparse `choices`, so AG2 cannot be used through the documented CLI commands.

### Issue Context
Examples under `examples/ag2/` explicitly instruct `praisonai --framework ag2 ...`, but `cli/main.py` restricts choices.

### Fix Focus Areas
- src/praisonai/praisonai/cli/main.py[877-879]
- src/praisonai/praisonai/cli/main.py[5053-5056]

### Expected fix
- Add `"ag2"` to the `--framework` choices list.
- Update any UI dropdowns (e.g., Gradio) to include `ag2` as well so the feature is reachable from all supported entrypoints.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
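The expected fix above can be illustrated with a tiny argparse sketch. The choices list mirrors the report; the real parser in cli/main.py has many more options:

```python
import argparse

parser = argparse.ArgumentParser(prog="praisonai")
# Hypothetical reconstruction: "ag2" added to the allowed choices so the
# documented `praisonai --framework ag2 ...` invocation parses.
parser.add_argument(
    "--framework",
    choices=["praisonai", "crewai", "autogen", "ag2"],
    default="praisonai",
)

args = parser.parse_args(["--framework", "ag2"])
assert args.framework == "ag2"
```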


2. AG2 masks autogen detection 🐞 Bug ✓ Correctness
Description
AUTOGEN_AVAILABLE/_check_autogen_available() treat any importable autogen module as “AutoGen
v0.2”, but AG2 also installs under the autogen namespace, so installing AG2 can make
framework="autogen" bind to AG2 unintentionally. This can change behavior or break the autogen
path even when users didn’t install the autogen extra.
Code

src/praisonai/praisonai/agents_generator.py[R58-66]

+AG2_AVAILABLE = False
+try:
+    import importlib.metadata as _importlib_metadata
+    _importlib_metadata.distribution('ag2')
+    from autogen import LLMConfig as _AG2LLMConfig  # noqa: F401 — AG2-exclusive class
+    AG2_AVAILABLE = True
+    del _AG2LLMConfig, _importlib_metadata
+except Exception:
+    pass
Evidence
The code explicitly states AG2 installs under autogen, while the existing autogen availability
checks are based on import autogen only; this makes AG2 satisfy the “autogen available” condition
and blurs the framework separation introduced by the PR.

src/praisonai/praisonai/agents_generator.py[41-45]
src/praisonai/praisonai/agents_generator.py[58-66]
src/praisonai/praisonai/agents_generator.py[732-737]
src/praisonai/praisonai/auto.py[69-79]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
AG2 installs under the `autogen` namespace, but current “autogen availability” detection is `import autogen` which can be satisfied by AG2. This makes `framework='autogen'` potentially run against AG2 instead of the intended `pyautogen` dependency.

### Issue Context
This PR adds an explicit `ag2` framework, but the availability checks for `autogen` are not distribution-aware, so framework separation is unreliable when AG2 is installed.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[41-45]
- src/praisonai/praisonai/auto.py[69-79]
- src/praisonai/praisonai/agents_generator.py[58-66]

### Expected fix
- Update autogen v0.2 availability checks to verify the **pyautogen distribution** is installed (e.g., `importlib.metadata.distribution('pyautogen')`) in addition to importing `autogen`.
- Keep AG2 detection distribution-based (`distribution('ag2')`) as it is.
- Ensure `framework='autogen'` and `framework='ag2'` remain distinct even though they share the `autogen` import namespace.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
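The distribution-aware separation the expected fix describes can be sketched like this (a minimal illustration, not the project's actual code):

```python
import importlib.metadata

def distribution_installed(name: str) -> bool:
    """Check the installed *distribution*, not the import namespace."""
    try:
        importlib.metadata.distribution(name)
        return True
    except importlib.metadata.PackageNotFoundError:
        return False

# Both packages import as `autogen`, so the availability flags must be
# keyed on the distribution names to stay distinct.
AUTOGEN_V02_AVAILABLE = distribution_installed("pyautogen")
AG2_AVAILABLE = distribution_installed("ag2")
```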


3. AG2 tests assume autogen 🐞 Bug ⛯ Reliability
Description
New AG2 unit tests patch autogen.* symbols, but autogen is not a base dependency (it’s only in
optional extras), so patch("autogen.X") will raise ModuleNotFoundError when tests run without
installing AG2/AutoGen. This can break CI/test runs in minimal installs.
Code

src/praisonai/tests/unit/test_ag2_adapter.py[R180-185]

+        with patch("praisonai.agents_generator.AG2_AVAILABLE", True), \
+             patch("autogen.LLMConfig", return_value=mock_llm_config) as mock_llmcfg, \
+             patch("autogen.AssistantAgent", return_value=mock_assistant), \
+             patch("autogen.UserProxyAgent", return_value=mock_user_proxy), \
+             patch("autogen.GroupChat", return_value=mock_groupchat), \
+             patch("autogen.GroupChatManager", return_value=mock_manager):
Evidence
autogen is not in the project’s base dependencies, but the tests patch autogen.LLMConfig etc.,
which requires importing the autogen module; if neither optional extra is installed, patch target
import fails before the test can run.

src/praisonai/pyproject.toml[11-22]
src/praisonai/pyproject.toml[91-95]
src/praisonai/tests/unit/test_ag2_adapter.py[180-185]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
AG2 unit tests patch `autogen.*` but `autogen` is an optional dependency; on a minimal install, `patch('autogen.LLMConfig', ...)` fails because the module can’t be imported.

### Issue Context
The project’s base dependencies do not include `pyautogen` or `ag2`; they are optional extras.

### Fix Focus Areas
- src/praisonai/tests/unit/test_ag2_adapter.py[1-40]
- src/praisonai/tests/unit/test_ag2_adapter.py[160-210]
- src/praisonai/tests/integration/ag2/test_ag2_integration.py[1-40]

### Expected fix
Choose one approach:
1) **Stub autogen module** in `sys.modules` (e.g., using `types.ModuleType('autogen')`) with placeholder attributes so `patch('autogen.X')` works even when optional deps aren’t installed.
2) Or **skip defensively**: `pytest.importorskip('autogen')` for tests that rely on patching autogen symbols.

Ensure the test suite behavior matches how other optional-dependency integrations are handled (skip when dependency absent, or fully stub the module).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
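Approach (1) above, stubbing the module so patch targets resolve, can be sketched as follows. The attribute names mirror the symbols the tests patch; this is an illustration, not the project's test fixture:

```python
import sys
import types
from unittest.mock import MagicMock, patch

# Stub the optional `autogen` module so patch("autogen.X") can resolve
# its target even on a minimal install without pyautogen/ag2.
if "autogen" not in sys.modules:
    stub = types.ModuleType("autogen")
    for name in ("LLMConfig", "AssistantAgent", "UserProxyAgent",
                 "GroupChat", "GroupChatManager"):
        setattr(stub, name, MagicMock(name=name))
    sys.modules["autogen"] = stub

# patch now finds the stub (or the real module, if installed) instead of
# raising ModuleNotFoundError at patch-target import time.
with patch("autogen.LLMConfig", return_value="mock-config") as mock_cfg:
    import autogen
    result = autogen.LLMConfig(model="gpt-4o-mini")

assert result == "mock-config"
mock_cfg.assert_called_once_with(model="gpt-4o-mini")
```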



Remediation recommended

4. Base_url priority mismatch 🐞 Bug ✓ Correctness
Description
In _run_ag2, the comment and _resolve() logic state YAML llm should override config_list,
but base_url is resolved with config_list taking precedence over YAML. This can silently route
requests to the wrong endpoint when users specify llm.base_url in YAML.
Code

src/praisonai/praisonai/agents_generator.py[R762-775]

+        # Priority: YAML top-level llm > first role llm > config_list > env vars
+        def _resolve(key, env_var=None, default=None):
+            return (yaml_llm.get(key) or first_role_llm.get(key)
+                    or model_config.get(key)
+                    or (os.environ.get(env_var) if env_var else None)
+                    or default)
+
+        api_type = _resolve("api_type", default="openai").lower()
+        model_name = _resolve("model", default="gpt-4o-mini")
+        api_key = _resolve("api_key", env_var="OPENAI_API_KEY")
+        base_url = (model_config.get("base_url")
+                    or yaml_llm.get("base_url")
+                    or os.environ.get("OPENAI_BASE_URL")
+                    or os.environ.get("OPENAI_API_BASE"))
Evidence
The method defines a clear priority order (YAML > first role > config_list > env) and implements it
via _resolve() for other keys, but base_url is handled separately with model_config checked
before yaml_llm.

src/praisonai/praisonai/agents_generator.py[754-767]
src/praisonai/praisonai/agents_generator.py[772-775]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`_run_ag2` claims YAML llm config overrides `config_list`, but `base_url` is resolved in the opposite order (`config_list` first). This breaks the stated precedence and can send traffic to an unintended base URL.

### Issue Context
Other keys (`api_type`, `model`, `api_key`) use `_resolve()` which matches the comment’s priority.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[754-776]

### Expected fix
- Resolve `base_url` using the same priority as `_resolve()` (YAML top-level > first role > config_list), and then fall back to env vars.
- Optionally, extend `_resolve()` to support multiple env vars (OPENAI_BASE_URL / OPENAI_API_BASE) for `base_url`.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
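The unified precedence the expected fix calls for can be sketched with a standalone resolver (names hypothetical; the real method closes over its YAML and config_list dicts):

```python
import os

def resolve(key, yaml_llm, first_role_llm, model_config, env_vars=(), default=None):
    """Priority: YAML top-level llm > first role llm > config_list > env > default."""
    for source in (yaml_llm, first_role_llm, model_config):
        if source.get(key) is not None:
            return source[key]
    for var in env_vars:
        if os.environ.get(var) is not None:
            return os.environ[var]
    return default

yaml_llm = {"base_url": "https://yaml.example/v1"}
model_config = {"base_url": "https://config-list.example/v1"}

# YAML now wins for base_url, matching the documented precedence.
chosen = resolve("base_url", yaml_llm, {}, model_config,
                 env_vars=("OPENAI_BASE_URL", "OPENAI_API_BASE"))
assert chosen == "https://yaml.example/v1"
```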


5. Termination check case sensitive 🐞 Bug ⛯ Reliability
Description
_run_ag2 only treats uppercase TERMINATE as a termination message, unlike the existing autogen
path which also terminates on a case-insensitive terminate suffix. This can cause AG2 group chats
to run until max_round even when an agent tries to terminate with different casing.
Code

src/praisonai/praisonai/agents_generator.py[R788-792]

+        user_proxy = UserProxyAgent(
+            name="User",
+            human_input_mode="NEVER",
+            is_termination_msg=lambda x: "TERMINATE" in (x.get("content") or ""),
+            code_execution_config=False,
Evidence
AG2 termination detection checks for the exact substring TERMINATE, while _run_autogen includes
a lowercased suffix check, making AG2 more brittle and likely to overrun rounds.

src/praisonai/praisonai/agents_generator.py[788-793]
src/praisonai/praisonai/agents_generator.py[567-571]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
AG2 termination detection is overly strict (uppercase-only), which can prevent early termination and waste rounds/tokens.

### Issue Context
The autogen v0.2 path already has a more robust termination check.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[788-793]
- src/praisonai/praisonai/agents_generator.py[567-571]

### Expected fix
- Make `_run_ag2` termination detection case-insensitive and/or include a `.lower().endswith('terminate')` check similar to `_run_autogen`.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
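A case-insensitive termination predicate along the lines of the expected fix could look like this (a sketch; the real check would be passed as `is_termination_msg` to UserProxyAgent):

```python
import re

def is_termination_msg(msg):
    """Case-insensitive: match 'terminate' at the end, ignoring trailing punctuation."""
    content = (msg.get("content") or "").strip()
    return bool(re.search(r"terminate\W*$", content, flags=re.IGNORECASE))

assert is_termination_msg({"content": "All done. TERMINATE"})
assert is_termination_msg({"content": "Task complete, terminate."})
assert not is_termination_msg({"content": "Do not terminate the process yet"})
assert not is_termination_msg({"content": None})
```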



Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces integration for the ag2 framework, a community fork of AutoGen, into PraisonAI. Key additions include example configurations for basic usage, AWS Bedrock, and multi-agent setups, along with framework detection logic and the _run_ag2 execution method. Review feedback suggests improving the robustness of the implementation by using more specific exception handling, unifying configuration priority logic, optimizing tool registration by moving helper functions out of loops, and removing redundant string operations that could interfere with agent output formatting.

Comment on lines +65 to +66
```python
except Exception:
    pass
```

medium

The except Exception: block is too broad. It can catch unexpected errors that are not related to ag2 not being available, potentially masking real bugs. It's better to catch specific exceptions like importlib.metadata.PackageNotFoundError and ImportError.

Suggested change
```diff
-except Exception:
-    pass
+except (importlib.metadata.PackageNotFoundError, ImportError):
+    pass
```

Comment on lines +757 to +760
```python
first_role_llm = {}
for role_details in config.get("roles", {}).values():
    first_role_llm = role_details.get("llm", {}) or {}
    break
```

medium

The loop to find first_role_llm will always assign the llm config from the first role encountered in the config["roles"] dictionary. Dictionary iteration order is insertion order in Python 3.7+, but relying on this might be brittle. If the intent is to use a specific role's LLM config as a fallback, it should be explicitly stated or handled more robustly (e.g., by checking a specific key like "default_llm_role"). As it stands, it might not pick the intended fallback if roles are not ordered predictably.

Comment on lines +763 to +775
```python
def _resolve(key, env_var=None, default=None):
    return (yaml_llm.get(key) or first_role_llm.get(key)
            or model_config.get(key)
            or (os.environ.get(env_var) if env_var else None)
            or default)

api_type = _resolve("api_type", default="openai").lower()
model_name = _resolve("model", default="gpt-4o-mini")
api_key = _resolve("api_key", env_var="OPENAI_API_KEY")
base_url = (model_config.get("base_url")
            or yaml_llm.get("base_url")
            or os.environ.get("OPENAI_BASE_URL")
            or os.environ.get("OPENAI_API_BASE"))
```

medium

The logic for resolving configuration values (_resolve function and base_url assignment) has an inconsistent priority order. For most keys, yaml_llm is prioritized over first_role_llm and model_config, but for base_url, model_config is prioritized over yaml_llm. This inconsistency can lead to unexpected behavior. It would be clearer and more maintainable to unify the priority logic for all configuration keys.

```python
def _resolve(key, env_vars=None, default=None):
    sources = [
        yaml_llm.get(key),
        first_role_llm.get(key),
        model_config.get(key),
    ]
    if env_vars:
        for env_var in env_vars:
            sources.append(os.environ.get(env_var))
    for value in sources:
        if value is not None:
            return value
    return default

api_type = _resolve("api_type", default="openai").lower()
model_name = _resolve("model", default="gpt-4o-mini")
api_key = _resolve("api_key", env_vars=["OPENAI_API_KEY"])
base_url = _resolve("base_url", env_vars=["OPENAI_BASE_URL", "OPENAI_API_BASE"])
```

Comment on lines +818 to +822
```python
def make_tool_fn(f):
    def tool_fn(**kwargs):
        return f(**kwargs) if callable(f) else str(f)
    tool_fn.__name__ = tool_name
    return tool_fn
```

medium

The make_tool_fn function is defined inside a loop, which creates a new function object on each iteration. This can lead to unnecessary overhead, especially if many tools are being registered. It's more efficient to define such helper functions outside the loop.

Suggested change
```diff
-def make_tool_fn(f):
-    def tool_fn(**kwargs):
-        return f(**kwargs) if callable(f) else str(f)
-    tool_fn.__name__ = tool_name
-    return tool_fn
+def _tool_wrapper(f_to_wrap, name_for_tool):
+    def tool_fn(**kwargs):
+        return f_to_wrap(**kwargs) if callable(f_to_wrap) else str(f_to_wrap)
+    tool_fn.__name__ = name_for_tool
+    return tool_fn
+wrapped = _tool_wrapper(func, tool_name)
```
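As a usage sketch of a hoisted wrapper like the one suggested above (the tool table and names here are hypothetical):

```python
def _tool_wrapper(f_to_wrap, name_for_tool):
    """Defined once, outside any loop, then reused per registered tool."""
    def tool_fn(**kwargs):
        return f_to_wrap(**kwargs) if callable(f_to_wrap) else str(f_to_wrap)
    tool_fn.__name__ = name_for_tool
    return tool_fn

# Hypothetical tool table standing in for the registered tool functions.
tools = {"add": lambda a, b: a + b, "answer": 42}
wrapped = {name: _tool_wrapper(fn, name) for name, fn in tools.items()}

assert wrapped["add"].__name__ == "add"
assert wrapped["add"](a=2, b=3) == 5
assert wrapped["answer"]() == "42"  # non-callables are stringified
```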

```python
result_content = ""
summary = getattr(chat_result, "summary", None)
if summary and isinstance(summary, str) and summary.strip():
    result_content = _re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', summary, flags=_re.IGNORECASE).strip().rstrip('.')
```

medium

The rstrip('.') call is redundant and potentially incorrect after the _re.sub operation. The regular expression r'[\s\.\,]*TERMINATE[\s\.\,]*$' already handles stripping trailing periods and other punctuation around the "TERMINATE" keyword. Applying rstrip('.') afterwards could inadvertently remove a legitimate trailing period from the actual content if the original string did not end with "TERMINATE" but happened to end with a period.

Suggested change
```diff
-result_content = _re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', summary, flags=_re.IGNORECASE).strip().rstrip('.')
+result_content = _re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', summary, flags=_re.IGNORECASE).strip()
```

```python
        continue
    content = (msg.get("content") or "").strip()
    if content:
        result_content = _re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', content, flags=_re.IGNORECASE).strip().rstrip('.')
```

medium

The rstrip('.') call is redundant and potentially incorrect after the _re.sub operation. The regular expression r'[\s\.\,]*TERMINATE[\s\.\,]*$' already handles stripping trailing periods and other punctuation around the "TERMINATE" keyword. Applying rstrip('.') afterwards could inadvertently remove a legitimate trailing period from the actual content if the original string did not end with "TERMINATE" but happened to end with a period.

Suggested change
```diff
-result_content = _re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', content, flags=_re.IGNORECASE).strip().rstrip('.')
+result_content = _re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', content, flags=_re.IGNORECASE).strip()
```
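The redundancy described in these two review comments can be verified with a quick sketch:

```python
import re

PAT = re.compile(r'[\s\.\,]*TERMINATE[\s\.\,]*$', flags=re.IGNORECASE)

# The regex already strips punctuation around TERMINATE:
assert PAT.sub('', "The answer is 42. TERMINATE").strip() == "The answer is 42"

# Without TERMINATE, a legitimate trailing period should survive,
# but an extra rstrip('.') removes it:
plain = "The answer is 42."
assert PAT.sub('', plain).strip() == "The answer is 42."
assert PAT.sub('', plain).strip().rstrip('.') == "The answer is 42"
```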

Comment on lines +106 to +107
```python
except Exception:
    _ag2_available = False
```

medium

The except Exception: block is too broad. It can catch unexpected errors that are not related to ag2 not being available, potentially masking real bugs. It's better to catch specific exceptions like importlib.metadata.PackageNotFoundError and ImportError.

Suggested change
```diff
-except Exception:
-    _ag2_available = False
+except (importlib.metadata.PackageNotFoundError, ImportError):
+    _ag2_available = False
```

Comment on lines +106 to +107
```python
def _make_generator(self, framework, ag2_available=True):
    """Create AgentsGenerator with mocked availability flags."""
```

medium

The except Exception: block in the _check_ag2_available helper function is too broad. In a test context, it's especially important to catch specific exceptions to ensure that the test is failing for the expected reason (e.g., PackageNotFoundError or ImportError) and not masking other potential issues.

```python
except (importlib.metadata.PackageNotFoundError, ImportError):
    _ag2_available = False
```

Comment on lines +4 to +6
# Install: pip install "praisonai[ag2]"
# Run: praisonai --framework ag2 examples/ag2/ag2_basic.yaml
# or praisonai run examples/ag2/ag2_basic.yaml --framework ag2
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Action required

1. Cli rejects ag2 framework 🐞 Bug ✓ Correctness

The repo’s CLI --framework argument does not include ag2 in its allowed choices, so the newly
added AG2 path is unreachable via the documented praisonai --framework ag2 ... commands. Users
following the new examples will get an argparse validation error before any AG2 dispatch runs.
Agent Prompt
### Issue description
The CLI rejects `--framework ag2` because it is not included in argparse `choices`, so AG2 cannot be used through the documented CLI commands.

### Issue Context
Examples under `examples/ag2/` explicitly instruct `praisonai --framework ag2 ...`, but `cli/main.py` restricts choices.

### Fix Focus Areas
- src/praisonai/praisonai/cli/main.py[877-879]
- src/praisonai/praisonai/cli/main.py[5053-5056]

### Expected fix
- Add `"ag2"` to the `--framework` choices list.
- Update any UI dropdowns (e.g., Gradio) to include `ag2` as well so the feature is reachable from all supported entrypoints.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines +58 to +66
AG2_AVAILABLE = False
try:
import importlib.metadata as _importlib_metadata
_importlib_metadata.distribution('ag2')
from autogen import LLMConfig as _AG2LLMConfig # noqa: F401 — AG2-exclusive class
AG2_AVAILABLE = True
del _AG2LLMConfig, _importlib_metadata
except Exception:
pass
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Action required

2. Ag2 masks autogen detection 🐞 Bug ✓ Correctness

AUTOGEN_AVAILABLE/_check_autogen_available() treat any importable autogen module as “AutoGen
v0.2”, but AG2 also installs under the autogen namespace, so installing AG2 can make
framework="autogen" bind to AG2 unintentionally. This can change behavior or break the autogen
path even when users didn’t install the autogen extra.
Agent Prompt
### Issue description
AG2 installs under the `autogen` namespace, but current “autogen availability” detection is `import autogen` which can be satisfied by AG2. This makes `framework='autogen'` potentially run against AG2 instead of the intended `pyautogen` dependency.

### Issue Context
This PR adds an explicit `ag2` framework, but the availability checks for `autogen` are not distribution-aware, so framework separation is unreliable when AG2 is installed.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[41-45]
- src/praisonai/praisonai/auto.py[69-79]
- src/praisonai/praisonai/agents_generator.py[58-66]

### Expected fix
- Update autogen v0.2 availability checks to verify the **pyautogen distribution** is installed (e.g., `importlib.metadata.distribution('pyautogen')`) in addition to importing `autogen`.
- Keep AG2 detection distribution-based (`distribution('ag2')`) as it is.
- Ensure `framework='autogen'` and `framework='ag2'` remain distinct even though they share the `autogen` import namespace.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines +180 to +185
with patch("praisonai.agents_generator.AG2_AVAILABLE", True), \
patch("autogen.LLMConfig", return_value=mock_llm_config) as mock_llmcfg, \
patch("autogen.AssistantAgent", return_value=mock_assistant), \
patch("autogen.UserProxyAgent", return_value=mock_user_proxy), \
patch("autogen.GroupChat", return_value=mock_groupchat), \
patch("autogen.GroupChatManager", return_value=mock_manager):
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Action required

3. Ag2 tests assume autogen 🐞 Bug ⛯ Reliability

New AG2 unit tests patch autogen.* symbols, but autogen is not a base dependency (it’s only in
optional extras), so patch("autogen.X") will raise ModuleNotFoundError when tests run without
installing AG2/AutoGen. This can break CI/test runs in minimal installs.
Agent Prompt
### Issue description
AG2 unit tests patch `autogen.*` but `autogen` is an optional dependency; on a minimal install, `patch('autogen.LLMConfig', ...)` fails because the module can’t be imported.

### Issue Context
The project’s base dependencies do not include `pyautogen` or `ag2`; they are optional extras.

### Fix Focus Areas
- src/praisonai/tests/unit/test_ag2_adapter.py[1-40]
- src/praisonai/tests/unit/test_ag2_adapter.py[160-210]
- src/praisonai/tests/integration/ag2/test_ag2_integration.py[1-40]

### Expected fix
Choose one approach:
1) **Stub autogen module** in `sys.modules` (e.g., using `types.ModuleType('autogen')`) with placeholder attributes so `patch('autogen.X')` works even when optional deps aren’t installed.
2) Or **skip defensively**: `pytest.importorskip('autogen')` for tests that rely on patching autogen symbols.

Ensure the test suite behavior matches how other optional-dependency integrations are handled (skip when dependency absent, or fully stub the module).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

github-actions bot added a commit that referenced this pull request Mar 31, 2026
- Replace pyautogen==0.2.29 with ag2==0.2.29 in autogen extra dependency
- Update integration test documentation to reference ag2 instead of pyautogen
- Fix comment referencing pyautogen numpy conflicts

This completes the AG2 migration that was started in PR #1156, ensuring
backward compatibility while moving to the new ag2 library.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
