feat(ag2): add AG2 framework backend integration #1156
Port of PR #1143 (by @faridun-ag2) to the main branch. AG2 (the community fork of AutoGen, PyPI: `ag2`) is added as a new framework option alongside `praisonai`, `crewai`, and `autogen`.

Changes:
- `agents_generator.py`: AG2 detection + `_run_ag2()` with LLMConfig, GroupChat orchestration, Bedrock support, TERMINATE cleanup
- `auto.py`: AG2 lazy availability check + validation
- `pyproject.toml`: `[ag2]` optional dependency extra
- `examples/ag2/`: basic, multi-agent, and Bedrock YAML examples
- tests: 16 unit tests + 9 integration tests

Co-authored-by: Faridun Mirzoev <faridun@ag2.ai>
📝 Walkthrough: This PR adds comprehensive AG2 framework support to PraisonAI, including three example YAML configurations, runtime detection and framework validation, and a new `_run_ag2()` execution method.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Config as YAML Config
    participant AG as AgentsGenerator
    participant LLM as autogen.LLMConfig
    participant Assist as AssistantAgent(s)
    participant User as UserProxyAgent
    participant Chat as GroupChat
    participant Manager as GroupChatManager
    Config->>AG: Parse roles & tasks
    AG->>LLM: Resolve model config (OpenAI/Bedrock)
    AG->>Assist: Create one per role
    AG->>Assist: Register tools (LLM + execution)
    AG->>User: Create with tool handlers
    AG->>Chat: Assemble agents & max_rounds
    AG->>Manager: Initialize with GroupChat
    User->>Manager: initiate_chat(initial_message)
    Manager->>Assist: Exchange messages in loop
    Assist->>User: Tool calls & responses
    User->>Manager: Termination condition met
    Manager-->>AG: Return summary or messages
    AG-->>AG: Extract & format output
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Review Summary by Qodo

Add AG2 framework backend integration to PraisonAI.

Description:
• Add AG2 framework backend integration as new orchestration option
• Implement `_run_ag2()` method with LLMConfig, GroupChat, and Bedrock support
• Add AG2 availability detection and framework validation in initialization
• Include 3 YAML examples (basic, multi-agent, Bedrock) and comprehensive tests
• Update `pyproject.toml` with `[ag2]` optional dependency extra

Diagram:

```mermaid
flowchart LR
    A["PraisonAI Config"] -->|"framework: ag2"| B["AgentsGenerator"]
    B -->|"AG2_AVAILABLE check"| C["_run_ag2()"]
    C -->|"LLMConfig"| D["AssistantAgent + UserProxyAgent"]
    D -->|"GroupChat orchestration"| E["GroupChatManager"]
    E -->|"initiate_chat()"| F["AG2 Output"]
    C -->|"Bedrock support"| G["AWS Bedrock LLM"]
```
File Changes
1. src/praisonai/praisonai/agents_generator.py
Code Review by Qodo
Code Review
This pull request introduces integration for the ag2 framework, a community fork of AutoGen, into PraisonAI. Key additions include example configurations for basic usage, AWS Bedrock, and multi-agent setups, along with framework detection logic and the _run_ag2 execution method. Review feedback suggests improving the robustness of the implementation by using more specific exception handling, unifying configuration priority logic, optimizing tool registration by moving helper functions out of loops, and removing redundant string operations that could interfere with agent output formatting.
```python
except Exception:
    pass
```

The `except Exception:` block is too broad. It can catch unexpected errors that are not related to ag2 not being available, potentially masking real bugs. It's better to catch specific exceptions like `importlib.metadata.PackageNotFoundError` and `ImportError`:

```python
except (importlib.metadata.PackageNotFoundError, ImportError):
    pass
```
```python
first_role_llm = {}
for role_details in config.get("roles", {}).values():
    first_role_llm = role_details.get("llm", {}) or {}
    break
```

The loop to find `first_role_llm` will always assign the `llm` config from the first role encountered in the `config["roles"]` dictionary. Dictionary iteration order is insertion order in Python 3.7+, but relying on this might be brittle. If the intent is to use a specific role's LLM config as a fallback, it should be explicitly stated or handled more robustly (e.g., by checking a specific key like `"default_llm_role"`). As it stands, it might not pick the intended fallback if roles are not ordered predictably.
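One way the explicit-fallback idea could be sketched — note the `default_llm_role` key and the `pick_fallback_llm` helper are hypothetical illustrations, not part of the current schema or code:

```python
def pick_fallback_llm(config):
    # Prefer an explicitly named default role over whichever role
    # happens to appear first in the YAML mapping.
    roles = config.get("roles", {}) or {}
    default_role = config.get("default_llm_role")  # hypothetical key
    if default_role in roles:
        return roles[default_role].get("llm", {}) or {}
    # Otherwise fall back to insertion order, as the PR does today.
    for role_details in roles.values():
        return role_details.get("llm", {}) or {}
    return {}
```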
```python
def _resolve(key, env_var=None, default=None):
    return (yaml_llm.get(key) or first_role_llm.get(key)
            or model_config.get(key)
            or (os.environ.get(env_var) if env_var else None)
            or default)

api_type = _resolve("api_type", default="openai").lower()
model_name = _resolve("model", default="gpt-4o-mini")
api_key = _resolve("api_key", env_var="OPENAI_API_KEY")
base_url = (model_config.get("base_url")
            or yaml_llm.get("base_url")
            or os.environ.get("OPENAI_BASE_URL")
            or os.environ.get("OPENAI_API_BASE"))
```

The logic for resolving configuration values (the `_resolve` function and the `base_url` assignment) has an inconsistent priority order. For most keys, `yaml_llm` is prioritized over `first_role_llm` and `model_config`, but for `base_url`, `model_config` is prioritized over `yaml_llm`. This inconsistency can lead to unexpected behavior. It would be clearer and more maintainable to unify the priority logic for all configuration keys.
```python
def _resolve(key, env_vars=None, default=None):
    sources = [
        yaml_llm.get(key),
        first_role_llm.get(key),
        model_config.get(key),
    ]
    if env_vars:
        for env_var in env_vars:
            sources.append(os.environ.get(env_var))
    for value in sources:
        if value is not None:
            return value
    return default

api_type = _resolve("api_type", default="openai").lower()
model_name = _resolve("model", default="gpt-4o-mini")
api_key = _resolve("api_key", env_vars=["OPENAI_API_KEY"])
base_url = _resolve("base_url", env_vars=["OPENAI_BASE_URL", "OPENAI_API_BASE"])
```

```python
def make_tool_fn(f):
    def tool_fn(**kwargs):
        return f(**kwargs) if callable(f) else str(f)
    tool_fn.__name__ = tool_name
    return tool_fn
```
The `make_tool_fn` function is defined inside a loop, which creates a new function object on each iteration. This can lead to unnecessary overhead, especially if many tools are being registered. It's more efficient to define such helper functions outside the loop:

```python
def _tool_wrapper(f_to_wrap, name_for_tool):
    def tool_fn(**kwargs):
        return f_to_wrap(**kwargs) if callable(f_to_wrap) else str(f_to_wrap)
    tool_fn.__name__ = name_for_tool
    return tool_fn

wrapped = _tool_wrapper(func, tool_name)
```
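A quick standalone check that the shared-wrapper pattern preserves each tool's behavior and name (the tool set below is illustrative):

```python
def _tool_wrapper(f_to_wrap, name_for_tool):
    # Wrap a tool (callable or constant) behind a uniform **kwargs interface.
    def tool_fn(**kwargs):
        return f_to_wrap(**kwargs) if callable(f_to_wrap) else str(f_to_wrap)
    tool_fn.__name__ = name_for_tool
    return tool_fn

# Register several tools with a single helper defined outside the loop.
tools = {"add": lambda a, b: a + b, "greeting": "hello"}
wrapped = {name: _tool_wrapper(fn, name) for name, fn in tools.items()}

assert wrapped["add"](a=2, b=3) == 5     # callable tools are invoked
assert wrapped["greeting"]() == "hello"  # constants are stringified
assert wrapped["add"].__name__ == "add"  # name survives for registration
```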
```python
result_content = ""
summary = getattr(chat_result, "summary", None)
if summary and isinstance(summary, str) and summary.strip():
    result_content = _re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', summary, flags=_re.IGNORECASE).strip().rstrip('.')
```

The `rstrip('.')` call is redundant and potentially incorrect after the `_re.sub` operation. The regular expression `r'[\s\.\,]*TERMINATE[\s\.\,]*$'` already handles stripping trailing periods and other punctuation around the "TERMINATE" keyword. Applying `rstrip('.')` afterwards could inadvertently remove a legitimate trailing period from the actual content if the original string did not end with "TERMINATE" but happened to end with a period.

```python
result_content = _re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', summary, flags=_re.IGNORECASE).strip()
```
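A quick standalone illustration of the concern, using the same regex as the PR (the function names are illustrative):

```python
import re

TERMINATE_RE = re.compile(r'[\s\.\,]*TERMINATE[\s\.\,]*$', re.IGNORECASE)

def clean_with_rstrip(text):
    return TERMINATE_RE.sub('', text).strip().rstrip('.')

def clean_without_rstrip(text):
    return TERMINATE_RE.sub('', text).strip()

# When the text never contained TERMINATE, rstrip('.') eats a real period:
assert clean_with_rstrip("The answer is 42.") == "The answer is 42"
assert clean_without_rstrip("The answer is 42.") == "The answer is 42."
# The regex alone already strips TERMINATE plus surrounding punctuation:
assert clean_without_rstrip("All tasks complete, TERMINATE.") == "All tasks complete"
```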
```python
continue
content = (msg.get("content") or "").strip()
if content:
    result_content = _re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', content, flags=_re.IGNORECASE).strip().rstrip('.')
```

The `rstrip('.')` call is redundant and potentially incorrect after the `_re.sub` operation, for the same reason as above: the regex already strips trailing punctuation around "TERMINATE", and `rstrip('.')` could remove a legitimate trailing period when the content did not end with "TERMINATE".

```python
result_content = _re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', content, flags=_re.IGNORECASE).strip()
```
```python
except Exception:
    _ag2_available = False
```

The `except Exception:` block is too broad. It can catch unexpected errors that are not related to ag2 not being available, potentially masking real bugs. It's better to catch specific exceptions like `importlib.metadata.PackageNotFoundError` and `ImportError`:

```python
except (importlib.metadata.PackageNotFoundError, ImportError):
    _ag2_available = False
```
```python
def _make_generator(self, framework, ag2_available=True):
    """Create AgentsGenerator with mocked availability flags."""
```

The `except Exception:` block in the `_check_ag2_available` helper function is too broad. In a test context, it's especially important to catch specific exceptions to ensure that the test is failing for the expected reason (e.g., `PackageNotFoundError` or `ImportError`) and not masking other potential issues:

```python
except (importlib.metadata.PackageNotFoundError, ImportError):
    _ag2_available = False
```

```yaml
# Install: pip install "praisonai[ag2]"
# Run:     praisonai --framework ag2 examples/ag2/ag2_basic.yaml
#   or     praisonai run examples/ag2/ag2_basic.yaml --framework ag2
```
There was a problem hiding this comment.
1. CLI rejects ag2 framework 🐞 Bug ✓ Correctness
The repo’s CLI --framework argument does not include ag2 in its allowed choices, so the newly added AG2 path is unreachable via the documented praisonai --framework ag2 ... commands. Users following the new examples will get an argparse validation error before any AG2 dispatch runs.
Agent Prompt
### Issue description
The CLI rejects `--framework ag2` because it is not included in argparse `choices`, so AG2 cannot be used through the documented CLI commands.
### Issue Context
Examples under `examples/ag2/` explicitly instruct `praisonai --framework ag2 ...`, but `cli/main.py` restricts choices.
### Fix Focus Areas
- src/praisonai/praisonai/cli/main.py[877-879]
- src/praisonai/praisonai/cli/main.py[5053-5056]
### Expected fix
- Add `"ag2"` to the `--framework` choices list.
- Update any UI dropdowns (e.g., Gradio) to include `ag2` as well so the feature is reachable from all supported entrypoints.
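A minimal sketch of the first fix item — the real parser in `cli/main.py` defines many more arguments, so this only illustrates extending `choices`:

```python
import argparse

def build_parser():
    # Illustrative subset of the praisonai CLI parser.
    parser = argparse.ArgumentParser(prog="praisonai")
    parser.add_argument(
        "--framework",
        choices=["praisonai", "crewai", "autogen", "ag2"],  # "ag2" newly added
        default="praisonai",
    )
    return parser

args = build_parser().parse_args(["--framework", "ag2"])
assert args.framework == "ag2"  # no longer rejected by argparse
```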
```python
AG2_AVAILABLE = False
try:
    import importlib.metadata as _importlib_metadata
    _importlib_metadata.distribution('ag2')
    from autogen import LLMConfig as _AG2LLMConfig  # noqa: F401 — AG2-exclusive class
    AG2_AVAILABLE = True
    del _AG2LLMConfig, _importlib_metadata
except Exception:
    pass
```

2. AG2 masks autogen detection 🐞 Bug ✓ Correctness
AUTOGEN_AVAILABLE/_check_autogen_available() treat any importable autogen module as “AutoGen v0.2”, but AG2 also installs under the autogen namespace, so installing AG2 can make framework="autogen" bind to AG2 unintentionally. This can change behavior or break the autogen path even when users didn’t install the autogen extra.
Agent Prompt
### Issue description
AG2 installs under the `autogen` namespace, but current “autogen availability” detection is `import autogen` which can be satisfied by AG2. This makes `framework='autogen'` potentially run against AG2 instead of the intended `pyautogen` dependency.
### Issue Context
This PR adds an explicit `ag2` framework, but the availability checks for `autogen` are not distribution-aware, so framework separation is unreliable when AG2 is installed.
### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[41-45]
- src/praisonai/praisonai/auto.py[69-79]
- src/praisonai/praisonai/agents_generator.py[58-66]
### Expected fix
- Update autogen v0.2 availability checks to verify the **pyautogen distribution** is installed (e.g., `importlib.metadata.distribution('pyautogen')`) in addition to importing `autogen`.
- Keep AG2 detection distribution-based (`distribution('ag2')`) as it is.
- Ensure `framework='autogen'` and `framework='ag2'` remain distinct even though they share the `autogen` import namespace.
```python
with patch("praisonai.agents_generator.AG2_AVAILABLE", True), \
     patch("autogen.LLMConfig", return_value=mock_llm_config) as mock_llmcfg, \
     patch("autogen.AssistantAgent", return_value=mock_assistant), \
     patch("autogen.UserProxyAgent", return_value=mock_user_proxy), \
     patch("autogen.GroupChat", return_value=mock_groupchat), \
     patch("autogen.GroupChatManager", return_value=mock_manager):
```

3. AG2 tests assume autogen 🐞 Bug ⛯ Reliability
New AG2 unit tests patch `autogen.*` symbols, but `autogen` is not a base dependency (it's only in optional extras), so `patch("autogen.X")` will raise `ModuleNotFoundError` when tests run without installing AG2/AutoGen. This can break CI/test runs in minimal installs.
Agent Prompt
### Issue description
AG2 unit tests patch `autogen.*` but `autogen` is an optional dependency; on a minimal install, `patch('autogen.LLMConfig', ...)` fails because the module can’t be imported.
### Issue Context
The project’s base dependencies do not include `pyautogen` or `ag2`; they are optional extras.
### Fix Focus Areas
- src/praisonai/tests/unit/test_ag2_adapter.py[1-40]
- src/praisonai/tests/unit/test_ag2_adapter.py[160-210]
- src/praisonai/tests/integration/ag2/test_ag2_integration.py[1-40]
### Expected fix
Choose one approach:
1) **Stub autogen module** in `sys.modules` (e.g., using `types.ModuleType('autogen')`) with placeholder attributes so `patch('autogen.X')` works even when optional deps aren’t installed.
2) Or **skip defensively**: `pytest.importorskip('autogen')` for tests that rely on patching autogen symbols.
Ensure the test suite behavior matches how other optional-dependency integrations are handled (skip when dependency absent, or fully stub the module).
Follow-up commit:
- Replace `pyautogen==0.2.29` with `ag2==0.2.29` in the `autogen` extra dependency
- Update integration test documentation to reference `ag2` instead of `pyautogen`
- Fix comment referencing pyautogen numpy conflicts

This completes the AG2 migration that was started in PR #1156, ensuring backward compatibility while moving to the new ag2 library.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
Port of PR #1143 (by @faridun-ag2) to the `main` branch. The original PR targeted `develop` (1713 commits behind `main`). This PR applies the same purely additive AG2 changes to the current `main` codebase.

Changes
- `agents_generator.py`: AG2 detection via `importlib.metadata` + `_run_ag2()` with LLMConfig, GroupChat orchestration, Bedrock support
- `auto.py`: AG2 lazy availability check + framework validation
- `pyproject.toml`: `[ag2]` optional dependency extra (`ag2>=0.11.0`)
- `examples/ag2/`: 3 YAML examples (basic, multi-agent, Bedrock)
- `tests/`: 16 unit + 9 integration tests

Verification

Co-authored-by: Faridun Mirzoev <faridun@ag2.ai>