30 changes: 30 additions & 0 deletions examples/ag2/ag2_basic.yaml
@@ -0,0 +1,30 @@
framework: ag2
topic: "Research the latest developments in AI agents"

# Install: pip install "praisonai[ag2]"
# Run: praisonai --framework ag2 examples/ag2/ag2_basic.yaml
# or praisonai run examples/ag2/ag2_basic.yaml --framework ag2
Comment on lines +4 to +6
Copilot AI Apr 9, 2026


This example instructs users to pass --framework ag2, but the CLI currently restricts --framework choices to ["crewai", "autogen", "praisonai"] (praisonai/cli.py). Either update the CLI/UI to accept ag2, or adjust these run instructions to omit --framework and rely on framework: ag2 in the YAML.

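The argparse behaviour this comment describes is easy to check in isolation. A minimal sketch of the fix (the reduced parser setup here is an assumption for illustration, not the actual cli.py code):

```python
import argparse

# Hypothetical reduced parser: with "ag2" absent from choices, argparse
# rejects `--framework ag2` before any framework code runs; adding it to
# the choices list is the one-line CLI-side fix.
parser = argparse.ArgumentParser(prog="praisonai")
parser.add_argument(
    "--framework",
    choices=["crewai", "autogen", "praisonai", "ag2"],  # "ag2" added here
    default=None,
)

args = parser.parse_args(["--framework", "ag2"])
print(args.framework)  # ag2
```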

roles:
  research_agent:
    role: "AI Research Specialist"
    goal: "Research and summarise the latest developments in AI agent frameworks"
    backstory: |
      You are an experienced AI researcher with deep knowledge of multi-agent
      systems, large language models, and the latest trends in AI tooling.
      You excel at synthesising complex technical topics into clear summaries.
    tasks:
      research_task:
        description: |
          Research and summarise the latest developments in AI agent frameworks
          for the topic: {topic}

          Focus on:
          1. Key frameworks and their unique capabilities
          2. Recent innovations and improvements
          3. Community adoption and ecosystem growth
          4. Practical use cases and success stories
        expected_output: |
          A concise research summary covering the key developments,
          major frameworks, and practical insights. Include 3-5 bullet
          points of the most important findings.
42 changes: 42 additions & 0 deletions examples/ag2/ag2_bedrock.yaml
@@ -0,0 +1,42 @@
framework: ag2
topic: "Cloud-native AI deployment strategies on AWS"

# AG2 exclusive feature: native AWS Bedrock support via LLMConfig(api_type="bedrock")
#
# Prerequisites:
# pip install "praisonai[ag2]"
# aws configure (or set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION)
#
# Run:
# praisonai --framework ag2 examples/ag2/ag2_bedrock.yaml
#
Comment on lines +10 to +12


Action required

1. CLI rejects ag2 option 🐞 Bug ≡ Correctness

Examples instruct users to run praisonai --framework ag2 ..., but the CLI parser only allows
crewai|autogen|praisonai, so AG2 cannot be used from the CLI as documented.
Agent Prompt
### Issue description
The CLI rejects `--framework ag2` because `ag2` is missing from the argparse `choices` list, even though the PR adds AG2 dispatch and examples document using `--framework ag2`.

### Issue Context
Users following `examples/ag2/*.yaml` will hit an argparse validation error before PraisonAI can run the AG2 adapter.

### Fix Focus Areas
- src/praisonai/praisonai/cli.py[512-514]
- examples/ag2/ag2_bedrock.yaml[10-12]
- src/praisonai/praisonai/agents_generator.py[328-347]


Comment on lines +10 to +12

Copilot AI Apr 9, 2026


The run instructions use --framework ag2, but the CLI currently restricts --framework to ["crewai", "autogen", "praisonai"]. Either extend CLI/UI choices to include ag2, or update this example to omit the flag and rely on framework: ag2 in YAML.

# The AG2 adapter detects api_type="bedrock" from the llm config and uses
# LLMConfig(api_type="bedrock", model=...) — no OPENAI_API_KEY required.
# AWS credentials are sourced from boto3 (env vars, ~/.aws/credentials, IAM role).

roles:
  cloud_architect:
    role: "AWS Cloud Architect"
    goal: "Design and explain cloud-native AI deployment strategies on AWS"
    backstory: |
      You are an AWS Solutions Architect specialising in AI/ML workloads.
      You have deep expertise in Amazon Bedrock, SageMaker, ECS, and Lambda,
      and you help organisations deploy AI agents at scale securely and cost-effectively.
    llm:
      model: "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"


P1 LiteLLM bedrock/ prefix is incompatible with AG2's native Bedrock client

bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0 uses the LiteLLM provider-prefix convention. When _run_ag2 builds the LLMConfig with api_type="bedrock", it passes this string verbatim as the model name to AG2's own Bedrock integration (which calls the boto3 Bedrock API directly). AG2's client expects the bare model ID — the bedrock/ prefix will likely cause an UnknownModelException or similar error from the Bedrock API.

Suggested change
model: "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"
model: "anthropic.claude-3-5-sonnet-20241022-v2:0"

      api_type: "bedrock"
      aws_region: "us-east-1"
Comment on lines +25 to +28
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if aws_region is handled in _run_ag2
rg -n "aws_region" src/praisonai/praisonai/agents_generator.py

Repository: MervinPraison/PraisonAI

Length of output: 49


🏁 Script executed:

# Search for _run_ag2 function definition
rg -n "def _run_ag2" src/praisonai/


Length of output: 165


🏁 Script executed:

# Search for how llm config dict is being processed
rg -n "llm\[" src/praisonai/praisonai/agents_generator.py | head -20


Length of output: 49


🏁 Script executed:

# Look for LLMConfig instantiation with bedrock
rg -n "LLMConfig" src/praisonai/praisonai/agents_generator.py | head -20


Length of output: 500


🏁 Script executed:

# Search for bedrock or aws handling in the codebase
rg -n "bedrock" src/praisonai/ -i | head -20


Length of output: 2028


🏁 Script executed:

# Read the relevant section around line 474-483
sed -n '470,495p' src/praisonai/praisonai/agents_generator.py


Length of output: 1239


🏁 Script executed:

# Check what's in llm_config_entry before LLMConfig is called
sed -n '428,500p' src/praisonai/praisonai/agents_generator.py | head -80


Length of output: 3384


🏁 Script executed:

# Check the test to understand expected behavior
cat -n src/praisonai/tests/unit/test_ag2_adapter.py | sed -n '197,240p'


Length of output: 2378


🌐 Web query:

AG2 autogen LLMConfig bedrock aws_region parameter

💡 Result:

In AG2 (AutoGen), the Bedrock region is set via the aws_region key inside the Bedrock entry you pass to LLMConfig (i.e., inside config_list / the Bedrock config dict). [1][2]

from autogen import LLMConfig

llm_config = LLMConfig(
    config_list={
        "api_type": "bedrock",
        "model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
        "aws_region": "us-east-1",
        "aws_access_key": "...",
        "aws_secret_key": "...",
        # optional: "aws_session_token": "...",
        # optional: "aws_profile_name": "default",
    }
)

AG2’s Bedrock examples also show pulling it from env, e.g. os.getenv("AWS_REGION", "eu-north-1"). [2]

Sources: AG2 Amazon Bedrock guide [1], AG2 Bedrock structured output notebook [2].


🏁 Script executed:

# Check if aws_region is mentioned anywhere in the repository in context of LLMConfig
rg -n "aws_region" src/praisonai/ -A 2 -B 2


Length of output: 49


🏁 Script executed:

# Check if there are any other bedrock config examples in the repo
fd -e yaml -o -e yml | xargs grep -l "bedrock" 2>/dev/null

Repository: MervinPraison/PraisonAI

Length of output: 185


🏁 Script executed:

# Look for any bedrock-related documentation or comments in the code
rg -n "Bedrock" src/praisonai/praisonai/agents_generator.py -A 3 -B 3


Length of output: 407


Extract and propagate aws_region to AG2's LLMConfig for Bedrock.

The aws_region field in the YAML config is not extracted or passed to AG2's LLMConfig. AG2's Bedrock integration requires aws_region as a key in the config dict passed to LLMConfig. Without it, the integration will fall back to the AWS_DEFAULT_REGION environment variable, making the YAML configuration incomplete.

Update _run_ag2() to extract aws_region via the _resolve() function and include it in the llm_config_entry dict for Bedrock:

if api_type == "bedrock":
    aws_region = _resolve("aws_region", env_var="AWS_DEFAULT_REGION", default="us-east-1")
    llm_config_entry = {"api_type": "bedrock", "model": model_name, "aws_region": aws_region}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ag2/ag2_bedrock.yaml` around lines 25 - 28, The YAML's aws_region is
not being extracted and passed into AG2's LLMConfig for Bedrock; update
_run_ag2() to call _resolve("aws_region", env_var="AWS_DEFAULT_REGION",
default="us-east-1") when api_type == "bedrock" and add that value into the
llm_config_entry dict (e.g., llm_config_entry = {"api_type":"bedrock","model":
model_name,"aws_region": aws_region}) so the Bedrock integration receives the
region from the config instead of relying on AWS_DEFAULT_REGION.

    tasks:
      architecture_task:
        description: |
          Design a cloud-native deployment strategy for AI agents on AWS for: {topic}

          Cover:
          1. Recommended AWS services (Bedrock, ECS, Lambda, etc.)
          2. Scalability and cost optimisation patterns
          3. Security and compliance considerations
          4. A simple reference architecture overview
        expected_output: |
          A concise architecture guide with service recommendations,
          a high-level deployment diagram description, and key
          best practices for production AI agent deployments on AWS.
54 changes: 54 additions & 0 deletions examples/ag2/ag2_multi_agent.yaml
@@ -0,0 +1,54 @@
framework: ag2
topic: "The impact of open-source AI on enterprise software development"

# Install: pip install "praisonai[ag2]"
# Run: praisonai --framework ag2 examples/ag2/ag2_multi_agent.yaml
# or praisonai run examples/ag2/ag2_multi_agent.yaml --framework ag2
Comment on lines +4 to +6

Copilot AI Apr 9, 2026


This example tells users to pass --framework ag2, but the CLI currently does not list ag2 as an allowed --framework choice. Either update the CLI/UI framework choices to include ag2, or adjust these instructions to rely on framework: ag2 in the YAML (no --framework flag).

#
# This example demonstrates AG2's GroupChat multi-agent coordination.
# Both agents participate in a collaborative conversation managed by
# a GroupChatManager until the task is complete.

roles:
  researcher:
    role: "Research Specialist"
    goal: "Gather and analyse information on the given topic"
    backstory: |
      You are a meticulous researcher who excels at finding relevant
      information, analysing trends, and presenting data-backed insights.
      You always cite your reasoning and structure your findings clearly.
    tasks:
      research_task:
        description: |
          Research the topic: {topic}

          Investigate:
          1. Current state and adoption rates
          2. Key players and projects driving the trend
          3. Technical advantages and challenges
          4. Business impact and cost implications
        expected_output: |
          A structured research briefing with findings on the topic,
          including key data points, trends, and technical observations.

  writer:
    role: "Technical Content Writer"
    goal: "Transform research findings into clear, engaging written content"
    backstory: |
      You are a skilled technical writer who turns complex research into
      accessible, well-structured articles. You focus on clarity, logical
      flow, and actionable takeaways for a professional audience.
    tasks:
      writing_task:
        description: |
          Using the research findings provided by the Research Specialist,
          write a concise article on: {topic}

          The article should:
          1. Open with a compelling hook
          2. Present key findings logically
          3. Include practical implications for developers
          4. Close with a forward-looking conclusion
        expected_output: |
          A 400-500 word article suitable for a technical blog,
          with clear sections, professional tone, and concrete takeaways.
16 changes: 13 additions & 3 deletions src/praisonai/.env.example
@@ -1,6 +1,16 @@
OPENAI_MODEL_NAME="gpt-4o"
# OpenAI / compatible API
OPENAI_API_KEY="Enter your API key"
OPENAI_MODEL_NAME="gpt-4o"
OPENAI_API_BASE="https://api.openai.com/v1"

# AG2 framework (uses same OPENAI_* vars above, or override below)
# MODEL_NAME=gpt-4o-mini

# AWS Bedrock (for ag2_bedrock.yaml example)
# AWS_DEFAULT_REGION=us-east-1
# AWS_ACCESS_KEY_ID=your-access-key
# AWS_SECRET_ACCESS_KEY=your-secret-key

# Chainlit (optional)
CHAINLIT_USERNAME=admin
CHAINLIT_USERNAME=admin
CHAINLIT_AUTH_SECRET="chainlit create-secret to create"
CHAINLIT_AUTH_SECRET="chainlit create-secret to create"
166 changes: 164 additions & 2 deletions src/praisonai/praisonai/agents_generator.py
@@ -41,6 +41,16 @@
except ImportError:
pass

AG2_AVAILABLE = False
try:
    import importlib.metadata as _importlib_metadata
    _importlib_metadata.distribution('ag2')
    from autogen import LLMConfig as _AG2LLMConfig  # noqa: F401 — AG2-exclusive class
Comment on lines +44 to +48

Copilot AI Apr 9, 2026


Because AG2 installs under the autogen namespace, import autogen can succeed even when pyautogen is not installed. With the current flags, that can make AUTOGEN_AVAILABLE a false-positive and cause framework="autogen" to run against the AG2 backend (or vice-versa). Consider detecting AutoGen via the pyautogen distribution (importlib.metadata.distribution("pyautogen")) or another robust discriminator to avoid namespace collisions.

    AG2_AVAILABLE = True
    del _AG2LLMConfig, _importlib_metadata
except Exception:
    pass
Comment on lines +51 to +52
Contributor


medium

Catching a generic Exception can hide specific issues and make debugging harder. It's better to catch importlib.metadata.PackageNotFoundError and ImportError explicitly, as these are the expected exceptions when a package or its components are not found.

Suggested change
except Exception:
    pass
except (_importlib_metadata.PackageNotFoundError, ImportError):
    pass
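To address the namespace collision flagged above, detection can key off the installed distribution rather than the import. A sketch under the assumption that the relevant PyPI distribution names are `ag2` and `pyautogen`:

```python
import importlib.metadata

def installed_autogen_flavours():
    # Both AG2 and legacy AutoGen expose `import autogen`, so the import
    # alone cannot tell them apart; the distribution metadata can.
    found = {}
    for dist_name in ("ag2", "pyautogen"):  # distribution names, not import names
        try:
            found[dist_name] = importlib.metadata.version(dist_name)
        except importlib.metadata.PackageNotFoundError:
            continue
    return found

# Returns e.g. {"ag2": "0.x.y"} in an AG2-only environment, or {} when
# neither distribution is installed.
print(installed_autogen_flavours())
```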


try:
import agentops
AGENTOPS_AVAILABLE = True
@@ -51,7 +61,7 @@
pass

# Only try to import praisonai_tools if either CrewAI or AutoGen is available
if CREWAI_AVAILABLE or AUTOGEN_AVAILABLE or PRAISONAI_AVAILABLE:
if CREWAI_AVAILABLE or AUTOGEN_AVAILABLE or PRAISONAI_AVAILABLE or AG2_AVAILABLE:
try:
from praisonai_tools import (
CodeDocsSearchTool, CSVSearchTool, DirectorySearchTool, DOCXSearchTool, DirectoryReadTool,
@@ -127,6 +137,8 @@ def __init__(self, agent_file, framework, config_list, log_level=None, agent_cal
    raise ImportError("AutoGen is not installed. Please install it with 'pip install praisonai[autogen]'")
elif framework == "praisonai" and not PRAISONAI_AVAILABLE:
    raise ImportError("PraisonAI is not installed. Please install it with 'pip install praisonaiagents'")
elif framework == "ag2" and not AG2_AVAILABLE:
    raise ImportError("AG2 is not installed. Please install it with 'pip install praisonai[ag2]'")

def is_function_or_decorated(self, obj):
"""
@@ -274,7 +286,7 @@ def generate_crew_and_kickoff(self):
tools_dict = {}

# Only try to use praisonai_tools if it's available and needed
if PRAISONAI_TOOLS_AVAILABLE and (CREWAI_AVAILABLE or AUTOGEN_AVAILABLE or PRAISONAI_AVAILABLE):
if PRAISONAI_TOOLS_AVAILABLE and (CREWAI_AVAILABLE or AUTOGEN_AVAILABLE or PRAISONAI_AVAILABLE or AG2_AVAILABLE):
tools_dict = {
'CodeDocsSearchTool': CodeDocsSearchTool(),
'CSVSearchTool': CSVSearchTool(),
@@ -327,6 +339,12 @@ def generate_crew_and_kickoff(self):
    if AGENTOPS_AVAILABLE:
        agentops.init(os.environ.get("AGENTOPS_API_KEY"), default_tags=["praisonai"])
    return self._run_praisonai(config, topic, tools_dict)
elif framework == "ag2":
    if not AG2_AVAILABLE:
        raise ImportError("AG2 is not installed. Please install it with 'pip install praisonai[ag2]'")
    if AGENTOPS_AVAILABLE:
        agentops.init(os.environ.get("AGENTOPS_API_KEY"), default_tags=["ag2"])
    return self._run_ag2(config, topic, tools_dict)
else:  # framework=crewai
    if not CREWAI_AVAILABLE:
        raise ImportError("CrewAI is not installed. Please install it with 'pip install praisonai[crewai]'")
@@ -407,6 +425,150 @@ def _run_autogen(self, config, topic, tools_dict):

return result

def _run_ag2(self, config, topic, tools_dict):
    """
    Run agents using the AG2 framework (community fork of AutoGen, PyPI: ag2).

    AG2 installs under the 'autogen' namespace — there is no 'import ag2'.
    Uses LLMConfig context manager + AssistantAgent + GroupChat pattern.

    Args:
        config (dict): Configuration dictionary parsed from YAML
        topic (str): The topic/task to process
        tools_dict (dict): Dictionary of available tools

    Returns:
        str: Result prefixed with '### AG2 Output ###'
    """
    import re
    from autogen import (
        AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, LLMConfig
    )

    model_config = self.config_list[0] if self.config_list else {}

    # Allow YAML top-level llm block to override config_list values
    yaml_llm = config.get("llm", {}) or {}
    # Also check first role's llm block as a fallback
    first_role_llm = {}
    for role_details in config.get("roles", {}).values():
        first_role_llm = role_details.get("llm", {}) or {}
        break

    # Priority: YAML top-level llm > first role llm > config_list > env vars
    def _resolve(key, env_var=None, default=None):
        return (yaml_llm.get(key) or first_role_llm.get(key)
                or model_config.get(key)
                or (os.environ.get(env_var) if env_var else None)
                or default)

    api_type = _resolve("api_type", default="openai").lower()
    model_name = _resolve("model", default="gpt-4o-mini")
    api_key = _resolve("api_key", env_var="OPENAI_API_KEY")
    # Fix #3: also check OPENAI_API_BASE for consistency with rest of codebase
    base_url = (model_config.get("base_url")
                or yaml_llm.get("base_url")
                or os.environ.get("OPENAI_BASE_URL")
                or os.environ.get("OPENAI_API_BASE"))
Comment on lines +469 to +472


P2 base_url resolution skips first_role_llm

All other config fields (model, api_key, api_type) are resolved via the _resolve helper which considers yaml_llm → first_role_llm → config_list → env var. The base_url resolution bypasses first_role_llm, meaning a base_url set inside a role-level llm: block is silently ignored for URL routing. For consistency:

Suggested change
base_url = (model_config.get("base_url")
            or yaml_llm.get("base_url")
            or os.environ.get("OPENAI_BASE_URL")
            or os.environ.get("OPENAI_API_BASE"))
base_url = (yaml_llm.get("base_url")
            or first_role_llm.get("base_url")
            or model_config.get("base_url")
            or os.environ.get("OPENAI_BASE_URL")
            or os.environ.get("OPENAI_API_BASE"))

Comment on lines +450 to +472


Action required

3. base_url override resolved in wrong order 🐞 Bug ≡ Correctness

_run_ag2 documents that YAML llm overrides config_list, but base_url is resolved with
config_list taking precedence, so YAML llm.base_url is silently ignored.
Agent Prompt
### Issue description
`base_url` resolution contradicts the adapter’s documented precedence. YAML `llm.base_url` should override `config_list.base_url` but currently does not.

### Issue Context
The adapter uses `_resolve()` (YAML-first) for other keys, but `base_url` uses a different ordering.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[458-472]

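The documented YAML-first precedence can be checked in isolation. A minimal sketch (the helper mirrors the adapter's _resolve; all config values below are made up):

```python
import os

def make_resolver(yaml_llm, first_role_llm, model_config):
    # Precedence: YAML top-level llm > first role llm > config_list entry
    # > environment variable > default.
    def _resolve(key, env_var=None, default=None):
        return (yaml_llm.get(key)
                or first_role_llm.get(key)
                or model_config.get(key)
                or (os.environ.get(env_var) if env_var else None)
                or default)
    return _resolve

resolve = make_resolver(
    yaml_llm={"base_url": "http://localhost:8000/v1"},       # YAML top-level llm
    first_role_llm={"model": "gpt-4o-mini"},                 # role-level llm block
    model_config={"base_url": "https://api.openai.com/v1"},  # config_list entry
)

print(resolve("base_url"))  # http://localhost:8000/v1 — YAML wins
print(resolve("model"))     # gpt-4o-mini — falls through to the role block
```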


    # Build LLMConfig — pass a config dict; Bedrock needs no api_key
    if api_type == "bedrock":
        llm_config_entry = {"api_type": "bedrock", "model": model_name}
    else:
Comment on lines +474 to +477


Action required

2. Bedrock region ignored 🐞 Bug ≡ Correctness

_run_ag2 drops the YAML aws_region setting for Bedrock, so the ag2_bedrock.yaml example’s
explicit region is never applied.
Agent Prompt
### Issue description
The AG2 adapter ignores `aws_region` from YAML when configuring Bedrock, so users cannot control region via config files (contradicting the provided Bedrock example).

### Issue Context
`examples/ag2/ag2_bedrock.yaml` specifies `llm.aws_region: us-east-1`, but `_run_ag2` does not read or include it in the Bedrock `llm_config_entry`.

### Fix Focus Areas
- src/praisonai/praisonai/agents_generator.py[450-477]
- examples/ag2/ag2_bedrock.yaml[25-29]


        llm_config_entry = {"model": model_name}
        if api_key:
            llm_config_entry["api_key"] = api_key
        if base_url and base_url not in ("https://api.openai.com/v1", "https://api.openai.com/v1/"):
            llm_config_entry["base_url"] = base_url
Comment on lines +475 to +482


P1 aws_region from YAML is silently ignored for Bedrock

The ag2_bedrock.yaml example (and any user YAML) can specify aws_region inside the role-level llm block, but _run_ag2 never extracts it from yaml_llm or first_role_llm. As a result, the region is silently dropped and AG2 falls back to whatever boto3 picks up from the environment or ~/.aws/config.

To honour the YAML setting:

Suggested change
if api_type == "bedrock":
    llm_config_entry = {"api_type": "bedrock", "model": model_name}
else:
    llm_config_entry = {"model": model_name}
    if api_key:
        llm_config_entry["api_key"] = api_key
    if base_url and base_url not in ("https://api.openai.com/v1", "https://api.openai.com/v1/"):
        llm_config_entry["base_url"] = base_url
if api_type == "bedrock":
    aws_region = _resolve("aws_region", env_var="AWS_DEFAULT_REGION")
    llm_config_entry = {"api_type": "bedrock", "model": model_name}
    if aws_region:
        llm_config_entry["aws_region"] = aws_region

    llm_config = LLMConfig(llm_config_entry)


P1 LLMConfig called with positional dict — likely TypeError at runtime

LLMConfig(llm_config_entry) passes a plain dict as the first positional argument. AG2's LLMConfig is a Pydantic model whose constructor accepts keyword arguments (e.g. model=, api_type=), not a positional dict. The bundled example tests/source/ag2_function_tools.py (lines 51–55) confirms the expected call style uses keyword arguments, not a positional dict.

Because the unit tests mock LLMConfig entirely, they cannot catch this mismatch. The fix is to unpack the dict:

Suggested change
llm_config = LLMConfig(llm_config_entry)
llm_config = LLMConfig(**llm_config_entry)


    user_proxy = UserProxyAgent(
        name="User",
        human_input_mode="NEVER",
        is_termination_msg=lambda x: "TERMINATE" in (x.get("content") or ""),
        code_execution_config=False,
    )

    # Create one AssistantAgent per role, passing llm_config directly
    ag2_agent_entries = []
    for role, details in config["roles"].items():
        agent_name = details.get("role", role).replace("{topic}", topic)
        backstory = details.get("backstory", "").replace("{topic}", topic)
        agent_name_safe = re.sub(r"[^a-zA-Z0-9_\-]", "_", agent_name)
        assistant = AssistantAgent(
            name=agent_name_safe,
            system_message=backstory + "\nWhen the task is done, reply 'TERMINATE'.",
            llm_config=llm_config,
        )
        ag2_agent_entries.append((role, details, assistant))

    # Register tools via AG2 decorator pattern
    for role, details, assistant in ag2_agent_entries:
        for tool_name in details.get("tools", []):
            tool = tools_dict.get(tool_name)
            if tool is None:
                continue
            func = tool if callable(tool) else getattr(tool, "run", None)
            if func is None:
                continue

            def make_tool_fn(f):
                def tool_fn(**kwargs):
                    return f(**kwargs) if callable(f) else str(f)
                tool_fn.__name__ = tool_name
Comment on lines +515 to +518
Contributor


high

The make_tool_fn function returns str(f) if f is not callable. This behavior is unexpected for a tool function, which should typically be callable. If f is not callable, it likely indicates a misconfiguration or an issue with the tool definition. It would be safer to raise an error or ensure f is always callable before wrapping it.

Suggested change
def make_tool_fn(f):
    def tool_fn(**kwargs):
        return f(**kwargs) if callable(f) else str(f)
    tool_fn.__name__ = tool_name
def make_tool_fn(f):
    if not callable(f):
        raise TypeError(f"Tool '{tool_name}' is not callable.")
    def tool_fn(**kwargs):
        return f(**kwargs)
    tool_fn.__name__ = tool_name
    return tool_fn

                return tool_fn
Comment on lines +515 to +519

Copilot AI Apr 9, 2026


The tool wrapper created here uses a generic tool_fn(**kwargs) signature and drops the wrapped tool’s real signature/type hints. AG2’s register_for_llm typically builds the tool schema from the callable’s signature/annotations, so this wrapper can prevent the LLM from seeing required parameters. Preserve the original callable’s signature/annotations (e.g., set wrapped.__signature__ / __annotations__, or avoid wrapping when possible).

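One way to keep the wrapped tool's schema-relevant metadata, as the comment suggests, is functools.wraps plus an explicit name override. A sketch with a made-up tool function (not part of the PR):

```python
import functools
import inspect

def wrap_tool(func, tool_name):
    # functools.wraps copies __doc__ and __annotations__ and sets
    # __wrapped__, so inspect.signature() still resolves to the original
    # parameters that schema builders typically introspect.
    @functools.wraps(func)
    def tool_fn(*args, **kwargs):
        return func(*args, **kwargs)
    tool_fn.__name__ = tool_name  # registered name may differ from func's
    return tool_fn

def search(query: str, limit: int = 5) -> str:
    """Hypothetical example tool."""
    return f"{limit} results for {query!r}"

wrapped = wrap_tool(search, "search_tool")
print(inspect.signature(wrapped))     # (query: str, limit: int = 5) -> str
print(wrapped("ai agents", limit=2))  # 2 results for 'ai agents'
```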

            wrapped = make_tool_fn(func)
            assistant.register_for_llm(description=f"Tool: {tool_name}")(wrapped)
            user_proxy.register_for_execution()(wrapped)
Comment on lines +515 to +523
Contributor


⚠️ Potential issue | 🟠 Major

Closure captures loop variable by reference — all tools will share the last tool_name.

The make_tool_fn closure captures tool_name from the enclosing scope. Since tool_name is reassigned each iteration, all registered tools will have __name__ set to the last tool in the loop. The fix is to pass tool_name as a default argument.

🐛 Proposed fix
-                def make_tool_fn(f):
+                def make_tool_fn(f, name=tool_name):
                     def tool_fn(**kwargs):
                         return f(**kwargs) if callable(f) else str(f)
-                    tool_fn.__name__ = tool_name
+                    tool_fn.__name__ = name
                     return tool_fn
🧰 Tools
🪛 Ruff (0.15.9)

[warning] 518-518: Function definition does not bind loop variable tool_name

(B023)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai/praisonai/agents_generator.py` around lines 515 - 523, The
closure make_tool_fn currently captures the loop variable tool_name by reference
causing every tool_fn to end up with the last tool's name; change make_tool_fn
to accept tool_name as a default parameter (e.g., def make_tool_fn(f,
tool_name=tool_name):) and use that local parameter when setting
tool_fn.__name__, then continue registering wrapped via
assistant.register_for_llm and user_proxy.register_for_execution so each wrapped
function retains its correct name; update references around make_tool_fn,
tool_fn, wrapped, func, assistant.register_for_llm and
user_proxy.register_for_execution accordingly.
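The late-binding behaviour behind this finding is easy to reproduce outside the adapter:

```python
def make_fns_buggy(names):
    # The inner function looks `name` up when called, not when defined,
    # so every closure sees the loop variable's final value.
    fns = []
    for name in names:
        def fn():
            return name
        fns.append(fn)
    return fns

def make_fns_fixed(names):
    # A default argument binds the current value at definition time.
    fns = []
    for name in names:
        def fn(name=name):
            return name
        fns.append(fn)
    return fns

print([f() for f in make_fns_buggy(["alpha", "beta"])])  # ['beta', 'beta']
print([f() for f in make_fns_fixed(["alpha", "beta"])])  # ['alpha', 'beta']
```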


    all_assistants = [a for _, _, a in ag2_agent_entries]
    if not all_assistants:
        return "### AG2 Output ###\nNo agents created from configuration."

    # Build initial message from all task descriptions
    task_lines = []
    for role, details, _ in ag2_agent_entries:
        for task_name, task_details in details.get("tasks", {}).items():
            desc = task_details.get("description", "").replace("{topic}", topic)
            if desc:
                task_lines.append(desc)
    initial_message = "\n".join(task_lines) if task_lines else topic

    groupchat = GroupChat(
        agents=[user_proxy] + all_assistants,
        messages=[],
        max_round=12,
    )
    manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    try:
        chat_result = user_proxy.initiate_chat(manager, message=initial_message)
    except Exception as e:
        return f"### AG2 Error ###\n{str(e)}"

    # Prefer ChatResult.summary if available, otherwise scan messages
    result_content = ""
    summary = getattr(chat_result, "summary", None)
    if summary and isinstance(summary, str) and summary.strip():
        result_content = re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', summary, flags=re.IGNORECASE).strip().rstrip('.')

    if not result_content:
        for msg in reversed(groupchat.messages):
            # Skip the initial user proxy message
            if msg.get("name") == "User":
                continue
            content = (msg.get("content") or "").strip()
            if content:
                result_content = re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', content, flags=re.IGNORECASE).strip().rstrip('.')
                if result_content:
                    break

    if not result_content:
        result_content = "Task completed."

    return f"### AG2 Output ###\n{result_content}"
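The TERMINATE cleanup used above can be exercised on its own. A small sketch of the same regex, with made-up inputs:

```python
import re

def strip_terminate(text: str) -> str:
    # Drop a trailing TERMINATE marker plus surrounding whitespace, dots,
    # and commas, then trim any leftover trailing period.
    cleaned = re.sub(r'[\s\.\,]*TERMINATE[\s\.\,]*$', '', text,
                     flags=re.IGNORECASE)
    return cleaned.strip().rstrip('.')

print(strip_terminate("Summary of findings. TERMINATE"))  # Summary of findings
print(strip_terminate("All done.\nTERMINATE."))           # All done
print(strip_terminate("No marker here"))                  # No marker here
```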

def _run_crewai(self, config, topic, tools_dict):
"""
Run agents using the CrewAI framework.
15 changes: 15 additions & 0 deletions src/praisonai/praisonai/auto.py
@@ -32,6 +32,16 @@
except ImportError:
pass

AG2_AVAILABLE = False
try:
    import importlib.metadata as _importlib_metadata
    _importlib_metadata.distribution('ag2')
    from autogen import LLMConfig as _AG2LLMConfig  # noqa: F401 — AG2-exclusive class
    AG2_AVAILABLE = True
    del _AG2LLMConfig, _importlib_metadata
except Exception:
    pass
Comment on lines +42 to +43
Contributor


medium

Catching a generic Exception can hide specific issues and make debugging harder. It's better to catch importlib.metadata.PackageNotFoundError and ImportError explicitly, as these are the expected exceptions when a package or its components are not found.

Suggested change
except Exception:
    pass
except (_importlib_metadata.PackageNotFoundError, ImportError):
    pass


try:
from praisonai_tools import (
CodeDocsSearchTool, CSVSearchTool, DirectorySearchTool, DOCXSearchTool,
@@ -83,6 +93,11 @@ def __init__(self, topic="Movie Story writing about AI", agent_file="test.yaml",
Praisonai is not installed. Please install with:
pip install praisonaiagents
""")
elif framework == "ag2" and not AG2_AVAILABLE:
    raise ImportError("""
AG2 is not installed. Please install with:
pip install "praisonai[ag2]"
""")

# Only show tools message if using a framework and tools are needed
if (framework in ["crewai", "autogen"]) and not PRAISONAI_TOOLS_AVAILABLE: