fix: resolve wrapper gaps - nested asyncio.run, hot-path logging, fragmented approval schema #1457
Conversation
…cConfig, and fragmented approval schema (fixes #1456)

- Add _async_bridge.py: Safe async/sync bridging with background loop reuse instead of per-call asyncio.run
- Add _logging.py: Namespaced logging without root logger mutation to prevent hot-path basicConfig calls
- Add _approval_spec.py: Unified approval configuration spec with validation across CLI, YAML, Python surfaces

Updated 7 files to use new modules:

- agents_generator.py: Fixed logging init and asyncio.run in _run_autogen_v4, unified approval mapping
- bots/_approval_base.py: Replaced ThreadPoolExecutor + asyncio.run pattern with bridge
- integrations/base.py: Fixed tool wrapper asyncio.run
- integrations/managed_local.py: Fixed 4 asyncio.run instances
- integrations/registry.py: Fixed registry async calls
- endpoints/a2u_server.py: Fixed event publishing async calls
- cli/main.py: Use centralized CLI logging configuration

Fixes:
1. Eliminates nested event loop crashes in multi-agent/Jupyter/Chainlit/FastAPI contexts
2. Removes per-instance basicConfig hot-path overhead and root logger mutation
3. Provides single source of truth for approval config with typo validation

Co-authored-by: MervinPraison <MervinPraison@users.noreply.github.com>
praisonai-triage-agent[bot] has reached the 50-review limit for trial accounts. To continue receiving code reviews, upgrade your plan.
@coderabbitai review
/review
✅ Actions performed: Review triggered.
📝 Walkthrough

This PR introduces three new infrastructure modules (…)

Changes
Sequence Diagram(s)

sequenceDiagram
participant CLI as CLI / Python Code
participant Sync as Sync Thread
participant AsyncBridge as run_sync()
participant BGLoop as Background Event Loop
participant Coro as Coroutine
CLI->>AsyncBridge: run_sync(coro, timeout=T)
alt Running event loop detected
AsyncBridge->>BGLoop: schedule_coroutine_threadsafe()
BGLoop->>Coro: execute
Coro-->>BGLoop: result
BGLoop-->>AsyncBridge: future.result(T)
else No event loop
AsyncBridge->>Sync: asyncio.run(coro)
Sync->>Coro: execute
Coro-->>Sync: result
Sync-->>AsyncBridge: return
end
AsyncBridge-->>CLI: result / exception
sequenceDiagram
participant CLI as CLI Args
participant YAML as YAML Config
participant ApprovalSpec as ApprovalSpec
participant Agent as Agent Constructor
participant Backend as Approval Backend
CLI->>ApprovalSpec: from_cli(args)
note over ApprovalSpec: Parse trust, approval<br/>timeout, approve_level
ApprovalSpec-->>CLI: spec
YAML->>ApprovalSpec: from_yaml(node)
note over ApprovalSpec: Validate keys<br/>Support legacy aliases
ApprovalSpec-->>YAML: spec
Agent->>ApprovalSpec: spec.to_dict()
ApprovalSpec-->>Agent: {enabled, backend, timeout, ...}
Agent->>Backend: resolve_approval_config()
Backend-->>Agent: approval resolver
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues
Possibly related PRs
Suggested labels
Poem
🔥 Pre-merge checks | ✅ Passed checks (5 passed)
Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

Generate docstrings
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
@copilot Do a thorough review of this PR. Read ALL existing reviewer comments above from Qodo, Coderabbit, and Gemini first, and incorporate their findings. Review areas:
…rovalSpec Agent-Logs-Url: https://github.com/MervinPraison/PraisonAI/sessions/855181c2-fd9f-4360-b772-7f8102042df2 Co-authored-by: MervinPraison <454862+MervinPraison@users.noreply.github.com>
Completed a full review and applied fixes.

What I found and addressed:
Reviewer-input incorporation:
Validation:
@claude You are the FINAL architecture reviewer. If the branch is under MervinPraison/PraisonAI (not a fork), you are able to make modifications to this branch and push directly. SCOPE: Focus ONLY on Python packages (praisonaiagents, praisonai). Do NOT modify praisonai-rust or praisonai-ts. Read ALL comments above from Gemini, Qodo, CodeRabbit, and Copilot carefully before responding.

Phase 1: Review per AGENTS.md
Phase 2: FIX Valid Issues
Phase 3: Final Verdict
Claude finished @MervinPraison's task - View job

Final Architecture Review Complete ✅

Phase 1: Architecture Review
Phase 2: Code Analysis
Phase 3: Security & Quality
Phase 4: Fix Implementation
🎯 FINAL VERDICT: APPROVE

This PR excellently addresses the wrapper gaps and follows PraisonAI's architectural principles:

✅ What This PR Fixes
✅ Architecture Compliance
✅ Code Quality
✅ Previous Reviewer Issues Resolved

Copilot already addressed the critical issues:
📊 Impact Assessment
This PR can be merged. It represents a high-quality architectural fix that addresses real user pain points while maintaining the framework's design principles.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/praisonai/praisonai/integrations/base.py (1)
321-352: ⚠️ Potential issue | Major

Replace the remaining ad-hoc async bridge in this wrapper.

This backward-compatibility path still uses per-call `ThreadPoolExecutor` plus `asyncio.run()`, so it preserves the exact nested-loop/thread churn pattern the PR is removing.

Proposed fix
```diff
 def get_available_integrations() -> Dict[str, bool]:
@@
     # Import here to avoid circular imports
     try:
         from .registry import get_registry
-        import asyncio
+        from .._async_bridge import run_sync
         registry = get_registry()
-
-        # Handle async call in sync context
-        try:
-            # Try to get existing event loop
-            loop = asyncio.get_event_loop()
-            if loop.is_running():
-                # We're in an async context, use create_task
-                import concurrent.futures
-                with concurrent.futures.ThreadPoolExecutor() as executor:
-                    future = executor.submit(asyncio.run, registry.get_available())
-                    return future.result()
-            else:
-                # No running loop, safe to use asyncio.run
-                return asyncio.run(registry.get_available())
-        except RuntimeError:
-            # No event loop, safe to use asyncio.run
-            return asyncio.run(registry.get_available())
+        return run_sync(registry.get_available())
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/integrations/base.py` around lines 321-352: The wrapper get_available_integrations currently spawns a per-call ThreadPoolExecutor and asyncio.run to bridge sync->async; replace that ad-hoc pattern by scheduling the coroutine on the running event loop with asyncio.run_coroutine_threadsafe(registry.get_available(), loop).result() when loop.is_running(), and fall back to asyncio.run(registry.get_available()) only when no loop is running; update the block around registry.get_available() to remove the ThreadPoolExecutor usage and use run_coroutine_threadsafe to wait for the result instead, keeping exception handling similar.

src/praisonai/praisonai/cli/main.py (1)
224-237: ⚠️ Potential issue | Major

Move CLI logging setup out of module import.

Line 226 still configures the root logger as soon as `praisonai.cli.main` is imported, and lines 227-237 also mutate logger levels globally. That preserves the import-time side effect this PR is trying to remove; scope the whole block to the actual CLI run path instead.

Proposed fix
```diff
 # Configure root logging only at CLI entrypoint
 from .._logging import configure_cli_logging
-configure_cli_logging(os.environ.get('LOGLEVEL', 'WARNING') or 'WARNING')
-logging.getLogger('alembic').setLevel(logging.ERROR)
-logging.getLogger('gradio').setLevel(logging.ERROR)
-logging.getLogger('gradio').setLevel(os.environ.get('GRADIO_LOGLEVEL', 'WARNING'))
-logging.getLogger('rust_logger').setLevel(logging.WARNING)
-logging.getLogger('duckduckgo').setLevel(logging.ERROR)
-logging.getLogger('_client').setLevel(logging.ERROR)
-# Suppress praisonaiagents INFO logs unless LOGLEVEL is explicitly set to debug/info
-if os.environ.get('LOGLEVEL', '').upper() not in ('DEBUG', 'INFO'):
-    logging.getLogger('praisonaiagents').setLevel(logging.WARNING)
-    logging.getLogger('praisonaiagents.llm').setLevel(logging.WARNING)
-    logging.getLogger('praisonaiagents.llm.llm').setLevel(logging.WARNING)
+
+
+def _configure_cli_logging_once():
+    configure_cli_logging(os.environ.get('LOGLEVEL', 'WARNING') or 'WARNING')
+    logging.getLogger('alembic').setLevel(logging.ERROR)
+    logging.getLogger('gradio').setLevel(logging.ERROR)
+    logging.getLogger('gradio').setLevel(os.environ.get('GRADIO_LOGLEVEL', 'WARNING'))
+    logging.getLogger('rust_logger').setLevel(logging.WARNING)
+    logging.getLogger('duckduckgo').setLevel(logging.ERROR)
+    logging.getLogger('_client').setLevel(logging.ERROR)
+    # Suppress praisonaiagents INFO logs unless LOGLEVEL is explicitly set to debug/info
+    if os.environ.get('LOGLEVEL', '').upper() not in ('DEBUG', 'INFO'):
+        logging.getLogger('praisonaiagents').setLevel(logging.WARNING)
+        logging.getLogger('praisonaiagents.llm').setLevel(logging.WARNING)
+        logging.getLogger('praisonaiagents.llm.llm').setLevel(logging.WARNING)
```

Then call it at the start of `PraisonAI.main()`:

```diff
 def main(self):
     """
     The main function of the PraisonAI object. It parses the command-line arguments,
     initializes the necessary attributes, and then calls the appropriate methods
     based on the provided arguments.
     """
+    _configure_cli_logging_once()
     # Set OpenTelemetry SDK to disabled to prevent telemetry collection
```
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/cli/main.py` around lines 224-237: The CLI logging configuration block (calls to configure_cli_logging and multiple logging.getLogger(...).setLevel(...), including 'alembic', 'gradio', 'rust_logger', 'duckduckgo', '_client', 'praisonaiagents' and its subloggers) must be removed from module import time and instead invoked at the start of the CLI run path: call the same setup inside the PraisonAI.main() method before any other work; specifically relocate the entire block currently at top-level in praisonai.cli.main to the beginning of the PraisonAI.main() function so logging is configured only when the CLI runs.

src/praisonai/praisonai/agents_generator.py (1)
1204-1228: ⚠️ Potential issue | 🔴 Critical

Type mismatch on `timeout` will raise `AttributeError` at runtime.

`ApprovalSpec.timeout` is declared `Optional[float]` (line 38 of `_approval_spec.py`), but `resolve_approval_config` declares `timeout: Optional[str]` (line 109 of `cli/features/approval.py`) and immediately calls `timeout.lower()` at line 130. Passing `spec.timeout` directly will crash with `AttributeError: 'float' object has no attribute 'lower'` for any YAML approval with a non-None `timeout` value.

Convert to string before passing:
```python
timeout_arg = None if spec.timeout is None else str(spec.timeout)
approval_config = resolve_approval_config(
    backend_name=spec.backend,
    all_tools=spec.approve_all_tools,
    timeout=timeout_arg,
)
```

Minor: The closing `)` on line 1217 is at the outer indent level, which is syntactically valid but visually misleading.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/agents_generator.py` around lines 1204 - 1228, The code passes ApprovalSpec.timeout (spec.timeout, a Optional[float]) directly into resolve_approval_config which expects Optional[str], causing an AttributeError when resolve_approval_config calls timeout.lower(); fix by converting spec.timeout to a string when not None (e.g., timeout_arg = None if spec.timeout is None else str(spec.timeout)) and pass timeout=timeout_arg into resolve_approval_config (refer to ApprovalSpec.from_yaml, spec.timeout, resolve_approval_config, and approval_config), and optionally move the closing parenthesis for the resolve_approval_config call to the inner indent level for clarity.
🧹 Nitpick comments (2)
src/praisonai/praisonai/_async_bridge.py (1)
30-30: Narrow the `run_sync` input contract to `Coroutine[Any, Any, T]`.

Both `asyncio.run()` and `asyncio.run_coroutine_threadsafe()` explicitly require coroutine objects, not generic `Awaitable` instances. The current annotation advertises a wider API than the implementation supports; changing the type hint to `Coroutine[Any, Any, T]` will prevent misuse and clarify the contract for future callers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/_async_bridge.py` at line 30: The function run_sync currently types its first parameter as Awaitable[T] but only accepts coroutine objects; change the type annotation of the first parameter in run_sync to Coroutine[Any, Any, T] to match asyncio.run()/asyncio.run_coroutine_threadsafe requirements and prevent misuse; update the signature for run_sync(coro: Coroutine[Any, Any, T], *, timeout: float | None = None) and adjust any imports/types if necessary to reference typing.Coroutine and typing.Any.

src/praisonai/praisonai/endpoints/a2u_server.py (1)
172-183: Minor: consider `asyncio.get_running_loop()` for the running-loop fast path.

The retained branch only cares about a loop that is currently running, so `asyncio.get_running_loop()` would be both more precise and avoid the `asyncio.get_event_loop()` DeprecationWarning in newer Python versions. The fallthrough to `run_sync(...)` already handles the "no running loop" case correctly.

♻️ Proposed refactor
```diff
-        try:
-            loop = asyncio.get_event_loop()
-            if loop.is_running():
-                # Schedule in running loop
-                asyncio.ensure_future(self.publish(event, stream_name))
-                return len(self._streams.get(stream_name, set()))
-        except RuntimeError:
-            pass
-
-        # Use safe bridge for sync execution
-        from .._async_bridge import run_sync
-        return run_sync(self.publish(event, stream_name))
+        try:
+            loop = asyncio.get_running_loop()
+        except RuntimeError:
+            loop = None
+        if loop is not None and loop.is_running():
+            asyncio.ensure_future(self.publish(event, stream_name))
+            return len(self._streams.get(stream_name, set()))
+
+        # Use safe bridge for sync execution outside any running loop
+        from .._async_bridge import run_sync
+        return run_sync(self.publish(event, stream_name))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/praisonai/praisonai/endpoints/a2u_server.py` around lines 172 - 183, Replace the get_event_loop path with asyncio.get_running_loop() in the method that schedules publish (the block calling asyncio.ensure_future(self.publish(event, stream_name)) and returning len(self._streams.get(stream_name, set()))) so you directly detect a running loop without triggering DeprecationWarnings; call asyncio.get_running_loop() inside a try and, if it succeeds, schedule the coroutine with asyncio.ensure_future(...) and return the stream count, else allow the fallthrough to from .._async_bridge import run_sync and return run_sync(self.publish(event, stream_name)). Ensure references to self.publish, self._streams and run_sync remain unchanged.
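The detection pattern this comment recommends can be shown in isolation. `schedule_or_run` below is a hypothetical helper that mirrors the suggested branch, not code from the PR:

```python
import asyncio


def schedule_or_run(coro):
    """Fire-and-forget when already inside a loop; block-and-run otherwise."""
    try:
        # Raises RuntimeError when no loop is running in this thread,
        # without the DeprecationWarning that get_event_loop() can emit.
        asyncio.get_running_loop()
    except RuntimeError:
        # No running loop: safe to drive the coroutine to completion.
        return asyncio.run(coro)
    # Inside a running loop: schedule without blocking it.
    return asyncio.ensure_future(coro)


async def _event():
    return "published"


# Sync context: the coroutine runs to completion and its value is returned.
print(schedule_or_run(_event()))  # -> published


async def _main():
    task = schedule_or_run(_event())  # returns a Task here
    return await task


print(asyncio.run(_main()))  # -> published
```

The same helper thus degrades gracefully in both contexts, which is what the publish-count path in `a2u_server.py` needs.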
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/praisonai/praisonai/_approval_spec.py`:
- Around line 66-72: The returned ApprovalSpec currently accepts raw strings for
fields like backend, approve_level, and guardrails; add runtime validation in
the factory/constructor that canonicalizes and rejects unknown enum values
(e.g., map/validate against the allowed members of your
Backend/ApproveLevel/Guardrail enums) and raise a clear ValueError for invalid
inputs before calling cls(...). Locate the code path that builds the spec (the
method returning cls(...) and any similar block around lines 92-117) and replace
the direct getattr usage for backend/approve_level/guardrail with a validation
step that converts strings to enum members (or None) and only passes validated
enum values into cls, keeping timeout parsing via _parse_timeout as-is.
- Around line 51-56: The enabled calculation in from_cli() mistakenly omits the
CLI --guardrail flag, so guardrail can be set on the spec but leave
enabled=False; update the boolean expression that sets enabled (the local
variable named enabled in _approval_spec.py) to include getattr(args,
'guardrail', None) (or getattr(args, 'guardrail', False)) alongside 'trust',
'approval', 'approve_all_tools', and 'approve_level' so that providing
--guardrail flips enabled true and prevents the guardrail config from being
silently ignored.
- Around line 107-109: The current fallback logic for backend,
approve_all_tools, and timeout_val uses truthiness which lets legacy aliases
override explicit falsy canonical values; update the assignments to prefer
canonical keys when present (use "backend" if "backend" in node, else use
"backend_name"), similarly use "approve_all_tools" if present else "all_tools",
and use "timeout" if present else "approval_timeout" so that explicit falsy
canonical values are respected (apply this change where backend,
approve_all_tools, and timeout_val are assigned).
- Around line 15-24: _parse_timeout currently converts inputs with float() but
allows nan/inf and negatives and loses the original parse error; update
_parse_timeout to (1) try converting with float(timeout_val) capturing the
exception as e and re-raise a ValueError using "from e" to preserve the cause,
(2) validate the resulting value with math.isfinite(value) and value > 0 (reject
nan/inf/negative/zero) and raise a ValueError with a clear message if invalid,
and (3) ensure math is imported if not already; keep the existing handling for
None and the string 'none'.
In `@src/praisonai/praisonai/_async_bridge.py`:
- Around line 55-61: The sync bridge currently uses asyncio.run(coro) when
running is None (creating a fresh loop) and also returns
fut.result(timeout=timeout) without canceling on timeout; change it to always
schedule on the long-lived background loop via _ensure_background_loop() using
asyncio.run_coroutine_threadsafe(coro, _ensure_background_loop()) instead of
asyncio.run(coro), and wrap the fut.result(timeout=timeout) call to catch
concurrent.futures.TimeoutError so you can cancel the submitted work (call
fut.cancel() or post a cancellation to the background loop) before re-raising;
update references in this block (the running check, asyncio.run,
asyncio.run_coroutine_threadsafe, fut, and _ensure_background_loop) accordingly
so no new event loop is created per call and timed-out tasks are cancelled.
---
Outside diff comments:
In `@src/praisonai/praisonai/agents_generator.py`:
- Around line 1204-1228: The code passes ApprovalSpec.timeout (spec.timeout, a
Optional[float]) directly into resolve_approval_config which expects
Optional[str], causing an AttributeError when resolve_approval_config calls
timeout.lower(); fix by converting spec.timeout to a string when not None (e.g.,
timeout_arg = None if spec.timeout is None else str(spec.timeout)) and pass
timeout=timeout_arg into resolve_approval_config (refer to
ApprovalSpec.from_yaml, spec.timeout, resolve_approval_config, and
approval_config), and optionally move the closing parenthesis for the
resolve_approval_config call to the inner indent level for clarity.
In `@src/praisonai/praisonai/cli/main.py`:
- Around line 224-237: The CLI logging configuration block (calls to
configure_cli_logging and multiple logging.getLogger(...).setLevel(...),
including
'alembic','gradio','rust_logger','duckduckgo','_client','praisonaiagents' and
its subloggers) must be removed from module import time and instead invoked at
the start of the CLI run path: call the same setup inside the PraisonAI.main()
method before any other work; specifically relocate the entire block currently
at top-level in praisonai.cli.main to the beginning of the PraisonAI.main()
function so logging is configured only when the CLI runs.
In `@src/praisonai/praisonai/integrations/base.py`:
- Around line 321-352: The wrapper get_available_integrations currently spawns a
per-call ThreadPoolExecutor and asyncio.run to bridge sync->async; replace that
ad-hoc pattern by scheduling the coroutine on the running event loop with
asyncio.run_coroutine_threadsafe(registry.get_available(), loop).result() when
loop.is_running(), and fall back to asyncio.run(registry.get_available()) only
when no loop is running; update the block around registry.get_available() to
remove the ThreadPoolExecutor usage and use run_coroutine_threadsafe to wait for
the result instead, keeping exception handling similar.
---
Nitpick comments:
In `@src/praisonai/praisonai/_async_bridge.py`:
- Line 30: The function run_sync currently types its first parameter as
Awaitable[T] but only accepts coroutine objects; change the type annotation of
the first parameter in run_sync to Coroutine[Any, Any, T] to match
asyncio.run()/asyncio.run_coroutine_threadsafe requirements and prevent
misuseβupdate the signature for run_sync(coro: Coroutine[Any, Any, T], *,
timeout: float | None = None) and adjust any imports/types if necessary to
reference typing.Coroutine and typing.Any.
In `@src/praisonai/praisonai/endpoints/a2u_server.py`:
- Around line 172-183: Replace the get_event_loop path with
asyncio.get_running_loop() in the method that schedules publish (the block
calling asyncio.ensure_future(self.publish(event, stream_name)) and returning
len(self._streams.get(stream_name, set()))) so you directly detect a running
loop without triggering DeprecationWarnings; call asyncio.get_running_loop()
inside a try and, if it succeeds, schedule the coroutine with
asyncio.ensure_future(...) and return the stream count, else allow the
fallthrough to from .._async_bridge import run_sync and return
run_sync(self.publish(event, stream_name)). Ensure references to self.publish,
self._streams and run_sync remain unchanged.
Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b5f876cf-bd28-450e-9578-ccd18f8d2e0d
Files selected for processing (10)
- src/praisonai/praisonai/_approval_spec.py
- src/praisonai/praisonai/_async_bridge.py
- src/praisonai/praisonai/_logging.py
- src/praisonai/praisonai/agents_generator.py
- src/praisonai/praisonai/bots/_approval_base.py
- src/praisonai/praisonai/cli/main.py
- src/praisonai/praisonai/endpoints/a2u_server.py
- src/praisonai/praisonai/integrations/base.py
- src/praisonai/praisonai/integrations/managed_local.py
- src/praisonai/praisonai/integrations/registry.py
```python
def _parse_timeout(timeout_val: Optional[Union[str, int, float]]) -> Optional[float]:
    """Parse timeout value to float, handling 'none' case."""
    if timeout_val is None:
        return None
    if isinstance(timeout_val, str) and timeout_val.lower() == 'none':
        return None
    try:
        return float(timeout_val)
    except (ValueError, TypeError):
        raise ValueError(f"Invalid timeout value: {timeout_val}")
```
Validate timeout bounds and preserve the parse failure cause.
`float()` accepts values like `nan`, `inf`, and negative numbers, which are not valid wait durations for this config. Also address Ruff B904 by chaining the parse error.
🛡️ Proposed fix

```diff
+import math
@@
 def _parse_timeout(timeout_val: Optional[Union[str, int, float]]) -> Optional[float]:
     """Parse timeout value to float, handling 'none' case."""
     if timeout_val is None:
         return None
     if isinstance(timeout_val, str) and timeout_val.lower() == 'none':
         return None
     try:
-        return float(timeout_val)
-    except (ValueError, TypeError):
-        raise ValueError(f"Invalid timeout value: {timeout_val}")
+        timeout = float(timeout_val)
+    except (ValueError, TypeError) as err:
+        raise ValueError(f"Invalid timeout value: {timeout_val}") from err
+    if not math.isfinite(timeout) or timeout < 0:
+        raise ValueError(f"Invalid timeout value: {timeout_val}")
+    return timeout
```

🧰 Tools
Ruff (0.15.10)
[warning] 24-24: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai/praisonai/_approval_spec.py` around lines 15 - 24,
_parse_timeout currently converts inputs with float() but allows nan/inf and
negatives and loses the original parse error; update _parse_timeout to (1) try
converting with float(timeout_val) capturing the exception as e and re-raise a
ValueError using "from e" to preserve the cause, (2) validate the resulting
value with math.isfinite(value) and value > 0 (reject nan/inf/negative/zero) and
raise a ValueError with a clear message if invalid, and (3) ensure math is
imported if not already; keep the existing handling for None and the string
'none'.
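For reference, here is a standalone version of the parser with the requested bounds check applied; this is a sketch of the proposed behaviour under the prompt's assumptions, not the merged code:

```python
import math
from typing import Optional, Union


def parse_timeout(timeout_val: Optional[Union[str, int, float]]) -> Optional[float]:
    """Parse a timeout to float; reject nan, inf, and negative values."""
    if timeout_val is None:
        return None
    if isinstance(timeout_val, str) and timeout_val.lower() == 'none':
        return None
    try:
        timeout = float(timeout_val)
    except (ValueError, TypeError) as err:
        # 'from err' keeps the original cause attached (Ruff B904).
        raise ValueError(f"Invalid timeout value: {timeout_val}") from err
    if not math.isfinite(timeout) or timeout < 0:
        # float() happily accepts 'nan', 'inf', and negatives; reject them here.
        raise ValueError(f"Invalid timeout value: {timeout_val}")
    return timeout
```

Note that `float("nan")` and `float("inf")` parse without error, which is why the `math.isfinite` check is needed in addition to the sign check.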
```python
enabled = bool(
    getattr(args, 'trust', False) or
    getattr(args, 'approval', None) or
    getattr(args, 'approve_all_tools', False) or
    getattr(args, 'approve_level', None)
)
```
Include guardrail when deriving CLI-enabled approval.
`from_cli()` stores guardrails on the spec, but `--guardrail` alone leaves `enabled=False` and `backend="none"`, so the guardrail config can be silently ignored.
Proposed fix

```diff
 enabled = bool(
     getattr(args, 'trust', False) or
     getattr(args, 'approval', None) or
     getattr(args, 'approve_all_tools', False) or
-    getattr(args, 'approve_level', None)
+    getattr(args, 'approve_level', None) or
+    getattr(args, 'guardrail', None)
 )
```

Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
enabled = bool(
    getattr(args, 'trust', False) or
    getattr(args, 'approval', None) or
    getattr(args, 'approve_all_tools', False) or
    getattr(args, 'approve_level', None) or
    getattr(args, 'guardrail', None)
)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai/praisonai/_approval_spec.py` around lines 51 - 56, The enabled
calculation in from_cli() mistakenly omits the CLI --guardrail flag, so
guardrail can be set on the spec but leave enabled=False; update the boolean
expression that sets enabled (the local variable named enabled in
_approval_spec.py) to include getattr(args, 'guardrail', None) (or getattr(args,
'guardrail', False)) alongside 'trust', 'approval', 'approve_all_tools', and
'approve_level' so that providing --guardrail flips enabled true and prevents
the guardrail config from being silently ignored.
```python
return cls(
    enabled=enabled,
    backend=backend,  # type: ignore[arg-type]
    approve_all_tools=bool(getattr(args, 'approve_all_tools', False)),
    timeout=_parse_timeout(getattr(args, 'approval_timeout', None)),
    approve_level=getattr(args, 'approve_level', None),
    guardrails=getattr(args, 'guardrail', None),
```
Runtime-validate canonical enum fields.
`Literal` only helps static type checkers; YAML strings and CLI args can still pass unsupported `backend` or `approve_level` values into `ApprovalSpec`. Since this is the canonical normalization layer, reject invalid values here instead of relying on downstream failures.
🛡️ Proposed validation
```diff
-from typing import Optional, Literal, Union, Dict, Any
+from typing import Optional, Literal, Union, Dict, Any, get_args
@@
 Backend = Literal["console", "slack", "telegram", "discord", "webhook", "http", "agent", "auto", "none"]
 ApprovalLevel = Literal["low", "medium", "high", "critical"]
+_BACKENDS = set(get_args(Backend))
+_APPROVAL_LEVELS = set(get_args(ApprovalLevel))
+
+
+def _validate_backend(value: Any) -> Backend:
+    if value not in _BACKENDS:
+        raise ValueError(f"Invalid approval backend: {value!r}. Allowed: {sorted(_BACKENDS)}")
+    return value
+
+
+def _validate_approval_level(value: Any) -> Optional[ApprovalLevel]:
+    if value is None:
+        return None
+    if value not in _APPROVAL_LEVELS:
+        raise ValueError(f"Invalid approve_level: {value!r}. Allowed: {sorted(_APPROVAL_LEVELS)}")
+    return value
@@
-        backend=backend,  # type: ignore[arg-type]
+        backend=_validate_backend(backend),
@@
-        approve_level=getattr(args, 'approve_level', None),
+        approve_level=_validate_approval_level(getattr(args, 'approve_level', None)),
@@
     if isinstance(node, str):
-        return cls(enabled=True, backend=node)  # type: ignore[arg-type]
+        return cls(enabled=True, backend=_validate_backend(node))
@@
-        backend=backend,  # type: ignore[arg-type]
+        backend=_validate_backend(backend),
@@
-        approve_level=node.get("approve_level"),
+        approve_level=_validate_approval_level(node.get("approve_level")),
```

Also applies to: 92-117
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai/praisonai/_approval_spec.py` around lines 66 - 72, The returned
ApprovalSpec currently accepts raw strings for fields like backend,
approve_level, and guardrails; add runtime validation in the factory/constructor
that canonicalizes and rejects unknown enum values (e.g., map/validate against
the allowed members of your Backend/ApproveLevel/Guardrail enums) and raise a
clear ValueError for invalid inputs before calling cls(...). Locate the code
path that builds the spec (the method returning cls(...) and any similar block
around lines 92-117) and replace the direct getattr usage for
backend/approve_level/guardrail with a validation step that converts strings to
enum members (or None) and only passes validated enum values into cls, keeping
timeout parsing via _parse_timeout as-is.
```python
backend = node.get("backend") or node.get("backend_name", "console")
approve_all_tools = node.get("approve_all_tools") or node.get("all_tools", False)
timeout_val = node.get("timeout") or node.get("approval_timeout")
```
Use key presence instead of truthiness for legacy aliases.
The `or` fallback makes aliases override explicit canonical falsy values, e.g. `approve_all_tools: false` with `all_tools: true` becomes `true`. Prefer canonical keys whenever they are present.
Proposed fix
```diff
-    backend = node.get("backend") or node.get("backend_name", "console")
-    approve_all_tools = node.get("approve_all_tools") or node.get("all_tools", False)
-    timeout_val = node.get("timeout") or node.get("approval_timeout")
+    backend = node["backend"] if "backend" in node else node.get("backend_name", "console")
+    approve_all_tools = (
+        node["approve_all_tools"]
+        if "approve_all_tools" in node
+        else node.get("all_tools", False)
+    )
+    timeout_val = node["timeout"] if "timeout" in node else node.get("approval_timeout")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai/praisonai/_approval_spec.py` around lines 107 - 109, The
current fallback logic for backend, approve_all_tools, and timeout_val uses
truthiness which lets legacy aliases override explicit falsy canonical values;
update the assignments to prefer canonical keys when present (use "backend" if
"backend" in node, else use "backend_name"), similarly use "approve_all_tools"
if present else "all_tools", and use "timeout" if present else
"approval_timeout" so that explicit falsy canonical values are respected (apply
this change where backend, approve_all_tools, and timeout_val are assigned).
```python
if running is None:
    # Cheap path: no outer loop, just run.
    return asyncio.run(coro)

# Outer loop exists -> schedule on background loop, do NOT nest asyncio.run.
fut: Future = asyncio.run_coroutine_threadsafe(coro, _ensure_background_loop())
return fut.result(timeout=timeout)
```
🧩 Analysis chain
Script executed:

```shell
#!/bin/bash
# Description: Verify run_sync no longer uses per-call asyncio.run(coro)
# and that timeout cancellation is present.
rg -n -C3 'asyncio\.run\(coro\)|run_coroutine_threadsafe|FutureTimeoutError|fut\.cancel\(\)' -- src/praisonai/praisonai/_async_bridge.py
```

Repository: MervinPraison/PraisonAI
Length of output: 400
Use the long-lived bridge consistently and cancel timed-out work.
Line 57 still creates a fresh event loop per sync call; line 61 can time out while leaving the submitted coroutine running in the background. This reintroduces loop churn and can leak long-running CLI/subprocess work.
Proposed fix

```diff
-from concurrent.futures import Future
+from concurrent.futures import Future, TimeoutError as FutureTimeoutError
@@
-    if running is None:
-        # Cheap path: no outer loop, just run.
-        return asyncio.run(coro)
-
-    # Outer loop exists -> schedule on background loop, do NOT nest asyncio.run.
-    fut: Future = asyncio.run_coroutine_threadsafe(coro, _ensure_background_loop())
-    return fut.result(timeout=timeout)
+    loop = _ensure_background_loop()
+    if running is loop:
+        raise RuntimeError("run_sync() cannot block the async bridge event loop")
+
+    fut: Future = asyncio.run_coroutine_threadsafe(coro, loop)
+    try:
+        return fut.result(timeout=timeout)
+    except FutureTimeoutError:
+        fut.cancel()
+        raise TimeoutError(f"Coroutine timed out after {timeout}s") from None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/praisonai/praisonai/_async_bridge.py` around lines 55 - 61, The sync
bridge currently uses asyncio.run(coro) when running is None (creating a fresh
loop) and also returns fut.result(timeout=timeout) without canceling on timeout;
change it to always schedule on the long-lived background loop via
_ensure_background_loop() using asyncio.run_coroutine_threadsafe(coro,
_ensure_background_loop()) instead of asyncio.run(coro), and wrap the
fut.result(timeout=timeout) call to catch concurrent.futures.TimeoutError so you
can cancel the submitted work (call fut.cancel() or post a cancellation to the
background loop) before re-raising; update references in this block (the running
check, asyncio.run, asyncio.run_coroutine_threadsafe, fut, and
_ensure_background_loop) accordingly so no new event loop is created per call
and timed-out tasks are cancelled.
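The schedule-and-cancel-on-timeout flow suggested above can be demonstrated standalone; the loop setup and `slow_job` coroutine here are illustrative, not the actual bridge internals:

```python
import asyncio
import threading
from concurrent.futures import TimeoutError as FutureTimeoutError

# One long-lived background loop in a daemon thread, as the fix recommends.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()


async def slow_job():
    await asyncio.sleep(60)
    return "done"


fut = asyncio.run_coroutine_threadsafe(slow_job(), loop)
try:
    fut.result(timeout=0.1)
except FutureTimeoutError:
    # Without this cancel, slow_job would keep running on the loop
    # for the full 60 s after the caller has already given up.
    fut.cancel()
```

Cancelling the `concurrent.futures.Future` returned by `run_coroutine_threadsafe` propagates cancellation to the underlying task on the loop, so timed-out work does not leak.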
PR #1457 - ✅ Approve, 1 minor follow-up

Local validation on pr-1457 branch:

Review against AGENTS.md §4.1 Protocol-Driven Core / §2.2 package hierarchy - correctly placed
Gap 1 - Nested
Gap 2 - Hot-path
Gap 3 - Unified
Minor follow-up (non-blocking)
Verdict

Surgical, wrapper-only, root-cause fixes that remove three real footguns. Own tests pass, no regressions. Ready to merge: if CI on the PR is green, I'll merge right after #1461.
Fixes #1456
Summary
This PR resolves three high-impact wrapper gaps that violate PraisonAI's stated principles:
1. Nested asyncio.run() Breaking Multi-Agent Safety
2. Hot-Path logging.basicConfig() Calls
3. Fragmented Approval Configuration
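The hot-path logging fix (point 2 above) comes down to standard library-logging hygiene: use a namespaced logger with a `NullHandler` and never call `basicConfig` from library code. A hypothetical sketch, not the actual `_logging.py`:

```python
import logging


def get_library_logger(name: str) -> logging.Logger:
    """Namespaced logger with a NullHandler; no root-logger mutation."""
    logger = logging.getLogger(f"praisonai.{name}")
    # NullHandler keeps the library quiet by default while leaving
    # handler/level configuration entirely to the host application.
    if not any(isinstance(h, logging.NullHandler) for h in logger.handlers):
        logger.addHandler(logging.NullHandler())
    return logger


log = get_library_logger("integrations")
log.debug("cheap on the hot path: no basicConfig, no global side effects")
```

Because `logging.getLogger` caches by name, repeated calls on a hot path are cheap and idempotent, unlike `basicConfig`, which mutates the root logger globally.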
Files Changed
Validation
All changes align with AGENTS.md architecture:
Testing
Generated with Claude Code
Summary by CodeRabbit
New Features
Improvements