Skip to content

Fix: P1 follow-up to #1550 β€” failing test + example skip-guards#1555

Merged
MervinPraison merged 1 commit intomainfrom
claude/pr-1550-20260426-0625
Apr 26, 2026
Merged

Fix: P1 follow-up to #1550 β€” failing test + example skip-guards#1555
MervinPraison merged 1 commit intomainfrom
claude/pr-1550-20260426-0625

Conversation

@MervinPraison
Copy link
Copy Markdown
Owner

@MervinPraison MervinPraison commented Apr 26, 2026

Follow-up to #1550 β€” Resolve P1 issues found during local verification

PR #1550 was merged on 2026-04-25 with two latent issues that surfaced during post-merge local validation:

  1. Failing test test_managed_agent_compute_provider_errors β€” asserted pytest.raises(ValueError) but the implementation correctly returns LocalManagedAgent with a DeprecationWarning (the backward-compat behaviour). CI would fail the moment this test ran.
  2. Examples crash without API keys β€” all 4 runtime_*.py files exited non-zero on a no-creds machine (ImportError, ValueError, UnboundLocalError). New contributors and docs builds would break.

This PR fixes both, surgically. 5 files, +73 / -15.

Changes

1. Test alignment (tests/unit/integrations/test_backend_semantics.py)

Renamed test_managed_agent_compute_provider_errors β†’ test_managed_agent_compute_provider_warnings and updated assertions to match the implementation: pytest.warns(DeprecationWarning, match="compute") + isinstance(returned, LocalManagedAgent). Implementation untouched.

2. Skip-guards on all 4 runtime examples

Added the standard 3-layer guard (env β†’ SDK availability β†’ service reachability) to:

  • runtime_hosted_anthropic.py β€” guards on ANTHROPIC_API_KEY / CLAUDE_API_KEY + import anthropic
  • runtime_local_openai.py β€” guards on OPENAI_API_KEY + import openai
  • runtime_local_gemini.py β€” guards on GEMINI_API_KEY / GOOGLE_API_KEY
  • runtime_local_ollama.py β€” pings http://localhost:11434/api/tags with a short timeout

All guards are before any heavy imports so a no-creds machine exits in < 200 ms.

Verification (local)

=== Backend semantics test ===
================== 10 passed in 0.05s ==================

=== Core SDK regression ===
================== 289 passed, 9 skipped, 1 warning in 0.98s ==================

=== Examples (no creds in env) ===
exit=0  runtime_hosted_anthropic.py  ::  [skip] ANTHROPIC_API_KEY or CLAUDE_API_KEY not set
exit=0  runtime_local_openai.py      ::  [skip] OPENAI_API_KEY not set
exit=0  runtime_local_gemini.py      ::  [skip] GEMINI_API_KEY or GOOGLE_API_KEY not set
exit=0  runtime_local_ollama.py      ::  [skip] Ollama daemon not running at localhost:11434

Before / after evidence

Before (HEAD = 74c4cecc) After (HEAD = 15453cdd)
pytest tests/unit/integrations/test_backend_semantics.py 1 failed, 9 passed 10 passed
python runtime_hosted_anthropic.py exit 1 β€” ImportError: anthropic SDK required exit 0 β€” [skip] ANTHROPIC_API_KEY not set
python runtime_local_openai.py exit 1 β€” ValueError: OPENAI_API_KEY required exit 0 β€” [skip] OPENAI_API_KEY not set
python runtime_local_gemini.py exit 1 β€” UnboundLocalError: 'logging' exit 0 β€” [skip] GEMINI_API_KEY not set
python runtime_local_ollama.py exit 1 β€” UnboundLocalError: 'logging' exit 0 β€” [skip] Ollama daemon not running
Core SDK tests 289 passed 289 passed (no regression)

Backward compatibility

Zero public API changes. Test rename is internal. Skip-guards are no-ops when credentials are present, so existing CI / user behaviour with creds is unchanged.

References

Closes the two blockers from post-merge verification of #1550.

Summary by CodeRabbit

  • Chores

    • Example scripts for Anthropic, Gemini, Ollama, and OpenAI providers now gracefully skip execution when required dependencies, API keys, or services are unavailable.
  • Tests

    • Updated test expectations to emit deprecation warnings for certain compute provider configurations instead of raising exceptions.

…ions and add example skip guards

- Fix test_managed_agent_compute_provider_errors to expect DeprecationWarning instead of ValueError
- Add 3-layer skip guards to all 4 runtime example scripts for clean exit without credentials
- Preserve backward compatibility while implementing the HostedAgent/LocalAgent architectural split

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
Copilot AI review requested due to automatic review settings April 26, 2026 06:31
@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Apr 26, 2026

πŸ“ Walkthrough

Walkthrough

The PR adds prerequisite validation checks to four runtime example scripts and updates a test to expect deprecation warnings instead of exceptions. Runtime scripts now verify required environment variables and SDK availability before executing their main logic, gracefully skipping execution when dependencies are unavailable.

Changes

Cohort / File(s) Summary
Runtime Provider Examples
examples/python/managed-agents/provider/runtime_hosted_anthropic.py, runtime_local_gemini.py, runtime_local_ollama.py, runtime_local_openai.py
Each script now includes prerequisite guards at startup: environment variable checks (API keys), SDK import availability, and connectivity validation. Failed checks print [skip] and exit with status 0. Actual agent initialization and execution deferred until after guard checks.
Backend Semantics Test
tests/unit/integrations/test_backend_semantics.py
Test updated to expect DeprecationWarning with compute.*deprecated pattern match for modal, e2b, and docker providers, and to verify returned object is LocalManagedAgent instance, replacing previous ValueError exception expectation.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related issues

Possibly related PRs

Suggested labels

testing, dependencies, examples

Poem

🐰 Guards in place, so neat and tidy,
Prerequisites checked each Friday,
Skip when missing, exit with graceβ€”
The agent scripts now know their place! 🎯

πŸš₯ Pre-merge checks | βœ… 5
βœ… Passed checks (5 passed)
Check name Status Explanation
Description Check βœ… Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check βœ… Passed The title clearly summarizes the main change: a follow-up fix to PR #1550 that addresses a failing test and adds skip-guards to example scripts.
Docstring Coverage βœ… Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.
Linked Issues check βœ… Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check βœ… Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
πŸ“ Generate docstrings
  • Create stacked PR
  • Commit on current branch
πŸ§ͺ Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch claude/pr-1550-20260426-0625

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❀️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@MervinPraison
Copy link
Copy Markdown
Owner Author

@copilot Do a thorough review of this PR. Read ALL existing reviewer comments above from Qodo, Coderabbit, and Gemini first β€” incorporate their findings.

Review areas:

  1. Bloat check: Are changes minimal and focused? Any unnecessary code or scope creep?
  2. Security: Any hardcoded secrets, unsafe eval/exec, missing input validation?
  3. Performance: Any module-level heavy imports? Hot-path regressions?
  4. Tests: Are tests included? Do they cover the changes adequately?
  5. Backward compat: Any public API changes without deprecation?
  6. Code quality: DRY violations, naming conventions, error handling?
  7. Address reviewer feedback: If Qodo, Coderabbit, or Gemini flagged valid issues, include them in your review
  8. Suggest specific improvements with code examples where possible

Copy link
Copy Markdown
Contributor

@gemini-code-assist gemini-code-assist Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request introduces skip guards across several example scripts to ensure they exit cleanly when environment variables or required SDKs are missing. It also updates the integration tests to check for deprecation warnings rather than errors when using legacy compute providers. The review feedback suggests cleaning up an unused import and improving the robustness of the Ollama skip guard, as well as refactoring the unit tests using parametrization for better maintainability.

Comment on lines +6 to +18
import os
import sys
import requests

# Skip guards - exit cleanly if prerequisites not met
# Check if Ollama daemon is running
try:
response = requests.get("http://localhost:11434/api/tags", timeout=2)
if response.status_code != 200:
raise Exception("Ollama not responding")
except Exception:
print("[skip] Ollama daemon not running at localhost:11434")
sys.exit(0)
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The os module is imported but not used in this file. Additionally, moving the requests import inside the try block ensures that the script exits cleanly with a skip message even if the requests library is not installed, which is consistent with the other example scripts that handle their SDK imports gracefully to avoid tracebacks for new users.

Suggested change
import os
import sys
import requests
# Skip guards - exit cleanly if prerequisites not met
# Check if Ollama daemon is running
try:
response = requests.get("http://localhost:11434/api/tags", timeout=2)
if response.status_code != 200:
raise Exception("Ollama not responding")
except Exception:
print("[skip] Ollama daemon not running at localhost:11434")
sys.exit(0)
import sys
# Skip guards - exit cleanly if prerequisites not met
# Check if Ollama daemon is running
try:
import requests
response = requests.get("http://localhost:11434/api/tags", timeout=2)
if response.status_code != 200:
raise Exception("Ollama not responding")
except Exception:
print("[skip] Ollama daemon not running at localhost:11434")
sys.exit(0)

Comment on lines +97 to +113
def test_managed_agent_compute_provider_warnings():
"""Test that ManagedAgent(provider='modal'|'e2b'|...) emits DeprecationWarning and returns LocalManagedAgent."""
from praisonai.integrations.managed_agents import ManagedAgent
from praisonai.integrations.managed_local import LocalManagedAgent

# Compute providers should raise ValueError
with pytest.raises(ValueError) as exc_info:
ManagedAgent(provider="modal")
assert "compute" in str(exc_info.value).lower()
assert "LocalAgent" in str(exc_info.value)
# Compute providers should emit DeprecationWarning and return LocalManagedAgent (backward compatibility)
with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
obj = ManagedAgent(provider="modal")
assert isinstance(obj, LocalManagedAgent)

with pytest.raises(ValueError) as exc_info:
ManagedAgent(provider="e2b")
assert "compute" in str(exc_info.value).lower()
assert "LocalAgent" in str(exc_info.value)
with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
obj = ManagedAgent(provider="e2b")
assert isinstance(obj, LocalManagedAgent)

with pytest.raises(ValueError) as exc_info:
ManagedAgent(provider="docker")
assert "compute" in str(exc_info.value).lower()
assert "LocalAgent" in str(exc_info.value)
with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
obj = ManagedAgent(provider="docker")
assert isinstance(obj, LocalManagedAgent)
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The test logic for different compute providers is repetitive. Using @pytest.mark.parametrize would make the test cleaner, more readable, and easier to maintain when adding new providers in the future.

Suggested change
def test_managed_agent_compute_provider_warnings():
"""Test that ManagedAgent(provider='modal'|'e2b'|...) emits DeprecationWarning and returns LocalManagedAgent."""
from praisonai.integrations.managed_agents import ManagedAgent
from praisonai.integrations.managed_local import LocalManagedAgent
# Compute providers should raise ValueError
with pytest.raises(ValueError) as exc_info:
ManagedAgent(provider="modal")
assert "compute" in str(exc_info.value).lower()
assert "LocalAgent" in str(exc_info.value)
# Compute providers should emit DeprecationWarning and return LocalManagedAgent (backward compatibility)
with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
obj = ManagedAgent(provider="modal")
assert isinstance(obj, LocalManagedAgent)
with pytest.raises(ValueError) as exc_info:
ManagedAgent(provider="e2b")
assert "compute" in str(exc_info.value).lower()
assert "LocalAgent" in str(exc_info.value)
with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
obj = ManagedAgent(provider="e2b")
assert isinstance(obj, LocalManagedAgent)
with pytest.raises(ValueError) as exc_info:
ManagedAgent(provider="docker")
assert "compute" in str(exc_info.value).lower()
assert "LocalAgent" in str(exc_info.value)
with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
obj = ManagedAgent(provider="docker")
assert isinstance(obj, LocalManagedAgent)
@pytest.mark.parametrize("provider", ["modal", "e2b", "docker"])
def test_managed_agent_compute_provider_warnings(provider):
"""Test that ManagedAgent(provider='modal'|'e2b'|...) emits DeprecationWarning and returns LocalManagedAgent."""
from praisonai.integrations.managed_agents import ManagedAgent
from praisonai.integrations.managed_local import LocalManagedAgent
# Compute providers should emit DeprecationWarning and return LocalManagedAgent (backward compatibility)
with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
obj = ManagedAgent(provider=provider)
assert isinstance(obj, LocalManagedAgent)

Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/unit/integrations/test_backend_semantics.py (1)

97-113: Test correctly aligns with the implementation; consider parametrizing the three near-identical blocks.

The rename and the new assertions match ManagedAgent's actual behavior for compute providers: the regex compute.*deprecated correctly matches the emitted message ("…for compute providers is deprecated…"), and the factory does return a LocalManagedAgent instance for modal/e2b/docker. Backward-compat semantics are now properly exercised.

One small nit: the three blocks are identical except for the provider value, so a pytest.mark.parametrize would tighten this up and make it trivial to add flyio/daytona (which the implementation also routes through this branch but the test currently omits).

♻️ Optional parametrized version
-def test_managed_agent_compute_provider_warnings():
-    """Test that ManagedAgent(provider='modal'|'e2b'|...) emits DeprecationWarning and returns LocalManagedAgent."""
-    from praisonai.integrations.managed_agents import ManagedAgent
-    from praisonai.integrations.managed_local import LocalManagedAgent
-    
-    # Compute providers should emit DeprecationWarning and return LocalManagedAgent (backward compatibility)
-    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
-        obj = ManagedAgent(provider="modal")
-    assert isinstance(obj, LocalManagedAgent)
-    
-    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
-        obj = ManagedAgent(provider="e2b")
-    assert isinstance(obj, LocalManagedAgent)
-    
-    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
-        obj = ManagedAgent(provider="docker")
-    assert isinstance(obj, LocalManagedAgent)
+@pytest.mark.parametrize("provider", ["modal", "e2b", "docker", "flyio", "daytona"])
+def test_managed_agent_compute_provider_warnings(provider):
+    """ManagedAgent(provider=<compute>) emits DeprecationWarning and returns LocalManagedAgent."""
+    from praisonai.integrations.managed_agents import ManagedAgent
+    from praisonai.integrations.managed_local import LocalManagedAgent
+
+    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
+        obj = ManagedAgent(provider=provider)
+    assert isinstance(obj, LocalManagedAgent)

Based on learnings: "Public API changes require a deprecation cycle: emit DeprecationWarning for one release before breaking change" β€” the updated assertions correctly enforce that contract for compute providers.

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/integrations/test_backend_semantics.py` around lines 97 - 113, The
three identical blocks in test_managed_agent_compute_provider_warnings should be
parametrized to reduce duplication and cover additional compute providers;
update the test function (test_managed_agent_compute_provider_warnings) to use
pytest.mark.parametrize over provider values (e.g., "modal", "e2b", "docker",
"flyio", "daytona"), and inside the single test body call
ManagedAgent(provider=provider) and assert isinstance(result, LocalManagedAgent)
while using pytest.warns(DeprecationWarning, match="compute.*deprecated") for
the warning check; keep references to ManagedAgent and LocalManagedAgent so the
test still validates the deprecation warning and the factory return type.
examples/python/managed-agents/provider/runtime_local_ollama.py (1)

6-18: Tighten the Ollama preflight: lazy-import requests and narrow the exception.

Two small concerns in this guard:

  1. import requests runs unconditionally at module level. On a no-credentials machine that also lacks requests, the script will fail with ImportError and a non-zero exit before the [skip] line is printed β€” which is exactly the contract all_runtimes.py relies on (it expects [skip] + exit 0). Move the import inside the guard (or wrap in try/except ImportError with its own [skip]).
  2. The try: ... raise Exception("Ollama not responding") ... except Exception pattern trips Ruff BLE001 and obscures intent. Catching requests.RequestException and checking status_code separately is cleaner.
♻️ Proposed refactor
 import os
 import sys
-import requests

-# Skip guards - exit cleanly if prerequisites not met  
-# Check if Ollama daemon is running
-try:
-    response = requests.get("http://localhost:11434/api/tags", timeout=2)
-    if response.status_code != 200:
-        raise Exception("Ollama not responding")
-except Exception:
-    print("[skip] Ollama daemon not running at localhost:11434")
-    sys.exit(0)
+# Skip guards - exit cleanly if prerequisites not met
+try:
+    import requests
+except ImportError:
+    print("[skip] requests not installed")
+    sys.exit(0)
+
+# Check if Ollama daemon is running
+try:
+    response = requests.get("http://localhost:11434/api/tags", timeout=2)
+except requests.RequestException:
+    response = None
+if response is None or response.status_code != 200:
+    print("[skip] Ollama daemon not running at localhost:11434")
+    sys.exit(0)
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/python/managed-agents/provider/runtime_local_ollama.py` around lines
6 - 18, Move the unconditional import into the preflight and narrow the
exception handling: inside the preflight block attempt to import requests in a
try/except ImportError that prints "[skip] Ollama daemon not running at
localhost:11434" and sys.exit(0) on ImportError, then perform the requests.get
call in a try/except that catches requests.RequestException; after the GET check
response.status_code and if it is not 200 print the same "[skip] ..." message
and sys.exit(0). This updates the top-level guard in runtime_local_ollama.py to
lazy-import requests and to catch RequestException instead of a broad Exception.
πŸ€– Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/python/managed-agents/provider/runtime_local_gemini.py`:
- Around line 14-18: Remove the unnecessary google.generativeai SDK import guard
that incorrectly exits when using litellm routing: delete the try/except block
that imports google.generativeai (the lines around the import and sys.exit) so
the example using LocalAgent with model "gemini/gemini-2.0-flash" can run with
only litellm and GEMINI_API_KEY; alternatively, if you intend to require the
SDK, replace that block with a clear comment stating the SDK is optional and
that litellm handles Gemini via REST so only litellm + GEMINI_API_KEY are
required.

---

Nitpick comments:
In `@examples/python/managed-agents/provider/runtime_local_ollama.py`:
- Around line 6-18: Move the unconditional import into the preflight and narrow
the exception handling: inside the preflight block attempt to import requests in
a try/except ImportError that prints "[skip] Ollama daemon not running at
localhost:11434" and sys.exit(0) on ImportError, then perform the requests.get
call in a try/except that catches requests.RequestException; after the GET check
response.status_code and if it is not 200 print the same "[skip] ..." message
and sys.exit(0). This updates the top-level guard in runtime_local_ollama.py to
lazy-import requests and to catch RequestException instead of a broad Exception.

In `@tests/unit/integrations/test_backend_semantics.py`:
- Around line 97-113: The three identical blocks in
test_managed_agent_compute_provider_warnings should be parametrized to reduce
duplication and cover additional compute providers; update the test function
(test_managed_agent_compute_provider_warnings) to use pytest.mark.parametrize
over provider values (e.g., "modal", "e2b", "docker", "flyio", "daytona"), and
inside the single test body call ManagedAgent(provider=provider) and assert
isinstance(result, LocalManagedAgent) while using
pytest.warns(DeprecationWarning, match="compute.*deprecated") for the warning
check; keep references to ManagedAgent and LocalManagedAgent so the test still
validates the deprecation warning and the factory return type.
πŸͺ„ Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
βš™οΈ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4a17fd3c-a616-48c3-9399-8fd9f01fc57d

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 4ce799c and 15453cd.

πŸ“’ Files selected for processing (5)
  • examples/python/managed-agents/provider/runtime_hosted_anthropic.py
  • examples/python/managed-agents/provider/runtime_local_gemini.py
  • examples/python/managed-agents/provider/runtime_local_ollama.py
  • examples/python/managed-agents/provider/runtime_local_openai.py
  • tests/unit/integrations/test_backend_semantics.py

Comment on lines +14 to +18
try:
import google.generativeai
except ImportError:
print("[skip] google-generativeai SDK not installed")
sys.exit(0)
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟑 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Confirm whether LocalAgent / LocalAgentConfig routes `gemini/...` through litellm
# (in which case google-generativeai is not required) or imports google.generativeai directly.
fd -t f -e py 'local_agent|LocalAgent' | head -50
rg -nP --type=py -C3 '\b(google\.generativeai|google_generativeai|genai)\b'
rg -nP --type=py -C3 '\blitellm\b' -g '!**/tests/**' | head -80
ast-grep --pattern 'class LocalAgent($$$):
    $$$'

Repository: MervinPraison/PraisonAI

Length of output: 20396


🏁 Script executed:

# Find LocalManagedAgent parent class and _resolve_model implementation
fd -t f -e py | xargs grep -l "class LocalManagedAgent"
rg -nP --type=py "_resolve_model|resolve.*model" -A5 -B2 | head -100
# Check if gemini/ is routed through litellm
rg -nP --type=py "gemini/" -C3 | head -80
# Look for how LocalAgent actually uses models
rg -nP --type=py "self\.model|config\.model" src/praisonai/praisonai/integrations/local_agent.py -A2 -B2 | head -50

Repository: MervinPraison/PraisonAI

Length of output: 17938


🏁 Script executed:

# Check how LocalManagedAgent actually uses the resolved model
cat -n src/praisonai/praisonai/integrations/managed_local.py | head -100
rg -nP --type=py "def.*run|def.*execute|def.*complete" src/praisonai/praisonai/integrations/managed_local.py | head -20
# Look for where model is used in completions/LLM calls
rg -nP --type=py "litellm|google\.generativeai|genai" src/praisonai/praisonai/integrations/managed_local.py -A3 -B3

Repository: MervinPraison/PraisonAI

Length of output: 4290


🏁 Script executed:

# Find where model is actually used in completion calls
rg -nP --type=py "completion|ChatCompletion|call.*model" src/praisonai/praisonai/integrations/managed_local.py -A5 | head -150
# Check if google.generativeai is imported anywhere
rg -nP --type=py "^import|^from" src/praisonai/praisonai/integrations/managed_local.py | head -50
# Find the actual LLM completion call
rg -nP --type=py "\.completion\(|\.call\(" src/praisonai/praisonai/integrations/managed_local.py -B5 -A10 | head -100

Repository: MervinPraison/PraisonAI

Length of output: 518


🏁 Script executed:

# Get full file to understand the actual completion flow
wc -l src/praisonai/praisonai/integrations/managed_local.py
# Find all imports including from external packages
head -100 src/praisonai/praisonai/integrations/managed_local.py
# Search for where model is used in call
rg -nP "self\.model|self\._.*model|resolved.*model" src/praisonai/praisonai/integrations/managed_local.py -B2 -A5 | head -150
# Look for litellm usage anywhere
rg -nP "litellm" . --type=py -l 2>/dev/null | head -20

Repository: MervinPraison/PraisonAI

Length of output: 5971


🏁 Script executed:

# Find the Agent class in praisonaiagents
fd -t f -e py "agent\.py" src/praisonai-agents/ | head -5
# Check the Agent init and model handling
rg -nP "class Agent" src/praisonai-agents/praisonaiagents/agent/ -A 20 | head -100
# Look for litellm usage in Agent-related files
rg -nP "litellm\.completion|from litellm" src/praisonai-agents/praisonaiagents/agent/ -B2 -A5 | head -150

Repository: MervinPraison/PraisonAI

Length of output: 15765


Remove the unnecessary google-generativeai SDK check for litellm routing.

The example uses LocalAgent with model gemini/gemini-2.0-flash, which is a litellm routing prefix. litellm handles Gemini calls directly via REST API and does not depend on the google-generativeai SDK. The guard at lines 14–18 will incorrectly skip this example on otherwise-functional machines (litellm + GEMINI_API_KEY set, but without the SDK), since only litellm is required.

If the example should work with just litellm, drop the SDK check entirely (env-var check via litellm is sufficient). If you specifically want to enforce the SDK, update the comment to reflect that intention.

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/python/managed-agents/provider/runtime_local_gemini.py` around lines
14 - 18, Remove the unnecessary google.generativeai SDK import guard that
incorrectly exits when using litellm routing: delete the try/except block that
imports google.generativeai (the lines around the import and sys.exit) so the
example using LocalAgent with model "gemini/gemini-2.0-flash" can run with only
litellm and GEMINI_API_KEY; alternatively, if you intend to require the SDK,
replace that block with a clear comment stating the SDK is optional and that
litellm handles Gemini via REST so only litellm + GEMINI_API_KEY are required.

@greptile-apps
Copy link
Copy Markdown

greptile-apps Bot commented Apr 26, 2026

Greptile Summary

This PR surgically fixes the two post-merge blockers from #1550: aligns the test_managed_agent_compute_provider_warnings test with the actual backward-compat behaviour (DeprecationWarning + LocalManagedAgent return rather than ValueError), and adds 3-layer skip-guards to all four runtime example scripts so they exit cleanly on machines without credentials or SDKs. No public API surface is touched.

Confidence Score: 5/5

Safe to merge β€” all findings are P2 style suggestions; both stated blockers are correctly resolved.

The test fix is verified against the actual implementation (warning message matches the regex and the factory does return LocalManagedAgent). All four skip-guards correctly gate on env keys / SDK availability / service reachability before heavy imports. The one P2 note (unconditional import requests in the Ollama file) doesn't affect environments where requests is installed as a project dependency.

examples/python/managed-agents/provider/runtime_local_ollama.py β€” minor: import requests should be wrapped in try/except for full consistency with the skip-guard pattern.

Important Files Changed

Filename Overview
tests/unit/integrations/test_backend_semantics.py Renamed test and updated assertions to match the actual backward-compat implementation: DeprecationWarning + LocalManagedAgent return, not ValueError. Regex "compute.*deprecated" correctly matches the warning text.
examples/python/managed-agents/provider/runtime_hosted_anthropic.py Adds env-key guard + SDK import guard before heavy imports; clean exit(0) when prerequisites are absent.
examples/python/managed-agents/provider/runtime_local_openai.py Adds OPENAI_API_KEY env guard + openai SDK import guard; consistent with the pattern used across the other examples.
examples/python/managed-agents/provider/runtime_local_gemini.py Adds GEMINI_API_KEY/GOOGLE_API_KEY env guard + google.generativeai import guard before heavy imports.
examples/python/managed-agents/provider/runtime_local_ollama.py Adds Ollama daemon reachability guard, but import requests is unconditional at the top level β€” if requests is missing, a bare ImportError is raised instead of a clean skip message.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[python runtime_*.py] --> B{env key set?}
    B -- No --> C[print skip + exit 0]
    B -- Yes --> D{SDK importable?}
    D -- No --> E[print skip + exit 0]
    D -- Yes --> F{Service reachable?\nOllama only}
    F -- No --> G[print skip + exit 0]
    F -- Yes --> H[heavy imports + run example]
    D -- N/A --> H
Loading

Reviews (1): Last reviewed commit: "fix: resolve P1 issues from local verifi..." | Re-trigger Greptile

Uses the new canonical LocalAgent class which clearly communicates that only the
agent loop runs locally. The LLM calls go to a local Ollama instance, no managed runtime involved.
"""
import os
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Unconditional requests import breaks the skip-guard pattern

import requests sits above the guard, so if requests is not installed the script raises a bare ImportError rather than printing a clean [skip] message and exiting 0 β€” exactly the failure mode the guards are meant to prevent. Wrapping it in a try/except is consistent with the SDK-import guards used in the other three files.

Suggested change
import os
import os
import sys
# Skip guards - exit cleanly if prerequisites not met
# Check if Ollama daemon is running
try:
import requests
except ImportError:
print("[skip] requests library not installed")
sys.exit(0)
try:
response = requests.get("http://localhost:11434/api/tags", timeout=2)

@MervinPraison MervinPraison merged commit 6b3a999 into main Apr 26, 2026
49 of 50 checks passed
@MervinPraison MervinPraison deleted the claude/pr-1550-20260426-0625 branch April 26, 2026 06:37
@MervinPraison
Copy link
Copy Markdown
Owner Author

@claude You are the FINAL architecture reviewer. If the branch is under MervinPraison/PraisonAI (not a fork), you are able to make modifications to this branch and push directly. SCOPE: Focus ONLY on Python packages (praisonaiagents, praisonai). Do NOT modify praisonai-rust or praisonai-ts. Read ALL comments above from Gemini, Qodo, CodeRabbit, and Copilot carefully before responding.

Phase 1: Review per AGENTS.md

  1. Protocol-driven: check heavy implementations vs core SDK
  2. Backward compatible: ensure zero feature regressions
  3. Performance: no hot-path regressions

Phase 2: FIX Valid Issues
4. For any VALID bugs or architectural flaws found by Gemini, CodeRabbit, Qodo, Copilot, or any other reviewer: implement the fix
5. Push all code fixes directly to THIS branch (do NOT create a new PR)
6. Comment a summary of exact files modified and what you skipped

Phase 3: Final Verdict
7. If all issues are resolved, approve the PR / close the Issue
8. If blocking issues remain, request changes / leave clear action items

@praisonai-triage-agent
Copy link
Copy Markdown
Contributor

praisonai-triage-agent Bot commented Apr 26, 2026

Claude finished @MervinPraison's task β€”β€” View job β€’ claude/pr-1555-20260426-0642 β€’ [Create PR βž”](https://github.com/MervinPraison/PraisonAI/compare/main...claude/pr-1555-20260426-0642?quick_pull=1&title=PR%20%231555%3A%20Changes%20from%20Claude&body=This%20PR%20addresses%20pr%20%231555%0A%0AGenerated%20with%20%5BClaude%20Code%5D(https%3A%2F%2Fclaude.ai%2Fcode)


Final Architecture Review - PR #1555 βœ…

Todo List:

  • Read AGENTS.md architecture guidelines
  • Analyze all reviewer feedback (Gemini, CodeRabbit, Qodo, Copilot, Greptile)
  • Review 5 changed files for architectural compliance
  • Implement fixes for valid issues
  • Provide final verdict

Phase 1: Architecture Review per AGENTS.md

βœ… Protocol-Driven Core: All changes are in wrapper layer (praisonai/ package) and examples. No core SDK modifications.

βœ… Backward Compatible: Test correctly aligns with implementation - ManagedAgent(provider="modal/e2b/docker") should emit DeprecationWarning and return LocalManagedAgent, not ValueError. Zero public API changes.

βœ… Performance: Skip-guards are lightweight checks that exit early when prerequisites missing. No hot-path regressions.

Phase 2: Implemented Fixes for Valid Issues

Fixed all valid architectural issues identified by reviewers:

Fixed in ae0f7bde:

  1. βœ… Ollama requests import (runtime_local_ollama.py:6-18):

    • Moved import requests inside try-catch for consistency with skip-guard pattern
    • Changed broad Exception to specific requests.RequestException
    • Prevents ImportError when requests not installed
  2. βœ… Gemini SDK guard (runtime_local_gemini.py:14-18):

    • Removed unnecessary google.generativeai import check
    • LocalAgent uses litellm exclusively - only needs GEMINI_API_KEY
    • Added clarifying comment about litellm REST handling
  3. βœ… Test parametrization (test_backend_semantics.py:97-113):

    • Replaced 3 repetitive blocks with @pytest.mark.parametrize
    • Added flyio and daytona providers for comprehensive coverage
    • Reduced duplication and improved maintainability

Phase 3: Final Verdict

🟒 APPROVED - READY TO MERGE

Why this PR is architecturally sound:

  1. Surgically focused: Fixes exactly the 2 P1 blockers identified - failing test + crash-prone examples
  2. Follows AGENTS.md: All wrapper-level changes, no core SDK pollution
  3. Zero breaking changes: Maintains backward compatibility while fixing implementation alignment
  4. Addresses all valid reviewer feedback: Improved skip-guard consistency and test quality
  5. Performance safe: Early exit patterns, no import-time regressions

Files modified:

  • βœ… examples/python/managed-agents/provider/runtime_local_ollama.py - Fixed requests import guard
  • βœ… examples/python/managed-agents/provider/runtime_local_gemini.py - Removed false SDK requirement
  • βœ… tests/unit/integrations/test_backend_semantics.py - Parametrized repetitive test blocks

This PR successfully resolves the P1 issues from #1550 while improving code quality based on comprehensive reviewer feedback.


@MervinPraison MervinPraison review requested due to automatic review settings April 26, 2026 06:55
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant