Fix: P1 follow-up to #1550 → failing test + example skip-guards #1555
MervinPraison merged 1 commit into main
Conversation
…ions and add example skip guards

- Fix test_managed_agent_compute_provider_errors to expect DeprecationWarning instead of ValueError
- Add 3-layer skip guards to all 4 runtime example scripts for clean exit without credentials
- Preserve backward compatibility while implementing the HostedAgent/LocalAgent architectural split

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
📝 Walkthrough

The PR adds prerequisite validation checks to four runtime example scripts and updates a test to expect deprecation warnings instead of exceptions. Runtime scripts now verify required environment variables and SDK availability before executing their main logic, gracefully skipping execution when dependencies are unavailable.

Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related issues
Possibly related PRs
Suggested labels
Poem
Pre-merge checks | ✅ Passed checks (5 passed)
Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
📝 Generate docstrings
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
@copilot Do a thorough review of this PR. Read ALL existing reviewer comments above from Qodo, CodeRabbit, and Gemini first and incorporate their findings. Review areas:
Code Review
This pull request introduces skip guards across several example scripts to ensure they exit cleanly when environment variables or required SDKs are missing. It also updates the integration tests to check for deprecation warnings rather than errors when using legacy compute providers. The review feedback suggests cleaning up an unused import and improving the robustness of the Ollama skip guard, as well as refactoring the unit tests using parametrization for better maintainability.
```python
import os
import sys
import requests

# Skip guards - exit cleanly if prerequisites not met
# Check if Ollama daemon is running
try:
    response = requests.get("http://localhost:11434/api/tags", timeout=2)
    if response.status_code != 200:
        raise Exception("Ollama not responding")
except Exception:
    print("[skip] Ollama daemon not running at localhost:11434")
    sys.exit(0)
```
The `os` module is imported but not used in this file. Additionally, moving the `requests` import inside the try block ensures that the script exits cleanly with a skip message even if the `requests` library is not installed, which is consistent with the other example scripts that handle their SDK imports gracefully to avoid tracebacks for new users.
Suggested change:

```diff
-import os
 import sys
-import requests

 # Skip guards - exit cleanly if prerequisites not met
 # Check if Ollama daemon is running
 try:
+    import requests
     response = requests.get("http://localhost:11434/api/tags", timeout=2)
     if response.status_code != 200:
         raise Exception("Ollama not responding")
 except Exception:
     print("[skip] Ollama daemon not running at localhost:11434")
     sys.exit(0)
```
```diff
 def test_managed_agent_compute_provider_warnings():
     """Test that ManagedAgent(provider='modal'|'e2b'|...) emits DeprecationWarning and returns LocalManagedAgent."""
     from praisonai.integrations.managed_agents import ManagedAgent
     from praisonai.integrations.managed_local import LocalManagedAgent

-    # Compute providers should raise ValueError
-    with pytest.raises(ValueError) as exc_info:
-        ManagedAgent(provider="modal")
-    assert "compute" in str(exc_info.value).lower()
-    assert "LocalAgent" in str(exc_info.value)
+    # Compute providers should emit DeprecationWarning and return LocalManagedAgent (backward compatibility)
+    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
+        obj = ManagedAgent(provider="modal")
+    assert isinstance(obj, LocalManagedAgent)

-    with pytest.raises(ValueError) as exc_info:
-        ManagedAgent(provider="e2b")
-    assert "compute" in str(exc_info.value).lower()
-    assert "LocalAgent" in str(exc_info.value)
+    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
+        obj = ManagedAgent(provider="e2b")
+    assert isinstance(obj, LocalManagedAgent)

-    with pytest.raises(ValueError) as exc_info:
-        ManagedAgent(provider="docker")
-    assert "compute" in str(exc_info.value).lower()
-    assert "LocalAgent" in str(exc_info.value)
+    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
+        obj = ManagedAgent(provider="docker")
+    assert isinstance(obj, LocalManagedAgent)
```
The test logic for different compute providers is repetitive. Using `@pytest.mark.parametrize` would make the test cleaner, more readable, and easier to maintain when adding new providers in the future.
Suggested change:

```python
@pytest.mark.parametrize("provider", ["modal", "e2b", "docker"])
def test_managed_agent_compute_provider_warnings(provider):
    """Test that ManagedAgent(provider='modal'|'e2b'|...) emits DeprecationWarning and returns LocalManagedAgent."""
    from praisonai.integrations.managed_agents import ManagedAgent
    from praisonai.integrations.managed_local import LocalManagedAgent

    # Compute providers should emit DeprecationWarning and return LocalManagedAgent (backward compatibility)
    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
        obj = ManagedAgent(provider=provider)
    assert isinstance(obj, LocalManagedAgent)
```
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/unit/integrations/test_backend_semantics.py (1)
97-113: Test correctly aligns with the implementation; consider parametrizing the three near-identical blocks.

The rename and the new assertions match `ManagedAgent`'s actual behavior for compute providers: the regex `compute.*deprecated` correctly matches the emitted message ("…for compute providers is deprecated…"), and the factory does return a `LocalManagedAgent` instance for `modal`/`e2b`/`docker`. Backward-compat semantics are now properly exercised.

One small nit: the three blocks are identical except for the provider value, so a `pytest.mark.parametrize` would tighten this up and make it trivial to add `flyio`/`daytona` (which the implementation also routes through this branch but the test currently omits).

♻️ Optional parametrized version

```diff
-def test_managed_agent_compute_provider_warnings():
-    """Test that ManagedAgent(provider='modal'|'e2b'|...) emits DeprecationWarning and returns LocalManagedAgent."""
-    from praisonai.integrations.managed_agents import ManagedAgent
-    from praisonai.integrations.managed_local import LocalManagedAgent
-
-    # Compute providers should emit DeprecationWarning and return LocalManagedAgent (backward compatibility)
-    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
-        obj = ManagedAgent(provider="modal")
-    assert isinstance(obj, LocalManagedAgent)
-
-    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
-        obj = ManagedAgent(provider="e2b")
-    assert isinstance(obj, LocalManagedAgent)
-
-    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
-        obj = ManagedAgent(provider="docker")
-    assert isinstance(obj, LocalManagedAgent)
+@pytest.mark.parametrize("provider", ["modal", "e2b", "docker", "flyio", "daytona"])
+def test_managed_agent_compute_provider_warnings(provider):
+    """ManagedAgent(provider=<compute>) emits DeprecationWarning and returns LocalManagedAgent."""
+    from praisonai.integrations.managed_agents import ManagedAgent
+    from praisonai.integrations.managed_local import LocalManagedAgent
+
+    with pytest.warns(DeprecationWarning, match="compute.*deprecated"):
+        obj = ManagedAgent(provider=provider)
+    assert isinstance(obj, LocalManagedAgent)
```

Based on learnings: "Public API changes require a deprecation cycle: emit DeprecationWarning for one release before breaking change"; the updated assertions correctly enforce that contract for compute providers.
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/unit/integrations/test_backend_semantics.py` around lines 97-113: the three identical blocks in test_managed_agent_compute_provider_warnings should be parametrized to reduce duplication and cover additional compute providers; update the test function to use pytest.mark.parametrize over provider values (e.g., "modal", "e2b", "docker", "flyio", "daytona"), and inside the single test body call ManagedAgent(provider=provider) and assert isinstance(result, LocalManagedAgent) while using pytest.warns(DeprecationWarning, match="compute.*deprecated") for the warning check; keep references to ManagedAgent and LocalManagedAgent so the test still validates the deprecation warning and the factory return type.

examples/python/managed-agents/provider/runtime_local_ollama.py (1)
6-18: Tighten the Ollama preflight: lazy-import `requests` and narrow the exception.

Two small concerns in this guard:

- `import requests` runs unconditionally at module level. On a no-credentials machine that also lacks `requests`, the script will fail with `ImportError` and a non-zero exit before the `[skip]` line is printed, breaking exactly the contract `all_runtimes.py` relies on (it expects `[skip]` + exit 0). Move the import inside the guard (or wrap in `try/except ImportError` with its own `[skip]`).
- The `try: ... raise Exception("Ollama not responding") ... except Exception` pattern trips Ruff BLE001 and obscures intent. Catching `requests.RequestException` and checking `status_code` separately is cleaner.

♻️ Proposed refactor

```diff
 import os
 import sys
-import requests
-# Skip guards - exit cleanly if prerequisites not met
-# Check if Ollama daemon is running
-try:
-    response = requests.get("http://localhost:11434/api/tags", timeout=2)
-    if response.status_code != 200:
-        raise Exception("Ollama not responding")
-except Exception:
-    print("[skip] Ollama daemon not running at localhost:11434")
-    sys.exit(0)
+# Skip guards - exit cleanly if prerequisites not met
+try:
+    import requests
+except ImportError:
+    print("[skip] requests not installed")
+    sys.exit(0)
+
+# Check if Ollama daemon is running
+try:
+    response = requests.get("http://localhost:11434/api/tags", timeout=2)
+except requests.RequestException:
+    response = None
+if response is None or response.status_code != 200:
+    print("[skip] Ollama daemon not running at localhost:11434")
+    sys.exit(0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/python/managed-agents/provider/runtime_local_ollama.py` around lines 6 - 18, Move the unconditional import into the preflight and narrow the exception handling: inside the preflight block attempt to import requests in a try/except ImportError that prints "[skip] Ollama daemon not running at localhost:11434" and sys.exit(0) on ImportError, then perform the requests.get call in a try/except that catches requests.RequestException; after the GET check response.status_code and if it is not 200 print the same "[skip] ..." message and sys.exit(0). This updates the top-level guard in runtime_local_ollama.py to lazy-import requests and to catch RequestException instead of a broad Exception.
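The "[skip] + exit 0" contract described above can be checked mechanically. The sketch below spawns a minimal stand-in for a guarded example script and verifies both halves of the contract; the stand-in script and the checking logic are illustrative, not the actual `all_runtimes.py`.

```python
import subprocess
import sys
import textwrap

# A minimal stand-in for a guarded example script: it skips cleanly.
script = textwrap.dedent("""\
    import sys
    print("[skip] EXAMPLE_API_KEY not set")
    sys.exit(0)
""")

proc = subprocess.run(
    [sys.executable, "-c", script], capture_output=True, text=True
)
# The runner contract: exit code 0 AND a line starting with "[skip]".
skipped_cleanly = proc.returncode == 0 and proc.stdout.startswith("[skip]")
print("skipped cleanly:", skipped_cleanly)  # skipped cleanly: True
```

A bare `ImportError` before the guard would break both conditions at once: non-zero exit and a traceback instead of the `[skip]` line.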
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/python/managed-agents/provider/runtime_local_gemini.py`:
- Around line 14-18: Remove the unnecessary google.generativeai SDK import guard
that incorrectly exits when using litellm routing: delete the try/except block
that imports google.generativeai (the lines around the import and sys.exit) so
the example using LocalAgent with model "gemini/gemini-2.0-flash" can run with
only litellm and GEMINI_API_KEY; alternatively, if you intend to require the
SDK, replace that block with a clear comment stating the SDK is optional and
that litellm handles Gemini via REST so only litellm + GEMINI_API_KEY are
required.
---
Nitpick comments:
In `@examples/python/managed-agents/provider/runtime_local_ollama.py`:
- Around line 6-18: Move the unconditional import into the preflight and narrow
the exception handling: inside the preflight block attempt to import requests in
a try/except ImportError that prints "[skip] Ollama daemon not running at
localhost:11434" and sys.exit(0) on ImportError, then perform the requests.get
call in a try/except that catches requests.RequestException; after the GET check
response.status_code and if it is not 200 print the same "[skip] ..." message
and sys.exit(0). This updates the top-level guard in runtime_local_ollama.py to
lazy-import requests and to catch RequestException instead of a broad Exception.
In `@tests/unit/integrations/test_backend_semantics.py`:
- Around line 97-113: The three identical blocks in
test_managed_agent_compute_provider_warnings should be parametrized to reduce
duplication and cover additional compute providers; update the test function
(test_managed_agent_compute_provider_warnings) to use pytest.mark.parametrize
over provider values (e.g., "modal", "e2b", "docker", "flyio", "daytona"), and
inside the single test body call ManagedAgent(provider=provider) and assert
isinstance(result, LocalManagedAgent) while using
pytest.warns(DeprecationWarning, match="compute.*deprecated") for the warning
check; keep references to ManagedAgent and LocalManagedAgent so the test still
validates the deprecation warning and the factory return type.
Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4a17fd3c-a616-48c3-9399-8fd9f01fc57d
📒 Files selected for processing (5)

- examples/python/managed-agents/provider/runtime_hosted_anthropic.py
- examples/python/managed-agents/provider/runtime_local_gemini.py
- examples/python/managed-agents/provider/runtime_local_ollama.py
- examples/python/managed-agents/provider/runtime_local_openai.py
- tests/unit/integrations/test_backend_semantics.py
```python
try:
    import google.generativeai
except ImportError:
    print("[skip] google-generativeai SDK not installed")
    sys.exit(0)
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Confirm whether LocalAgent / LocalAgentConfig routes `gemini/...` through litellm
# (in which case google-generativeai is not required) or imports google.generativeai directly.
fd -t f -e py 'local_agent|LocalAgent' | head -50
rg -nP --type=py -C3 '\b(google\.generativeai|google_generativeai|genai)\b'
rg -nP --type=py -C3 '\blitellm\b' -g '!**/tests/**' | head -80
ast-grep --pattern 'class LocalAgent($$$):
$$$'
```

Repository: MervinPraison/PraisonAI
Length of output: 20396
🏁 Script executed:

```shell
# Find LocalManagedAgent parent class and _resolve_model implementation
fd -t f -e py | xargs grep -l "class LocalManagedAgent"
rg -nP --type=py "_resolve_model|resolve.*model" -A5 -B2 | head -100
# Check if gemini/ is routed through litellm
rg -nP --type=py "gemini/" -C3 | head -80
# Look for how LocalAgent actually uses models
rg -nP --type=py "self\.model|config\.model" src/praisonai/praisonai/integrations/local_agent.py -A2 -B2 | head -50
```

Repository: MervinPraison/PraisonAI
Length of output: 17938
🏁 Script executed:

```shell
# Check how LocalManagedAgent actually uses the resolved model
cat -n src/praisonai/praisonai/integrations/managed_local.py | head -100
rg -nP --type=py "def.*run|def.*execute|def.*complete" src/praisonai/praisonai/integrations/managed_local.py | head -20
# Look for where model is used in completions/LLM calls
rg -nP --type=py "litellm|google\.generativeai|genai" src/praisonai/praisonai/integrations/managed_local.py -A3 -B3
```

Repository: MervinPraison/PraisonAI
Length of output: 4290
🏁 Script executed:

```shell
# Find where model is actually used in completion calls
rg -nP --type=py "completion|ChatCompletion|call.*model" src/praisonai/praisonai/integrations/managed_local.py -A5 | head -150
# Check if google.generativeai is imported anywhere
rg -nP --type=py "^import|^from" src/praisonai/praisonai/integrations/managed_local.py | head -50
# Find the actual LLM completion call
rg -nP --type=py "\.completion\(|\.call\(" src/praisonai/praisonai/integrations/managed_local.py -B5 -A10 | head -100
```

Repository: MervinPraison/PraisonAI
Length of output: 518
🏁 Script executed:

```shell
# Get full file to understand the actual completion flow
wc -l src/praisonai/praisonai/integrations/managed_local.py
# Find all imports including from external packages
head -100 src/praisonai/praisonai/integrations/managed_local.py
# Search for where model is used in call
rg -nP "self\.model|self\._.*model|resolved.*model" src/praisonai/praisonai/integrations/managed_local.py -B2 -A5 | head -150
# Look for litellm usage anywhere
rg -nP "litellm" . --type=py -l 2>/dev/null | head -20
```

Repository: MervinPraison/PraisonAI
Length of output: 5971
🏁 Script executed:

```shell
# Find the Agent class in praisonaiagents
fd -t f -e py "agent\.py" src/praisonai-agents/ | head -5
# Check the Agent init and model handling
rg -nP "class Agent" src/praisonai-agents/praisonaiagents/agent/ -A 20 | head -100
# Look for litellm usage in Agent-related files
rg -nP "litellm\.completion|from litellm" src/praisonai-agents/praisonaiagents/agent/ -B2 -A5 | head -150
```

Repository: MervinPraison/PraisonAI
Length of output: 15765
Remove the unnecessary google-generativeai SDK check for litellm routing.
The example uses `LocalAgent` with model `gemini/gemini-2.0-flash`, which is a litellm routing prefix. litellm handles Gemini calls directly via REST API and does not depend on the `google-generativeai` SDK. The guard at lines 14–18 will incorrectly skip this example on otherwise-functional machines (litellm + `GEMINI_API_KEY` set, but without the SDK), since only litellm is required.
If the example should work with just litellm, drop the SDK check entirely (env-var check via litellm is sufficient). If you specifically want to enforce the SDK, update the comment to reflect that intention.
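To make the prefix-routing point concrete, here is a minimal sketch of the idea; the `route_model` helper and its default-provider fallback are hypothetical illustrations, not litellm's actual resolution code.

```python
# Hypothetical sketch of prefix-based model routing; litellm's real
# resolution rules are more involved than this.
def route_model(model: str) -> tuple[str, str]:
    """Split a "provider/model" string into (provider, bare model name)."""
    provider, _, name = model.partition("/")
    if name:  # a prefix like "gemini/" selects the provider backend
        return provider, name
    return "openai", model  # assumed default when no prefix is given

print(route_model("gemini/gemini-2.0-flash"))  # ('gemini', 'gemini-2.0-flash')
print(route_model("gpt-4o"))                   # ('openai', 'gpt-4o')
```

Because the `gemini/` prefix alone selects the backend, only litellm and a `GEMINI_API_KEY` are needed at runtime; no provider SDK import participates in the dispatch.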
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/python/managed-agents/provider/runtime_local_gemini.py` around lines
14 - 18, Remove the unnecessary google.generativeai SDK import guard that
incorrectly exits when using litellm routing: delete the try/except block that
imports google.generativeai (the lines around the import and sys.exit) so the
example using LocalAgent with model "gemini/gemini-2.0-flash" can run with only
litellm and GEMINI_API_KEY; alternatively, if you intend to require the SDK,
replace that block with a clear comment stating the SDK is optional and that
litellm handles Gemini via REST so only litellm + GEMINI_API_KEY are required.
Greptile Summary

This PR surgically fixes the two post-merge blockers from #1550: aligns the

Confidence Score: 5/5

Safe to merge: all findings are P2 style suggestions; both stated blockers are correctly resolved. The test fix is verified against the actual implementation (warning message matches the regex and the factory does return LocalManagedAgent). All four skip-guards correctly gate on env keys / SDK availability / service reachability before heavy imports. The one P2 note (unconditional `import requests` in the Ollama file) doesn't affect environments where `requests` is installed as a project dependency.

examples/python/managed-agents/provider/runtime_local_ollama.py: minor: `import requests` should be wrapped in try/except for full consistency with the skip-guard pattern.

Important Files Changed
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[python runtime_*.py] --> B{env key set?}
    B -- No --> C[print skip + exit 0]
    B -- Yes --> D{SDK importable?}
    D -- No --> E[print skip + exit 0]
    D -- Yes --> F{Service reachable?\nOllama only}
    F -- No --> G[print skip + exit 0]
    F -- Yes --> H[heavy imports + run example]
    D -- N/A --> H
```
Reviews (1): Last reviewed commit: "fix: resolve P1 issues from local verifi…" | Re-trigger Greptile
```python
Uses the new canonical LocalAgent class which clearly communicates that only the
agent loop runs locally. The LLM calls go to a local Ollama instance, no managed runtime involved.
"""
import os
```
Unconditional `requests` import breaks the skip-guard pattern

`import requests` sits above the guard, so if `requests` is not installed the script raises a bare `ImportError` rather than printing a clean `[skip]` message and exiting 0, exactly the failure mode the guards are meant to prevent. Wrapping it in a `try/except` is consistent with the SDK-import guards used in the other three files.
Suggested change:

```python
import os
import sys

# Skip guards - exit cleanly if prerequisites not met
# Check if Ollama daemon is running
try:
    import requests
except ImportError:
    print("[skip] requests library not installed")
    sys.exit(0)

try:
    response = requests.get("http://localhost:11434/api/tags", timeout=2)
```
|
@claude You are the FINAL architecture reviewer. If the branch is under MervinPraison/PraisonAI (not a fork), you are able to make modifications to this branch and push directly. SCOPE: Focus ONLY on Python packages (praisonaiagents, praisonai). Do NOT modify praisonai-rust or praisonai-ts. Read ALL comments above from Gemini, Qodo, CodeRabbit, and Copilot carefully before responding.

Phase 1: Review per AGENTS.md
Phase 2: FIX Valid Issues
Phase 3: Final Verdict
Claude finished @MervinPraison's task • View job

Final Architecture Review - PR #1555 ✅

Todo List:
Phase 1: Architecture Review per AGENTS.md

- ✅ Protocol-Driven Core: All changes are in the wrapper layer (…)
- ✅ Backward Compatible: Test correctly aligns with implementation
- ✅ Performance: Skip-guards are lightweight checks that exit early when prerequisites are missing. No hot-path regressions.

Phase 2: Implemented Fixes for Valid Issues

Fixed all valid architectural issues identified by reviewers. Fixed in ae0f7bde:
Phase 3: Final Verdict

🟢 APPROVED - READY TO MERGE

Why this PR is architecturally sound:
Files modified:
This PR successfully resolves the P1 issues from #1550 while improving code quality based on comprehensive reviewer feedback.
Follow-up to #1550: Resolve P1 issues found during local verification
PR #1550 was merged on 2026-04-25 with two latent issues that surfaced during post-merge local validation:
1. `test_managed_agent_compute_provider_errors` asserted `pytest.raises(ValueError)`, but the implementation correctly returns `LocalManagedAgent` with a `DeprecationWarning` (the backward-compat behaviour). CI would fail the moment this test ran.
2. The `runtime_*.py` files exited non-zero on a no-creds machine (ImportError, ValueError, UnboundLocalError). New contributors and docs builds would break.

This PR fixes both, surgically. 5 files, +73 / -15.
Changes
1. Test alignment (`tests/unit/integrations/test_backend_semantics.py`)

Renamed `test_managed_agent_compute_provider_errors` → `test_managed_agent_compute_provider_warnings` and updated assertions to match the implementation: `pytest.warns(DeprecationWarning, match="compute")` + `isinstance(returned, LocalManagedAgent)`. Implementation untouched.
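The deprecation contract the renamed test enforces can be sketched in isolation. `LocalManagedAgentSketch` and `managed_agent` below are hypothetical stand-ins for the real `LocalManagedAgent` class and `ManagedAgent` factory, showing only the shape of the behaviour the assertions check.

```python
import warnings

class LocalManagedAgentSketch:
    """Hypothetical stand-in for LocalManagedAgent."""
    def __init__(self, provider: str) -> None:
        self.provider = provider

def managed_agent(provider: str):
    """Hypothetical factory: compute providers still work but warn."""
    if provider in {"modal", "e2b", "docker"}:
        warnings.warn(
            f"provider={provider!r} for compute providers is deprecated; use LocalAgent",
            DeprecationWarning,
            stacklevel=2,
        )
        return LocalManagedAgentSketch(provider)
    raise ValueError(f"unknown provider: {provider}")

# Mirrors the updated assertions: a warning is emitted AND an object is returned.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    obj = managed_agent("modal")
assert isinstance(obj, LocalManagedAgentSketch)
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
print("deprecation contract holds")  # deprecation contract holds
```

This is the key difference from the old test: `pytest.raises(ValueError)` would swallow the return value, while `pytest.warns` lets the test also assert on the object the factory hands back.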
Added the standard 3-layer guard (env → SDK availability → service reachability) to:

- `runtime_hosted_anthropic.py`: guards on `ANTHROPIC_API_KEY`/`CLAUDE_API_KEY` + `import anthropic`
- `runtime_local_openai.py`: guards on `OPENAI_API_KEY` + `import openai`
- `runtime_local_gemini.py`: guards on `GEMINI_API_KEY`/`GOOGLE_API_KEY`
- `runtime_local_ollama.py`: pings `http://localhost:11434/api/tags` with a short timeout
Verification (local)
Before / after evidence

| Command | Before (74c4cecc) | After (15453cdd) |
| --- | --- | --- |
| `pytest tests/unit/integrations/test_backend_semantics.py` | failing | passing |
| `python runtime_hosted_anthropic.py` | `ImportError: anthropic SDK required` | `[skip] ANTHROPIC_API_KEY not set` |
| `python runtime_local_openai.py` | `ValueError: OPENAI_API_KEY required` | `[skip] OPENAI_API_KEY not set` |
| `python runtime_local_gemini.py` | `UnboundLocalError: 'logging'` | `[skip] GEMINI_API_KEY not set` |
| `python runtime_local_ollama.py` | `UnboundLocalError: 'logging'` | `[skip] Ollama daemon not running` |
Zero public API changes. Test rename is internal. Skip-guards are no-ops when credentials are present, so existing CI / user behaviour with creds is unchanged.
References
`src/praisonai-agents/.windsurf/workflows/create-examples-post.md` (Phase A Step 2)
Summary by CodeRabbit
Chores
Tests