Fix: ManagedAgent(provider=) overload conflates hosted-runtime vs LLM-routing — propose HostedAgent / LocalAgent split #1549

@MervinPraison


Overview

ManagedAgent(provider=...) currently overloads the provider= argument with two completely different meanings, and that leaks into naming across the entire managed-agent surface. When provider="anthropic" the factory returns a real managed runtime where the whole agent loop runs on Anthropic's cloud. When provider is anything else ("openai", "gemini", "ollama", "local", ...) the factory returns LocalManagedAgent — a local loop that just uses the string as a litellm routing hint for the LLM call. There is no managed runtime involved. This issue proposes (1) freezing the current provider= overload via deprecation, (2) splitting the concern into a runtime-provider axis and an LLM-provider axis, and (3) renaming the user-facing backend classes so the word "Managed" / "Sandboxed" no longer sits on both sides of the divide.
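
The asymmetry is invisible at the call site. A minimal illustration of today's behaviour (constructor details elided):

from praisonai import ManagedAgent

a = ManagedAgent(provider="anthropic")  # hosted: the entire agent loop runs on Anthropic's cloud
b = ManagedAgent(provider="gemini")     # local: LocalManagedAgent; "gemini" is only a litellm routing prefix
c = ManagedAgent(provider="openai")     # local: same class as b, different LLM hint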

Background

Where the confusion came from

Recent examples added under examples/python/managed-agents/provider/ used ManagedAgent(provider="gemini"), ManagedAgent(provider="openai"), etc. to represent "the LLM runs in that cloud". This reads as "managed runtime on Google Cloud" to new users, but the code does not do that — Google (and OpenAI, Ollama) have no managed-runtime integration wired up. The factory silently falls into the else branch and returns a local-loop backend.

Related PR context:

  • feat: separate managed runtime vs sandboxed tool execution (fixes #1523) #1526 introduced ManagedRuntimeProtocol (core SDK) that is explicitly separate from ComputeProviderProtocol — confirming the two-axis model at the protocol layer, but the wrapper layer still conflates them in the class names and factory.
  • The honest-name alias SandboxedAgent was added for LocalManagedAgent to clarify "only tools can be sandboxed, loop stays local" — but the alias sits next to ManagedAgent in exports, which makes it look more remote than ManagedAgent rather than less. Users guess exactly the wrong semantics.

Why this is valuable

  • Non-developers following a one-line example must be able to tell from the name whether their code runs on their machine or in someone else's cloud (privacy, cost, latency all depend on it).
  • Phase 2 will add real managed runtimes for E2B and Modal. If the current overload isn't fixed first, ManagedAgent(provider="e2b") would have to mean either "hosted loop on E2B" or "local loop with E2B as LLM" (it wouldn't — E2B isn't an LLM — but the precedent is there), reopening the ambiguity.

Current ecosystem state

Only Anthropic exposes a Managed Agents API that PraisonAI currently wires to. E2B and Modal have Sandbox APIs wired via compute= (tool-level only). E2B Managed Runtime and Modal Managed Functions (full hosted loops) are announced but not yet integrated — see praisonai/praisonai/integrations/sandboxed_agent.py lines 10-13 for the forward-looking docstring naming E2BManagedAgent / ModalManagedAgent.

Architecture Analysis

Current implementation

The system has two orthogonal axes at the protocol layer and one collapsed axis at the class-name / factory layer:

|  | Tools local | Tools in cloud (E2B / Modal / Fly / Daytona / Docker) |
| --- | --- | --- |
| Loop local | LocalManagedAgent / SandboxedAgent (no compute=) | LocalManagedAgent(compute="e2b") / SandboxedAgent(compute="e2b") |
| Loop hosted | AnthropicManagedAgent | (same; tools are co-located with the provider) |

The provider= overload

src/praisonai/praisonai/integrations/managed_agents.py (lines 1073-1118):

def ManagedAgent(provider=None, **kwargs):
    if provider is None:
        provider = "anthropic" if os.getenv("ANTHROPIC_API_KEY") else "local"
    if provider == "anthropic":
        return AnthropicManagedAgent(provider=provider, **kwargs)  # REAL managed runtime
    else:
        from .managed_local import LocalManagedAgent
        return LocalManagedAgent(provider=provider, **kwargs)       # local loop, LLM-only routing

And src/praisonai/praisonai/integrations/managed_local.py (lines 275-285):

def _resolve_model(self) -> str:
    model = self._cfg.get("model", "gpt-4o")
    if self.provider == "ollama" and "/" not in model:
        model = f"ollama/{model}"
    elif self.provider == "gemini" and not model.startswith("gemini"):
        model = f"gemini/{model}"
    return model

So provider= is doing two jobs:

  • When "anthropic" → chooses the runtime provider.
  • Otherwise → becomes an LLM routing prefix for litellm.
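
Tracing _resolve_model from the excerpt above (assuming no other model defaulting happens elsewhere), the second job produces:

# provider="ollama", cfg model "llama3"  → "ollama/llama3" (works, but only by convention)
# provider="gemini", cfg model unset    → "gemini/gpt-4o" (no such Gemini model)
# provider="modal",  cfg model unset    → "gpt-4o"; the "modal" hint is silently dropped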

Key file locations

| File | Purpose | Lines |
| --- | --- | --- |
| src/praisonai-agents/praisonaiagents/managed/protocols.py | ComputeProviderProtocol (tool sandbox) and ManagedRuntimeProtocol (hosted loop) — core SDK | 91, 223 |
| src/praisonai-agents/praisonaiagents/agent/protocols.py | ManagedBackendProtocol — agent↔backend contract | 306 |
| src/praisonai-agents/praisonaiagents/config/feature_configs.py | ExecutionConfig — local execution limits (max_iter, rate, budget, code_sandbox_mode: "sandbox" or "direct"). Has no cloud-compute knob. | |
| src/praisonai/praisonai/integrations/managed_agents.py | AnthropicManagedAgent, ManagedConfig, ManagedAgent() factory | 237, 199, 1073 |
| src/praisonai/praisonai/integrations/managed_local.py | LocalManagedAgent (local loop + optional compute=), silent SandboxedAgent alias | 194, 1048 |
| src/praisonai/praisonai/integrations/sandboxed_agent.py | Thin re-export of SandboxedAgent with a docstring explaining the honest meaning | 1-40 |
| src/praisonai/praisonai/integrations/compute/__init__.py | Compute provider adapters: DockerCompute, LocalCompute, DaytonaCompute, E2BCompute, ModalCompute, FlyioCompute | 8-36 |
| src/praisonai/praisonai/integrations/__init__.py | Wrapper public exports (all names above) | 30-110 |
| src/praisonai/praisonai/__init__.py | Top-level praisonai re-exports (mirror) | 105-125 |
| examples/python/managed-agents/provider/runtime_{openai,gemini,ollama}.py | Misleading examples — call ManagedAgent(provider="...") with an LLM name | |
| examples/python/managed-agents/provider/runtime_anthropic.py | Correct example — real hosted loop | |

Gap Analysis Summary

Critical gaps

| Gap | Impact | Effort |
| --- | --- | --- |
| provider= on the ManagedAgent() factory is semantically overloaded (runtime-provider OR LLM-routing hint) | Users get the wrong mental model of where their code is running | Low — add a new arg, deprecate the overload |
| runtime_openai.py / runtime_gemini.py / runtime_ollama.py examples teach the broken pattern | Docs and a recent blog post on mer.vin cement the confusion | Low — rewrite examples; alias file names |
| LocalManagedAgent / SandboxedAgent / AnthropicManagedAgent all implement the same ManagedBackendProtocol, but only the Anthropic one is "managed" in the cloud sense | The name "Managed" tells the user nothing useful | Medium — rename + alias |

Feature gaps

| Feature | Current Support | Gap |
| --- | --- | --- |
| Runtime provider selection (anthropic / e2b-hosted / modal-hosted / flyio-hosted) | Only "anthropic" is a real runtime | Need a named axis, HostedAgent(provider=...), that extends cleanly to E2B-Managed, Modal-Managed, Fly-Managed in Phase 2 |
| LLM selection for the local loop | Works via model= and hijacks provider= for litellm routing | model= is enough (it already accepts "gemini/gemini-2.0-flash", "ollama/llama3", etc.). Drop the provider= hijack. |
| Error when the user requests a non-existent managed runtime | Silent fallthrough to LocalManagedAgent | ManagedAgent(provider="modal") today returns a local loop with "modal" as an LLM hint — garbage. Should raise ValueError with a clear message until the Phase 2 runtime lands (see the snippet below). |
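
The third row in code (current behaviour per the factory excerpt above; the proposed error text is illustrative):

backend = ManagedAgent(provider="modal")   # today: LocalManagedAgent with provider="modal", no error
# proposed: ValueError: Cloud compute belongs on LocalAgent(compute=...).
#           Hosted runtimes for these providers are not yet available.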

Proposed Implementation

Phase 1 — Fix the semantics without breaking anyone

A. Core SDK — no change. ManagedRuntimeProtocol and ComputeProviderProtocol are correct. ExecutionConfig stays local-only (cloud routing belongs in wrapper).

B. Wrapper — add new canonical classes, keep every current name as a silent alias.

# New canonical names
from praisonaiagents import Agent
from praisonai.integrations import HostedAgent, HostedAgentConfig, LocalAgent, LocalAgentConfig

# 1. Hosted loop — entire agent runs in a remote managed runtime
agent = Agent(name="a", backend=HostedAgent(
    provider="anthropic",                    # only "anthropic" supported today
    config=HostedAgentConfig(
        model="claude-3-5-sonnet-latest",
        system="You are a concise assistant.",
    ),
))

# 2. Local loop, tools optional-sandboxed in a cloud compute
agent = Agent(name="b", backend=LocalAgent(
    compute="e2b",                           # or "modal", "flyio", "daytona", "docker", None
    config=LocalAgentConfig(
        model="gpt-4o-mini",                 # LLM choice here — not `provider=`
        system="You are a concise assistant.",
    ),
))

# 3. Smallest footprint: local loop + local subprocess
agent = Agent(name="c", backend=LocalAgent(
    config=LocalAgentConfig(model="gpt-4o-mini"),
))

C. ManagedAgent() factory — narrowed meaning + deprecation.

def ManagedAgent(provider="anthropic", **kwargs):
    """Deprecated factory. Use HostedAgent or LocalAgent explicitly.

    - provider="anthropic" → HostedAgent(provider="anthropic", ...)
    - provider in {"openai","gemini","ollama","local"} → LocalAgent(...)
      (DeprecationWarning: "use LocalAgent directly; put LLM name in model=")
    - provider in {"e2b","modal","flyio","daytona","docker"} → raise ValueError
      ("Cloud compute belongs on LocalAgent(compute=...). Hosted runtimes for
       these providers are not yet available.")
    - any other provider → raise ValueError.
    """

D. Config consolidation — keep the two knobs in their proper layer.

  • ExecutionConfig (Core SDK) continues to own: max_iter, max_rpm, max_execution_time, code_execution, code_mode, code_sandbox_mode, rate_limiter, max_budget. Do not add compute= here (would leak wrapper into core and break protocol-driven-core invariant).
  • compute= stays a ctor kwarg on LocalAgent (wrapper).
  • provider= stays a ctor kwarg on HostedAgent (wrapper).
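
Put together, each knob stays on its own layer (sketch; the ExecutionConfig fields are those listed above, and the constructor signatures are assumptions based on section B):

from praisonaiagents.config.feature_configs import ExecutionConfig  # core SDK: local limits only
from praisonai.integrations import HostedAgent, HostedAgentConfig, LocalAgent, LocalAgentConfig

exec_cfg = ExecutionConfig(max_iter=10, max_execution_time=120, code_sandbox_mode="sandbox")
local = LocalAgent(compute="e2b", config=LocalAgentConfig(model="gpt-4o-mini"))  # compute= : where tools run
hosted = HostedAgent(provider="anthropic",
                     config=HostedAgentConfig(model="claude-3-5-sonnet-latest"))  # provider= : where the loop runs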

Phase 2 — Wire real managed runtimes for E2B and Modal

Out of scope for this issue. Once those runtimes land, HostedAgent(provider="e2b"|"modal"|"flyio") routes to them.

Files to Create / Modify

New files

| File | Purpose |
| --- | --- |
| src/praisonai/praisonai/integrations/hosted_agent.py | New canonical HostedAgent / HostedAgentConfig. Currently aliases AnthropicManagedAgent / ManagedConfig. Docstring explains the runtime-provider axis only. |
| src/praisonai/praisonai/integrations/local_agent.py | New canonical LocalAgent / LocalAgentConfig. Currently aliases LocalManagedAgent / LocalManagedConfig. Docstring explains loop local, compute= optional, and forbids provider= on the ctor (the factory still accepts it for legacy routing only). |
| tests/unit/integrations/test_backend_semantics.py | Pin the invariants: (a) isinstance(HostedAgent(provider="anthropic"), ManagedRuntimeProtocol), (b) LocalAgent() has compute_provider is None, (c) LocalAgent(compute="e2b").compute_provider.provider_name == "e2b", (d) ManagedAgent(provider="modal") raises ValueError. See the sketch below. |
| tests/integration/test_local_agent_real.py | Real-agentic test: Agent(backend=LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))).start("Capital of France?") — skips if OPENAI_API_KEY is missing. |
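
The semantics tests could pin those invariants roughly like this (sketch; assumes the proposed classes and the compute_provider attribute described in this issue):

import pytest
from praisonai import ManagedAgent
from praisonai.integrations import HostedAgent, LocalAgent
from praisonaiagents.managed.protocols import ManagedRuntimeProtocol

def test_hosted_backend_is_a_managed_runtime():
    assert isinstance(HostedAgent(provider="anthropic"), ManagedRuntimeProtocol)

def test_local_backend_defaults_to_no_compute():
    assert LocalAgent().compute_provider is None

def test_local_backend_routes_compute():
    assert LocalAgent(compute="e2b").compute_provider.provider_name == "e2b"

def test_compute_name_as_runtime_provider_raises():
    with pytest.raises(ValueError):
        ManagedAgent(provider="modal")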

Modified files

| File | Change |
| --- | --- |
| src/praisonai/praisonai/integrations/managed_agents.py (lines 1073-1118) | Factory emits DeprecationWarning for the LLM-routing overload; raises ValueError for compute-provider names; preserves the "anthropic" path. |
| src/praisonai/praisonai/integrations/managed_local.py (lines 275-285, 1048-1049) | Remove the provider-based litellm routing (make model= carry the prefix, as litellm already supports). Keep the SandboxedAgent alias pointing at the same class, but add LocalAgent = LocalManagedAgent next to it. |
| src/praisonai/praisonai/integrations/__init__.py (lines 30-110) | Export HostedAgent, HostedAgentConfig, LocalAgent, LocalAgentConfig. Keep every current name. |
| src/praisonai/praisonai/__init__.py (lines 105-125) | Mirror the new exports in the top-level lazy __getattr__. |
| examples/python/managed-agents/provider/runtime_{openai,gemini,ollama}.py | Replace ManagedAgent(provider="openai") with LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini")); rename the files to runtime_local_{openai,gemini,ollama}.py to reflect that they are local-loop variants. Keep the old filenames as thin re-exports that print("deprecated, see runtime_local_*") and exec(open(new_path).read()) (see the shim sketch below). |
| examples/python/managed-agents/provider/runtime_anthropic.py | Switch to HostedAgent(provider="anthropic", config=HostedAgentConfig(...)). Same runtime behaviour, new name. |
| examples/python/managed-agents/provider/all_runtimes.py | Update the list of files. |
| src/praisonai-agents/AGENTS.md | Add a glossary row mapping old → new names and a 2×2 table. |
| src/praisonai-agents/.windsurf/workflows/create-examples-post.md | Update the workflow to reference HostedAgent / LocalAgent instead of SandboxedAgent in examples. |
| /tmp → mer.vin post 50151 | Optional follow-up to update the blog post once the rename lands. |
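
The example-file shim could be as small as this (sketch; runtime_local_openai.py is the proposed new filename):

# examples/python/managed-agents/provider/runtime_openai.py (deprecation shim)
import pathlib

print("deprecated, see runtime_local_openai.py")
_new = pathlib.Path(__file__).with_name("runtime_local_openai.py")
exec(compile(_new.read_text(), str(_new), "exec"))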

Technical Considerations

Dependencies

No new runtime deps. anthropic, e2b, modal, daytona-sdk, aiohttp (Fly.io) stay optional and lazy, same as today.

Performance impact

All new modules follow the existing lazy-import pattern in src/praisonai/praisonai/__init__.py:__getattr__. No module-level imports of heavy SDKs. Import-time cost of the wrapper is unchanged.
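
Following that pattern, the new names would be resolved on first access, roughly (sketch; the hosted_agent / local_agent module paths come from the New files table):

_LAZY_EXPORTS = {
    "HostedAgent": "praisonai.integrations.hosted_agent",
    "HostedAgentConfig": "praisonai.integrations.hosted_agent",
    "LocalAgent": "praisonai.integrations.local_agent",
    "LocalAgentConfig": "praisonai.integrations.local_agent",
}

def __getattr__(name):  # PEP 562: runs only when the attribute is not found normally
    if name in _LAZY_EXPORTS:
        import importlib
        return getattr(importlib.import_module(_LAZY_EXPORTS[name]), name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")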

Safety / approval

  • ManagedAgent(provider="e2b"/"modal"/...) today silently produces a local loop with a broken LLM string. After this change it raises ValueError with an actionable message pointing at LocalAgent(compute=...), which is safer by default.
  • DeprecationWarning for the LLM-routing overload lets downstream users migrate without breakage.

Multi-agent safety

No change to concurrency model. HostedAgent and LocalAgent both satisfy ManagedBackendProtocol, so Agent(backend=…) delegation is unchanged.

Backward compatibility

Hard requirement: every currently exported name must still import and behave exactly as today. All of these continue to work:

  • from praisonai import ManagedAgent, ManagedConfig, AnthropicManagedAgent, LocalManagedAgent, LocalManagedConfig, SandboxedAgent, SandboxedAgentConfig, ManagedAgentIntegration, ManagedBackendConfig
  • ManagedAgent(provider="anthropic"), ManagedAgent(provider="openai") (with DeprecationWarning), ManagedAgent() auto-detect
  • Existing blog post (mer.vin post 50151) and all examples/python/managed-agents/provider/*_compute.py files keep working

Acceptance Criteria

  • New HostedAgent, HostedAgentConfig, LocalAgent, LocalAgentConfig classes are importable from praisonai and praisonai.integrations.
  • HostedAgent(provider="anthropic") returns an instance of the existing AnthropicManagedAgent class (or a subclass).
  • LocalAgent() returns an instance equivalent to LocalManagedAgent() today.
  • LocalAgent(compute="e2b").compute_provider.provider_name == "e2b".
  • ManagedAgent(provider="modal" | "e2b" | "flyio" | "daytona" | "docker") raises ValueError with a message pointing at LocalAgent(compute=...).
  • ManagedAgent(provider="openai" | "gemini" | "ollama" | "local") still works and emits a single DeprecationWarning per process.
  • Every old import path (ManagedAgent, LocalManagedAgent, SandboxedAgent, AnthropicManagedAgent, plus all Configs and Integration aliases) continues to work without deprecation warnings.
  • isinstance(hosted_backend, ManagedRuntimeProtocol) is True for HostedAgent(provider="anthropic"); isinstance(local_backend, ManagedRuntimeProtocol) is False for LocalAgent() (structural check — documents the semantic difference).
  • Real agentic test: Agent(backend=LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))).start("Capital of France?") returns non-empty text when OPENAI_API_KEY is set.
  • All existing unit tests under tests/ pass unchanged. New tests in tests/unit/integrations/test_backend_semantics.py pass.
  • examples/python/managed-agents/provider/runtime_anthropic.py uses HostedAgent; runtime_local_{openai,gemini,ollama}.py use LocalAgent. all_runtimes.py exits 0 with all new filenames.
  • AGENTS.md has a short "Picking a backend: Hosted vs Local" section with the 2×2 table.
  • No new module-level imports of optional heavy deps (anthropic, e2b, modal, etc.). python -c "import praisonai" import time is unchanged within ±5 %.

Implementation Notes

Key files to read first

  1. src/praisonai/praisonai/integrations/managed_agents.py (1123 lines) — factory + AnthropicManagedAgent + ManagedConfig
  2. src/praisonai/praisonai/integrations/managed_local.py (1050 lines) — LocalManagedAgent + existing SandboxedAgent alias + _resolve_compute
  3. src/praisonai-agents/praisonaiagents/managed/protocols.py (357 lines) — ManagedRuntimeProtocol, ComputeProviderProtocol, ComputeConfig
  4. src/praisonai-agents/praisonaiagents/agent/protocols.py (445 lines) — ManagedBackendProtocol (the delegation contract Agent uses)
  5. src/praisonai/praisonai/integrations/__init__.py and src/praisonai/praisonai/__init__.py — wrapper public exports (lazy __getattr__ pattern to preserve)

Critical integration points

  1. Agent.__init__ in src/praisonai-agents/praisonaiagents/agent/agent.py accepts any backend= that satisfies ManagedBackendProtocol. Both new classes must keep that interface intact.
  2. AgentOS / AgentApp in src/praisonai/praisonai/app/ consume the wrapper exports — new names must be importable from praisonai top-level.
  3. src/praisonai-ts has its own ManagedAgent surface — TypeScript mirror is out of scope for this issue but should be tracked in a follow-up.

Testing commands

# Backward-compat smoke
python -c "
from praisonai import ManagedAgent, LocalManagedAgent, SandboxedAgent, AnthropicManagedAgent
from praisonai.integrations import HostedAgent, LocalAgent, HostedAgentConfig, LocalAgentConfig
print('imports OK')
"

# New semantics unit tests
cd src/praisonai-agents && pytest tests/unit/integrations/test_backend_semantics.py -v

# Regression — every existing test in the agent subtree
cd src/praisonai-agents && pytest tests/unit/agent/ -q --no-header

# Examples — all 9 skip cleanly when no creds
PYTHONPATH=src/praisonai-agents:src/praisonai \
  python examples/python/managed-agents/provider/all_runtimes.py

# Real agentic test (requires OPENAI_API_KEY)
OPENAI_API_KEY=sk-... \
  python -c "
from praisonaiagents import Agent
from praisonai.integrations import LocalAgent, LocalAgentConfig
r = Agent(name='t', backend=LocalAgent(
    config=LocalAgentConfig(model='gpt-4o-mini', system='Answer in one word.')
)).start('Capital of France?')
print(r); assert r.strip()
"

# Deprecation warning fires once for the LLM-routing overload
python -W error::DeprecationWarning -c "
from praisonai import ManagedAgent
try:
    ManagedAgent(provider='openai')
    print('FAIL: expected DeprecationWarning')
except DeprecationWarning as e:
    print(f'OK: {e}')
"

# Hard-error for compute-as-provider misuse
python -c "
from praisonai import ManagedAgent
try:
    ManagedAgent(provider='modal')
    print('FAIL')
except ValueError as e:
    print(f'OK: {e}')
"

References

  • PR #1526 (feat: separate managed runtime vs sandboxed tool execution, fixes #1523): ManagedRuntimeProtocol introduction (splits hosted-loop from compute-sandbox at the protocol layer)
  • src/praisonai/praisonai/integrations/sandboxed_agent.py — existing docstring explicitly documents the "loop stays local" invariant for SandboxedAgent / LocalManagedAgent
  • mer.vin post 50151 — uses today's names; will be refreshed after the rename lands
  • src/praisonai-agents/praisonaiagents/managed/protocols.py:223-245: the ManagedRuntimeProtocol docstring that states the "entire agent loop runs remotely" guarantee
