Overview
ManagedAgent(provider=...) currently overloads the provider= argument with two completely different meanings, and that leaks into naming across the entire managed-agent surface. When provider="anthropic" the factory returns a real managed runtime where the whole agent loop runs on Anthropic's cloud. When provider is anything else ("openai", "gemini", "ollama", "local", ...) the factory returns LocalManagedAgent — a local loop that just uses the string as a litellm routing hint for the LLM call. There is no managed runtime involved. This issue proposes (1) freezing the current provider= overload via deprecation, (2) splitting the concern into a runtime-provider axis and an LLM-provider axis, and (3) renaming the user-facing backend classes so the word "Managed" / "Sandboxed" no longer sits on both sides of the divide.
Background
Where the confusion came from
Recent examples added under examples/python/managed-agents/provider/ used ManagedAgent(provider="gemini"), ManagedAgent(provider="openai"), etc. to represent "the LLM runs in that cloud". This reads as "managed runtime on Google Cloud" to new users, but the code does not do that — Google (and OpenAI, Ollama) have no managed-runtime integration wired up. The factory silently falls into the else branch and returns a local-loop backend.
Related PR context: ManagedRuntimeProtocol (core SDK) was introduced explicitly separate from ComputeProviderProtocol — confirming the two-axis model at the protocol layer — but the wrapper layer still conflates the two axes in its class names and factory.
The honest-name alias SandboxedAgent was added for LocalManagedAgent to clarify "only tools can be sandboxed, loop stays local" — but the alias sits next to ManagedAgent in exports, which makes it look more remote than ManagedAgent rather than less. Users guess exactly the wrong semantics.
Why this is valuable
Non-developers following a one-line example must be able to tell from the name whether their code runs on their machine or in someone else's cloud (privacy, cost, latency all depend on it).
Phase 2 will add real managed runtimes for E2B and Modal. If the current overload isn't fixed first, ManagedAgent(provider="e2b") would have to mean either "hosted loop on E2B" or "local loop with E2B as LLM" (it wouldn't — E2B isn't an LLM — but the precedent is there), reopening the ambiguity.
Current ecosystem state
Only Anthropic exposes a Managed Agents API that PraisonAI currently wires to. E2B and Modal have Sandbox APIs wired via compute= (tool-level only). E2B Managed Runtime and Modal Managed Functions (full hosted loops) are announced but not yet integrated — see praisonai/praisonai/integrations/sandboxed_agent.py lines 10-13 for the forward-looking docstring naming E2BManagedAgent / ModalManagedAgent.
Architecture Analysis
Current implementation
The system has two orthogonal axes at the protocol layer and one collapsed axis at the class-name / factory layer:

| Loop | Tools | Class |
| --- | --- | --- |
| Local | Local subprocess | LocalManagedAgent / SandboxedAgent (no compute=) |
| Local | Sandboxed in cloud compute | LocalManagedAgent(compute="e2b") / SandboxedAgent(compute="e2b") |
| Hosted | Co-located with the provider | AnthropicManagedAgent |

The provider= overload lives in src/praisonai/praisonai/integrations/managed_agents.py (lines 1073-1118) and src/praisonai/praisonai/integrations/managed_local.py (lines 275-285). So provider= is doing two jobs: "anthropic" chooses the runtime provider; every other string is passed through as a litellm routing hint for the local loop's LLM call.

Key file locations
- src/praisonai-agents/praisonaiagents/managed/protocols.py — ComputeProviderProtocol (tool sandbox) and ManagedRuntimeProtocol (hosted loop) — core SDK
- src/praisonai-agents/praisonaiagents/agent/protocols.py — ManagedBackendProtocol — agent↔backend contract
- src/praisonai-agents/praisonaiagents/config/feature_configs.py — ExecutionConfig — local execution limits (max_iter, rate, budget, code_sandbox_mode)
- src/praisonai/praisonai/integrations/managed_agents.py — AnthropicManagedAgent, ManagedConfig, ManagedAgent() factory
- src/praisonai/praisonai/integrations/managed_local.py — LocalManagedAgent (local loop + optional compute=), silent SandboxedAgent alias
- src/praisonai/praisonai/integrations/sandboxed_agent.py — SandboxedAgent with a docstring explaining the honest meaning
- src/praisonai/praisonai/integrations/compute/__init__.py — DockerCompute, LocalCompute, DaytonaCompute, E2BCompute, ModalCompute, FlyioCompute
- src/praisonai/praisonai/integrations/__init__.py and src/praisonai/praisonai/__init__.py — praisonai re-exports (mirror)
- examples/python/managed-agents/provider/runtime_{openai,gemini,ollama}.py — ManagedAgent(provider="...") with an LLM name
- examples/python/managed-agents/provider/runtime_anthropic.py — the genuinely hosted-runtime example
Gap Analysis Summary
Critical gaps

| Gap | Impact | Effort to fix |
| --- | --- | --- |
| provider= on ManagedAgent() factory is semantically overloaded (runtime-provider OR LLM-routing hint) | Users get the wrong mental model of where their code is running | Low — add a new arg, deprecate the overload |
| runtime_openai.py / runtime_gemini.py / runtime_ollama.py examples teach the broken pattern | Docs & a recent blog post on mer.vin cement the confusion | Low — rewrite examples; alias file names |
| LocalManagedAgent / SandboxedAgent / AnthropicManagedAgent all implement the same ManagedBackendProtocol, but only the Anthropic one is "managed" in the cloud sense | Users guess exactly the wrong semantics from the names | Low — add canonical names; keep current names as silent aliases |

Feature gaps

| Feature | Current state | Proposed |
| --- | --- | --- |
| Named hosted-runtime axis | Only "anthropic" is a real runtime | Need a named axis HostedAgent(provider=...) that will extend cleanly to E2B-Managed, Modal-Managed, Fly-Managed in Phase 2 |
| LLM selection for local loop | Works via model= and hijacks provider= for litellm routing | model= is enough (it already accepts "gemini/gemini-2.0-flash", "ollama/llama3", etc.). Drop the provider= hijack. |
| Error when user requests a non-existent managed runtime | Silent fallthrough to LocalManagedAgent | ManagedAgent(provider="modal") today returns a local loop with "modal" as an LLM hint — garbage. Should raise ValueError with a clear message until the Phase 2 runtime lands. |
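The LLM-selection row is the mechanical part of the fix: litellm already routes on the model prefix. A minimal sketch using the LocalAgent / LocalAgentConfig names proposed in the next section (they do not exist yet):

```python
# Sketch: litellm-style prefixes in model= already pick the LLM provider,
# so provider= carries no information on the local loop.
from praisonai.integrations import LocalAgent, LocalAgentConfig  # proposed names

gemini = LocalAgent(config=LocalAgentConfig(model="gemini/gemini-2.0-flash"))
ollama = LocalAgent(config=LocalAgentConfig(model="ollama/llama3"))
openai = LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))  # no prefix needed for OpenAI
```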
Proposed Implementation
Phase 1 — Fix the semantics without breaking anyone
A. Core SDK — no change. ManagedRuntimeProtocol and ComputeProviderProtocol are correct. ExecutionConfig stays local-only (cloud routing belongs in wrapper).
B. Wrapper — add new canonical classes, keep every current name as a silent alias.
```python
# New canonical names
from praisonaiagents import Agent  # import shown for completeness
from praisonai.integrations import HostedAgent, HostedAgentConfig, LocalAgent, LocalAgentConfig

# 1. Hosted loop — entire agent runs in a remote managed runtime
agent = Agent(name="a", backend=HostedAgent(
    provider="anthropic",  # only "anthropic" supported today
    config=HostedAgentConfig(
        model="claude-3-5-sonnet-latest",
        system="You are a concise assistant.",
    ),
))

# 2. Local loop, tools optionally sandboxed in a cloud compute
agent = Agent(name="b", backend=LocalAgent(
    compute="e2b",  # or "modal", "flyio", "daytona", "docker", None
    config=LocalAgentConfig(
        model="gpt-4o-mini",  # LLM choice here — not `provider=`
        system="You are a concise assistant.",
    ),
))

# 3. Smallest footprint: local loop + local subprocess
agent = Agent(name="c", backend=LocalAgent(
    config=LocalAgentConfig(model="gpt-4o-mini"),
))
```
C. ManagedAgent() factory — narrowed meaning + deprecation.
```python
def ManagedAgent(provider="anthropic", **kwargs):
    """Deprecated factory. Use HostedAgent or LocalAgent explicitly.

    - provider="anthropic" → HostedAgent(provider="anthropic", ...)
    - provider in {"openai", "gemini", "ollama", "local"} → LocalAgent(...)
      (DeprecationWarning: "use LocalAgent directly; put LLM name in model=")
    - provider in {"e2b", "modal", "flyio", "daytona", "docker"} → raise ValueError
      ("Cloud compute belongs on LocalAgent(compute=...). Hosted runtimes for
      these providers are not yet available.")
    - any other provider → raise ValueError.
    """
```
D. Config consolidation — keep the two knobs in their proper layer.
ExecutionConfig (Core SDK) continues to own: max_iter, max_rpm, max_execution_time, code_execution, code_mode, code_sandbox_mode, rate_limiter, max_budget. Do not add compute= here (would leak wrapper into core and break protocol-driven-core invariant).
compute= stays a ctor kwarg on LocalAgent (wrapper).
provider= stays a ctor kwarg on HostedAgent (wrapper).
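To make the layering concrete, a short sketch; treating the ExecutionConfig fields listed above as constructor kwargs is an assumption, and the wrapper names come from section B:

```python
from praisonaiagents.config.feature_configs import ExecutionConfig  # core SDK
from praisonai.integrations import HostedAgent, LocalAgent, LocalAgentConfig  # wrapper

# Core SDK owns the local execution limits; there is no compute= knob here.
limits = ExecutionConfig(max_iter=10, max_execution_time=120)

# Wrapper owns the placement knobs: compute= on LocalAgent, provider= on HostedAgent.
local = LocalAgent(compute="e2b", config=LocalAgentConfig(model="gpt-4o-mini"))
hosted = HostedAgent(provider="anthropic")
```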
Phase 2 — Wire real managed runtimes for E2B and Modal
Out of scope for this issue. Once those runtimes land, HostedAgent(provider="e2b"|"modal"|"flyio") routes to them.
Files to Create / Modify
New files
- src/praisonai/praisonai/integrations/hosted_agent.py — HostedAgent / HostedAgentConfig. Currently aliases AnthropicManagedAgent / ManagedConfig. Docstring explains the runtime-provider axis only.
- src/praisonai/praisonai/integrations/local_agent.py — new canonical LocalAgent / LocalAgentConfig. Currently aliases LocalManagedAgent / LocalManagedConfig. Docstring explains that the loop stays local, compute= is optional, and provider= is forbidden on the ctor (the factory still accepts it for legacy routing only).
- tests/unit/integrations/test_backend_semantics.py — pins the invariants: (a) isinstance(HostedAgent(provider="anthropic"), ManagedRuntimeProtocol), (b) LocalAgent() has compute_provider is None, (c) LocalAgent(compute="e2b").compute_provider.provider_name == "e2b", (d) ManagedAgent(provider="modal") raises ValueError. Sketched below.
- tests/integration/test_local_agent_real.py — real-agentic test: Agent(backend=LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))).start("Capital of France?") — skips if OPENAI_API_KEY is missing. Sketched below.
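Sketches of the two new test files, assuming pytest and the import paths named above (whether HostedAgent(provider="anthropic") constructs without live credentials is an assumption):

```python
# tests/unit/integrations/test_backend_semantics.py — invariant pins (sketch)
import pytest
from praisonai import ManagedAgent
from praisonai.integrations import HostedAgent, LocalAgent
from praisonaiagents.managed.protocols import ManagedRuntimeProtocol

def test_hosted_backend_is_managed_runtime():
    assert isinstance(HostedAgent(provider="anthropic"), ManagedRuntimeProtocol)

def test_local_backend_defaults_to_no_compute():
    assert LocalAgent().compute_provider is None

def test_local_backend_resolves_compute():
    assert LocalAgent(compute="e2b").compute_provider.provider_name == "e2b"

def test_compute_name_as_provider_raises():
    with pytest.raises(ValueError):
        ManagedAgent(provider="modal")
```

```python
# tests/integration/test_local_agent_real.py — real-agentic path (sketch)
import os
import pytest
from praisonaiagents import Agent
from praisonai.integrations import LocalAgent, LocalAgentConfig

@pytest.mark.skipif(not os.getenv("OPENAI_API_KEY"), reason="OPENAI_API_KEY missing")
def test_local_agent_answers_real_prompt():
    backend = LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))
    result = Agent(name="t", backend=backend).start("Capital of France?")
    assert result and result.strip()  # non-empty text expected
```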
Modified files
- src/praisonai/praisonai/integrations/managed_agents.py (lines 1073-1118) — the factory emits a DeprecationWarning for the LLM-routing overload, raises ValueError for compute-provider names, and preserves the "anthropic" path.
- src/praisonai/praisonai/integrations/managed_local.py (lines 275-285, 1048-1049) — remove the provider-based litellm routing (make model= carry the prefix, as litellm already supports). Keep the SandboxedAgent alias pointing at the same class, but add LocalAgent = LocalManagedAgent next to it.
- src/praisonai/praisonai/integrations/__init__.py (lines 30-110) — export HostedAgent, HostedAgentConfig, LocalAgent, LocalAgentConfig. Keep every current name.
- src/praisonai/praisonai/__init__.py (lines 105-125) — mirror the new exports through the lazy __getattr__.
- examples/python/managed-agents/provider/runtime_{openai,gemini,ollama}.py — replace ManagedAgent(provider="openai") with LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini")); rename the files to runtime_local_{openai,gemini,ollama}.py to reflect that they are local-loop variants. Keep the old filenames as thin re-exports that print("deprecated, see runtime_local_*") and exec(open(new_path).read()) — see the shim sketch after this list.
- examples/python/managed-agents/provider/runtime_anthropic.py — switch to HostedAgent(provider="anthropic", config=HostedAgentConfig(...)). Same runtime behaviour, new name.
- examples/python/managed-agents/provider/all_runtimes.py — update to the renamed files so it exits 0 with all new filenames.
- src/praisonai-agents/AGENTS.md — add the short "Picking a backend: Hosted vs Local" section with the 2×2 table.
- src/praisonai-agents/.windsurf/workflows/create-examples-post.md — update the workflow to reference HostedAgent / LocalAgent instead of SandboxedAgent in examples.
- /tmp → mer.vin post 50151 — optional follow-up to update the blog post once the rename lands.
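A sketch of the thin re-export shim kept at the old filename (the concrete path literal is illustrative):

```python
# examples/python/managed-agents/provider/runtime_openai.py — deprecation shim.
# The real example now lives in runtime_local_openai.py.
import os

print("deprecated, see runtime_local_openai.py")
new_path = os.path.join(os.path.dirname(__file__), "runtime_local_openai.py")
with open(new_path) as f:
    exec(f.read())  # run the renamed example in place of this one
```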
Technical Considerations
Dependencies
No new runtime deps. anthropic, e2b, modal, daytona-sdk, aiohttp (Fly.io) stay optional and lazy, same as today.
Performance impact
All new modules follow the existing lazy-import pattern in src/praisonai/praisonai/__init__.py:__getattr__. No module-level imports of heavy SDKs. Import-time cost of the wrapper is unchanged.
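For reference, the lazy-export shape (PEP 562) that the new modules must preserve; the mapping and module paths here are illustrative, not the real table:

```python
# Sketch of the __getattr__ pattern in src/praisonai/praisonai/__init__.py.
import importlib

_LAZY_EXPORTS = {
    "HostedAgent": ".integrations.hosted_agent",
    "LocalAgent": ".integrations.local_agent",
}

def __getattr__(name):
    if name in _LAZY_EXPORTS:
        # Heavy modules are imported only on first attribute access.
        module = importlib.import_module(_LAZY_EXPORTS[name], __name__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```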
Safety / approval
ManagedAgent(provider="e2b"/"modal"/...) today silently produces a local loop with a broken LLM string. After this change it raises ValueError with an actionable message pointing at LocalAgent(compute=...) — safer by default.
DeprecationWarning for the LLM-routing overload lets downstream users migrate without breakage.
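A minimal sketch of how the deprecated factory could implement both behaviours; the set names and module-level flag are assumptions, and HostedAgent / LocalAgent are the classes proposed in Phase 1:

```python
import warnings
from praisonai.integrations import HostedAgent, LocalAgent  # proposed names

_LLM_HINTS = {"openai", "gemini", "ollama", "local"}
_COMPUTE_NAMES = {"e2b", "modal", "flyio", "daytona", "docker"}
_warned = False  # module-level flag: a single DeprecationWarning per process

def ManagedAgent(provider="anthropic", **kwargs):
    global _warned
    if provider == "anthropic":
        return HostedAgent(provider="anthropic", **kwargs)
    if provider in _LLM_HINTS:
        if not _warned:
            warnings.warn("use LocalAgent directly; put LLM name in model=",
                          DeprecationWarning, stacklevel=2)
            _warned = True
        return LocalAgent(**kwargs)  # LLM choice moves to config.model
    if provider in _COMPUTE_NAMES:
        raise ValueError("Cloud compute belongs on LocalAgent(compute=...). "
                         "Hosted runtimes for these providers are not yet available.")
    raise ValueError(f"Unknown provider {provider!r}")
```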
Multi-agent safety
No change to concurrency model. HostedAgent and LocalAgent both satisfy ManagedBackendProtocol, so Agent(backend=…) delegation is unchanged.
Backward compatibility
Hard requirement: every currently exported name must still import and behave exactly as today. All of these continue to work:
- from praisonai import ManagedAgent, ManagedConfig, AnthropicManagedAgent, LocalManagedAgent, LocalManagedConfig, SandboxedAgent, SandboxedAgentConfig, ManagedAgentIntegration, ManagedBackendConfig
- ManagedAgent(provider="anthropic"), ManagedAgent(provider="openai") (with DeprecationWarning), ManagedAgent() auto-detect
- examples/python/managed-agents/provider/*_compute.py files keep working

Acceptance Criteria
- HostedAgent, HostedAgentConfig, LocalAgent, LocalAgentConfig are importable from praisonai and praisonai.integrations.
- HostedAgent(provider="anthropic") returns an instance of the existing AnthropicManagedAgent class (or a subclass).
- LocalAgent() returns an instance equivalent to LocalManagedAgent() today.
- LocalAgent(compute="e2b").compute_provider.provider_name == "e2b".
- ManagedAgent(provider="modal" | "e2b" | "flyio" | "daytona" | "docker") raises ValueError with a message pointing at LocalAgent(compute=...).
- ManagedAgent(provider="openai" | "gemini" | "ollama" | "local") still works and emits a single DeprecationWarning per process.
- Every old import path (ManagedAgent, LocalManagedAgent, SandboxedAgent, AnthropicManagedAgent, plus all Configs and Integration aliases) continues to work without deprecation warnings.
- isinstance(hosted_backend, ManagedRuntimeProtocol) is True for HostedAgent(provider="anthropic"); isinstance(local_backend, ManagedRuntimeProtocol) is False for LocalAgent() — a structural check that documents the semantic difference (see the sketch after this list).
- Real agentic test: Agent(backend=LocalAgent(config=LocalAgentConfig(model="gpt-4o-mini"))).start("Capital of France?") returns non-empty text when OPENAI_API_KEY is set.
- All existing unit tests under tests/ pass unchanged. New tests in tests/unit/integrations/test_backend_semantics.py pass.
- examples/python/managed-agents/provider/runtime_anthropic.py uses HostedAgent; runtime_local_{openai,gemini,ollama}.py use LocalAgent. all_runtimes.py exits 0 with all new filenames.
- AGENTS.md has a short "Picking a backend: Hosted vs Local" section with the 2×2 table.
- No new module-level imports of optional heavy deps (anthropic, e2b, modal, etc.). python -c "import praisonai" import time is unchanged within ±5 %.
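Why those isinstance checks can work at all: a minimal sketch, assuming ManagedRuntimeProtocol is a @runtime_checkable structural Protocol (the method shown is hypothetical):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class ManagedRuntimeProtocol(Protocol):
    """Hosted-loop contract: the entire agent loop runs remotely."""
    def start_hosted_loop(self, prompt: str) -> str: ...  # hypothetical member

# isinstance() against a runtime-checkable Protocol checks attribute presence,
# not signatures — enough to make HostedAgent pass and LocalAgent fail.
```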
Implementation Notes
Key files to read first
- src/praisonai/praisonai/integrations/managed_agents.py (1123 lines) — factory + AnthropicManagedAgent + ManagedConfig
- src/praisonai/praisonai/integrations/managed_local.py (1050 lines) — LocalManagedAgent + existing SandboxedAgent alias + _resolve_compute
- src/praisonai-agents/praisonaiagents/managed/protocols.py (357 lines) — ManagedRuntimeProtocol, ComputeProviderProtocol, ComputeConfig
- src/praisonai-agents/praisonaiagents/agent/protocols.py (445 lines) — ManagedBackendProtocol (the delegation contract Agent uses)
- src/praisonai/praisonai/integrations/__init__.py and src/praisonai/praisonai/__init__.py — wrapper public exports (lazy __getattr__ pattern to preserve)
Critical integration points
Agent.__init__ in src/praisonai-agents/praisonaiagents/agent/agent.py accepts any backend= that satisfies ManagedBackendProtocol. Both new classes must keep that interface intact.
AgentOS / AgentApp in src/praisonai/praisonai/app/ consume the wrapper exports — new names must be importable from praisonai top-level.
src/praisonai-ts has its own ManagedAgent surface — TypeScript mirror is out of scope for this issue but should be tracked in a follow-up.
Testing commands
```bash
# Backward-compat smoke
python -c "from praisonai import ManagedAgent, LocalManagedAgent, SandboxedAgent, AnthropicManagedAgent
from praisonai.integrations import HostedAgent, LocalAgent, HostedAgentConfig, LocalAgentConfig
print('imports OK')"

# New semantics unit tests
cd src/praisonai-agents && pytest tests/unit/integrations/test_backend_semantics.py -v

# Regression — every existing test in the agent subtree
cd src/praisonai-agents && pytest tests/unit/agent/ -q --no-header

# Examples — all 9 skip cleanly when no creds
PYTHONPATH=src/praisonai-agents:src/praisonai \
  python examples/python/managed-agents/provider/all_runtimes.py

# Real agentic test (requires OPENAI_API_KEY)
OPENAI_API_KEY=sk-... \
  python -c "from praisonaiagents import Agent
from praisonai.integrations import LocalAgent, LocalAgentConfig
r = Agent(name='t', backend=LocalAgent(
    config=LocalAgentConfig(model='gpt-4o-mini', system='Answer in one word.'))).start('Capital of France?')
print(r); assert r.strip()"

# Deprecation warning fires once for the LLM-routing overload
python -W error::DeprecationWarning -c "from praisonai import ManagedAgent
try:
    ManagedAgent(provider='openai')
    print('FAIL: expected DeprecationWarning')
except DeprecationWarning as e:
    print(f'OK: {e}')"

# Hard-error for compute-as-provider misuse
python -c "from praisonai import ManagedAgent
try:
    ManagedAgent(provider='modal')
    print('FAIL')
except ValueError as e:
    print(f'OK: {e}')"
```
References
- The ManagedRuntimeProtocol introduction (splits hosted-loop from compute-sandbox at the protocol layer)
- src/praisonai/praisonai/integrations/sandboxed_agent.py — existing docstring explicitly documents the "loop stays local" invariant for SandboxedAgent / LocalManagedAgent
- mer.vin post 50151 — uses today's names; will be refreshed after the rename lands
- src/praisonai-agents/praisonaiagents/managed/protocols.py:223-245 — ManagedRuntimeProtocol docstring that states the "entire agent loop runs remotely" guarantee