A Python reimplementation of the Claude Code agent architecture: local models, full control, zero dependencies.
April 2026: Major Update
| Feature | Details |
|---|---|
| Interactive Chat Mode | New `agent-chat` command: multi-turn REPL with `/exit` to quit |
| Streaming Output | Token-by-token streaming with the `--stream` flag |
| Plugin Runtime | Full manifest-based plugin system: hooks, tool aliases, virtual tools, tool blocking |
| Nested Agent Delegation | Delegate subtasks to child agents with dependency-aware topological batching |
| Agent Manager | Lineage tracking, group membership, batch summaries for nested agents |
| Cost Tracking & Budgets | Token budgets, cost budgets, tool-call limits, model-call limits, session-turn limits |
| Structured Output | JSON schema response mode with `--response-schema-file` |
| Context Compaction | Auto-snip, auto-compact, and reactive compaction on prompt-too-long errors |
| File History Replay | Journaling of file edits with snapshot IDs and replay summaries on session resume |
| Truncation Continuation | Automatic continuation when the model response is cut off (`finish_reason=length`) |
| Ollama Support | Works out of the box with Ollama's OpenAI-compatible API |
| LiteLLM Proxy Support | Route through LiteLLM Proxy to any provider |
| OpenRouter Support | Cloud API gateway: access OpenAI, Anthropic, and Google models via one endpoint |
| Query Engine | Runtime event counters, transcript summaries, orchestration reports |
| Remote Runtime | Manifest-backed local remote profiles, connect/disconnect state, and remote CLI/slash flows |
| Hook & Policy Runtime | Local `.claw-policy.json`/hook manifests with trust reporting, safe env, tool blocking, and budget overrides |
| Task & Plan Runtime | Persistent local tasks and plans with plan-to-task sync and dependency-aware task execution |
| MCP Transport | Real stdio MCP transport for initialize, resource listing/reading, and tool listing/calling |
| Search Runtime | Provider-backed `web_search` with local manifests, activation state, and `/search` flows |
| Config & Account Runtime | Local config/settings mutation plus manifest-backed account profiles and login/logout state |
| Ask-User Runtime | Queued or interactive local ask-user flow with history, slash commands, and agent tool support |
| Team Runtime | Persisted local teams and message history with team/message tools and slash/CLI inspection |
| Notebook Edit Tool | Native `.ipynb` cell editing through the real agent tool registry |
| Workflow Runtime | Manifest-backed local workflows with workflow tools, slash commands, and run history |
| Remote Trigger Runtime | Local remote triggers with create/update/run flows similar to the npm remote trigger surface |
| Worktree Runtime | Managed git worktrees with mid-session cwd switching, slash commands, and CLI flows |
| Tokenizer-Aware Context | Cached tokenizer backends with heuristic fallback for `/context`, `/status`, and compaction |
| Prompt Budget Preflight | Preflight prompt-length validation, token-budget reporting, and auto-compact/context collapse before backend failures |
| LSP Runtime | Local LSP-style code intelligence for definitions, references, hover, symbols, call hierarchy, and diagnostics |
| Daemon Commands | Local `daemon` start/ps/logs/attach/kill wrapper over background agent sessions |
| Background Sessions | Local `agent-bg`, `agent-ps`, `agent-logs`, `agent-attach`, and `agent-kill` flows |
| Testing Guide | Comprehensive TESTING_GUIDE.md with commands for every feature |
| Parity Checklist | Full PARITY_CHECKLIST.md tracking implementation status vs. the npm source |
This repository reimplements the Claude Code npm agent architecture entirely in Python, designed to run with local open-source models via an OpenAI-compatible API server.
This project builds on the public porting workspace from instructkr/claw-code; active development lives at HarnessLab/claw-code-agent.
Goal: not to ship the original npm source, but to reimplement the full agent flow in Python: prompt assembly, context building, slash commands, tool calling, session persistence, and local model execution.
Zero external dependencies: just Python's standard library.
| Feature | Description |
|---|---|
| Agent Loop | Full agentic coding loop with tool calling and iterative reasoning |
| Interactive Chat | Multi-turn REPL via `agent-chat` with session continuity |
| Core Tools | File read/write/edit, glob search, grep search, shell execution |
| Plugin Runtime | Manifest-based plugins with hooks, aliases, virtual tools, and tool blocking |
| Nested Delegation | Delegate subtasks to child agents with dependency-aware topological batching |
| Streaming | Token-by-token streaming output with `--stream` |
| Slash Commands | Local commands for context, config, account, search, MCP, remote, tasks, plan, hooks, and model control |
| Remote Runtime | Manifest-backed remote profiles with local remote-mode, ssh-mode, teleport-mode, and connect/disconnect state |
| Task & Plan Runtime | Persistent tasks and plans with sync, next-task selection, and blocked/unblocked state |
| MCP Runtime | Local MCP manifests plus real stdio MCP transport for resources and tools |
| Search Runtime | Provider-backed `web_search` plus provider activation and status reporting |
| Config & Account Runtime | Local config mutation, settings inspection, account profiles, and login/logout state |
| Ask-User Runtime | Queued-answer or interactive user-question flow with history tracking |
| Team Runtime | Persisted local teams plus message history, handoff notes, and collaboration metadata |
| Notebook Editing | Native Jupyter notebook cell editing through `notebook_edit` |
| Worktree Runtime | Managed git worktrees with `worktree_enter`, `worktree_exit`, and live cwd switching |
| Workflow Runtime | Manifest-backed workflows with slash commands, CLI inspection, and recorded runs |
| Remote Triggers | Local remote triggers with create/update/run flows and npm-style trigger actions |
| Hook & Policy Runtime | Trust reporting, safe env, managed settings, tool blocking, and budget overrides |
| LSP Code Intelligence | Local LSP-style definitions, references, hover, symbols, diagnostics, and call hierarchy |
| Context Engine | Automatic context building with CLAUDE.md discovery, compaction, and snipping |
| Tokenizer-Aware Accounting | Model-aware token counting with cached tokenizer backends and fallback heuristics |
| Prompt Budgeting | Soft/hard prompt-window checks, token-budget reports, and preflight context collapse |
| Session Persistence | Save and resume agent sessions with file-history replay |
| Background Sessions | `agent-bg` and local daemon wrappers for background runs, logs, attach, and kill |
| Cost & Budget Control | Token budgets, cost limits, tool-call caps, model-call caps |
| Structured Output | JSON schema response mode for programmatic use |
| Permission System | Granular control: `--allow-write`, `--allow-shell`, `--unsafe` |
| OpenAI-Compatible | Works with vLLM, Ollama, LiteLLM Proxy, OpenRouter: any OpenAI-compatible API |
| Qwen3-Coder | First-class support for Qwen3-Coder-30B-A3B-Instruct via vLLM |
| Zero Dependencies | Pure Python standard library: nothing to install |
| Document | Description |
|---|---|
| TESTING_GUIDE.md | Step-by-step commands to verify every feature |
| PARITY_CHECKLIST.md | Full implementation status vs the npm source |
- Python CLI agent loop
- Interactive chat mode (`agent-chat`) with multi-turn REPL
- OpenAI-compatible local model backend
- Qwen3-Coder support through vLLM with the `qwen3_xml` tool parser
- Ollama, LiteLLM Proxy, and OpenRouter backends
- Core tools: `list_dir`, `read_file`, `write_file`, `edit_file`, `glob_search`, `grep_search`, `bash`
- Context building and `/context`-style usage reporting
- Slash commands: `/help`, `/context`, `/context-raw`, `/prompt`, `/permissions`, `/model`, `/tools`, `/memory`, `/status`, `/clear`
- Session persistence and the `agent-resume` flow
- Permission system (read-only, write, shell, unsafe tiers)
- Streaming token-by-token assistant output
- Truncated-response continuation flow
- Auto-snip and auto-compact context reduction
- Reactive compaction retry on prompt-too-long errors
- Preflight prompt-length validation and token-budget reporting
- Preflight auto-compact/context collapse before backend prompt-too-long failures
- Cost tracking and usage budget enforcement
- Token, tool-call, model-call, and session-turn budgets
- Structured output / JSON schema response mode
- File history journaling with snapshot IDs and replay summaries
- Nested agent delegation with dependency-aware topological batching
- Agent manager with lineage tracking and group membership
- Local daemon-style background command family
- Local background session workflows: `agent-bg`, `agent-ps`, `agent-logs`, `agent-attach`, `agent-kill`
- Local remote runtime: manifest discovery, profile listing, connect/disconnect persistence, and CLI/slash flows
- Local hook and policy runtime with trust reporting, safe env, tool blocking, and budget overrides
- Local config runtime: config discovery, effective settings, source inspection, and config mutation
- Local LSP runtime: definitions, references, hover, symbols, diagnostics, and call hierarchy
- Local account runtime: profile discovery, login/logout state, and account CLI/slash flows
- Local ask-user runtime: queued answers, history, and ask-user CLI/slash flows
- Local team runtime: persisted teams, team messages, and team CLI/slash flows
- Local search runtime with provider discovery, activation, and provider-backed `web_search`
- Local MCP runtime: manifest resources, stdio transport, MCP resources, and MCP tool calls
- Local task and plan runtimes with plan sync and dependency-aware task execution
- Notebook edit tool in the real Python tool registry
- Local workflow runtime with workflow list/get/run tools and CLI/slash flows
- Local remote trigger runtime with create/update/run flows and CLI/slash inspection
- Local managed git worktree runtime with live cwd switching and worktree CLI/slash flows
- Tokenizer-aware context accounting with cached tokenizer backends and heuristic fallback
- Plugin runtime: manifest discovery, hooks, aliases, virtual tools, tool blocking
- Plugin lifecycle hooks: resume, persist, delegate phases
- Plugin session-state persistence and resume restoration
- Query engine facade driving the real Python runtime
- Compaction metadata with lineage IDs and revision summaries
- Extended runtime tools: `web_fetch`, `web_search`, `tool_search`, `sleep`
- Unit tests for the Python runtime
- `pyproject.toml` packaging with `setuptools`
- Full MCP parity beyond the current stdio transport and local manifest/resource/tool support
- Full slash-command parity with npm runtime
- Full interactive REPL / TUI behavior
- Full tokenizer/chat-message framing parity beyond the current tokenizer-aware accounting
- Hooks system parity
- Real remote transport/runtime parity beyond the current local remote-profile runtime
- Voice and VIM modes
- Editor and platform integrations
- Background and team features
```
claw-code/
├── README.md
├── TESTING_GUIDE.md              # How to test every feature
├── PARITY_CHECKLIST.md           # Implementation status vs npm source
├── pyproject.toml
├── .gitignore
├── images/
│   └── logo.png
├── src/                          # Python implementation
│   ├── main.py                   # CLI entry point & argument parsing
│   ├── agent_runtime.py          # Core agent loop (LocalCodingAgent)
│   ├── agent_tools.py            # Tool definitions & execution engine
│   ├── agent_prompting.py        # System prompt assembly
│   ├── agent_context.py          # Context building & CLAUDE.md discovery
│   ├── agent_context_usage.py    # Context usage estimation & reporting
│   ├── agent_session.py          # Session state management
│   ├── agent_slash_commands.py   # Local slash command processing
│   ├── agent_manager.py          # Nested agent lineage & group tracking
│   ├── agent_types.py            # Shared dataclasses & type definitions
│   ├── openai_compat.py          # OpenAI-compatible API client (streaming)
│   ├── plugin_runtime.py         # Plugin manifest, hooks, aliases, virtual tools
│   ├── agent_plugin_cache.py     # Plugin discovery & prompt injection cache
│   ├── session_store.py          # Session serialization & persistence
│   ├── transcript.py             # Transcript block export & mutation tracking
│   ├── query_engine.py           # Query engine facade & runtime orchestration
│   ├── mcp_runtime.py            # Local MCP discovery and stdio MCP transport
│   ├── search_runtime.py         # Search providers and provider-backed web_search
│   ├── remote_runtime.py         # Local remote profiles, connect/disconnect state, remote CLI support
│   ├── background_runtime.py     # Local background sessions and daemon support
│   ├── account_runtime.py        # Local account profiles, login/logout state, account CLI support
│   ├── ask_user_runtime.py       # Local ask-user queued answers and interaction history
│   ├── config_runtime.py         # Local workspace config/settings discovery and mutation
│   ├── lsp_runtime.py            # Local LSP-style code intelligence and diagnostics
│   ├── token_budget.py           # Prompt-window budgeting and preflight prompt-length validation
│   ├── plan_runtime.py           # Persistent plan runtime and plan sync
│   ├── task_runtime.py           # Persistent task runtime and task execution
│   ├── task.py                   # Task state model and task dataclasses
│   ├── team_runtime.py           # Local teams, messages, and collaboration metadata
│   ├── workflow_runtime.py       # Local workflow manifests and recorded workflow runs
│   ├── remote_trigger_runtime.py # Local remote trigger manifests and trigger run history
│   ├── worktree_runtime.py       # Managed git worktree sessions and cwd switching
│   ├── hook_policy.py            # Hook/policy manifests, trust, and safe env handling
│   ├── tokenizer_runtime.py      # Tokenizer-aware context accounting backends
│   ├── permissions.py            # Tool permission filtering
│   ├── cost_tracker.py           # Cost & budget enforcement
│   ├── commands.py               # Mirrored command inventory
│   ├── tools.py                  # Mirrored tool inventory
│   ├── runtime.py                # Mirrored runtime facade
│   └── reference_data/           # Mirrored inventory snapshots
└── tests/                        # Unit tests
    ├── test_agent_runtime.py
    ├── test_agent_context.py
    ├── test_agent_context_usage.py
    ├── test_agent_prompting.py
    ├── test_agent_slash_commands.py
    ├── test_main.py
    ├── test_query_engine_runtime.py
    └── test_porting_workspace.py
```
| Requirement | Details |
|---|---|
| Python | 3.10 or higher |
| Dependencies | None: pure Python standard library |
| Model Server | vLLM, Ollama, LiteLLM Proxy, or OpenRouter, with tool-calling support |
| Model | Qwen/Qwen3-Coder-30B-A3B-Instruct (recommended) |
vLLM must be started with automatic tool choice enabled. Use the `qwen3_xml` parser for Qwen3-Coder tool calling:

```shell
python -m vllm.entrypoints.openai.api_server \
  --model Qwen/Qwen3-Coder-30B-A3B-Instruct \
  --host 127.0.0.1 \
  --port 8000 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_xml
```

Verify the server is running:

```shell
curl http://127.0.0.1:8000/v1/models
```

References: vLLM Tool Calling Docs · OpenAI-Compatible Server
claw-code-agent can also work with Ollama because the runtime targets an OpenAI-compatible API. Use a model that supports tool calling well.
Example:

```shell
ollama serve
ollama pull qwen3
```

Then configure:

```shell
export OPENAI_BASE_URL=http://127.0.0.1:11434/v1
export OPENAI_API_KEY=ollama
export OPENAI_MODEL=qwen3
```

Notes:

- prefer tool-capable models such as `qwen3`
- plain chat-only models are not enough for full agent behavior
- Ollama does not use the vLLM parser flags shown above

References: Ollama OpenAI Compatibility · Ollama Tool Calling
claw-code-agent can also work through LiteLLM Proxy because the runtime targets an OpenAI-compatible chat completions API. The routed model still needs to support tool calling for full agent behavior.
Quick start example:

```shell
pip install 'litellm[proxy]'
litellm --model ollama/qwen3
```

LiteLLM Proxy runs on port 4000 by default. Then configure:

```shell
export OPENAI_BASE_URL=http://127.0.0.1:4000
export OPENAI_API_KEY=anything
export OPENAI_MODEL=ollama/qwen3
```

Notes:

- LiteLLM Proxy gives you an OpenAI-style gateway in front of many providers
- tool use still depends on the underlying routed model and provider behavior
- if you configure a LiteLLM master key, use that instead of `anything`

References: LiteLLM Docs · LiteLLM Proxy Quick Start
claw-code-agent can also work with OpenRouter, a cloud API gateway that provides access to models from OpenAI, Anthropic, Google, Meta, and others through a single OpenAI-compatible endpoint. No local model server required.
Configure:

```shell
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_API_KEY=sk-or-v1-your-key-here
export OPENAI_MODEL=openai/gpt-4o-mini
```

Notes:

- sign up at openrouter.ai and create an API key under Keys
- model names use the `provider/model` format (e.g. `anthropic/claude-sonnet-4`, `openai/gpt-4o`, `google/gemini-2.5-pro`)
- tool-calling support varies by model; check the model list for capabilities
- this sends your conversation (including file contents and shell output) to OpenRouter and the upstream provider; do not use it with repos containing secrets or sensitive data

References: OpenRouter Docs · Supported Models · API Keys
```shell
export OPENAI_BASE_URL=http://127.0.0.1:8000/v1
export OPENAI_API_KEY=local-token
export OPENAI_MODEL=Qwen/Qwen3-Coder-30B-A3B-Instruct
```

If you want to try another model, keep the same vLLM server setup and change the `--model` value when you launch vLLM.

Example:

```shell
python -m vllm.entrypoints.openai.api_server \
  --model your-model-name \
  --host 127.0.0.1 \
  --port 8000 \
  --enable-auto-tool-choice \
  --tool-call-parser your_parser
```

Then update:

```shell
export OPENAI_MODEL=your-model-name
```

Notes:

- the documented path in this repository is vLLM
- the model must support tool calling well enough for agent use
- some model families require a different `--tool-call-parser`
- slash commands such as `/help`, `/context`, and `/tools` are local and do not require the model server
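All of the backends above share the same OpenAI-compatible chat-completions contract, which is why only the base URL, key, and model name change between them. As a rough standard-library illustration (this is not the repository's actual `openai_compat.py` client; `build_payload` and `chat_completion` are hypothetical names):

```python
import json
import urllib.request

def build_payload(model, messages, stream=False):
    """Assemble an OpenAI-style chat-completions request body."""
    return {"model": model, "messages": messages, "stream": stream}

def chat_completion(base_url, api_key, model, messages):
    """POST to the backend's OpenAI-compatible /chat/completions endpoint."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(build_payload(model, messages)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# The same request body works for vLLM, Ollama, LiteLLM Proxy, or OpenRouter:
payload = build_payload(
    "Qwen/Qwen3-Coder-30B-A3B-Instruct",
    [{"role": "user", "content": "Say hello."}],
)
```

Only the environment variables differ per backend; the payload shape stays fixed.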
```shell
# Read-only question
python3 -m src.main agent \
  "Read src/agent_runtime.py and summarize how the loop works." \
  --cwd .

# Write-enabled task
python3 -m src.main agent \
  "Create TEST_QWEN_AGENT.md with one line: test ok" \
  --cwd . --allow-write

# Shell-enabled task
python3 -m src.main agent \
  "Run pwd and ls src, then summarize the result." \
  --cwd . --allow-shell

# Interactive chat mode
python3 -m src.main agent-chat --cwd .

# Streaming output
python3 -m src.main agent \
  "Explain the current architecture." \
  --cwd . --stream
```

| Command | Description |
|---|---|
| `agent <prompt>` | Run the agent with a prompt |
| `agent-chat [prompt]` | Start interactive multi-turn chat mode |
| `agent-bg <prompt>` | Run the agent in a local background session |
| `agent-ps` | List local background sessions |
| `agent-logs <id>` | Show background session logs |
| `agent-attach <id>` | Show the current background output snapshot |
| `agent-kill <id>` | Stop a background session |
| `daemon <subcommand>` | Daemon-style wrapper over local background sessions |
| `agent-prompt` | Show the assembled system prompt |
| `agent-context` | Show estimated context usage |
| `agent-context-raw` | Show the raw context snapshot |
| `token-budget` | Show prompt-window budget, reserves, and soft/hard input limits |
| `agent-resume <id> <prompt>` | Resume a saved session |
| Command | Description |
|---|---|
| `search-status` / `search-providers` / `search-activate` / `search` | Inspect and use the local search runtime |
| `mcp-status` / `mcp-resources` / `mcp-resource` / `mcp-tools` / `mcp-call-tool` | Inspect and use the local MCP runtime |
| `remote-status` / `remote-profiles` / `remote-disconnect` | Inspect local remote runtime state |
| `remote-mode` / `ssh-mode` / `teleport-mode` / `direct-connect-mode` / `deep-link-mode` | Activate local remote runtime modes |
| `config-status` / `config-effective` / `config-source` / `config-get` / `config-set` | Inspect and mutate local config/settings |
| `account-status` / `account-profiles` / `account-login` / `account-logout` | Inspect and mutate local account state |
| Flag | Description |
|---|---|
| `--cwd <path>` | Set the workspace directory |
| `--model <name>` | Override the model name |
| `--base-url <url>` | Override the API base URL |
| `--allow-write` | Allow the agent to modify files |
| `--allow-shell` | Allow the agent to execute shell commands |
| `--unsafe` | Allow destructive shell operations |
| `--stream` | Enable token-by-token streaming output |
| `--show-transcript` | Print the full message transcript |
| `--scratchpad-root <path>` | Override the scratchpad directory |
| `--system-prompt <text>` | Set a custom system prompt |
| `--append-system-prompt <text>` | Append to the system prompt |
| `--override-system-prompt <text>` | Replace the generated system prompt |
| `--add-dir <path>` | Add extra directories to context |
| Flag | Description |
|---|---|
| `--max-total-tokens <n>` | Total token budget |
| `--max-input-tokens <n>` | Input token budget |
| `--max-output-tokens <n>` | Output token budget |
| `--max-reasoning-tokens <n>` | Reasoning token budget |
| `--max-budget-usd <n>` | Maximum cost in USD |
| `--max-tool-calls <n>` | Maximum tool calls per run |
| `--max-delegated-tasks <n>` | Maximum delegated subtasks |
| `--max-model-calls <n>` | Maximum model API calls |
| `--max-session-turns <n>` | Maximum session turns |
| `--input-cost-per-million <n>` | Input token pricing |
| `--output-cost-per-million <n>` | Output token pricing |
| Flag | Description |
|---|---|
| `--auto-snip-threshold <n>` | Auto-snip older messages at this token count |
| `--auto-compact-threshold <n>` | Auto-compact at this token count |
| `--compact-preserve-messages <n>` | Messages to preserve during compaction |
| `--disable-claude-md` | Disable CLAUDE.md discovery |
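As a rough mental model of how these thresholds interact (this is a sketch, not the repository's actual compaction code; all names are illustrative): once the estimated token count crosses the snip threshold, the oldest messages are dropped; once it crosses the higher compact threshold, everything except a preserved tail is summarized away.

```python
def reduce_context(messages, est_tokens, snip_at, compact_at, preserve=4):
    """Illustrative threshold logic: returns (action, kept_messages)."""
    if compact_at and est_tokens >= compact_at:
        # Compact: keep only the newest `preserve` messages; in the real
        # flow the older turns are replaced by a model-written summary.
        return "compact", messages[-preserve:]
    if snip_at and est_tokens >= snip_at:
        # Snip: drop the single oldest message and re-estimate.
        return "snip", messages[1:]
    return "none", messages

action, kept = reduce_context(list(range(10)), est_tokens=9000,
                              snip_at=6000, compact_at=8000, preserve=3)
# With 9000 estimated tokens and compact_at=8000, compaction wins.
```

The compact threshold should normally sit above the snip threshold, so cheap snipping is tried before a full summarizing pass.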
| Flag | Description |
|---|---|
| `--response-schema-file <path>` | JSON schema file for structured output |
| `--response-schema-name <name>` | Schema name identifier |
| `--response-schema-strict` | Enforce strict schema validation |
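For example, a hypothetical schema file (the field names below are invented for illustration) could be written out and then passed to the agent with the flags from the table:

```python
import json

# Hypothetical JSON schema for a structured code-review summary.
schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "issues": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["summary", "issues"],
}

with open("review_schema.json", "w") as f:
    json.dump(schema, f, indent=2)

# Then run, for instance:
#   python3 -m src.main agent "Review src/agent_tools.py" \
#     --cwd . --response-schema-file review_schema.json \
#     --response-schema-name review --response-schema-strict
```

The agent's final answer is then constrained to that JSON shape, which makes the output safe to parse programmatically.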
These are handled locally before the model loop:
| Command | Aliases | Description |
|---|---|---|
| `/help` | `/commands` | Show built-in slash commands |
| `/context` | `/usage` | Show estimated session context usage |
| `/context-raw` | `/env` | Show raw environment & context snapshot |
| `/token-budget` | `/budget` | Show prompt-window budget, reserves, and soft/hard input limits |
| `/mcp` | - | Show MCP runtime status, tools, or a single MCP tool |
| `/resources` | - | List MCP resources |
| `/resource` | - | Read an MCP resource by URI |
| `/search` | - | Show search status, providers, activate a provider, or run a search |
| `/remote` | - | Show local remote status or activate a target |
| `/remotes` | - | List local remote profiles |
| `/ssh` | - | Activate an SSH-style remote profile |
| `/teleport` | - | Activate a teleport-style remote profile |
| `/direct-connect` | - | Activate a direct-connect remote profile |
| `/deep-link` | - | Activate a deep-link remote profile |
| `/disconnect` | `/remote-disconnect` | Disconnect the active remote runtime target |
| `/account` | - | Show account runtime status or profiles |
| `/login` | - | Activate a local account profile or identity |
| `/logout` | - | Clear the active account session |
| `/config` | `/settings` | Inspect effective config, sources, or a single config value |
| `/plan` | `/planner` | Show the local plan runtime state |
| `/tasks` | `/todo` | Show the local task list |
| `/task` | - | Show a task by id |
| `/task-next` | `/next-task` | Show the next actionable tasks |
| `/prompt` | `/system-prompt` | Render the effective system prompt |
| `/hooks` | `/policy` | Show local hook/policy manifests |
| `/trust` | - | Show trust mode, managed settings, and safe env values |
| `/permissions` | - | Show active tool permission mode |
| `/model` | - | Show or update the active model |
| `/tools` | - | List registered tools with permission status |
| `/memory` | - | Show loaded CLAUDE.md memory bundle |
| `/status` | `/session` | Show runtime/session status summary |
| `/clear` | - | Clear ephemeral runtime state |
```shell
python3 -m src.main agent "/help"
python3 -m src.main agent "/context" --cwd .
python3 -m src.main agent "/token-budget" --cwd .
python3 -m src.main agent "/tools" --cwd .
python3 -m src.main agent "/status" --cwd .
```

```shell
python3 -m src.main summary                # Workspace summary
python3 -m src.main manifest               # Workspace manifest
python3 -m src.main commands --limit 10    # Command inventory
python3 -m src.main tools --limit 10       # Tool inventory
```

The runtime currently includes core and extended tools:
| Tool | Description | Permission |
|---|---|---|
| `list_dir` | List files and directories | Always |
| `read_file` | Read file contents (with line ranges) | Always |
| `write_file` | Write or create files | `--allow-write` |
| `edit_file` | Edit files via exact string matching | `--allow-write` |
| `glob_search` | Find files by glob pattern | Always |
| `grep_search` | Search file contents by regex | Always |
| `bash` | Execute shell commands | `--allow-shell` |
| `web_fetch` | Fetch local or remote text content by URL | Always |
| `search_status` / `search_list_providers` / `search_activate_provider` / `web_search` | Search runtime status and provider-backed web search | Always |
| `tool_search` | Search the current Python tool registry | Always |
| `sleep` | Bounded local wait tool | Always |
| `config_list` / `config_get` / `config_set` | Inspect and mutate local workspace config | `config_set` requires `--allow-write` |
| `account_status` / `account_list_profiles` / `account_login` / `account_logout` | Inspect and mutate local account state | Always |
| `remote_status` / `remote_list_profiles` / `remote_connect` / `remote_disconnect` | Inspect and mutate local remote runtime state | Always |
| `mcp_list_resources` / `mcp_read_resource` / `mcp_list_tools` / `mcp_call_tool` | Use local MCP resources and transport-backed MCP tools | Always |
| `plan_get` / `update_plan` / `plan_clear` | Inspect and mutate the local plan runtime | `update_plan` requires `--allow-write` |
| `task_next` / `task_list` / `task_get` / `task_create` / `task_update` / `task_start` / `task_complete` / `task_block` / `task_cancel` / `todo_write` | Persistent local task and todo management | write-like task mutations require `--allow-write` |
| `delegate_agent` | Delegate work to nested child agents | Always |
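On the wire, each registered tool is advertised to the OpenAI-compatible backend as a function-calling definition. The sketch below shows the standard OpenAI tool-schema shape for `read_file`; the exact parameter names used in `agent_tools.py` are an assumption here:

```python
# Illustrative function-calling definition as sent to an OpenAI-compatible
# backend. Parameter names (path, start_line, end_line) are assumed.
read_file_tool = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read file contents, optionally limited to a line range.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "start_line": {"type": "integer"},
                "end_line": {"type": "integer"},
            },
            "required": ["path"],
        },
    },
}
```

The model replies with a `tool_calls` entry naming the tool and its JSON arguments, the runtime executes it under the permission rules above, and the result is appended to the transcript as a tool message.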
Claw Code Agent supports a manifest-based plugin runtime. Drop a `plugin.json` in a `plugins/` subdirectory:
```json
{
  "name": "my-plugin",
  "hooks": {
    "beforePrompt": "Inject guidance into the system prompt.",
    "afterTurn": "Run after each agent turn.",
    "onResume": "Reapply state on session resume.",
    "beforePersist": "Save state before session is saved.",
    "beforeDelegate": "Inject guidance before child agents.",
    "afterDelegate": "Process child agent results."
  },
  "toolAliases": [
    { "name": "my_read", "baseTool": "read_file", "description": "Custom read alias." }
  ],
  "virtualTools": [
    { "name": "my_tool", "description": "A virtual tool.", "responseTemplate": "result: {input}" }
  ]
}
```

See TESTING_GUIDE.md Section 19 for full plugin testing commands.
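The `responseTemplate` field suggests simple placeholder substitution. A minimal sketch of how a runtime might expand such a virtual tool (illustrative only, not the actual `plugin_runtime.py` logic):

```python
import json

# A pared-down manifest matching the virtualTools shape above.
manifest = json.loads("""
{
  "name": "my-plugin",
  "virtualTools": [
    {"name": "my_tool", "description": "A virtual tool.",
     "responseTemplate": "result: {input}"}
  ]
}
""")

def run_virtual_tool(manifest, tool_name, **kwargs):
    """Look up a virtual tool by name and fill its response template."""
    for tool in manifest.get("virtualTools", []):
        if tool["name"] == tool_name:
            return tool["responseTemplate"].format(**kwargs)
    raise KeyError(tool_name)

out = run_virtual_tool(manifest, "my_tool", input="hello")
```

Virtual tools defined this way never touch the filesystem or shell, so they are safe to expose at any permission tier.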
The agent can delegate subtasks to child agents with full context carryover:
```shell
python3 -m src.main agent \
  "Delegate a subtask to inspect src/agent_runtime.py and return a summary." \
  --cwd . --show-transcript
```

Features:
- Sequential and parallel subtask execution
- Dependency-aware topological batching
- Child-session save and resume
- Agent manager lineage tracking
See TESTING_GUIDE.md Section 20 for delegation testing commands.
Each agent run automatically saves a resumable session:

```
session_id=4f2c8c6f9c0e4d7c9c7b1b2a3d4e5f67
session_path=.port_sessions/agent/4f2c8c6f...
```

Resume a previous session:

```shell
python3 -m src.main agent-resume \
  4f2c8c6f9c0e4d7c9c7b1b2a3d4e5f67 \
  "Continue the previous task and finish the missing parts."
```

Resume directly into interactive chat:

```shell
python3 -m src.main agent-chat \
  --resume-session-id <session-id> \
  --cwd .
```

Inspect saved sessions:

```shell
ls -lt .port_sessions/agent
```

Note: run `agent-resume` from the same `claw-code/` directory where the session was created. A resumed session continues from the saved transcript, not from scratch.
Run the full test suite:

```shell
python3 -m unittest discover -s tests -v
```

Smoke tests:

```shell
python3 -m src.main agent "/help"
python3 -m src.main agent-context --cwd .
python3 -m src.main agent \
  "Read src/agent_session.py and summarize the message flow." \
  --cwd .
```

Full testing guide: see TESTING_GUIDE.md for step-by-step commands covering the full implemented runtime surface.
Claw Code Agent uses a tiered permission system to keep the agent safe by default:
| Tier | Capability | Flag Required |
|---|---|---|
| Read-only | List, read, glob, grep | None (default) |
| Write | + file creation and editing | `--allow-write` |
| Shell | + shell command execution | `--allow-shell` |
| Unsafe | + destructive shell operations | `--unsafe` |
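Conceptually, each tool carries a required tier and the active flags determine which tools the model is even offered. A sketch of that cumulative-tier filtering (the tier names and tool mapping are illustrative, not the actual `permissions.py`):

```python
# Tiers are cumulative: shell implies write, unsafe implies everything.
TIERS = {"read": 0, "write": 1, "shell": 2, "unsafe": 3}

# Illustrative requirements for a few tools from the table above.
TOOL_TIER = {"read_file": "read", "grep_search": "read",
             "write_file": "write", "bash": "shell"}

def allowed_tools(allow_write=False, allow_shell=False, unsafe=False):
    """Return the tool names permitted under the active flags."""
    level = 0
    if allow_write:
        level = max(level, TIERS["write"])
    if allow_shell:
        level = max(level, TIERS["shell"])
    if unsafe:
        level = TIERS["unsafe"]
    return sorted(t for t, tier in TOOL_TIER.items()
                  if TIERS[tier] <= level)

tools = allowed_tools(allow_write=True)
```

Filtering the tool list before the model ever sees it is stronger than rejecting calls afterward: a tool the model cannot name is a tool it cannot attempt.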
The full implementation checklist tracking parity against the npm src lives in PARITY_CHECKLIST.md.
It covers: core runtime, CLI modes, prompt assembly, context/memory, slash commands, tools, permissions, plugins, MCP, REPL/TUI, remote features, editor integrations, and internal subsystems.
- This repository is a Python reimplementation inspired by the Claude Code npm architecture.
- It does not ship the original npm source.
- It is not affiliated with or endorsed by Anthropic.
Built with Python · Powered by the HarnessLab Team.