Claw Code Agent logo

Claw Code Agent

A Python reimplementation of the Claude Code agent architecture: local models, full control, zero dependencies.

Python 3.10+ GitHub vLLM Qwen3-Coder Zero Dependencies Alpha License


📢 What's New

April 2026 – Major Update

Feature Details
🆕 Interactive Chat Mode New agent-chat command – multi-turn REPL with /exit to quit
🆕 Streaming Output Token-by-token streaming with --stream flag
🆕 Plugin Runtime Full manifest-based plugin system – hooks, tool aliases, virtual tools, tool blocking
🆕 Nested Agent Delegation Delegate subtasks to child agents with dependency-aware topological batching
🆕 Agent Manager Lineage tracking, group membership, batch summaries for nested agents
🆕 Cost Tracking & Budgets Token budgets, cost budgets, tool-call limits, model-call limits, session-turn limits
🆕 Structured Output JSON schema response mode with --response-schema-file
🆕 Context Compaction Auto-snip, auto-compact, and reactive compaction on prompt-too-long errors
🆕 File History Replay Journaling of file edits with snapshot IDs, replay summaries on session resume
🆕 Truncation Continuation Automatic continuation when the model response is cut off (finish_reason=length)
🆕 Ollama Support Works out of the box with Ollama's OpenAI-compatible API
🆕 LiteLLM Proxy Support Route through LiteLLM Proxy to any provider
🆕 OpenRouter Support Cloud API gateway – access OpenAI, Anthropic, Google models via one endpoint
🆕 Query Engine Runtime event counters, transcript summaries, orchestration reports
🆕 Remote Runtime Manifest-backed local remote profiles, connect/disconnect state, and remote CLI/slash flows
🆕 Hook & Policy Runtime Local .claw-policy.json / hook manifests with trust reporting, safe env, tool blocking, and budget overrides
🆕 Task & Plan Runtime Persistent local tasks and plans with plan-to-task sync and dependency-aware task execution
🆕 MCP Transport Real stdio MCP transport for initialize, resource listing/reading, and tool listing/calling
🆕 Search Runtime Provider-backed web_search with local manifests, activation state, and /search flows
🆕 Config & Account Runtime Local config/settings mutation plus manifest-backed account profiles and login/logout state
🆕 Ask-User Runtime Queued or interactive local ask-user flow with history, slash commands, and agent tool support
🆕 Team Runtime Persisted local teams and message history with team/message tools and slash/CLI inspection
🆕 Notebook Edit Tool Native .ipynb cell editing through the real agent tool registry
🆕 Workflow Runtime Manifest-backed local workflows with workflow tools, slash commands, and run history
🆕 Remote Trigger Runtime Local remote triggers with create/update/run flows similar to the npm remote trigger surface
🆕 Worktree Runtime Managed git worktrees with mid-session cwd switching, slash commands, and CLI flows
🆕 Tokenizer-Aware Context Cached tokenizer backends with heuristic fallback for /context, /status, and compaction
🆕 Prompt Budget Preflight Preflight prompt-length validation, token-budget reporting, and auto-compact/context collapse before backend failures
🆕 LSP Runtime Local LSP-style code intelligence for definitions, references, hover, symbols, call hierarchy, and diagnostics
🆕 Daemon Commands Local daemon start/ps/logs/attach/kill wrapper over background agent sessions
🆕 Background Sessions Local agent-bg, agent-ps, agent-logs, agent-attach, and agent-kill flows
🆕 Testing Guide Comprehensive TESTING_GUIDE.md with commands for every feature
🆕 Parity Checklist Full PARITY_CHECKLIST.md tracking implementation status vs the npm source

📖 About

This repository reimplements the Claude Code npm agent architecture entirely in Python, designed to run with local open-source models via an OpenAI-compatible API server.

It builds on the public porting workspace from instructkr/claw-code; active development lives at HarnessLab/claw-code-agent.

Goal: Not to ship the original npm source, but to reimplement the full agent flow in Python: prompt assembly, context building, slash commands, tool calling, session persistence, and local model execution.

Zero external dependencies – just Python's standard library.

Claw Code Agent demo


✨ Key Features

Feature Description
🤖 Agent Loop Full agentic coding loop with tool calling and iterative reasoning
💬 Interactive Chat Multi-turn REPL via agent-chat with session continuity
🧰 Core Tools File read / write / edit, glob search, grep search, shell execution
🔌 Plugin Runtime Manifest-based plugins with hooks, aliases, virtual tools, and tool blocking
🪆 Nested Delegation Delegate subtasks to child agents with dependency-aware topological batching
📡 Streaming Token-by-token streaming output with --stream
💬 Slash Commands Local commands for context, config, account, search, MCP, remote, tasks, plan, hooks, and model control
🌐 Remote Runtime Manifest-backed remote profiles with local remote-mode, ssh-mode, teleport-mode, and connect/disconnect state
🧭 Task & Plan Runtime Persistent tasks and plans with sync, next-task selection, and blocked/unblocked state
🛰️ MCP Runtime Local MCP manifests plus real stdio MCP transport for resources and tools
🔎 Search Runtime Provider-backed web_search plus provider activation and status reporting
⚙️ Config & Account Runtime Local config mutation, settings inspection, account profiles, and login/logout state
🙋 Ask-User Runtime Queued answer or interactive user-question flow with history tracking
👥 Team Runtime Persisted local teams plus message history, handoff notes, and collaboration metadata
📓 Notebook Editing Native Jupyter notebook cell editing through notebook_edit
🪵 Worktree Runtime Managed git worktrees with worktree_enter, worktree_exit, and live cwd switching
🧭 Workflow Runtime Manifest-backed workflows with slash commands, CLI inspection, and recorded runs
⏰ Remote Triggers Local remote triggers with create/update/run flows and npm-style trigger actions
🪝 Hook & Policy Runtime Trust reporting, safe env, managed settings, tool blocking, and budget overrides
🧠 LSP Code Intelligence Local LSP-style definitions, references, hover, symbols, diagnostics, and call hierarchy
🧠 Context Engine Automatic context building with CLAUDE.md discovery, compaction, and snipping
🔢 Tokenizer-Aware Accounting Model-aware token counting with cached tokenizer backends and fallback heuristics
📏 Prompt Budgeting Soft/hard prompt-window checks, token-budget reports, and preflight context collapse
🔄 Session Persistence Save and resume agent sessions with file-history replay
🗂️ Background Sessions agent-bg and local daemon wrappers for background runs, logs, attach, and kill
💰 Cost & Budget Control Token budgets, cost limits, tool-call caps, model-call caps
📋 Structured Output JSON schema response mode for programmatic use
🔐 Permission System Granular control: --allow-write, --allow-shell, --unsafe
🏗️ OpenAI-Compatible Works with vLLM, Ollama, LiteLLM Proxy, OpenRouter – any OpenAI-compatible API
🐉 Qwen3-Coder First-class support for Qwen3-Coder-30B-A3B-Instruct via vLLM
📦 Zero Dependencies Pure Python standard library – nothing to install

📋 Roadmap

📚 Documentation

Document Description
TESTING_GUIDE.md Step-by-step commands to verify every feature
PARITY_CHECKLIST.md Full implementation status vs the npm source

✅ Done

  • Python CLI agent loop
  • Interactive chat mode (agent-chat) with multi-turn REPL
  • OpenAI-compatible local model backend
  • Qwen3-Coder support through vLLM with qwen3_xml tool parser
  • Ollama, LiteLLM Proxy, and OpenRouter backends
  • Core tools: list_dir, read_file, write_file, edit_file, glob_search, grep_search, bash
  • Context building and /context-style usage reporting
  • Slash commands: /help, /context, /context-raw, /prompt, /permissions, /model, /tools, /memory, /status, /clear
  • Session persistence and agent-resume flow
  • Permission system (read-only, write, shell, unsafe tiers)
  • Streaming token-by-token assistant output
  • Truncated-response continuation flow
  • Auto-snip and auto-compact context reduction
  • Reactive compaction retry on prompt-too-long errors
  • Preflight prompt-length validation and token-budget reporting
  • Preflight auto-compact/context collapse before backend prompt-too-long failures
  • Cost tracking and usage budget enforcement
  • Token, tool-call, model-call, and session-turn budgets
  • Structured output / JSON schema response mode
  • File history journaling with snapshot IDs and replay summaries
  • Nested agent delegation with dependency-aware topological batching
  • Agent manager with lineage tracking and group membership
  • Local daemon-style background command family
  • Local background session workflows: agent-bg, agent-ps, agent-logs, agent-attach, agent-kill
  • Local remote runtime: manifest discovery, profile listing, connect/disconnect persistence, and CLI/slash flows
  • Local hook and policy runtime with trust reporting, safe env, tool blocking, and budget overrides
  • Local config runtime: config discovery, effective settings, source inspection, and config mutation
  • Local LSP runtime: definitions, references, hover, symbols, diagnostics, and call hierarchy
  • Local account runtime: profile discovery, login/logout state, and account CLI/slash flows
  • Local ask-user runtime: queued answers, history, and ask-user CLI/slash flows
  • Local team runtime: persisted teams, team messages, and team CLI/slash flows
  • Local search runtime with provider discovery, activation, and provider-backed web_search
  • Local MCP runtime: manifest resources, stdio transport, MCP resources, and MCP tool calls
  • Local task and plan runtimes with plan sync and dependency-aware task execution
  • Notebook edit tool in the real Python tool registry
  • Local workflow runtime with workflow list/get/run tools and CLI/slash flows
  • Local remote trigger runtime with create/update/run flows and CLI/slash inspection
  • Local managed git worktree runtime with live cwd switching and worktree CLI/slash flows
  • Tokenizer-aware context accounting with cached tokenizer backends and heuristic fallback
  • Plugin runtime: manifest discovery, hooks, aliases, virtual tools, tool blocking
  • Plugin lifecycle hooks: resume, persist, delegate phases
  • Plugin session-state persistence and resume restoration
  • Query engine facade driving the real Python runtime
  • Compaction metadata with lineage IDs and revision summaries
  • Extended runtime tools: web_fetch, web_search, tool_search, sleep
  • Unit tests for the Python runtime
  • pyproject.toml packaging with setuptools

🔲 In Progress

  • Full MCP parity beyond the current stdio transport and local manifest/resource/tool support
  • Full slash-command parity with npm runtime
  • Full interactive REPL / TUI behavior
  • Full tokenizer/chat-message framing parity beyond the current tokenizer-aware accounting
  • Hooks system parity
  • Real remote transport/runtime parity beyond the current local remote-profile runtime
  • Voice and VIM modes
  • Editor and platform integrations
  • Background and team features

πŸ—οΈ Architecture

claw-code/
├── README.md
├── TESTING_GUIDE.md              # How to test every feature
├── PARITY_CHECKLIST.md           # Implementation status vs npm source
├── pyproject.toml
├── .gitignore
├── images/
│   └── logo.png
├── src/                          # Python implementation
│   ├── main.py                   # CLI entry point & argument parsing
│   ├── agent_runtime.py          # Core agent loop (LocalCodingAgent)
│   ├── agent_tools.py            # Tool definitions & execution engine
│   ├── agent_prompting.py        # System prompt assembly
│   ├── agent_context.py          # Context building & CLAUDE.md discovery
│   ├── agent_context_usage.py    # Context usage estimation & reporting
│   ├── agent_session.py          # Session state management
│   ├── agent_slash_commands.py   # Local slash command processing
│   ├── agent_manager.py          # Nested agent lineage & group tracking
│   ├── agent_types.py            # Shared dataclasses & type definitions
│   ├── openai_compat.py          # OpenAI-compatible API client (streaming)
│   ├── plugin_runtime.py         # Plugin manifest, hooks, aliases, virtual tools
│   ├── agent_plugin_cache.py     # Plugin discovery & prompt injection cache
│   ├── session_store.py          # Session serialization & persistence
│   ├── transcript.py             # Transcript block export & mutation tracking
│   ├── query_engine.py           # Query engine facade & runtime orchestration
│   ├── mcp_runtime.py            # Local MCP discovery and stdio MCP transport
│   ├── search_runtime.py         # Search providers and provider-backed web_search
│   ├── remote_runtime.py         # Local remote profiles, connect/disconnect state, remote CLI support
│   ├── background_runtime.py     # Local background sessions and daemon support
│   ├── account_runtime.py        # Local account profiles, login/logout state, account CLI support
│   ├── ask_user_runtime.py       # Local ask-user queued answers and interaction history
│   ├── config_runtime.py         # Local workspace config/settings discovery and mutation
│   ├── lsp_runtime.py            # Local LSP-style code intelligence and diagnostics
│   ├── token_budget.py           # Prompt-window budgeting and preflight prompt-length validation
│   ├── plan_runtime.py           # Persistent plan runtime and plan sync
│   ├── task_runtime.py           # Persistent task runtime and task execution
│   ├── task.py                   # Task state model and task dataclasses
│   ├── team_runtime.py           # Local teams, messages, and collaboration metadata
│   ├── workflow_runtime.py       # Local workflow manifests and recorded workflow runs
│   ├── remote_trigger_runtime.py # Local remote trigger manifests and trigger run history
│   ├── worktree_runtime.py       # Managed git worktree sessions and cwd switching
│   ├── hook_policy.py            # Hook/policy manifests, trust, and safe env handling
│   ├── tokenizer_runtime.py      # Tokenizer-aware context accounting backends
│   ├── permissions.py            # Tool permission filtering
│   ├── cost_tracker.py           # Cost & budget enforcement
│   ├── commands.py               # Mirrored command inventory
│   ├── tools.py                  # Mirrored tool inventory
│   ├── runtime.py                # Mirrored runtime facade
│   └── reference_data/           # Mirrored inventory snapshots
└── tests/                        # Unit tests
    ├── test_agent_runtime.py
    ├── test_agent_context.py
    ├── test_agent_context_usage.py
    ├── test_agent_prompting.py
    ├── test_agent_slash_commands.py
    ├── test_main.py
    ├── test_query_engine_runtime.py
    └── test_porting_workspace.py

📦 Requirements

Requirement Details
🐍 Python 3.10 or higher
📚 Dependencies None – pure Python standard library
🖥️ Model Server vLLM, Ollama, LiteLLM Proxy, or OpenRouter, with tool calling support
🧠 Model Qwen/Qwen3-Coder-30B-A3B-Instruct (recommended)

🚀 Quick Start

1. Start vLLM with Qwen3-Coder

vLLM must be started with automatic tool choice enabled. Use the qwen3_xml parser for Qwen3-Coder tool calling:

python -m vllm.entrypoints.openai.api_server \
  --model Qwen/Qwen3-Coder-30B-A3B-Instruct \
  --host 127.0.0.1 \
  --port 8000 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_xml

Verify the server is running:

curl http://127.0.0.1:8000/v1/models

📚 References: vLLM Tool Calling Docs · OpenAI-Compatible Server
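
The curl check above returns an OpenAI-style JSON model list. As an illustration only (not code from this repository), a zero-dependency stdlib check in the project's own spirit might look like this; `parse_model_ids` and `list_models` are hypothetical helper names:

```python
import json
import urllib.request


def parse_model_ids(payload: dict) -> list[str]:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    return [entry["id"] for entry in payload.get("data", [])]


def list_models(base_url: str = "http://127.0.0.1:8000/v1") -> list[str]:
    """Fetch the model list from an OpenAI-compatible server (network call)."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=10) as resp:
        return parse_model_ids(json.load(resp))


# Shape of a typical /v1/models payload, parsed offline for the demo:
sample = {"object": "list",
          "data": [{"id": "Qwen/Qwen3-Coder-30B-A3B-Instruct", "object": "model"}]}
print(parse_model_ids(sample))  # ['Qwen/Qwen3-Coder-30B-A3B-Instruct']
```

If the server is up, `list_models()` should return the same model ID you passed to `--model`.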

Optional: Use Ollama Instead of vLLM

claw-code-agent can also work with Ollama because the runtime targets an OpenAI-compatible API. Use a model that supports tool calling well.

Example:

ollama serve
ollama pull qwen3

Then configure:

export OPENAI_BASE_URL=http://127.0.0.1:11434/v1
export OPENAI_API_KEY=ollama
export OPENAI_MODEL=qwen3

Notes:

  • prefer tool-capable models such as qwen3
  • plain chat-only models are not enough for full agent behavior
  • Ollama does not use the vLLM parser flags shown above

📚 References: Ollama OpenAI Compatibility · Ollama Tool Calling

Optional: Use LiteLLM Proxy

claw-code-agent can also work through LiteLLM Proxy because the runtime targets an OpenAI-compatible chat completions API. The routed model still needs to support tool calling for full agent behavior.

Quick start example:

pip install 'litellm[proxy]'
litellm --model ollama/qwen3

LiteLLM Proxy runs on port 4000 by default. Then configure:

export OPENAI_BASE_URL=http://127.0.0.1:4000
export OPENAI_API_KEY=anything
export OPENAI_MODEL=ollama/qwen3

Notes:

  • LiteLLM Proxy gives you an OpenAI-style gateway in front of many providers
  • tool use still depends on the underlying routed model and provider behavior
  • if you configure a LiteLLM master key, use that instead of anything

📚 References: LiteLLM Docs · LiteLLM Proxy Quick Start

Optional: Use OpenRouter

claw-code-agent can also work with OpenRouter, a cloud API gateway that provides access to models from OpenAI, Anthropic, Google, Meta, and others through a single OpenAI-compatible endpoint. No local model server required.

Configure:

export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_API_KEY=sk-or-v1-your-key-here
export OPENAI_MODEL=openai/gpt-4o-mini

Notes:

  • sign up at openrouter.ai and create an API key under Keys
  • model names use the provider/model format (e.g. anthropic/claude-sonnet-4, openai/gpt-4o, google/gemini-2.5-pro)
  • tool calling support varies by model – check the model list for capabilities
  • this sends your conversation (including file contents and shell output) to OpenRouter and the upstream provider – do not use with repos containing secrets or sensitive data

📚 References: OpenRouter Docs · Supported Models · API Keys

2. Configure Environment

export OPENAI_BASE_URL=http://127.0.0.1:8000/v1
export OPENAI_API_KEY=local-token
export OPENAI_MODEL=Qwen/Qwen3-Coder-30B-A3B-Instruct
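
These three variables drive all backend selection in the examples above. A minimal sketch of how such resolution typically works (illustrative only; the function name and fallback defaults here are assumptions, not the repo's actual code):

```python
import os


def resolve_backend_config(env=os.environ) -> dict:
    """Resolve backend settings from the environment, with illustrative defaults."""
    return {
        # Trailing slashes are stripped so "{base_url}/chat/completions" joins cleanly.
        "base_url": env.get("OPENAI_BASE_URL", "http://127.0.0.1:8000/v1").rstrip("/"),
        "api_key": env.get("OPENAI_API_KEY", "local-token"),
        "model": env.get("OPENAI_MODEL", "Qwen/Qwen3-Coder-30B-A3B-Instruct"),
    }


cfg = resolve_backend_config({"OPENAI_BASE_URL": "http://127.0.0.1:8000/v1/"})
print(cfg["base_url"])  # http://127.0.0.1:8000/v1
```

The same three variables cover every backend in this README; only their values change between vLLM, Ollama, LiteLLM Proxy, and OpenRouter.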

Use Another Model With vLLM

If you want to try another model, keep the same vLLM server setup and change the --model value when you launch vLLM.

Example:

python -m vllm.entrypoints.openai.api_server \
  --model your-model-name \
  --host 127.0.0.1 \
  --port 8000 \
  --enable-auto-tool-choice \
  --tool-call-parser your_parser

Then update:

export OPENAI_MODEL=your-model-name

Notes:

  • the documented path in this repository is vLLM
  • the model must support tool calling well enough for agent use
  • some model families require a different --tool-call-parser
  • slash commands such as /help, /context, and /tools are local and do not require the model server

3. Run the Agent

# Read-only question
python3 -m src.main agent \
  "Read src/agent_runtime.py and summarize how the loop works." \
  --cwd .

# Write-enabled task
python3 -m src.main agent \
  "Create TEST_QWEN_AGENT.md with one line: test ok" \
  --cwd . --allow-write

# Shell-enabled task
python3 -m src.main agent \
  "Run pwd and ls src, then summarize the result." \
  --cwd . --allow-shell

# Interactive chat mode
python3 -m src.main agent-chat --cwd .

# Streaming output
python3 -m src.main agent \
  "Explain the current architecture." \
  --cwd . --stream

🛠️ Usage

Agent Commands

Command Description
agent <prompt> Run the agent with a prompt
agent-chat [prompt] Start interactive multi-turn chat mode
agent-bg <prompt> Run the agent in a local background session
agent-ps List local background sessions
agent-logs <id> Show background session logs
agent-attach <id> Show the current background output snapshot
agent-kill <id> Stop a background session
daemon <subcommand> Daemon-style wrapper over local background sessions
agent-prompt Show the assembled system prompt
agent-context Show estimated context usage
agent-context-raw Show the raw context snapshot
token-budget Show prompt-window budget, reserves, and soft/hard input limits
agent-resume <id> <prompt> Resume a saved session

Runtime Utility Commands

Command Description
search-status / search-providers / search-activate / search Inspect and use the local search runtime
mcp-status / mcp-resources / mcp-resource / mcp-tools / mcp-call-tool Inspect and use the local MCP runtime
remote-status / remote-profiles / remote-disconnect Inspect local remote runtime state
remote-mode / ssh-mode / teleport-mode / direct-connect-mode / deep-link-mode Activate local remote runtime modes
config-status / config-effective / config-source / config-get / config-set Inspect and mutate local config/settings
account-status / account-profiles / account-login / account-logout Inspect and mutate local account state

CLI Flags

Flag Description
--cwd <path> Set the workspace directory
--model <name> Override the model name
--base-url <url> Override the API base URL
--allow-write Allow the agent to modify files
--allow-shell Allow the agent to execute shell commands
--unsafe Allow destructive shell operations
--stream Enable token-by-token streaming output
--show-transcript Print the full message transcript
--scratchpad-root <path> Override the scratchpad directory
--system-prompt <text> Set a custom system prompt
--append-system-prompt <text> Append to the system prompt
--override-system-prompt <text> Replace the generated system prompt
--add-dir <path> Add extra directories to context

Budget & Limit Flags

Flag Description
--max-total-tokens <n> Total token budget
--max-input-tokens <n> Input token budget
--max-output-tokens <n> Output token budget
--max-reasoning-tokens <n> Reasoning token budget
--max-budget-usd <n> Maximum cost in USD
--max-tool-calls <n> Maximum tool calls per run
--max-delegated-tasks <n> Maximum delegated subtasks
--max-model-calls <n> Maximum model API calls
--max-session-turns <n> Maximum session turns
--input-cost-per-million <n> Input token pricing
--output-cost-per-million <n> Output token pricing

Context Control Flags

Flag Description
--auto-snip-threshold <n> Auto-snip older messages at this token count
--auto-compact-threshold <n> Auto-compact at this token count
--compact-preserve-messages <n> Messages to preserve during compaction
--disable-claude-md Disable CLAUDE.md discovery

Structured Output Flags

Flag Description
--response-schema-file <path> JSON schema file for structured output
--response-schema-name <name> Schema name identifier
--response-schema-strict Enforce strict schema validation
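
The --response-schema-file flag points at a JSON schema on disk. The schema below is a hypothetical example invented for illustration (the field names are not prescribed by the runtime); it just shows a plausible file you could pass:

```python
import json
import os
import tempfile

# Hypothetical schema for a structured code-review answer (illustrative only).
schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "issues": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["summary"],
}

path = os.path.join(tempfile.mkdtemp(), "review_schema.json")
with open(path, "w", encoding="utf-8") as fh:
    json.dump(schema, fh, indent=2)

# Then, using the flags documented above:
#   python3 -m src.main agent "Review src/main.py" --cwd . \
#     --response-schema-file review_schema.json --response-schema-strict
with open(path, encoding="utf-8") as fh:
    print(json.load(fh)["required"])  # ['summary']
```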

Slash Commands

These are handled locally before the model loop:

Command Aliases Description
/help /commands Show built-in slash commands
/context /usage Show estimated session context usage
/context-raw /env Show raw environment & context snapshot
/token-budget /budget Show prompt-window budget, reserves, and soft/hard input limits
/mcp – Show MCP runtime status, tools, or a single MCP tool
/resources – List MCP resources
/resource – Read an MCP resource by URI
/search – Show search status, providers, activate a provider, or run a search
/remote – Show local remote status or activate a target
/remotes – List local remote profiles
/ssh – Activate an SSH-style remote profile
/teleport – Activate a teleport-style remote profile
/direct-connect – Activate a direct-connect remote profile
/deep-link – Activate a deep-link remote profile
/disconnect /remote-disconnect Disconnect the active remote runtime target
/account – Show account runtime status or profiles
/login – Activate a local account profile or identity
/logout – Clear the active account session
/config /settings Inspect effective config, sources, or a single config value
/plan /planner Show the local plan runtime state
/tasks /todo Show the local task list
/task – Show a task by id
/task-next /next-task Show the next actionable tasks
/prompt /system-prompt Render the effective system prompt
/hooks /policy Show local hook/policy manifests
/trust – Show trust mode, managed settings, and safe env values
/permissions – Show active tool permission mode
/model – Show or update the active model
/tools – List registered tools with permission status
/memory – Show loaded CLAUDE.md memory bundle
/status /session Show runtime/session status summary
/clear – Clear ephemeral runtime state
Examples:

python3 -m src.main agent "/help"
python3 -m src.main agent "/context" --cwd .
python3 -m src.main agent "/token-budget" --cwd .
python3 -m src.main agent "/tools" --cwd .
python3 -m src.main agent "/status" --cwd .

Utility Commands

python3 -m src.main summary            # Workspace summary
python3 -m src.main manifest           # Workspace manifest
python3 -m src.main commands --limit 10 # Command inventory
python3 -m src.main tools --limit 10    # Tool inventory

🔧 Built-in Tools

The runtime currently includes core and extended tools:

Tool Description Permission
list_dir List files and directories 🟢 Always
read_file Read file contents (with line ranges) 🟢 Always
write_file Write or create files 🟡 --allow-write
edit_file Edit files via exact string matching 🟡 --allow-write
glob_search Find files by glob pattern 🟢 Always
grep_search Search file contents by regex 🟢 Always
bash Execute shell commands 🔴 --allow-shell
web_fetch Fetch local or remote text content by URL 🟢 Always
search_status / search_list_providers / search_activate_provider / web_search Search runtime status and provider-backed web search 🟢 Always
tool_search Search the current Python tool registry 🟢 Always
sleep Bounded local wait tool 🟢 Always
config_list / config_get / config_set Inspect and mutate local workspace config config_set is 🟡 --allow-write
account_status / account_list_profiles / account_login / account_logout Inspect and mutate local account state 🟢 Always
remote_status / remote_list_profiles / remote_connect / remote_disconnect Inspect and mutate local remote runtime state 🟢 Always
mcp_list_resources / mcp_read_resource / mcp_list_tools / mcp_call_tool Use local MCP resources and transport-backed MCP tools 🟢 Always
plan_get / update_plan / plan_clear Inspect and mutate the local plan runtime update_plan is 🟡 --allow-write
task_next / task_list / task_get / task_create / task_update / task_start / task_complete / task_block / task_cancel / todo_write Persistent local task and todo management write-like task mutations are 🟡 --allow-write
delegate_agent Delegate work to nested child agents 🟢 Always
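
The edit_file tool in the table above edits files via exact string matching. A minimal sketch of that technique (not the repo's implementation; `exact_edit` is a made-up name) which refuses missing or ambiguous matches:

```python
def exact_edit(text: str, old: str, new: str) -> str:
    """Replace exactly one occurrence of `old`; reject zero or ambiguous matches."""
    count = text.count(old)
    if count == 0:
        raise ValueError("old string not found in file")
    if count > 1:
        raise ValueError(f"old string matched {count} times; include more surrounding context")
    return text.replace(old, new, 1)


source = "x = 1\ny = 2\n"
print(exact_edit(source, "y = 2", "y = 3"), end="")
```

Requiring a unique match is what makes this style of edit safe: the agent must quote enough surrounding context to pin down a single location.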

🔌 Plugin System

Claw Code Agent supports a manifest-based plugin runtime. Drop a plugin.json in a plugins/ subdirectory:

{
  "name": "my-plugin",
  "hooks": {
    "beforePrompt": "Inject guidance into the system prompt.",
    "afterTurn": "Run after each agent turn.",
    "onResume": "Reapply state on session resume.",
    "beforePersist": "Save state before session is saved.",
    "beforeDelegate": "Inject guidance before child agents.",
    "afterDelegate": "Process child agent results."
  },
  "toolAliases": [
    { "name": "my_read", "baseTool": "read_file", "description": "Custom read alias." }
  ],
  "virtualTools": [
    { "name": "my_tool", "description": "A virtual tool.", "responseTemplate": "result: {input}" }
  ]
}
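
A minimal sketch of manifest discovery over that layout (illustrative only; the real loader in plugin_runtime.py may differ, and `discover_plugins` is a hypothetical name):

```python
import json
import os
import tempfile


def discover_plugins(root: str) -> dict:
    """Map plugin name -> parsed plugin.json found at root/<dir>/plugin.json."""
    found = {}
    if not os.path.isdir(root):
        return found
    for entry in sorted(os.listdir(root)):
        manifest = os.path.join(root, entry, "plugin.json")
        if os.path.isfile(manifest):
            with open(manifest, encoding="utf-8") as fh:
                data = json.load(fh)
            found[data.get("name", entry)] = data
    return found


# Demo layout: <tmp>/my-plugin/plugin.json
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "my-plugin"))
with open(os.path.join(root, "my-plugin", "plugin.json"), "w", encoding="utf-8") as fh:
    json.dump({"name": "my-plugin", "hooks": {"beforePrompt": "Inject guidance."}}, fh)

print(sorted(discover_plugins(root)))  # ['my-plugin']
```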

See TESTING_GUIDE.md Section 19 for full plugin testing commands.


🪆 Nested Agent Delegation

The agent can delegate subtasks to child agents with full context carryover:

python3 -m src.main agent \
  "Delegate a subtask to inspect src/agent_runtime.py and return a summary." \
  --cwd . --show-transcript

Features:

  • Sequential and parallel subtask execution
  • Dependency-aware topological batching
  • Child-session save and resume
  • Agent manager lineage tracking
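
Dependency-aware topological batching can be sketched with a layered variant of Kahn's algorithm: each batch contains only tasks whose prerequisites finished in earlier batches, so a batch's tasks can run in parallel. This is an illustration of the technique, not the repo's actual scheduler:

```python
def topological_batches(deps: dict[str, set[str]]) -> list[list[str]]:
    """Group tasks into batches; each batch depends only on earlier batches."""
    indegree = {task: len(prereqs) for task, prereqs in deps.items()}
    dependents: dict[str, list[str]] = {task: [] for task in deps}
    for task, prereqs in deps.items():
        for prereq in prereqs:
            dependents[prereq].append(task)

    ready = sorted(task for task, n in indegree.items() if n == 0)
    batches = []
    while ready:
        batches.append(ready)
        next_ready = []
        for done in ready:
            for child in dependents[done]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    next_ready.append(child)
        ready = sorted(next_ready)

    if sum(len(batch) for batch in batches) != len(deps):
        raise ValueError("dependency cycle detected")
    return batches


# d needs b and c; b and c both need a -> three batches, b/c run together.
print(topological_batches({"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}))
# [['a'], ['b', 'c'], ['d']]
```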

See TESTING_GUIDE.md Section 20 for delegation testing commands.


🔄 Session Persistence

Each agent run automatically saves a resumable session:

session_id=4f2c8c6f9c0e4d7c9c7b1b2a3d4e5f67
session_path=.port_sessions/agent/4f2c8c6f...

Resume a previous session:

python3 -m src.main agent-resume \
  4f2c8c6f9c0e4d7c9c7b1b2a3d4e5f67 \
  "Continue the previous task and finish the missing parts."

Resume directly into interactive chat:

python3 -m src.main agent-chat \
  --resume-session-id <session-id> \
  --cwd .

Inspect saved sessions:

ls -lt .port_sessions/agent

Note: Run agent-resume from the same claw-code/ directory where the session was created. A resumed session continues from the saved transcript, not from scratch.


🧪 Testing

Run the full test suite:

python3 -m unittest discover -s tests -v

Smoke tests:

python3 -m src.main agent "/help"
python3 -m src.main agent-context --cwd .
python3 -m src.main agent \
  "Read src/agent_session.py and summarize the message flow." \
  --cwd .

📚 Full testing guide: See TESTING_GUIDE.md for step-by-step commands covering the full implemented runtime surface.


πŸ” Permission Model

Claw Code Agent uses a tiered permission system to keep the agent safe by default:

Tier Capability Flag Required
Read-only List, read, glob, grep None (default)
Write + file creation and editing --allow-write
Shell + shell command execution --allow-shell
Unsafe + destructive shell operations --unsafe
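
The tier table above can be read as a simple capability filter over the tool registry. The sketch below is illustrative only: the tool-to-tier mapping is taken from the Built-in Tools table, but the function and the assumption that --unsafe implies shell access are mine, not the repo's code:

```python
# Tier assignment per the Built-in Tools table (core tools only).
CORE_TOOLS = {
    "list_dir": "read", "read_file": "read",
    "glob_search": "read", "grep_search": "read",
    "write_file": "write", "edit_file": "write",
    "bash": "shell",
}


def allowed_tools(allow_write: bool = False,
                  allow_shell: bool = False,
                  unsafe: bool = False) -> list[str]:
    """Illustrative tier filter: each flag unlocks one capability tier."""
    tiers = {"read"}  # read-only is the default tier
    if allow_write:
        tiers.add("write")
    if allow_shell or unsafe:  # assumption: --unsafe also implies shell access
        tiers.add("shell")
    return sorted(name for name, tier in CORE_TOOLS.items() if tier in tiers)


print(allowed_tools())  # read-only default: list/read/glob/grep only
print(allowed_tools(allow_write=True, allow_shell=True))
```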

🔎 Parity Status

The full implementation checklist tracking parity against the npm src lives in PARITY_CHECKLIST.md.

It covers: core runtime, CLI modes, prompt assembly, context/memory, slash commands, tools, permissions, plugins, MCP, REPL/TUI, remote features, editor integrations, and internal subsystems.


⚠️ Disclaimer

  • This repository is a Python reimplementation inspired by the Claude Code npm architecture.
  • It does not ship the original npm source.
  • It is not affiliated with or endorsed by Anthropic.

Built with 🐍 Python · Powered by 🐉 HarnessLab Team.
