Commit a901f34

chore(sync): merge upstream/main (42 commits) into our main

Adopts upstream's full LLM client refactor (PR plastic-labs#459: src/utils/clients.py deleted in favor of the new src/llm/ package with per-backend handlers, ConfiguredModelSettings, and ModelTransport). Conflict resolutions were taken upstream-side via -X theirs, and our customizations are re-applied adjacent to the new structure rather than as parallel forks.

Notable upstream changes pulled in:
- LLM client refactor: src/llm/{api,backend,executor,runtime,tool_loop,...} with src/llm/backends/{anthropic,gemini,openai}.py
- ConfiguredModelSettings + ModelConfig replace per-component model fields
- New honcho-cli package; Zo Computer / Paperclip / SillyTavern / opencode integration docs
- Surprisal filter format fix (plastic-labs#581) — converged with our 4e7f136
- Many smaller fixes: dreamer thresholds, deriver blank-observation guard, vector sync retry budget, embed() string-input fix, etc.

Adjacent re-applications (deployment-critical):
- src/config.py: re-add LLM.CF_GATEWAY_AUTH_TOKEN
- src/llm/registry.py: inject cf-aig-authorization header in get_*_override_client factories when base_url targets a CF gateway
- src/embedding_client.py: same header injection on openai/gemini branches

Dropped (now redundant or replaceable):
- Per-specialist DEDUCTION_PROVIDER / INDUCTION_PROVIDER / *_THINKING_BUDGET_TOKENS overrides — covered by upstream's DREAM_DEDUCTION_MODEL_CONFIG__TRANSPORT etc. env vars
- get_provider() / get_thinking_budget() methods on BaseSpecialist — superseded by ConfiguredModelSettings on each specialist's MODEL_CONFIG
- src/utils/types.SupportedProviders — replaced by ModelTransport
- Custom Traefik service block in docker-compose.yml.example — Traefik configs remain in docker/traefik/ for users who want to wire it up
- Our 4e7f136 surprisal fix — identical to upstream's plastic-labs#581

Deployment notes for re-keying .env:
- LLM_CF_GATEWAY_API_KEY / LLM_CF_GATEWAY_BASE_URL / LLM_OPENAI_BASE_URL / LLM_OPENAI_COMPATIBLE_* / LLM_VLLM_* are no-ops now (extra='ignore'). Use per-component MODEL_CONFIG__BASE_URL / MODEL_CONFIG__API_KEY env vars (e.g. DREAM_DEDUCTION_MODEL_CONFIG__BASE_URL=...).
- LLM_CF_GATEWAY_AUTH_TOKEN remains as the single global needed for the cf-aig-authorization header.

Verification: ruff check src/ ✓, basedpyright src/ ✓ (0 errors).
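Under the re-keying notes above, a migrated `.env` might look like the sketch below. The variable names come from the commit message; the URL, key, and token values are hypothetical placeholders:

```bash
# No-ops after this merge (silently ignored via extra='ignore'):
#   LLM_CF_GATEWAY_API_KEY, LLM_CF_GATEWAY_BASE_URL, LLM_OPENAI_BASE_URL,
#   LLM_OPENAI_COMPATIBLE_*, LLM_VLLM_*

# Per-component model config (placeholder values)
DREAM_DEDUCTION_MODEL_CONFIG__BASE_URL=https://gateway.example.com/v1
DREAM_DEDUCTION_MODEL_CONFIG__API_KEY=sk-placeholder

# Single remaining global, consumed for the cf-aig-authorization header
LLM_CF_GATEWAY_AUTH_TOKEN=placeholder-token
```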
2 parents 2e237eb + f37338b commit a901f34

178 files changed

Lines changed: 20209 additions & 6098 deletions

.claude/skills/honcho-cli/SKILL.md

Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@

---
name: honcho-cli
description: Inspect and debug Honcho workspaces via the `honcho` CLI. Use when investigating peer representations, memory state, session context, queue status, or dialectic quality — any task that requires introspection of a Honcho deployment.
allowed-tools: Bash(honcho:*), Bash(jq:*), Read, Grep
---

# Honcho CLI

`honcho` wraps the Honcho Python SDK with agent-friendly defaults: JSON output, structured errors, input validation. Use it to inspect workspace state, debug peer memory, and diagnose the dialectic.

## Output & config

- **TTY**: human-readable tables (default when interactive)
- **Piped / `--json`**: JSON — collection commands emit arrays, single-resource commands emit objects
- **Exit codes**: `0` success · `1` client error (bad input, not found) · `2` server error · `3` auth error
- **Config**: `~/.honcho/config.json` (shared with other Honcho tools). The CLI owns `apiKey` and `environmentUrl` at the top level; run `honcho init` to confirm or set them. Per-command scope (workspace / peer / session) is set via `-w` / `-p` / `-s` flags or `HONCHO_*` env vars.
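The two `--json` shapes above can be consumed like the sketch below; the sample JSON literals are hypothetical stand-ins for live `honcho` output:

```bash
# Collection commands emit arrays; iterate with .[]
echo '[{"id":"alice"},{"id":"bob"}]' | jq -r '.[].id'
# → alice
# → bob

# Single-resource commands emit objects; index fields directly
echo '{"id":"alice"}' | jq -r '.id'
# → alice
```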
## Command groups

- `honcho config` — CLI configuration
- `honcho workspace` — inspect, delete, search
- `honcho peer` — inspect, card, chat, search
- `honcho session` — inspect, messages, context, summaries
- `honcho message` — list and get
- `honcho conclusion` — list, search, create, delete

## Rules

- Always pass `--json` when processing output programmatically.
- Run `honcho peer inspect` before `honcho peer chat` to understand context.
- Use `honcho session context` to see exactly what an agent receives.
- Never run `honcho workspace delete` without `honcho workspace inspect` first.
- Check queue status when derivation seems stalled.
- Compare peer card with conclusions to understand memory state.

## Inspection tour

When orienting to a Honcho deployment, walk outside-in:

### 1. Understand the workspace

```bash
honcho workspace inspect --json
```

### 2. Find the peer

```bash
honcho peer list --json
honcho peer inspect <peer_id> --json
```

### 3. Check peer's memory

```bash
honcho peer card <peer_id> --json
honcho conclusion list --observer <peer_id> --json
honcho conclusion search "topic" --observer <peer_id> --json
```

### 4. Debug a session

```bash
honcho session inspect <session_id> --json
honcho message list <session_id> --last 20 --json
honcho session context <session_id> --json
honcho session summaries <session_id> --json
```

### 5. Search across workspace

```bash
honcho workspace search "query" --json
honcho peer search <peer_id> "query" --json
```

## Debugging playbook

### Peer not learning?

```bash
# Is observation enabled?
honcho peer inspect <peer_id> --json | jq '.configuration'

# Is the deriver queue processing messages?
honcho workspace queue-status --json

# What conclusions exist?
honcho conclusion list --observer <peer_id> --json
honcho conclusion search "expected topic" --observer <peer_id> --json
```
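A quick sanity check while working this playbook is to count what a collection command returns; since `--json` collection output is a plain array, `jq 'length'` gives the total. The sample JSON below is a hypothetical stand-in for `honcho conclusion list --observer <peer_id> --json`:

```bash
# Count conclusions for a peer (collection commands emit arrays, so `length` works)
echo '[{"content":"prefers jazz"},{"content":"based in Berlin"}]' | jq 'length'
# → 2
```

Zero here, combined with a healthy queue status, points at observation being disabled rather than a stalled deriver.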
92+
93+
### Session context looks wrong?
94+
95+
```bash
96+
# Raw context an agent would receive
97+
honcho session context <session_id> --json
98+
99+
# Summaries feeding the context
100+
honcho session summaries <session_id> --json
101+
102+
# Recent message history
103+
honcho message list <session_id> --last 50 --json
104+
```
105+
106+
### Dialectic giving bad answers?
107+
108+
```bash
109+
# What the peer card says
110+
honcho peer card <peer_id> --json
111+
112+
# Conclusions on the specific topic
113+
honcho conclusion search "topic" --observer <peer_id> --json
114+
115+
# Exercise the dialectic directly
116+
honcho peer chat <peer_id> "what do you know about X?" --json
117+
```

.claude/skills/honcho-integration/SKILL.md

Lines changed: 14 additions & 0 deletions
@@ -91,6 +91,8 @@ Based on interview responses, implement the integration:

### Phase 4: Verification

- If the Honcho CLI is available, run `honcho doctor` to confirm connectivity before testing the integration code
- Use `honcho peer list` and `honcho peer chat` to verify peers exist and the dialectic endpoint works independently of the integration
- Ensure all message exchanges are stored to Honcho
- Verify AI peers have `observe_me=False` (unless user specifically wants AI observation)
- Check that the workspace ID is consistent across the codebase
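The first two verification bullets can be scripted. The jq filter below assumes the documented array output of `honcho peer list --json`; the sample JSON is a hypothetical stand-in for a live deployment:

```bash
# Confirm the expected peers exist before testing integration code.
# Live form: honcho peer list --json | jq -r '.[].id'
echo '[{"id":"user-peer"},{"id":"assistant-peer"}]' | jq -r '.[].id'
# → user-peer
# → assistant-peer
```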
@@ -106,6 +108,16 @@

2. **Get an API key**: ask the user to get a Honcho API key from <https://app.honcho.dev> and add it to the environment.

3. **Verify with the CLI** (optional but recommended). If the user has the Honcho CLI installed (`pip install honcho-cli`), they can validate their setup before writing any integration code:

   ```bash
   honcho init    # persist API key + URL to ~/.honcho/config.json
   honcho doctor  # verify connectivity, config, workspace health
   honcho peer chat  # test the dialectic endpoint interactively
   ```

   This is the fastest way to confirm the API key and URL are correct before debugging SDK code.
## Installation

### Python (use uv)
@@ -524,6 +536,8 @@ When integrating Honcho into an existing codebase:

- [ ] Pre-fetch pattern for simpler integrations
- [ ] context() for conversation history
- [ ] Store messages after each exchange to build user models
- [ ] (Optional) Run `honcho doctor` to verify connectivity before testing integration code
- [ ] (Optional) Use `honcho peer chat` to test dialectic queries independently

## Common Mistakes to Avoid
