
Commit 24ed80f

feat: article
1 parent a291bd5 commit 24ed80f

File tree

5 files changed (+937, −3 lines)


.knowledge/drafts/the-anatomy-of-ai-agents-v2-draft.md

Lines changed: 4 additions & 2 deletions
```diff
@@ -58,9 +58,11 @@ But "plan mode" in most tools is just a prompt. There's no enforcement. The agen
 
 This matters because a plan only works if it's actually followed. If the agent can deviate mid-execution—if "plan mode" and "build mode" are just prompts with different names—the plan becomes advisory. And advisory plans get ignored.
 
-The second problem is structural: there's no artifact that passes from plan to build. The plan lives in context. By the time build mode starts, the plan is mixed in with everything else the agent said. Which file was the plan? Which changes were approved? The agent has to re-read the conversation to remember. Context saturation accelerates.
+The second problem is structural: there's no artifact that passes from plan to build. The plan lives in the context. By the time build mode starts, the plan is mixed in with everything else the agent said. Which file was the plan? Which changes were approved? The agent has to re-read the conversation to remember. Context saturation accelerates.
 
-**The plan is a map. But terrain changes. The agent needs a compass, not just a destination.** Context—the information the agent carries—is that compass. And context has limits. After extended work, those limits become visible.
+The plan is a map. But terrain changes. The agent needs a compass, not just a destination.
+
+Context—the information the agent carries—is that compass. And context has limits. After extended work, those limits become visible.
 
 ## Symptom Three: Context Saturation
```

Lines changed: 222 additions & 0 deletions
@@ -0,0 +1,222 @@
# Article Brainstorm: Agent Architectures

**Created:** 2026-04-01
**Target:** blog.apiad.net
**Status:** Brainstorming

---

## Core Thesis

The article argues for a principled taxonomy of AI agent components:

- **Modes** (Primary Agents): Define the AI's *persona* and *permissions* — implicit, always active
- **Skills**: Extend capabilities with *domain knowledge* — implicit, activated by context
- **Commands**: Trigger *workflows* — explicit, user-initiated
- **Subagents**: Handle *background delegation* — implicit, spawned by commands

Key insight: **The distinction between implicit (automatic) and explicit (requested) is the fundamental design decision.**

---

## Target Audience

- Developers using AI coding agents (Claude Code, Gemini CLI, Copilot, etc.)
- AI practitioners building agentic systems
- Technical writers wanting to leverage AI effectively

**Assumed knowledge:** Basic familiarity with AI agents, prompts

---

## Key Points to Cover

### 1. The Problem: Agent Confusion

Current agents conflate three concerns:

- *Who* the AI is (mode/agent)
- *What* it knows (skills)
- *How* it works (commands)

This leads to brittle prompts that try to do everything at once.

### 2. The Taxonomy (Main Contribution)

#### Primary Agents = Modes

- Define *who* the AI is
- Affect permissions, available tools, thinking style
- Always active, implicitly determined
- Examples: `analyze`, `design`, `create`

#### Skills = Domain Knowledge

- Extend the agent with *implicit* capabilities
- Activated by context, not by user invocation
- Provide specialized knowledge or behavior
- Examples: `literate-commands`, `code-review`, `debugging`

#### Commands = Workflows

- Explicit, user-triggered sequences
- Simple prompts that invoke a structured workflow
- Orchestrate agents, subagents, and skills
- Examples: `/build`, `/research`, `/plan`

#### Subagents = Delegation

- Spawned by commands for background tasks
- Keep the main context clean
- Return summarized results only
- Examples: `scout`, `investigator`, `critic`

### 3. The Key Insight: Implicit vs Explicit

**Skills are implicit:**

- Always available in context
- Activated by relevance, not by request
- "The agent just knows how to do X"

**Commands are explicit:**

- User must invoke with `/command`
- Trigger a defined workflow
- "Do this specific thing, this specific way"

This distinction is crucial because:

- Implicit keeps the agent *aware*
- Explicit keeps the user *in control*
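
The implicit/explicit split can be sketched as a toy dispatcher. This is an illustrative assumption about how such routing could work, not any tool's actual implementation; `commands` and `skills` are hypothetical registries, and keyword matching stands in for real relevance scoring.

```python
def route(user_input: str, commands: dict, skills: dict):
    """Explicit inputs ('/command') trigger a named workflow;
    everything else goes to the agent, with skills activated implicitly."""
    if user_input.startswith("/"):
        name = user_input.split()[0]      # explicit: the user stays in control
        return ("command", commands[name])
    text = user_input.lower()
    active = [skill for skill, keywords in skills.items()
              if any(kw in text for kw in keywords)]
    return ("agent", active)              # implicit: the agent stays aware

commands = {"/build": "feature-workflow"}
skills = {"debugging": ["bug", "traceback"], "code-review": ["review", "diff"]}

print(route("/build auth", commands, skills))        # ('command', 'feature-workflow')
print(route("fix this bug please", commands, skills))  # ('agent', ['debugging'])
```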

### 4. The Command Prompt Limitation

Commands are *simple prompts* — not full agents. They:

- Define *what* to do (the workflow)
- Invoke *who* to do it (agents/subagents)
- But don't embed domain knowledge (that's skills)

This separation allows:

- Commands to be workflow-agnostic
- Skills to be command-agnostic
- Composable, reusable pieces

### 5. Literate Commands: The Next Evolution

Commands that are Markdown files with YAML frontmatter:

- Self-documenting
- Declarative configuration
- Step-by-step execution with variable substitution
- Conditional logic for branching

Example:

```yaml
---
name: feature-workflow
variables:
  - name: feature
    type: string
    prompt: "Feature name?"
---
# Feature Implementation

## Step 1: Create Branch
git checkout -b feature/${{feature}}
```
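
A minimal sketch of how such a file could be rendered before execution. The split-and-substitute logic here is an assumption about the format (frontmatter between the first two `---` delimiters, `${{name}}` placeholders in the body), not a description of an existing implementation; `render_literate_command` is a hypothetical helper.

```python
import re

def render_literate_command(text: str, answers: dict) -> str:
    """Split a literate command file into YAML frontmatter and body,
    then fill each ${{name}} placeholder from the user's answers."""
    # The frontmatter sits between the first two '---' delimiters.
    _, frontmatter, body = text.split("---", 2)
    # Replace every ${{name}} placeholder in the body.
    return re.sub(r"\$\{\{(\w+)\}\}",
                  lambda m: str(answers[m.group(1)]),
                  body)

doc = """---
name: feature-workflow
---
## Step 1: Create Branch
git checkout -b feature/${{feature}}
"""
print(render_literate_command(doc, {"feature": "dark-mode"}))
```

A real runner would also parse the frontmatter to prompt for each declared variable; here the answers are passed in directly.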

### 6. The OpenCode Example

Present the full system as a case study:

- 3 modes: `analyze`, `design`, `create`
- 5 subagents: `scout`, `investigator`, `critic`, `tester`, `drafter`
- 1 skill: `literate-commands`
- 19+ commands orchestrating everything

Show how they compose:

- `/build` invokes `create` agent + `tester` subagent
- `/plan` invokes `design` agent + `investigator` subagent
- `/research` invokes `analyze` agent + `scout` subagent
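
The compositions above can be read as a simple lookup table. This is an illustrative sketch of the wiring, not OpenCode's actual internals:

```python
# Each command names the primary agent it runs as and the subagent it spawns,
# mirroring the three compositions listed above.
COMMANDS = {
    "/build":    ("create",  "tester"),
    "/plan":     ("design",  "investigator"),
    "/research": ("analyze", "scout"),
}

def dispatch(command: str) -> str:
    agent, subagent = COMMANDS[command]
    return f"{command}: {agent} agent + {subagent} subagent"

print(dispatch("/research"))  # /research: analyze agent + scout subagent
```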

---

## Potential Structure

### Option A: Conceptual First

1. Hook: "Everyone talks about AI agents, but nobody explains the *parts*"
2. The taxonomy (the four components)
3. The key insight (implicit vs explicit)
4. Why commands are simple prompts
5. Literate commands as evolution
6. Case study: OpenCode
7. Takeaways

### Option B: Problem-Solution

1. Hook: "Your AI agent does too much. Here's why."
2. The problem: monolithic agents
3. The solution: principled separation
4. The taxonomy explained
5. Literate commands demo
6. OpenCode implementation
7. Conclusion: build your own

### Option C: Tutorial-Style

1. Hook: "I built an agent framework. Let me show you the architecture."
2. Start with agents/modes
3. Add skills
4. Add commands
5. Add subagents
6. Put it together: OpenCode
7. The literate commands innovation
8. How to build your own

---

## Opening Hook Options

1. **Provocative:** "You think you're prompting an AI agent. You're not. You're designing a system."
2. **Confessional:** "I've built three different agent frameworks. Here's what I learned about the one thing everyone gets wrong."
3. **Question:** "What's the difference between a skill, a command, and an agent? If you don't know, keep reading."
4. **Statement:** "The biggest mistake in AI agent design is treating everything as a prompt."

---

## Closing Takeaways

1. **Modes define persona, skills extend knowledge, commands trigger workflows, subagents delegate**
2. **Implicit vs explicit is the fundamental design decision**
3. **Commands should be simple prompts, not knowledge dumps**
4. **Literate commands are the next evolution: self-documenting, declarative workflows**
5. **Composable systems beat monolithic prompts**

---

## Tone and Style

Based on previous blog posts:

- Direct address ("you")
- Short paragraphs, punchy sentences
- Clear opinions with justification
- Real-world examples
- Philosophical framing + practical implementation
- ~1500-2500 words

---

## Visual Elements to Consider

- Architecture diagram showing the four components
- Flowchart: how a command orchestrates agents/subagents
- Example literate command file
- OpenCode directory structure

---

## Related Posts

- "How I'm Using AI Today" (Mar 2026) - Previous article on the system
- "How to Train your Chatbot" series - Building agents from scratch

---

## Next Steps

1. [ ] Pick structure option
2. [ ] Draft opening hook
3. [ ] Write first draft
4. [ ] Add diagrams/visuals
5. [ ] Review against writing style guide
