Commit 89bf29f

feat: add update implementation workflow for plot modifications
- Introduced a local interactive workflow for updating plot implementations
- Added phases for parsing arguments, creating agent teams, and collecting feedback
- Implemented validation checks for existing specifications and libraries
- Defined steps for shipping updates and handling metadata
1 parent 1eb6d27 commit 89bf29f

1 file changed (399 additions & 0 deletions):
agentic/commands/update.md

# Update Implementation

> Local interactive workflow for updating plot implementations. Spawns per-library Opus agents that modify, regenerate,
> and preview plots in parallel. The lead coordinates iteration based on user feedback and handles shipping (GCS upload,
> git branch, PR, review trigger).

## Context

@CLAUDE.md
@pyproject.toml

## Instructions

You are the **update-lead**. Your job is to coordinate a team of per-library updater agents, present results to the
user, iterate on feedback, and ship the final changes.

**Prerequisite**: This command uses agent teams (experimental). Ensure `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` is set
in your environment or Claude Code settings.

---
### Phase 1: Parse & Setup

Parse `$ARGUMENTS` using this format:

```
/update {spec-id} [{libraries}] [{description}]
/update {issue-url-or-number}
```

**Argument parsing rules:**

1. **Issue URL or `#N` number**: If the first argument is a GitHub URL containing `/issues/` or starts with `#`, run
   `gh issue view {number} --json title,body,labels` to extract:
   - `spec_id`: from the issue body or labels (look for the `spec:` label prefix or a spec-id mention)
   - `libraries`: from the issue body or labels (look for the `impl:` label prefix or library mentions)
   - `description`: from the issue title and body
2. **Normal arguments**: The first arg is `spec_id`. The second arg (if it is a comma-separated list of known
   libraries) is the library filter. Everything after is the `description`.
3. **Known libraries**: `matplotlib`, `seaborn`, `plotly`, `bokeh`, `altair`, `plotnine`, `pygal`, `highcharts`,
   `letsplot`
4. **Default libraries**: If no libraries are specified, scan `plots/{spec_id}/implementations/` for existing `*.py`
   files (excluding `__init__.py`) and update all that exist.
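The four rules above can be sketched in Python. This is a minimal illustration, not code from the repo; the helper name `parse_update_args` and the returned dict shape are assumptions:

```python
from pathlib import Path

KNOWN_LIBRARIES = {"matplotlib", "seaborn", "plotly", "bokeh", "altair",
                   "plotnine", "pygal", "highcharts", "letsplot"}

def parse_update_args(args: list[str], plots_root: str = "plots") -> dict:
    """Split /update arguments into spec_id, libraries, and description."""
    first = args[0]
    if "/issues/" in first or first.startswith("#"):
        # Issue mode: spec_id/libraries/description are extracted via `gh issue view`
        return {"issue": first.lstrip("#"), "spec_id": None,
                "libraries": [], "description": ""}
    spec_id = first
    rest = args[1:]
    libraries: list[str] = []
    # The second arg is a library filter only if every entry is a known library
    if rest and all(lib in KNOWN_LIBRARIES for lib in rest[0].split(",")):
        libraries = rest[0].split(",")
        rest = rest[1:]
    if not libraries:
        # Default: every existing implementation except __init__.py
        impl_dir = Path(plots_root) / spec_id / "implementations"
        libraries = sorted(p.stem for p in impl_dir.glob("*.py")
                           if p.name != "__init__.py")
    return {"issue": None, "spec_id": spec_id,
            "libraries": libraries, "description": " ".join(rest)}
```

For example, `parse_update_args(["area-basic", "matplotlib,seaborn", "fix", "overlap"])` selects the two named libraries and treats the remaining words as the description.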
**Validation:**

- Confirm `plots/{spec_id}/` exists (abort with a helpful message if not)
- Confirm `plots/{spec_id}/specification.md` exists
- Confirm at least one implementation exists for the requested libraries
- List which libraries will be updated and ask the user to confirm before proceeding
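The path checks can be sketched with `pathlib`; the function name `validate_spec` and the exact error messages are illustrative, not from the repo:

```python
from pathlib import Path

def validate_spec(spec_id: str, libraries: list[str],
                  plots_root: str = "plots") -> list[str]:
    """Return the requested libraries that actually have an implementation,
    aborting with a helpful message when the spec layout is missing."""
    spec_dir = Path(plots_root) / spec_id
    if not spec_dir.is_dir():
        raise SystemExit(f"No such spec directory: {spec_dir}")
    if not (spec_dir / "specification.md").is_file():
        raise SystemExit(f"Missing specification: {spec_dir / 'specification.md'}")
    present = [lib for lib in libraries
               if (spec_dir / "implementations" / f"{lib}.py").is_file()]
    if not present:
        raise SystemExit(f"No implementations found for: {', '.join(libraries)}")
    return present
```

Returning the filtered list lets the lead show the user exactly which libraries will be updated before asking for confirmation.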
**Read the specification once**: Read `plots/{spec_id}/specification.md` — you'll pass context to the agents.

---
55+
56+
### Phase 2: Create Team & Spawn Agents
57+
58+
1. **Create team**: `TeamCreate` with name `update-{spec_id}`
59+
60+
2. **Create one task per library**: Use `TaskCreate` with:
61+
- Subject: `Update {library} implementation for {spec_id}`
62+
- Description: Include spec_id, library, and the user's description
63+
64+
3. **Spawn one `general-purpose` opus agent per library** via `Task` tool with:
65+
- `team_name`: `update-{spec_id}`
66+
- `name`: `{library}-updater`
67+
- `subagent_type`: `general-purpose`
68+
- `model`: `opus`
69+
- The **library-updater prompt** (see below), with `{SPEC_ID}`, `{LIBRARY}`, and `{DESCRIPTION}` filled in
70+
71+
4. **Assign tasks** to the corresponding agents via `TaskUpdate`
72+
73+
All agents run in parallel — each only touches its own library's files.
74+
75+
---
76+
77+
### Phase 3: Collect & Present
78+
79+
Agents report back via `SendMessage` (auto-delivered to you). Once all agents have reported:
80+
81+
1. **Present a summary to the user** for each library:
82+
- What was changed (bullet points from agent)
83+
- Local preview image path: `plots/{spec_id}/implementations/.update-preview/{library}/plot.png`
84+
- Agent's self-assessment score
85+
- Any spec changes the agent made
86+
87+
2. **Ask the user for feedback.** They can:
88+
- Give per-library feedback (e.g., "matplotlib looks good, seaborn needs more contrast")
89+
- Say **"ship"**, **"ok"**, **"looks good"**, or **"passt"** to proceed to shipping
90+
- Say **"abort"** to cancel everything
91+
92+
---
93+
94+
### Phase 4: Iterate
95+
96+
For per-library feedback:
97+
98+
1. Send the feedback to the specific idle teammate via `SendMessage` (e.g., to `seaborn-updater`). This wakes them up.
99+
2. The agent re-modifies, re-generates, reports back, and goes idle again.
100+
3. Present updated results to the user.
101+
4. Repeat until the user approves.
102+
103+
---
104+
105+
### Phase 5: Ship
106+
107+
**Only proceed when the user explicitly approves shipping.**
108+
109+
The lead handles all shipping directly (no delegation to teammates):
110+
111+
#### 5a. Code Quality
112+
113+
```bash
114+
uv run ruff format plots/{spec_id}/implementations/*.py
115+
uv run ruff check --fix plots/{spec_id}/implementations/*.py
116+
```
117+
118+
#### 5b. Update Metadata YAML
119+
120+
For each updated library, edit `plots/{spec_id}/metadata/{library}.yaml`:
121+
122+
| Field | Value |
123+
|-------------------|---------------------------------------------------------------------------|
124+
| `updated` | Current UTC timestamp in ISO 8601 (e.g., `2026-02-10T14:30:00+00:00`) |
125+
| `generated_by` | `claude-opus-4-6` |
126+
| `python_version` | Get from `python3 --version` |
127+
| `library_version` | Get from `python3 -c "import {library}; print({library}.__version__)"` |
128+
| `quality_score` | Set to `null` (CI review will fill this) |
129+
| All other fields | **Keep unchanged** (including `review`, `impl_tags`, `preview_url`, etc.) |
130+
131+
#### 5c. Update Implementation Header
132+
133+
For each updated library, ensure the implementation file starts with:
134+
135+
```python
136+
""" pyplots.ai
137+
{spec_id}: {Title from specification}
138+
Library: {library} {lib_version} | Python {py_version}
139+
Quality: /100 | Updated: {YYYY-MM-DD}
140+
"""
141+
```
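Rendering that header can be sketched as below; `render_header` is a hypothetical helper, and the blank `Quality: /100` slot is intentional (CI fills the score later):

```python
def render_header(spec_id, title, library, lib_version, py_version, updated):
    """Build the standard pyplots.ai implementation docstring header."""
    return (
        '""" pyplots.ai\n'
        f"{spec_id}: {title}\n"
        f"Library: {library} {lib_version} | Python {py_version}\n"
        f"Quality: /100 | Updated: {updated}\n"
        '"""\n'
    )
```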
142+
143+
#### 5d. Copy Final Images
144+
145+
For each library, copy the preview images to the implementations directory for GCS upload:
146+
147+
```bash
148+
cp plots/{spec_id}/implementations/.update-preview/{library}/plot.png plots/{spec_id}/implementations/plot.png
149+
# Process images (thumbnail + optimization)
150+
uv run python -m core.images process \
151+
plots/{spec_id}/implementations/plot.png \
152+
plots/{spec_id}/implementations/plot.png \
153+
plots/{spec_id}/implementations/plot_thumb.png
154+
```
155+
156+
Note: Since we process one library at a time for GCS upload, handle sequentially.
157+
158+
#### 5e. GCS Staging Upload
159+
160+
For each library:
161+
162+
```bash
163+
STAGING_PATH="gs://pyplots-images/staging/{spec_id}/{library}"
164+
165+
# Upload PNG
166+
gsutil cp plots/{spec_id}/implementations/plot.png "${STAGING_PATH}/plot.png"
167+
gsutil acl ch -u AllUsers:R "${STAGING_PATH}/plot.png" 2>/dev/null || true
168+
169+
# Upload thumbnail
170+
gsutil cp plots/{spec_id}/implementations/plot_thumb.png "${STAGING_PATH}/plot_thumb.png"
171+
gsutil acl ch -u AllUsers:R "${STAGING_PATH}/plot_thumb.png" 2>/dev/null || true
172+
173+
# Upload HTML if it exists (interactive libraries: plotly, bokeh, altair, highcharts, pygal, letsplot)
174+
if [ -f "plots/{spec_id}/implementations/.update-preview/{library}/plot.html" ]; then
175+
gsutil cp "plots/{spec_id}/implementations/.update-preview/{library}/plot.html" "${STAGING_PATH}/plot.html"
176+
gsutil acl ch -u AllUsers:R "${STAGING_PATH}/plot.html" 2>/dev/null || true
177+
fi
178+
```
179+
180+
Update `preview_url` and `preview_thumb` in the metadata YAML to point to the staging URLs:
181+
182+
- `preview_url`: `https://storage.googleapis.com/pyplots-images/staging/{spec_id}/{library}/plot.png`
183+
- `preview_thumb`: `https://storage.googleapis.com/pyplots-images/staging/{spec_id}/{library}/plot_thumb.png`
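The mapping from the `gs://` bucket path to its public URL is mechanical. A sketch (the helper name `staging_urls` is an assumption):

```python
def staging_urls(spec_id: str, library: str) -> dict:
    """Derive the public staging URLs that go into the metadata YAML."""
    base = f"https://storage.googleapis.com/pyplots-images/staging/{spec_id}/{library}"
    return {
        "preview_url": f"{base}/plot.png",
        "preview_thumb": f"{base}/plot_thumb.png",
    }
```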
184+
185+
#### 5f. Clean Up Preview Directory
186+
187+
```bash
188+
rm -rf plots/{spec_id}/implementations/.update-preview
189+
```
190+
191+
#### 5g. Git Branch & PR
192+
193+
```bash
194+
# Create branch
195+
git checkout -b implementation/{spec_id}/update
196+
197+
# Stage only the changed files (NO images — those are in GCS)
198+
git add plots/{spec_id}/implementations/*.py
199+
git add plots/{spec_id}/metadata/*.yaml
200+
# If spec was changed:
201+
git add plots/{spec_id}/specification.md
202+
203+
# Commit
204+
git commit -m "update({spec_id}): {short description}
205+
206+
Updated libraries: {comma-separated list}
207+
{description}
208+
209+
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>"
210+
211+
# Push
212+
git push -u origin implementation/{spec_id}/update
213+
```
214+
215+
#### 5h. Create PR
216+
217+
```bash
218+
gh pr create \
219+
--title "update({spec_id}): {short description}" \
220+
--body "$(cat <<'EOF'
221+
## Summary
222+
223+
Updated implementation(s) for **{spec_id}**.
224+
225+
**Libraries:** {comma-separated list}
226+
**Changes:** {description}
227+
228+
### Per-Library Changes
229+
230+
{For each library:}
231+
#### {library}
232+
{bullet points of changes from agent}
233+
- Quality self-assessment: {score}/100
234+
235+
## Test Plan
236+
237+
- [ ] Preview images uploaded to GCS staging
238+
- [ ] Implementation files pass ruff format/check
239+
- [ ] Metadata YAML updated with current versions
240+
- [ ] Automated review triggered
241+
242+
---
243+
Generated with [Claude Code](https://claude.com/claude-code) `/update` command
244+
EOF
245+
)"
246+
```
247+
248+
#### 5i. Trigger Review
249+
250+
```bash
251+
PR_NUMBER=$(gh pr view --json number -q '.number')
252+
gh api repos/{owner}/{repo}/dispatches \
253+
-f event_type=review-pr \
254+
-f 'client_payload[pr_number]='"$PR_NUMBER"
255+
```
256+
257+
Get `{owner}/{repo}` from `git remote get-url origin`.
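Reducing the remote URL to `owner/repo` can be sketched as below; `owner_repo` is a hypothetical helper covering both HTTPS and SSH remotes:

```python
import re

def owner_repo(remote_url: str) -> str:
    """Reduce a git remote URL to 'owner/repo' for the gh api call."""
    # Handles https://github.com/owner/repo(.git) and git@github.com:owner/repo(.git)
    m = re.search(r"github\.com[:/]([^/]+)/([^/]+?)(?:\.git)?/?$",
                  remote_url.strip())
    if not m:
        raise ValueError(f"Unrecognized remote: {remote_url}")
    return f"{m.group(1)}/{m.group(2)}"
```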
258+
259+
#### 5j. Cleanup Team
260+
261+
1. `SendMessage` with type `shutdown_request` to all agents
262+
2. `TeamDelete` to clean up the team
263+
3. Report the PR URL to the user
264+
265+
---
266+
267+
## Library-Updater Agent Prompt
268+
269+
Use this prompt when spawning each per-library agent. Replace `{SPEC_ID}`, `{LIBRARY}`, and `{DESCRIPTION}` with actual
270+
values.
271+
272+
---
273+
274+
You are the **{LIBRARY}-updater** on the `update-{SPEC_ID}` team. Your job is to update the {LIBRARY} implementation for
275+
**{SPEC_ID}**.
276+
277+
**Task:** {DESCRIPTION}
278+
279+
### Step 1: Read Context
280+
281+
Read these files to understand what you're working with:
282+
283+
1. `plots/{SPEC_ID}/specification.md` — the specification (what the plot should show)
284+
2. `plots/{SPEC_ID}/implementations/{LIBRARY}.py` — current implementation to update
285+
3. `plots/{SPEC_ID}/metadata/{LIBRARY}.yaml` — review feedback from last review:
286+
- `review.strengths` — PRESERVE these (don't break what works)
287+
- `review.weaknesses` — FIX these
288+
- `review.criteria_checklist` — items with `passed: false` need fixing
289+
- `quality_score` — current score to beat
290+
4. `prompts/library/{LIBRARY}.md` — library-specific rules (**CRITICAL**: follow these exactly)
291+
5. `prompts/plot-generator.md` — base generation rules
292+
6. `prompts/quality-criteria.md` — quality scoring criteria
293+
294+
If `preview_url` exists in the metadata, view the current preview image to understand what the plot currently looks
295+
like.
296+
297+
### Step 2: Modify Implementation
298+
299+
Edit `plots/{SPEC_ID}/implementations/{LIBRARY}.py`:
300+
301+
- Follow all rules from `prompts/plot-generator.md` and `prompts/library/{LIBRARY}.md`
302+
- KISS structure: imports → data → plot → save
303+
- Preserve review strengths, fix weaknesses
304+
- Address the user's specific request: **{DESCRIPTION}**
305+
- If no specific request was given, focus on fixing review weaknesses and improving quality score
306+
307+
If the specification itself needs changes to make the plot better, also edit `plots/{SPEC_ID}/specification.md` and
308+
explain what you changed and why.
309+
310+
### Step 3: Generate Locally
311+
312+
Run the implementation to generate the plot image:
313+
314+
```bash
315+
mkdir -p plots/{SPEC_ID}/implementations/.update-preview/{LIBRARY}
316+
cd plots/{SPEC_ID}/implementations && MPLBACKEND=Agg python3 {LIBRARY}.py
317+
cp plot.png .update-preview/{LIBRARY}/plot.png
318+
```
319+
320+
For interactive libraries (plotly, bokeh, altair, highcharts, pygal, letsplot), also copy `plot.html` if generated:
321+
322+
```bash
323+
[ -f plot.html ] && cp plot.html .update-preview/{LIBRARY}/plot.html
324+
```
325+
326+
If the script fails, read the error, fix the implementation, and retry. **Up to 3 retries.**
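The run-and-retry loop can be sketched as below. `run_with_retries` is a hypothetical helper; in the real workflow the agent edits the script between attempts rather than blindly re-running it:

```python
import subprocess
import sys

def run_with_retries(cmd: list[str], cwd=None, max_attempts: int = 3):
    """Run a plot script, retrying up to max_attempts times on failure."""
    result = None
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
        if result.returncode == 0:
            return result
        # Here the agent would read result.stderr and fix the script
        # before the next attempt.
    return result  # last failing result after max_attempts

# Example with the current interpreter standing in for `python3 {LIBRARY}.py`
result = run_with_retries([sys.executable, "-c", "print('plot saved')"])
```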
327+
328+
### Step 4: Process Images
329+
330+
Generate thumbnail and optimize:
331+
332+
```bash
333+
uv run python -m core.images process \
334+
plots/{SPEC_ID}/implementations/.update-preview/{LIBRARY}/plot.png \
335+
plots/{SPEC_ID}/implementations/.update-preview/{LIBRARY}/plot.png \
336+
plots/{SPEC_ID}/implementations/.update-preview/{LIBRARY}/plot_thumb.png
337+
```
338+
339+
### Step 5: Self-Check
340+
341+
View the generated image at `plots/{SPEC_ID}/implementations/.update-preview/{LIBRARY}/plot.png`.
342+
343+
Check against the quality criteria from `prompts/quality-criteria.md`:
344+
345+
- Text legibility (title 24pt, labels 20pt, ticks 16pt)
346+
- No overlapping elements
347+
- Elements visible and distinguishable
348+
- Color accessibility
349+
- Layout balance (16:9)
350+
- Correct axis labels with units
351+
- Spec compliance
352+
353+
Fix any obvious issues before reporting.
354+
355+
### Step 6: Report to Lead
356+
357+
Send a message to `update-lead` via `SendMessage` with:
358+
359+
```
360+
LIBRARY: {LIBRARY}
361+
STATUS: done
362+
363+
CHANGES:
364+
- {bullet point 1}
365+
- {bullet point 2}
366+
- ...
367+
368+
IMAGE: plots/{SPEC_ID}/implementations/.update-preview/{LIBRARY}/plot.png
369+
SELF_SCORE: {your estimated quality score}/100
370+
371+
SPEC_CHANGES: {none, or describe what you changed in specification.md}
372+
373+
ISSUES: {none, or describe any problems encountered}
374+
```
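Filling that template can be sketched as a small formatter (`format_report` is an illustrative name, not a repo helper):

```python
def format_report(library, spec_id, changes, self_score,
                  spec_changes="none", issues="none"):
    """Render the report message sent back to the update-lead."""
    bullets = "\n".join(f"- {c}" for c in changes)
    return (
        f"LIBRARY: {library}\n"
        "STATUS: done\n\n"
        f"CHANGES:\n{bullets}\n\n"
        f"IMAGE: plots/{spec_id}/implementations/.update-preview/{library}/plot.png\n"
        f"SELF_SCORE: {self_score}/100\n\n"
        f"SPEC_CHANGES: {spec_changes}\n\n"
        f"ISSUES: {issues}"
    )
```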
375+
376+
Then mark your task as completed via `TaskUpdate`.
377+
378+
**After reporting, go idle. The lead will wake you if the user has feedback for revisions.**
379+
380+
If the lead sends you feedback, repeat Steps 2-6 with the new instructions.
381+
382+
---
383+
384+
## Usage Examples
385+
386+
```
387+
# Update all existing implementations for a spec (no description)
388+
/update area-basic
389+
390+
# Update single library
391+
/update area-basic matplotlib
392+
393+
# Update specific libraries with description
394+
/update area-basic matplotlib,seaborn fix the axis label overlap
395+
396+
# Update from a GitHub issue
397+
/update #3970
398+
/update https://github.com/MarkusNeusinger/pyplots/issues/3970
399+
```
