Commit 05f4d67

luiscantero and Copilot authored
Add skill autoresearch (#1062)
* Add skill autoresearch
* Update readme
* Potential fix for pull request finding
* Address bot feedback

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
1 parent e8b1e93 commit 05f4d67

2 files changed

Lines changed: 276 additions & 0 deletions


docs/README.skills.md

Lines changed: 1 addition & 0 deletions
@@ -37,6 +37,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to
| [aspire](../skills/aspire/SKILL.md) | Aspire skill covering the Aspire CLI, AppHost orchestration, service discovery, integrations, MCP server, VS Code extension, Dev Containers, GitHub Codespaces, templates, dashboard, and deployment. Use when the user asks to create, run, debug, configure, deploy, or troubleshoot an Aspire distributed application. | `references/architecture.md`<br />`references/cli-reference.md`<br />`references/dashboard.md`<br />`references/deployment.md`<br />`references/integrations-catalog.md`<br />`references/mcp-server.md`<br />`references/polyglot-apis.md`<br />`references/testing.md`<br />`references/troubleshooting.md` |
| [aspnet-minimal-api-openapi](../skills/aspnet-minimal-api-openapi/SKILL.md) | Create ASP.NET Minimal API endpoints with proper OpenAPI documentation | None |
| [automate-this](../skills/automate-this/SKILL.md) | Analyze a screen recording of a manual process and produce targeted, working automation scripts. Extracts frames and audio narration from video files, reconstructs the step-by-step workflow, and proposes automation at multiple complexity levels using tools already installed on the user machine. | None |
| [autoresearch](../skills/autoresearch/SKILL.md) | Autonomous iterative experimentation loop for any programming task. Guides the user through defining goals, measurable metrics, and scope constraints, then runs an autonomous loop of code changes, testing, measuring, and keeping/discarding results. Inspired by Karpathy's autoresearch. USE FOR: autonomous improvement, iterative optimization, experiment loop, auto research, performance tuning, automated experimentation, hill climbing, try things automatically, optimize code, run experiments, autonomous coding loop. DO NOT USE FOR: one-shot tasks, simple bug fixes, code review, or tasks without a measurable metric. | None |
| [az-cost-optimize](../skills/az-cost-optimize/SKILL.md) | Analyze Azure resources used in the app (IaC files and/or resources in a target rg) and optimize costs - creating GitHub issues for identified optimizations. | None |
| [azure-deployment-preflight](../skills/azure-deployment-preflight/SKILL.md) | Performs comprehensive preflight validation of Bicep deployments to Azure, including template syntax validation, what-if analysis, and permission checks. Use this skill before any deployment to Azure to preview changes, identify potential issues, and ensure the deployment will succeed. Activate when users mention deploying to Azure, validating Bicep files, checking deployment permissions, previewing infrastructure changes, running what-if, or preparing for azd provision. | `references/ERROR-HANDLING.md`<br />`references/REPORT-TEMPLATE.md`<br />`references/VALIDATION-COMMANDS.md` |
| [azure-devops-cli](../skills/azure-devops-cli/SKILL.md) | Manage Azure DevOps resources via CLI including projects, repos, pipelines, builds, pull requests, work items, artifacts, and service endpoints. Use when working with Azure DevOps, az commands, devops automation, CI/CD, or when user mentions Azure DevOps CLI. | `references/advanced-usage.md`<br />`references/boards-and-iterations.md`<br />`references/org-and-security.md`<br />`references/pipelines-and-builds.md`<br />`references/repos-and-prs.md`<br />`references/variables-and-agents.md`<br />`references/workflows-and-patterns.md` |

skills/autoresearch/SKILL.md

Lines changed: 275 additions & 0 deletions
@@ -0,0 +1,275 @@
---
name: autoresearch
description: 'Autonomous iterative experimentation loop for any programming task. Guides the user through defining goals, measurable metrics, and scope constraints, then runs an autonomous loop of code changes, testing, measuring, and keeping/discarding results. Inspired by Karpathy''s autoresearch. USE FOR: autonomous improvement, iterative optimization, experiment loop, auto research, performance tuning, automated experimentation, hill climbing, try things automatically, optimize code, run experiments, autonomous coding loop. DO NOT USE FOR: one-shot tasks, simple bug fixes, code review, or tasks without a measurable metric.'
license: MIT
compatibility: Requires git. The project must be a git repository. Requires terminal access to run commands.
metadata:
  author: luiscantero
  inspired-by: https://github.com/karpathy/autoresearch
---

# Autoresearch: Autonomous Iterative Experimentation

An autonomous experimentation loop for any programming task. You define the goal and how to measure it; the agent iterates autonomously -- modifying code, running experiments, measuring results, and keeping or discarding changes -- until interrupted.

This skill is inspired by [Karpathy's autoresearch](https://github.com/karpathy/autoresearch), generalized from ML training to **any programming task with a measurable outcome**.

---

## Agent Behavior Rules

1. **DO** guide the user through the Setup phase interactively before starting the loop.
2. **DO** establish a baseline measurement before making any changes.
3. **DO** commit every experiment attempt before running it (so it can be reverted cleanly).
4. **DO** keep a results log (TSV) tracking every experiment.
5. **DO** revert changes that do not improve the metric (git reset to last known good).
6. **DO** run autonomously once the loop starts -- never pause to ask "should I continue?".
7. **DO NOT** modify files the user marked as out-of-scope.
8. **DO NOT** skip the measurement step -- every experiment must be measured.
9. **DO NOT** keep changes that regress the metric unless the user explicitly allowed trade-offs.
10. **DO NOT** install new dependencies or make environment changes unless the user approved it.

---

## Phase 1: Setup (Interactive)

Before any experimentation begins, work with the user to establish these parameters.
Ask the user directly for each item. Do not assume or skip any.

### 1.1 Define the Goal

Ask the user:

> **What are you trying to improve or optimize?**
>
> Examples: execution time, memory usage, binary size, test pass rate, code coverage,
> API response latency, throughput, error rate, benchmark score, build time, bundle size,
> lines of code, cyclomatic complexity, etc.

Record the user's answer as the **goal**.

### 1.2 Define the Metric

Ask the user:

> **How do we measure success? What exact command produces the metric?**
>
> I need:
> 1. **The command** to run (e.g., `dotnet test`, `npm run benchmark`, `time ./build.sh`, `pytest --tb=short`)
> 2. **How to extract the metric** from the output (e.g., a regex pattern, a specific line, a JSON field)
> 3. **Direction**: Is lower better or higher better?
>
> Example: "Run `dotnet test --logger trx`, count passing tests. Higher is better."
> Example: "Run `hyperfine './my-program'`, extract mean time. Lower is better."

Record:
- `METRIC_COMMAND`: the command to run
- `METRIC_EXTRACTION`: how to extract the numeric metric from output
- `METRIC_DIRECTION`: `lower_is_better` or `higher_is_better`
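
As a hedged illustration of what `METRIC_EXTRACTION` might look like, the Bash snippet below assumes a hypothetical benchmark whose output contains a line such as `mean_time_ms: 123.4`; the command, file name, and pattern are placeholders, and the real extraction depends entirely on the user's command.

```bash
# Hypothetical extraction: run the metric command, then pull the number after "mean_time_ms:".
npm run benchmark > out.log 2>&1                          # example METRIC_COMMAND
METRIC=$(grep -oE 'mean_time_ms: *[0-9.]+' out.log | awk '{print $2}')
echo "metric=$METRIC"
```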

### 1.3 Define the Scope

Ask the user:

> **Which files or directories am I allowed to modify?**
>
> And which files are OFF LIMITS (read-only)?

Record:
- `IN_SCOPE_FILES`: files/dirs the agent may edit
- `OUT_OF_SCOPE_FILES`: files/dirs that must not be modified

### 1.4 Define Constraints

Ask the user:

> **Are there any constraints I should respect?**
>
> Examples:
> - Time budget per experiment (e.g., "each run should take < 2 minutes")
> - No new dependencies
> - Must keep all existing tests passing
> - Must not change the public API
> - Must maintain backward compatibility
> - VRAM/memory limit
> - Code complexity limits (prefer simpler solutions)

Record as `CONSTRAINTS`.

### 1.5 Define the Experiment Budget (Optional)

Ask the user:

> **How many experiments should I run, or should I just keep going until you stop me?**
>
> You can say a number (e.g., "try 20 experiments") or "unlimited" (I'll run until you interrupt).

Record as `MAX_EXPERIMENTS` (number or `unlimited`).

### 1.6 Simplicity Criterion

Inform the user of the default simplicity policy:

> **Simplicity policy (default):** All else being equal, simpler is better. A small improvement
> that adds ugly complexity is not worth it. Removing code while maintaining or improving
> the metric is a great outcome. I'll weigh the complexity cost against the improvement
> magnitude. Does this policy work for you, or do you want to adjust it?

Record any adjustments as `SIMPLICITY_POLICY`.

### 1.7 Confirm Setup

Summarize all parameters back to the user in a clear table:

| Parameter          | Value                        |
| ------------------ | ---------------------------- |
| Goal               | ...                          |
| Metric command     | ...                          |
| Metric extraction  | ...                          |
| Direction          | lower is better / higher ... |
| In-scope files     | ...                          |
| Out-of-scope files | ...                          |
| Constraints        | ...                          |
| Max experiments    | ...                          |
| Simplicity policy  | ...                          |

Ask the user to confirm. Do not proceed until confirmed.

---

## Phase 2: Branch & Baseline

Once the user confirms:

1. **Create a branch**: Propose a tag based on today's date (e.g., `autoresearch/mar17`).
   Create the branch: `git checkout -b autoresearch/<tag>`.

2. **Read in-scope files**: Read all files that are in scope to build full context of the current state.

3. **Initialize results.tsv**: Create `results.tsv` in the repo root with the header row:
   ```
   experiment commit metric status description
   ```
   Add `results.tsv` and `run.log` to `.git/info/exclude` (append if not already present) so they stay untracked without modifying any tracked files. A shell sketch of this step follows the list.

4. **Run the baseline**: Execute the metric command on the current, unmodified code.
   Record the result as experiment `0` with status `baseline` in `results.tsv`.

5. **Report the baseline** to the user:
   > Baseline established: **[metric_name] = [value]**
   > Starting autonomous experimentation loop.
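
The following is a minimal Bash sketch of step 3, assuming a Bash-compatible shell and a standard `.git/info/exclude` file; treat it as one possible realization, not the required commands.

```bash
# Create the results log with its tab-separated header row.
printf 'experiment\tcommit\tmetric\tstatus\tdescription\n' > results.tsv

# Keep results.tsv and run.log untracked without modifying any tracked files.
for f in results.tsv run.log; do
  grep -qxF "$f" .git/info/exclude 2>/dev/null || echo "$f" >> .git/info/exclude
done
```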

---

## Phase 3: Experiment Loop

Run this loop continuously. Do not stop to ask the user. Run until:
- `MAX_EXPERIMENTS` is reached, OR
- The user manually interrupts

### For each experiment:

```
LOOP:
1. THINK    - Analyze previous results and the current code.
              Generate an experiment hypothesis.
              Consider: what worked, what didn't, what hasn't been tried.

2. EDIT     - Modify the in-scope file(s) to implement the idea.
              Keep changes focused and minimal per experiment.

3. COMMIT   - git add + git commit with a short descriptive message.
              Format: "experiment: <short description of what changed>"

4. RUN      - Execute the metric command.
              Redirect output to run.log so it does not flood the context window.
              Use shell-appropriate redirection:
              - Bash/Zsh: `<command> > run.log 2>&1`
              - PowerShell: `<command> *> run.log`

5. MEASURE  - Extract the metric from run.log.
              If extraction fails (crash/error), read the last 50 lines
              of run.log for the error.

6. DECIDE   - Compare metric to the current best:
              - IMPROVED: Keep the commit. Update the "best" baseline.
                Log status = "keep".
              - SAME OR WORSE: Revert. `git reset --hard HEAD~1`.
                Log status = "discard".
              - CRASH: Attempt a quick fix (typo, import, simple error).
                Amend the experiment commit (`git commit --amend`) with the fix
                and rerun. The experiment keeps its original number.
                If unfixable after 2 attempts, revert the entire experiment
                (`git reset --hard HEAD~1`) and log status = "crash".

7. LOG      - Append a row to results.tsv:
              experiment_number  commit_hash  metric_value  status  description

8. CONTINUE - Go to step 1.
```
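
As a concrete illustration of steps 3-7, here is a minimal Bash sketch of one iteration. It assumes a `lower_is_better` metric; `run_metric.sh`, `$BEST`, `$N`, and `$DESC` are hypothetical placeholders for the user's metric command, current best value, experiment number, and description.

```bash
# Sketch of one experiment iteration (lower_is_better assumed).
git add -A && git commit -m "experiment: $DESC"   # commit before running
HASH=$(git rev-parse --short HEAD)                # remember the experiment commit
./run_metric.sh > run.log 2>&1                    # hypothetical METRIC_COMMAND
METRIC=$(tail -n 1 run.log)                       # placeholder METRIC_EXTRACTION

# Keep the commit only if the metric improved on the current best.
if awk -v m="$METRIC" -v b="$BEST" 'BEGIN { exit !(m < b) }'; then
  BEST="$METRIC"; STATUS=keep
else
  git reset --hard HEAD~1; STATUS=discard
fi

printf '%s\t%s\t%s\t%s\t%s\n' "$N" "$HASH" "$METRIC" "$STATUS" "$DESC" >> results.tsv
```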

### Experiment Strategy

When generating experiment ideas, follow this priority order:

1. **Low-hanging fruit first**: Simple parameter tweaks, obvious inefficiencies.
2. **Informed by results**: If a direction showed promise, explore further in that direction.
3. **Diversify after plateaus**: If the last 3-5 experiments all failed, try a different approach entirely.
4. **Combine winners**: If experiments A and B each improved independently, try combining them.
5. **Simplification passes**: Periodically try removing code/complexity to see if the metric holds.
6. **Radical changes**: After exhausting incremental ideas, try larger architectural changes.

### Handling Constraints

- **Time budget**: If a run exceeds 2x the expected duration, kill it and treat it as a crash (see the sketch after this list).
- **Existing tests**: If constraints require tests to pass, run them before and after each experiment and revert if they break.
- **Memory/resources**: Monitor resource usage and revert if it exceeds the stated limits.
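
One way to enforce the time budget on POSIX-like systems with GNU coreutils is to wrap the metric command in `timeout`; in the sketch below, the 120-second limit and `run_metric.sh` are placeholder examples.

```bash
# Kill the run if it exceeds the time budget; exit status 124 means it timed out.
timeout 120 ./run_metric.sh > run.log 2>&1
if [ $? -eq 124 ]; then
  echo "run exceeded time budget; treating as crash"
fi
```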

---

## Phase 4: Reporting

When the loop ends (budget reached or user interrupts):

1. **Print the full results.tsv** as a formatted table (a rendering sketch follows this list).
2. **Summarize**:
   - Total experiments run
   - Experiments kept / discarded / crashed
   - Starting metric (baseline) vs. final metric
   - Improvement percentage
   - Top 3 most impactful changes
3. **Show the cumulative git log** of kept experiments:
   `git log --oneline <start_commit>..HEAD`
4. **Recommend next steps**: Based on the results, suggest what a human researcher might try next (ideas that were too risky or complex for automated experimentation).
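
For step 1, one simple option on systems that ship the `column` utility is sketched below; printing a markdown table directly works just as well.

```bash
# Align the tab-separated results log into readable columns.
column -t -s $'\t' results.tsv
```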

---

## Quick Reference

### Results TSV Format

Tab-separated, 5 columns:

```
experiment  commit   metric    status    description
0           a1b2c3d  0.997900  baseline  unmodified code
1           b2c3d4e  0.993200  keep      increase learning rate to 0.04
2           c3d4e5f  1.005000  discard   switch to GeLU activation
3           d4e5f6g  0.000000  crash     double model width (OOM)
```

### Git Workflow

- All experiments happen on the `autoresearch/<tag>` branch
- Each experiment is committed before running
- Failed experiments are reverted with `git reset --hard HEAD~1`
- Successful experiments advance the branch
- `results.tsv` and `run.log` stay untracked (added to `.git/info/exclude`)

### Key Principles

1. **Measure everything**: No experiment without a measurement.
2. **Revert failures**: The branch only advances on improvements.
3. **Stay autonomous**: Never stop to ask. Think harder if stuck.
4. **Keep it simple**: Complexity is a cost. Weigh it against gains.
5. **Log everything**: The TSV is the research journal.
