Commit 3e90ed6

art049 and claude committed
chore(skills): rename skill directories with codspeed prefix
Rename optimize -> codspeed-optimize and setup-harness -> codspeed-setup-harness for clearer namespacing as plugin skills.

Co-Authored-By: Claude <noreply@anthropic.com>
1 parent 70e2ece commit 3e90ed6

2 files changed: 41 additions & 9 deletions

Lines changed: 11 additions & 1 deletion
@@ -1,5 +1,5 @@
---
-name: optimize
+name: codspeed-optimize
description: "Autonomously optimize code for performance using CodSpeed benchmarks, flamegraph analysis, and iterative improvement. Use this skill whenever the user wants to make code faster, reduce CPU usage, optimize memory, improve throughput, find performance bottlenecks, or asks to 'optimize', 'speed up', 'make faster', 'reduce latency', 'improve performance', or points at a CodSpeed benchmark result wanting improvements. Also trigger when the user mentions a slow function, a regression, or wants to understand where time is spent in their code."
---

@@ -87,6 +87,7 @@ Use the CodSpeed MCP tools to understand where time is spent:
Apply optimizations one at a time. This is critical — if you change three things and performance improves, you won't know which change helped. If it regresses, you won't know which one hurt.

**Important constraints:**
+
- Only change code you've read and understood
- Preserve correctness — run existing tests after each change
- Keep changes minimal and focused
@@ -148,11 +149,13 @@ codspeed exec -m walltime -- <command>
Then compare the walltime run against a walltime baseline using `compare_runs`.

**Patterns that often show up in simulation but NOT walltime:**
+
- Iterator adapter overhead (e.g., `.take(n)` to `[..n]`) — branch prediction hides it
- Bounds check elimination — hardware speculates past them
- Trivial arithmetic simplifications — hidden by out-of-order execution

**Patterns that reliably help in both modes:**
+
- Avoiding type conversions in hot loops (float/integer round-trips)
- Eliminating libm calls (roundf, sinf — these are software routines)
- Skipping redundant memory initialization
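As an illustrative sketch (not part of the diff; the function names and data here are hypothetical), the first simulation-only pattern can be seen in Rust: both forms below compute the same sum, but simulation mode may charge the `.take(n)` version extra instructions that branch prediction hides on real hardware.

```rust
// Two equivalent ways to sum the first n elements of a slice.

fn sum_take(data: &[u64], n: usize) -> u64 {
    // Iterator adapter: the per-iteration bounds branch shows up
    // in simulated instruction counts.
    data.iter().take(n).sum()
}

fn sum_slice(data: &[u64], n: usize) -> u64 {
    // Slicing first: one bounds check up front, then a tight loop.
    data[..n].iter().sum()
}

fn main() {
    let data: Vec<u64> = (0..1024).collect();
    // Identical results; only the benchmark mode decides whether
    // any difference between the two is visible.
    assert_eq!(sum_take(&data, 100), sum_slice(&data, 100));
    println!("{}", sum_take(&data, 100)); // prints 4950
}
```

If a rewrite like this improves simulation numbers but not walltime, it is often not worth keeping.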
@@ -165,6 +168,7 @@ If a simulation improvement doesn't show up in walltime, strongly consider rever
If the user wants more optimization, go back to Step 2 with fresh flamegraphs from your latest run. The profile will have shifted now that you've addressed the top bottleneck, revealing new targets.

Keep iterating until:
+
- The user says they're satisfied
- The flamegraph shows no clear bottleneck (time is spread evenly)
- Remaining optimizations would require architectural changes the user hasn't approved
@@ -173,28 +177,34 @@ Keep iterating until:
## Language-specific notes

### Rust
+
- Use `cargo codspeed build -m <mode>` to build, `cargo codspeed run` to run
- `--bench <name>` selects specific benchmark suites (matching `[[bench]]` targets in Cargo.toml)
- Positional filter after `cargo codspeed run` matches benchmark names (e.g., `cargo codspeed run cat.jpg`)
- Frameworks: criterion, divan, bencher (all work with cargo-codspeed)

### Python
+
- Uses pytest-codspeed: `codspeed run -m simulation -- pytest --codspeed`
- Framework: pytest-benchmark compatible

### Node.js
+
- Frameworks: vitest (`@codspeed/vitest-plugin`), tinybench v5 (`@codspeed/tinybench-plugin`), benchmark.js (`@codspeed/benchmark.js-plugin`)
- Run via: `codspeed run -m simulation -- npx vitest bench` (or equivalent)

### Go
+
- Built-in: `codspeed run -m simulation -- go test -bench .`
- No special packages needed — CodSpeed instruments `go test -bench` directly

### C/C++
+
- Uses Google Benchmark with valgrind-codspeed
- Build with CMake, run benchmarks via `codspeed run`

### Any language (exec harness)
+
- Use `codspeed exec -m <mode> -- <command>` for any executable
- Or define benchmarks in `codspeed.yml` and use `codspeed run`
- No code changes required — CodSpeed instruments the binary externally
Lines changed: 30 additions & 8 deletions
@@ -1,5 +1,5 @@
---
-name: setup-harness
+name: codspeed-setup-harness
description: "Set up performance benchmarks and CodSpeed harness for a project. Use this skill whenever the user wants to create benchmarks, add performance tests, set up CodSpeed, configure codspeed.yml, integrate a benchmarking framework (criterion, divan, pytest-benchmark, vitest bench, go test -bench, google benchmark), or when the user says 'add benchmarks', 'set up perf tests', 'create a benchmark', 'benchmark this', or wants to measure performance of their code for the first time. Also trigger when the optimize skill needs benchmarks that don't exist yet."
---

@@ -27,13 +27,13 @@ Based on the language and what the user wants to benchmark, pick the right harne
These integrate deeply with CodSpeed and provide per-benchmark flamegraphs, fine-grained comparison, and simulation mode support.

-| Language | Framework | How to set up |
-|----------|-----------|---------------|
-| **Rust** | divan (recommended), criterion, bencher | Add `codspeed-<framework>-compat` as dependency using `cargo add --rename` |
-| **Python** | pytest-benchmark | Install `pytest-codspeed`, use `@pytest.benchmark` or `benchmark` fixture |
-| **Node.js** | vitest (recommended), tinybench v5, benchmark.js | Install `@codspeed/<framework>-plugin`, configure in vitest/test config |
-| **Go** | go test -bench | No packages needed — CodSpeed instruments `go test -bench` directly |
-| **C/C++** | Google Benchmark | Build with CMake, CodSpeed instruments via valgrind-codspeed |
+| Language | Framework | How to set up |
+| ----------- | ------------------------------------------------ | -------------------------------------------------------------------------- |
+| **Rust** | divan (recommended), criterion, bencher | Add `codspeed-<framework>-compat` as dependency using `cargo add --rename` |
+| **Python** | pytest-benchmark | Install `pytest-codspeed`, use `@pytest.benchmark` or `benchmark` fixture |
+| **Node.js** | vitest (recommended), tinybench v5, benchmark.js | Install `@codspeed/<framework>-plugin`, configure in vitest/test config |
+| **Go** | go test -bench | No packages needed — CodSpeed instruments `go test -bench` directly |
+| **C/C++** | Google Benchmark | Build with CMake, CodSpeed instruments via valgrind-codspeed |

### Exec harness (universal)

@@ -43,6 +43,7 @@ For any language or when you want to benchmark a whole program (not individual f
- Or create a `codspeed.yml` with benchmark definitions for repeatable setups

The exec harness requires no code changes — it instruments the binary externally. This is ideal for:
+
- Languages without a dedicated CodSpeed integration
- End-to-end benchmarks (full program execution)
- Quick setup when you just want to track a command's performance
@@ -58,12 +59,14 @@ The exec harness requires no code changes — it instruments the binary external
### Rust with divan (recommended)

1. Add the dependency:
+
```bash
cargo add divan
cargo add codspeed-divan-compat --rename divan --dev
```

2. Create a benchmark file in `benches/`:
+
```rust
// benches/my_bench.rs
use divan;
@@ -81,13 +84,15 @@ fn bench_my_function() {
```

3. Add to `Cargo.toml`:
+
```toml
[[bench]]
name = "my_bench"
harness = false
```

4. Build and run:
+
```bash
cargo codspeed build -m simulation --bench my_bench
codspeed run -m simulation -- cargo codspeed run --bench my_bench
@@ -96,12 +101,14 @@ codspeed run -m simulation -- cargo codspeed run --bench my_bench
### Rust with criterion

1. Add dependencies:
+
```bash
cargo add criterion --dev
cargo add codspeed-criterion-compat --rename criterion --dev
```

2. Create benchmark in `benches/`:
+
```rust
use criterion::{criterion_group, criterion_main, Criterion};
@@ -120,13 +127,15 @@ criterion_main!(benches);
### Python with pytest-codspeed

1. Install:
+
```bash
pip install pytest-codspeed
# or
uv add --dev pytest-codspeed
```

2. Create benchmark tests:
+
```python
# tests/test_benchmarks.py
import pytest
@@ -143,20 +152,23 @@ def test_with_setup(benchmark):
```

3. Run:
+
```bash
codspeed run -m simulation -- pytest --codspeed
```

### Node.js with vitest (recommended)

1. Install:
+
```bash
npm install -D @codspeed/vitest-plugin
# or
pnpm add -D @codspeed/vitest-plugin
```

2. Configure vitest (`vitest.config.ts`):
+
```typescript
import { defineConfig } from "vitest/config";
import codspeed from "@codspeed/vitest-plugin";
@@ -167,6 +179,7 @@ export default defineConfig({
```

3. Create benchmark file:
+
```typescript
// bench/my.bench.ts
import { bench, describe } from "vitest";
@@ -179,6 +192,7 @@ describe("my module", () => {
```

4. Run:
+
```bash
codspeed run -m simulation -- npx vitest bench
```
@@ -188,6 +202,7 @@ codspeed run -m simulation -- npx vitest bench
No packages needed — CodSpeed instruments `go test -bench` directly.

1. Create benchmark tests:
+
```go
// my_test.go
func BenchmarkMyFunction(b *testing.B) {
@@ -198,6 +213,7 @@ func BenchmarkMyFunction(b *testing.B) {
```

2. Run (walltime is the default for Go):
+
```bash
codspeed run -m walltime -- go test -bench . ./...
```
@@ -207,6 +223,7 @@ codspeed run -m walltime -- go test -bench . ./...
1. Install Google Benchmark (via CMake FetchContent or system package)

2. Create benchmark:
+
```cpp
#include <benchmark/benchmark.h>
@@ -221,6 +238,7 @@ BENCHMARK_MAIN();
```

3. Build and run with CodSpeed:
+
```bash
cmake -B build && cmake --build build
codspeed run -m simulation -- ./build/my_benchmark
@@ -231,6 +249,7 @@ codspeed run -m simulation -- ./build/my_benchmark
For benchmarking whole programs without code changes:

1. Create `codspeed.yml`:
+
```yaml
$schema: https://raw.githubusercontent.com/CodSpeedHQ/codspeed/refs/heads/main/schemas/codspeed.schema.json
@@ -249,11 +268,13 @@ benchmarks:
```

2. Run:
+
```bash
codspeed run -m walltime
```

Or for a one-off:
+
```bash
codspeed exec -m walltime -- ./my_binary --input data.txt
```
@@ -279,6 +300,7 @@ Good benchmarks are representative, isolated, and stable. Here are guidelines:
After setting up:

1. **Run the benchmarks locally** to verify they work:
+
```bash
# For language-specific harnesses
cargo codspeed build -m simulation && codspeed run -m simulation -- cargo codspeed run
