Commit 2bf7117
[Skill] Add liger-kernel-perf skill for Triton kernel performance optimization (#1185)
## Summary
Adds a new Claude Code skill (`liger-kernel-perf`) that optimizes
existing Liger Kernel Triton kernels through a 3-stage pipeline: Profile
-> Optimize -> Finalize. The skill automatically diagnoses bottlenecks,
generates multiple optimization variants, benchmarks them against the
full suite, and creates a PR with the winning variant -- all while
maintaining correctness.
Tested on `fused_add_rms_norm` (see PR #1187), which achieved up to 70%
backward speedup on H100 via register pressure reduction.
## What the skill does
- **Stage 1 (Profiler):** Runs baseline benchmarks, detects GPU
architecture (Ampere/Hopper/Blackwell), optionally runs NCU profiling,
classifies bottleneck (memory-bound vs compute-bound), produces
optimization profile with recommended strategy order
- **Stage 2 (Optimizer):** Autonomous optimization loop -- tries
parameter tuning first (manual sweep, NOT @triton.autotune), then
diagnosis-driven techniques. Each variant gets a versioned file + lab
notebook tracking hypothesis/changes/results/learnings. Stops on budget
exhaustion, diminishing returns, or target met
- **Stage 3 (Finalizer):** Applies winner in-place, runs full test suite
(hard gate), generates 3-way comparison plots (original vs optimized vs
baseline), updates benchmark CSV, creates PR with descriptive body
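The Stage 1 → Stage 2 handoff above can be sketched in Python. This is a minimal illustrative sketch, not the skill's actual implementation: `OptimizationProfile`, `classify_bottleneck`, the roofline ridge point, and the strategy lists are all assumed names and values for illustration.

```python
from dataclasses import dataclass

@dataclass
class OptimizationProfile:
    gpu_arch: str      # e.g. "Hopper" (hypothetical field)
    bottleneck: str    # "memory-bound" or "compute-bound"
    strategies: list   # recommended order of techniques for Stage 2

def classify_bottleneck(flops: float, bytes_moved: float,
                        ridge_point: float = 295.0) -> str:
    """Roofline-style classification: arithmetic intensity (FLOP/byte)
    below the ridge point means the kernel is limited by memory
    bandwidth, not compute. The default ridge point is an illustrative
    placeholder, not a value taken from the skill."""
    intensity = flops / bytes_moved
    return "memory-bound" if intensity < ridge_point else "compute-bound"

def build_profile(gpu_arch: str, flops: float,
                  bytes_moved: float) -> OptimizationProfile:
    bottleneck = classify_bottleneck(flops, bytes_moved)
    # Parameter tuning is always tried first (per Stage 2); the rest of
    # the order depends on the diagnosed bottleneck. Strategy names here
    # are hypothetical stand-ins for the technique catalog.
    strategies = (["parameter-tuning", "vectorized-loads", "register-pressure"]
                  if bottleneck == "memory-bound"
                  else ["parameter-tuning", "instruction-mix", "tensor-cores"])
    return OptimizationProfile(gpu_arch, bottleneck, strategies)
```

Most normalization-style kernels like `fused_add_rms_norm` do little arithmetic per byte loaded, so a profile built this way would typically come out memory-bound.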
## Key design decisions
- **No @triton.autotune**: Incompatible with Liger's forward-backward
ctx coupling pattern and NPU backends. Uses manual parameter sweeps
instead.
- **Full benchmark suite every iteration**: No lightweight shortcuts --
ensures optimization is good across ALL input sizes, not just
cherry-picked ones.
- **Balanced guardrails**: Rejects variants that regress the non-target
metric by more than 5%, or that regress one pass by more than 10% to
improve the other.
- **Comment preservation**: All existing comments kept, explanatory
comments added for every optimization change.
- **Autonomous + interactive modes**: Runs end-to-end when told "just
optimize it", or pauses at human checkpoints between stages.
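The manual parameter sweep and the balanced guardrails described above can be sketched as follows. This is a hedged illustration under assumptions: `bench`, the parameter grid, and `passes_guardrails` are placeholder names, not the skill's real code.

```python
import itertools

def sweep(bench, block_sizes=(64, 128, 256), num_warps=(4, 8, 16)):
    """Manual sweep over (BLOCK_SIZE, num_warps) instead of
    @triton.autotune: each config is benchmarked explicitly and the
    winner is baked in as a plain constant, which keeps the kernel
    compatible with forward/backward ctx coupling and non-CUDA
    backends. `bench` is a caller-supplied timing function (ms)."""
    best_cfg, best_ms = None, float("inf")
    for bs, nw in itertools.product(block_sizes, num_warps):
        ms = bench(bs, nw)
        if ms < best_ms:
            best_cfg, best_ms = (bs, nw), ms
    return best_cfg, best_ms

def passes_guardrails(base_fwd, base_bwd, new_fwd, new_bwd, target="bwd"):
    """Balanced guardrails: reject a variant that regresses the
    non-target pass by more than 5%, or that regresses either pass by
    more than 10% to improve the other. Inputs are latencies, lower is
    better; a positive regression ratio means the variant got slower."""
    fwd_reg = new_fwd / base_fwd - 1.0
    bwd_reg = new_bwd / base_bwd - 1.0
    non_target_reg = fwd_reg if target == "bwd" else bwd_reg
    if non_target_reg > 0.05:
        return False
    if max(fwd_reg, bwd_reg) > 0.10:
        return False
    return True
```

In practice the real sweep would call `triton.testing.do_bench` on the kernel for each config across the full benchmark suite, but the control flow is the same.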
## Files (7 files, ~2,100 lines)
| File | Lines | Purpose |
|------|-------|---------|
| `SKILL.md` | 116 | Orchestration, input parsing, pipeline flow, guardrails |
| `profiler.md` | 156 | Stage 1: baseline benchmarks, GPU detection, bottleneck diagnosis |
| `optimizer.md` | 395 | Stage 2: optimization loop with accumulated learning |
| `finalizer.md` | 417 | Stage 3: apply, test, plot, update CSV, create PR, report |
| `optimization-strategies.md` | 794 | Technique catalog: parameter tuning, memory-bound, compute-bound, architecture-specific |
| `templates/optimization-profile.md` | 140 | Cross-stage contract between Profiler and Optimizer |
| `templates/variant-notes.md` | 70 | Per-variant lab notebook format for learning accumulation |
## Test plan
- [x] Skill triggers correctly on "optimize the X kernel" prompts
- [x] Tested end-to-end on `fused_add_rms_norm` (PR #1187)
- [x] Verified against Claude Code skill best practices (conciseness,
progressive disclosure, SKILL.md under 500 lines)
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Parent: 969d4ab
7 files changed: 2,095 additions, 0 deletions
File tree:
- `.claude/skills/liger-kernel-perf/`
  - `templates/`