
Commit f521cfe

Add Agent Evaluation skill for accuracy benchmarking (#1132)
### What does this PR do?

**Type of change:** New feature

Add a Claude Code skill for evaluating LLM accuracy using NeMo Evaluator Launcher (NEL). Based on the upstream [nel-assistant skill](https://github.com/NVIDIA-NeMo/Evaluator/tree/f1fa073/packages/nemo-evaluator-launcher/.claude/skills/nel-assistant) with ModelOpt-specific additions:

- **Auto-detect ModelOpt quantization format** from `hf_quant_config.json` (with `config.json` fallback) and set the correct vLLM/SGLang `--quantization` flag
- **Quantization-aware benchmark defaults** — recommend MMLU, GSM8K, ARC-Challenge for quantized models (sensitive to precision loss)
- **Workspace management** for multi-user environments (Step 0)
- **Progressive disclosure** — model card research checklist and multi-node patterns extracted to `references/` for on-demand loading

#### Skill structure

```
evaluation/
├── SKILL.md                               310 lines (core 8-step workflow)
├── references/
│   ├── model-card-research.md             Sampling params, reasoning config, ARM64, pre_cmd
│   └── multi-node.md                      HAProxy multi-instance, Ray TP/PP patterns
└── evals/
    └── nemotron3-nano-bf16-reasoning.json
```

The skill guides users through: NEL installation check → config generation via `nel skills build-config` → model card research → parameter tuning → task selection → multi-node setup → interceptors → execution with dry-run/test/full modes.

### Testing

Invoke in Claude Code:

```
claude -p "evaluate outputs/Qwen3-0.6B-FP8 on mmlu"
```

### Before your PR is "*Ready for review*"

- Is this change backward compatible?: N/A (new feature)
- If you copied code from any other sources, did you follow guidance in `CONTRIBUTING.md`: ✅ (NEL skill attributed in frontmatter)
- Did you write any new necessary tests?: N/A (skill evals provided separately)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->

## Summary by CodeRabbit

* **Documentation**
  * Added comprehensive NeMo evaluation skill documentation with interactive workflow guidance
  * Added reference guides for model card extraction, multi-node evaluation patterns, and quantization-aware benchmark selection
* **Tests**
  * Added evaluation workflow test definitions
* **Chores**
  * Updated linting configuration rules

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Kai Xu <kaix@nvidia.com>
1 parent f539c03 commit f521cfe

6 files changed

Lines changed: 483 additions & 0 deletions

File tree

.claude/skills/evaluation/SKILL.md

Lines changed: 307 additions & 0 deletions
@@ -0,0 +1,307 @@
---
name: evaluation
description: Evaluates accuracy of quantized or unquantized LLMs using NeMo Evaluator Launcher (NEL). Triggers on "evaluate model", "benchmark accuracy", "run MMLU", "evaluate quantized model", "accuracy drop", "run nel". Handles deployment, config generation, and evaluation execution. Not for quantizing models (use ptq) or deploying/serving models (use deployment).
license: Apache-2.0
# Based on nel-assistant skill from NeMo Evaluator Launcher (commit f1fa073)
# https://github.com/NVIDIA-NeMo/Evaluator/tree/f1fa073/packages/nemo-evaluator-launcher/.claude/skills/nel-assistant
# Modifications: renamed to evaluation, added workspace management (Step 0),
# auto-detect ModelOpt quantization format, quantization-aware benchmark defaults.
---

## NeMo Evaluator Launcher Assistant

You're an expert in NeMo Evaluator Launcher! Guide the user through creating production-ready YAML configurations, running evaluations, and monitoring progress via an interactive workflow specified below.

### Workspace (multi-user / Slack bot)

If `MODELOPT_WORKSPACE_ROOT` is set, read `skills/common/workspace-management.md`. Check for existing workspaces — especially if evaluating a model from a prior PTQ or deployment step. Reuse the existing workspace so you have access to the quantized checkpoint and any code modifications.

### Workflow

```text
Config Generation Progress:
- [ ] Step 0: Check workspace (if MODELOPT_WORKSPACE_ROOT is set)
- [ ] Step 1: Check if nel is installed and if user has existing config
- [ ] Step 2: Build the base config file
- [ ] Step 3: Configure model path and parameters
- [ ] Step 4: Fill in remaining missing values
- [ ] Step 5: Confirm tasks (iterative)
- [ ] Step 6: Advanced - Multi-node (Data Parallel)
- [ ] Step 7: Advanced - Interceptors
- [ ] Step 8: Run the evaluation
```

**Step 1: Check prerequisites**

Test that `nel` is installed with `nel --version`. If not, instruct the user to `pip install nemo-evaluator-launcher`.
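For example, a one-line check that installs NEL only when it is missing (a minimal sketch built from the two commands above):

```bash
# Install NEL only if the `nel` CLI is not already on PATH.
nel --version || pip install nemo-evaluator-launcher
```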
If the user already has a config file (e.g., "run this config", "evaluate with my-config.yaml"), skip to Step 8. Optionally review it for common issues (missing `???` values, quantization flags) before running.

**Step 2: Build the base config file**

Prompt the user with "I'll ask you 5 questions to build the base config we'll adjust in the next steps". Guide the user through the 5 questions using AskUserQuestion:

1. Execution:

   - Local
   - SLURM

2. Deployment:

   - None (External)
   - vLLM
   - SGLang
   - NIM
   - TRT-LLM

3. Auto-export:

   - None (auto-export disabled)
   - MLflow
   - wandb

4. Model type:

   - Base
   - Chat
   - Reasoning

5. Benchmarks:
   Allow for multiple choices in this question.
   1. Standard LLM Benchmarks (like MMLU, IFEval, GSM8K, ...)
   2. Code Evaluation (like HumanEval, MBPP, and LiveCodeBench)
   3. Math & Reasoning (like AIME, GPQA, MATH-500, ...)
   4. Safety & Security (like Garak and Safety Harness)
   5. Multilingual (like MMATH, Global MMLU, MMLU-Prox)

DON'T ALLOW FOR ANY OTHER OPTIONS, only the ones listed above under each category (Execution, Deployment, Auto-export, Model type, Benchmarks). YOU HAVE TO GATHER THE ANSWERS for the 5 questions before you can build the base config.

> **Note:** These categories come from NEL's `build-config` CLI. If `nel skills build-config --help` shows different options than listed above, use the CLI's current options instead.

When you have all the answers, run the script to build the base config:

```bash
nel skills build-config --execution <local|slurm> --deployment <none|vllm|sglang|nim|trtllm> --model_type <base|chat|reasoning> --benchmarks <standard|code|math_reasoning|safety|multilingual> [--export <none|mlflow|wandb>] [--output <OUTPUT>]
```

The `--output` argument depends on what the user provides:

- Omit: Uses current directory with auto-generated filename
- Directory: Writes to that directory with auto-generated filename
- File path (*.yaml): Writes to that specific file

It never overwrites existing files.
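For example, answers of Local / vLLM / no auto-export / Chat / Standard LLM Benchmarks would translate into something like the following (a hypothetical invocation assembled from the flag template above; confirm the exact flag values with `nel skills build-config --help`):

```bash
nel skills build-config \
  --execution local \
  --deployment vllm \
  --model_type chat \
  --benchmarks standard \
  --export none \
  --output ./eval-config.yaml
```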
**Step 3: Configure model path and parameters**

Ask for the model path. Determine its type:

- Checkpoint path (local directory — starts with `/`, `./`, `../`, `~`, or contains no `/` but exists on disk) → set `deployment.checkpoint_path: <path>` and `deployment.hf_model_handle: null`
- HF handle (e.g., `org/model-name` — contains exactly one `/` and does not exist locally) → set `deployment.hf_model_handle: <handle>` and `deployment.checkpoint_path: null`
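The two outcomes side by side (paths and handle are illustrative placeholders):

```yaml
# Local checkpoint directory (hypothetical path):
deployment:
  checkpoint_path: /workspace/outputs/Qwen3-0.6B-FP8
  hf_model_handle: null

# Hugging Face handle (hypothetical):
# deployment:
#   checkpoint_path: null
#   hf_model_handle: org/model-name
```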
**Auto-detect ModelOpt quantization format** (checkpoint paths only):

Check for `hf_quant_config.json` in the checkpoint directory:

```bash
cat <checkpoint_path>/hf_quant_config.json 2>/dev/null
```

If found, read `quantization.quant_algo` and set the correct vLLM/SGLang quantization flag in `deployment.extra_args`:

| `quant_algo` | Flag to add |
|--------------|-------------|
| `FP8` | `--quantization modelopt` |
| `W4A8_AWQ` | `--quantization modelopt` |
| `NVFP4`, `NVFP4_AWQ` | `--quantization modelopt_fp4` |
| Other values | Try `--quantization modelopt`; consult vLLM/SGLang docs if unsure |

If no `hf_quant_config.json`, also check `config.json` for a `quantization_config` section with `quant_method: "modelopt"`. If neither is found, the checkpoint is unquantized — no flag needed.
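For example, for a checkpoint whose `hf_quant_config.json` reports `quant_algo: FP8`, the deployment section would end up looking roughly like this (the path is a placeholder; additional `extra_args` such as `--max-model-len` come from the model card research below):

```yaml
deployment:
  checkpoint_path: /workspace/outputs/Qwen3-0.6B-FP8   # hypothetical FP8 ModelOpt checkpoint
  hf_model_handle: null
  extra_args: "--quantization modelopt"                # from the mapping table above
```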
> **Note:** Some models require additional env vars for deployment (e.g., `VLLM_NVFP4_GEMM_BACKEND=marlin` for Nemotron Super). These are not in `hf_quant_config.json` — they are discovered during model card research below.

**Quantization-aware benchmark defaults:**

When a quantized checkpoint is detected, read `references/quantization-benchmarks.md` for benchmark sensitivity rankings and recommended sets. Present recommendations to the user and ask which to include.
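As an illustration, a typical starter set for a quantized model (MMLU, GSM8K, ARC-Challenge) could be added like this once the user confirms; the task identifiers below are placeholders, so verify the exact names with `nel ls tasks`:

```yaml
tasks:
  - name: mmlu            # knowledge; sensitive to precision loss
  - name: gsm8k           # grade-school math
  - name: arc_challenge   # reasoning
```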
Read `references/model-card-research.md` for the full extraction checklist (sampling params, reasoning config, ARM64 compatibility, pre_cmd, etc.). Use WebSearch to research the model card, present findings, and ask the user to confirm.

**Step 4: Fill in remaining missing values**

- Find all remaining `???` missing values in the config.
- Ask the user only for values that couldn't be auto-discovered from the model card (e.g., SLURM hostname, account, output directory, MLflow/wandb tracking URI). Don't propose any defaults here. Let the user give you the values in plain text.
- Ask the user if they want to change any other defaults, e.g. execution partition or walltime (if running on SLURM) or add MLflow/wandb tags (if auto-export is enabled).

**Step 5: Confirm tasks (iterative)**

Show the tasks in the current config. Loop until the user confirms the task list is final:

1. Tell the user: "Run `nel ls tasks` to see all available tasks".
2. Ask if they want to add/remove tasks or add/remove/modify task-specific parameter overrides.
   Add per-task `nemo_evaluator_config` overrides as specified by the user, e.g.:

   ```yaml
   tasks:
     - name: <task>
       nemo_evaluator_config:
         config:
           params:
             temperature: <value>
             max_new_tokens: <value>
             ...
   ```

3. Apply changes.
4. Show the updated list and ask: "Is the task list final, or do you want to make more changes?"

**Known Issues**

- NeMo-Skills workaround (self-deployment only): If using `nemo_skills.*` tasks with self-deployment (vLLM/SGLang/NIM), add at top level:

  ```yaml
  target:
    api_endpoint:
      api_key_name: DUMMY_API_KEY
  ```

  For the None (External) deployment the `api_key_name` should already be defined. The `DUMMY_API_KEY` export is handled in Step 8.

**Step 6: Advanced - Multi-node**

If the user needs multi-node evaluation (model >120B, or more throughput), read `references/multi-node.md` for the configuration patterns (HAProxy multi-instance, Ray TP/PP, or combined).

**Step 7: Advanced - Interceptors**

- Tell the user they should see: <https://docs.nvidia.com/nemo/evaluator/latest/libraries/nemo-evaluator/interceptors/index.html>.
- DON'T provide any general information about what interceptors typically do in API frameworks without reading the docs. If the user asks about interceptors, only then read the webpage to provide precise information.
- If the user asks you to configure some interceptor, read that interceptor's page and configure it according to the `--overrides` syntax, but put the values in the YAML config under `evaluation.nemo_evaluator_config.config.target.api_endpoint.adapter_config` (NOT under `target.api_endpoint.adapter_config`) instead of using CLI overrides.
  Defining the `interceptors` list directly would override the full chain of interceptors, which can have unintended consequences such as disabling default interceptors. That's why you should use the fields specified in the `CLI Configuration` section after the `--overrides` keyword to configure interceptors in the YAML config.

**Documentation Errata**

- The docs may show incorrect parameter names for logging. Use `max_logged_requests` and `max_logged_responses` (NOT `max_saved_*` or `max_*`).
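Putting the Step 7 placement rule together with the parameter names above, a request/response logging override might look roughly like this (a hedged sketch; verify the exact fields and values against the interceptor docs):

```yaml
evaluation:
  nemo_evaluator_config:
    config:
      target:
        api_endpoint:
          adapter_config:
            max_logged_requests: 20    # placeholder value
            max_logged_responses: 20   # placeholder value
```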
**Step 8: Run the evaluation**

Print the following commands to the user. Propose to execute them in order to confirm the config works as expected before the full run.

**Important**: Export required environment variables based on your config. If any tokens or keys are missing (e.g. `HF_TOKEN`, `NGC_API_KEY`, `api_key_name` from the config), ask the user to put them in a `.env` file in the project root so you can run `set -a && source .env && set +a` (or equivalent) before executing `nel run` commands.
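For example (a minimal sketch; the variable names are the ones mentioned above and the values are placeholders):

```bash
# Example .env contents (placeholders; keep this file out of version control):
#   HF_TOKEN=hf_xxxxxxxx
#   NGC_API_KEY=nvapi-xxxxxxxx
# Load them into the current shell before running nel:
set -a && source .env && set +a
```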
```bash
# If using pre_cmd or post_cmd (review pre_cmd content before enabling — it runs arbitrary commands):
export NEMO_EVALUATOR_TRUST_PRE_CMD=1

# If using nemo_skills.* tasks with self-deployment:
export DUMMY_API_KEY=dummy
```

1. **Dry-run** (validates config without running):

   ```bash
   nel run --config <config_path> --dry-run
   ```

2. **Test with limited samples** (quick validation run):

   ```bash
   nel run --config <config_path> -o ++evaluation.nemo_evaluator_config.config.params.limit_samples=10
   ```

3. **Re-run a single task** (useful for debugging or re-testing after config changes):

   ```bash
   nel run --config <config_path> -t <task_name>
   ```

   Combine with `-o` for limited samples: `nel run --config <config_path> -t <task_name> -o ++evaluation.nemo_evaluator_config.config.params.limit_samples=10`

4. **Full evaluation** (production run):

   ```bash
   nel run --config <config_path>
   ```

After the dry-run, check the output from `nel` for any problems with the config. If there are no problems, propose to first execute the test run with limited samples and then execute the full evaluation. If there are problems, resolve them before executing the full evaluation.

**Monitoring Progress**

After job submission, you can monitor progress using:

1. **Check job status:**

   ```bash
   nel status <invocation_id>
   nel info <invocation_id>
   ```

2. **Stream logs** (Local execution only):

   ```bash
   nel logs <invocation_id>
   ```

   Note: `nel logs` is not supported for SLURM execution.

3. **Inspect logs via SSH** (SLURM workaround):

   When `nel logs` is unavailable (SLURM), use SSH to inspect logs directly:

   First, get log locations:

   ```bash
   nel info <invocation_id> --logs
   ```

   Then, use SSH to view logs:

   **Check server deployment logs:**

   ```bash
   ssh <username>@<hostname> "tail -100 <log path from `nel info <invocation_id> --logs`>/server-<slurm_job_id>-*.log"
   ```

   Shows vLLM server startup, model loading, and deployment errors (e.g., missing wget/curl).

   **Check evaluation client logs:**

   ```bash
   ssh <username>@<hostname> "tail -100 <log path from `nel info <invocation_id> --logs`>/client-<slurm_job_id>.log"
   ```

   Shows evaluation progress, task execution, and results.

   **Check SLURM scheduler logs:**

   ```bash
   ssh <username>@<hostname> "tail -100 <log path from `nel info <invocation_id> --logs`>/slurm-<slurm_job_id>.log"
   ```

   Shows job scheduling, health checks, and overall execution flow.

   **Search for errors:**

   ```bash
   ssh <username>@<hostname> "grep -i 'error\|warning\|failed' <log path from `nel info <invocation_id> --logs`>/*.log"
   ```

---

Direct users with issues to:

- **GitHub Issues:** <https://github.com/NVIDIA-NeMo/Evaluator/issues>
- **GitHub Discussions:** <https://github.com/NVIDIA-NeMo/Evaluator/discussions>

Now, copy this checklist and track your progress:

```text
Config Generation Progress:
- [ ] Step 0: Check workspace (if MODELOPT_WORKSPACE_ROOT is set)
- [ ] Step 1: Check if nel is installed and if user has existing config
- [ ] Step 2: Build the base config file
- [ ] Step 3: Configure model path and parameters
- [ ] Step 4: Fill in remaining missing values
- [ ] Step 5: Confirm tasks (iterative)
- [ ] Step 6: Advanced - Multi-node (Data Parallel)
- [ ] Step 7: Advanced - Interceptors
- [ ] Step 8: Run the evaluation
```
.claude/skills/evaluation/references/model-card-research.md

Lines changed: 30 additions & 0 deletions
@@ -0,0 +1,30 @@
# Model Card Research

Use WebSearch to find the model card (HuggingFace, build.nvidia.com). Read the FULL text carefully; the devil is in the details. Extract ALL relevant configurations:

- Sampling params (`temperature`, `top_p`)
- Context length (`deployment.extra_args: "--max-model-len <value>"`)
- TP/DP settings (to set them appropriately, AskUserQuestion how many GPUs the model will be deployed on)
- Reasoning config (if applicable; see the YAML sketch after this list):
  - reasoning on/off: use either:
    - `adapter_config.custom_system_prompt` (like `/think`, `/no_think`) and no `adapter_config.params_to_add` (leave `params_to_add` unrelated to reasoning untouched)
    - `adapter_config.params_to_add` for a payload modifier (like `"chat_template_kwargs": {"enable_thinking": true/false}`) together with `adapter_config.use_system_prompt: false` and no `adapter_config.custom_system_prompt` (leave `custom_system_prompt` and `use_system_prompt` unrelated to reasoning untouched).
  - reasoning effort/budget (if it's configurable, AskUserQuestion what reasoning effort they want)
  - higher `max_new_tokens`
  - etc.
- Deployment-specific `extra_args` for vLLM/SGLang (look for the vLLM/SGLang deployment command)
- Deployment-specific vLLM/SGLang versions (by default the latest Docker images are used, but you can control this with `deployment.image`; e.g. vLLM images newer than `vllm/vllm-openai:v0.11.0` stopped supporting the `rope-scaling` arg used by Qwen models)
- ARM64 / non-standard GPU compatibility: The default `vllm/vllm-openai` image only supports common GPU architectures. For ARM64 platforms or GPUs with non-standard compute capabilities (e.g., NVIDIA GB10 with sm_121), use NGC vLLM images instead:
  - Example: `deployment.image: nvcr.io/nvidia/vllm:26.01-py3`
  - AskUserQuestion about their GPU architecture if the model card doesn't specify deployment constraints
- Any preparation requirements (e.g., downloading reasoning parsers, custom plugins):
  - If the model card mentions downloading files (like reasoning parsers, custom plugins) before deployment, add `deployment.pre_cmd` with the download command
  - Use `curl` instead of `wget` as it's more widely available in Docker containers
    - Example: `pre_cmd: curl -L -o reasoning_parser.py https://huggingface.co/.../reasoning_parser.py`
  - When using `pip install` in `pre_cmd`, always use `--no-cache-dir` to avoid cross-device link errors in Docker containers (the pip cache and temp directories may be on different filesystems)
    - Example: `pre_cmd: pip3 install --no-cache-dir flash-attn --no-build-isolation`
- Any other model-specific requirements
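A hedged sketch of the two reasoning toggles described in the list above (prompt text and payload keys are model-specific placeholders, and rendering `params_to_add` as a YAML mapping is an assumption to verify against the adapter docs):

```yaml
# Option 1: system-prompt switch
adapter_config:
  custom_system_prompt: "/no_think"
---
# Option 2: payload modifier (use instead of Option 1)
adapter_config:
  use_system_prompt: false
  params_to_add:
    chat_template_kwargs:
      enable_thinking: false
```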
Remember to check `evaluation.nemo_evaluator_config` and `evaluation.tasks.*.nemo_evaluator_config` overrides too for parameters to adjust (e.g. disabling reasoning)!

Present findings, explain each setting, and ask the user to confirm or adjust. If no model card is found, ask the user directly for the above configurations.
.claude/skills/evaluation/references/multi-node.md

Lines changed: 53 additions & 0 deletions
@@ -0,0 +1,53 @@
# Multi-Node Evaluation Patterns

There are two multi-node patterns. Ask the user which applies:

## Pattern A: Multi-instance (independent instances with HAProxy)

Use this pattern only if the model is >120B parameters or the user wants more throughput. Explain: "Each node runs an independent deployment instance. HAProxy load-balances requests across all instances."

```yaml
execution:
  num_nodes: 4       # Total nodes
  num_instances: 4   # 4 independent instances → HAProxy auto-enabled
```

## Pattern B: Multi-node single instance (Ray TP/PP across nodes)

Applies when a single model is too large for one node and needs pipeline parallelism across nodes. Use the `vllm_ray` deployment config:

```yaml
defaults:
  - deployment: vllm_ray  # Built-in Ray cluster setup (replaces manual pre_cmd)

execution:
  num_nodes: 2  # Single instance spanning 2 nodes

deployment:
  tensor_parallel_size: 8
  pipeline_parallel_size: 2
```

## Pattern A+B combined: Multi-instance with multi-node instances

For very large models needing both cross-node parallelism AND multiple instances:

```yaml
defaults:
  - deployment: vllm_ray

execution:
  num_nodes: 4      # Total nodes
  num_instances: 2  # 2 instances of 2 nodes each → HAProxy auto-enabled

deployment:
  tensor_parallel_size: 8
  pipeline_parallel_size: 2
```

## Common Confusions

- **`num_instances`** controls independent deployment instances with HAProxy. **`data_parallel_size`** controls DP replicas *within* a single instance.
- Global data parallelism is `num_instances x data_parallel_size` (e.g., 2 instances x 8 DP each = 16 replicas).
- With multi-instance, `parallelism` in task config is the total concurrent requests across all instances, not per-instance.
- `num_nodes` must be divisible by `num_instances`.
