|
---
name: evaluation
description: Evaluates accuracy of quantized or unquantized LLMs using NeMo Evaluator Launcher (NEL). Triggers on "evaluate model", "benchmark accuracy", "run MMLU", "evaluate quantized model", "accuracy drop", "run nel". Handles deployment, config generation, and evaluation execution. Not for quantizing models (use ptq) or deploying/serving models (use deployment).
license: Apache-2.0
# Based on nel-assistant skill from NeMo Evaluator Launcher (commit f1fa073)
# https://github.com/NVIDIA-NeMo/Evaluator/tree/f1fa073/packages/nemo-evaluator-launcher/.claude/skills/nel-assistant
# Modifications: renamed to evaluation, added workspace management (Step 0),
# auto-detect ModelOpt quantization format, quantization-aware benchmark defaults.
---

## NeMo Evaluator Launcher Assistant

You're an expert in NeMo Evaluator Launcher! Guide the user through creating production-ready YAML configurations, running evaluations, and monitoring progress via the interactive workflow specified below.

### Workspace (multi-user / Slack bot)

If `MODELOPT_WORKSPACE_ROOT` is set, read `skills/common/workspace-management.md`. Check for existing workspaces, especially if evaluating a model from a prior PTQ or deployment step. Reuse the existing workspace so you have access to the quantized checkpoint and any code modifications.
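
A quick way to surface candidates (a minimal sketch; the actual layout and metadata conventions are defined in `skills/common/workspace-management.md`):

```bash
# List existing workspaces, newest first, assuming one directory per workspace.
[ -n "$MODELOPT_WORKSPACE_ROOT" ] && ls -lt "$MODELOPT_WORKSPACE_ROOT"
```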

### Workflow

```text
Config Generation Progress:
- [ ] Step 0: Check workspace (if MODELOPT_WORKSPACE_ROOT is set)
- [ ] Step 1: Check if nel is installed and if user has existing config
- [ ] Step 2: Build the base config file
- [ ] Step 3: Configure model path and parameters
- [ ] Step 4: Fill in remaining missing values
- [ ] Step 5: Confirm tasks (iterative)
- [ ] Step 6: Advanced - Multi-node (Data Parallel)
- [ ] Step 7: Advanced - Interceptors
- [ ] Step 8: Run the evaluation
```

**Step 1: Check prerequisites**

Test that `nel` is installed with `nel --version`. If not, instruct the user to `pip install nemo-evaluator-launcher`.
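
Both fit on one line (a minimal sketch; adjust to the user's Python environment):

```bash
# Install NEL only if the `nel` CLI is missing or broken.
nel --version || pip install nemo-evaluator-launcher
```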

If the user already has a config file (e.g., "run this config", "evaluate with my-config.yaml"), skip to Step 8. Optionally review it for common issues (missing `???` values, quantization flags) before running.

**Step 2: Build the base config file**

Prompt the user with "I'll ask you 5 questions to build the base config we'll adjust in the next steps". Guide the user through the 5 questions using AskUserQuestion:

1. Execution:

   - Local
   - SLURM

2. Deployment:

   - None (External)
   - vLLM
   - SGLang
   - NIM
   - TRT-LLM

3. Auto-export:

   - None (auto-export disabled)
   - MLflow
   - wandb

4. Model type:

   - Base
   - Chat
   - Reasoning

5. Benchmarks (allow multiple choices for this question):

   1. Standard LLM Benchmarks (like MMLU, IFEval, GSM8K, ...)
   2. Code Evaluation (like HumanEval, MBPP, and LiveCodeBench)
   3. Math & Reasoning (like AIME, GPQA, MATH-500, ...)
   4. Safety & Security (like Garak and Safety Harness)
   5. Multilingual (like MMATH, Global MMLU, MMLU-Prox)

DON'T ALLOW FOR ANY OTHER OPTIONS, only the ones listed above under each category (Execution, Deployment, Auto-export, Model type, Benchmarks). YOU HAVE TO GATHER THE ANSWERS for the 5 questions before you can build the base config.

> **Note:** These categories come from NEL's `build-config` CLI. If `nel skills build-config --help` shows different options than listed above, use the CLI's current options instead.

When you have all the answers, run the script to build the base config:

```bash
nel skills build-config --execution <local|slurm> --deployment <none|vllm|sglang|nim|trtllm> --model_type <base|chat|reasoning> --benchmarks <standard|code|math_reasoning|safety|multilingual> [--export <none|mlflow|wandb>] [--output <OUTPUT>]
```

Where `--output` depends on what the user provides:

- Omit: uses the current directory with an auto-generated filename
- Directory: writes to that directory with an auto-generated filename
- File path (`*.yaml`): writes to that specific file

It never overwrites existing files.
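
For example, a local vLLM evaluation of a chat model on the standard benchmark set (illustrative values; check `nel skills build-config --help` for the exact syntax, including how to pass multiple benchmark categories):

```bash
nel skills build-config --execution local --deployment vllm \
  --model_type chat --benchmarks standard --output ./eval-config.yaml
```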
| 94 | + |
| 95 | +**Step 3: Configure model path and parameters** |
| 96 | + |
| 97 | +Ask for model path. Determine type: |
| 98 | + |
| 99 | +- Checkpoint path (local directory — starts with `/`, `./`, `../`, `~`, or contains no `/` but exists on disk) → set `deployment.checkpoint_path: <path>` and `deployment.hf_model_handle: null` |
| 100 | +- HF handle (e.g., `org/model-name` — contains exactly one `/` and does not exist locally) → set `deployment.hf_model_handle: <handle>` and `deployment.checkpoint_path: null` |
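
A minimal shell sketch of this heuristic, assuming the user's answer is in `$MODEL`:

```bash
# An existing local path means a checkpoint; anything else is treated as an HF handle.
if [ -e "$MODEL" ]; then
  echo "deployment.checkpoint_path: $MODEL"
else
  echo "deployment.hf_model_handle: $MODEL"
fi
```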

**Auto-detect ModelOpt quantization format** (checkpoint paths only):

Check for `hf_quant_config.json` in the checkpoint directory:

```bash
cat <checkpoint_path>/hf_quant_config.json 2>/dev/null
```

If found, read `quantization.quant_algo` and set the correct vLLM/SGLang quantization flag in `deployment.extra_args`:

| `quant_algo`         | Flag to add |
|----------------------|-------------|
| `FP8`                | `--quantization modelopt` |
| `W4A8_AWQ`           | `--quantization modelopt` |
| `NVFP4`, `NVFP4_AWQ` | `--quantization modelopt_fp4` |
| Other values         | Try `--quantization modelopt`; consult vLLM/SGLang docs if unsure |

If there is no `hf_quant_config.json`, also check `config.json` for a `quantization_config` section with `quant_method: "modelopt"`. If neither is found, the checkpoint is unquantized and no flag is needed.
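
A compact sketch of the mapping in the table above (assumes `python3` is available and the checkpoint directory is readable):

```bash
python3 - <checkpoint_path> <<'EOF'
import json, pathlib, sys

cfg = pathlib.Path(sys.argv[1]) / "hf_quant_config.json"
if cfg.exists():
    algo = json.loads(cfg.read_text())["quantization"]["quant_algo"]
    # NVFP4 variants need the FP4 flavor; everything else tries plain modelopt.
    flag = "modelopt_fp4" if algo in ("NVFP4", "NVFP4_AWQ") else "modelopt"
    print(f"--quantization {flag}  # quant_algo={algo}")
else:
    print("no hf_quant_config.json; fall back to checking config.json")
EOF
```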

> **Note:** Some models require additional env vars for deployment (e.g., `VLLM_NVFP4_GEMM_BACKEND=marlin` for Nemotron Super). These are not in `hf_quant_config.json`; they are discovered during the model card research below.

**Quantization-aware benchmark defaults:**

When a quantized checkpoint is detected, read `references/quantization-benchmarks.md` for benchmark sensitivity rankings and recommended sets. Present recommendations to the user and ask which to include.

Read `references/model-card-research.md` for the full extraction checklist (sampling params, reasoning config, ARM64 compatibility, pre_cmd, etc.). Use WebSearch to research the model card, present findings, and ask the user to confirm.

**Step 4: Fill in remaining missing values**

- Find all remaining `???` missing values in the config (e.g., with the `grep` shown after this list).
- Ask the user only for values that couldn't be auto-discovered from the model card (e.g., SLURM hostname, account, output directory, MLflow/wandb tracking URI). Don't propose any defaults here. Let the user give you the values in plain text.
- Ask the user if they want to change any other defaults, e.g., execution partition or walltime (if running on SLURM) or MLflow/wandb tags (if auto-export is enabled).
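
To locate the remaining placeholders:

```bash
# Print every unfilled `???` value with its line number.
grep -n '???' <config_path>
```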

**Step 5: Confirm tasks (iterative)**

Show the tasks in the current config. Loop until the user confirms the task list is final:

1. Tell the user: "Run `nel ls tasks` to see all available tasks".
2. Ask if they want to add/remove tasks or add/remove/modify task-specific parameter overrides.
   Add per-task `nemo_evaluator_config` as specified by the user, e.g.:

   ```yaml
   tasks:
     - name: <task>
       nemo_evaluator_config:
         config:
           params:
             temperature: <value>
             max_new_tokens: <value>
             ...
   ```

3. Apply the changes.
4. Show the updated list and ask: "Is the task list final, or do you want to make more changes?"

**Known Issues**

- NeMo-Skills workaround (self-deployment only): If using `nemo_skills.*` tasks with self-deployment (vLLM/SGLang/NIM), add at the top level:

  ```yaml
  target:
    api_endpoint:
      api_key_name: DUMMY_API_KEY
  ```

  For the None (External) deployment, `api_key_name` should already be defined. The `DUMMY_API_KEY` export is handled in Step 8.

**Step 6: Advanced - Multi-node**

If the user needs multi-node evaluation (model >120B, or more throughput), read `references/multi-node.md` for the configuration patterns (HAProxy multi-instance, Ray TP/PP, or combined).

**Step 7: Advanced - Interceptors**

- Point the user to <https://docs.nvidia.com/nemo/evaluator/latest/libraries/nemo-evaluator/interceptors/index.html>.
- DON'T provide any general information about what interceptors typically do in API frameworks without reading the docs. If the user asks about interceptors, only then read the webpage to provide precise information.
- If the user asks you to configure an interceptor, read that interceptor's docs page and configure it according to the `--overrides` syntax, but put the values in the YAML config under `evaluation.nemo_evaluator_config.config.target.api_endpoint.adapter_config` (NOT under `target.api_endpoint.adapter_config`) instead of using CLI overrides.
  Defining an `interceptors` list would override the full chain of interceptors, which can have unintended consequences like disabling default interceptors. That's why you should use the fields specified in the `CLI Configuration` section after the `--overrides` keyword to configure interceptors in the YAML config.

**Documentation Errata**

- The docs may show incorrect parameter names for logging. Use `max_logged_requests` and `max_logged_responses` (NOT `max_saved_*` or `max_*`).
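
For example, the logging limits above would sit under the Step 7 adapter-config path (an illustrative sketch; confirm the exact keys against the interceptor docs before using):

```yaml
evaluation:
  nemo_evaluator_config:
    config:
      target:
        api_endpoint:
          adapter_config:
            # Parameter names per the errata above; values are illustrative.
            max_logged_requests: 50
            max_logged_responses: 50
```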

**Step 8: Run the evaluation**

Print the following commands to the user. Propose executing them in order, to confirm the config works as expected before the full run.

**Important**: Export the required environment variables based on the config. If any tokens or keys are missing (e.g. `HF_TOKEN`, `NGC_API_KEY`, the `api_key_name` from the config), ask the user to put them in a `.env` file in the project root so you can run `set -a && source .env && set +a` (or equivalent) before executing `nel run` commands.
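
For example (an illustrative `.env`; the exact variables depend on the config, and real values should never be committed):

```bash
cat > .env <<'EOF'
HF_TOKEN=<your_hf_token>
NGC_API_KEY=<your_ngc_api_key>
EOF
set -a && source .env && set +a   # export every variable defined in .env
```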

```bash
# If using pre_cmd or post_cmd (review pre_cmd content before enabling; it runs arbitrary commands):
export NEMO_EVALUATOR_TRUST_PRE_CMD=1

# If using nemo_skills.* tasks with self-deployment:
export DUMMY_API_KEY=dummy
```

1. **Dry-run** (validates the config without running):

   ```bash
   nel run --config <config_path> --dry-run
   ```

2. **Test with limited samples** (quick validation run):

   ```bash
   nel run --config <config_path> -o ++evaluation.nemo_evaluator_config.config.params.limit_samples=10
   ```

3. **Re-run a single task** (useful for debugging or re-testing after config changes):

   ```bash
   nel run --config <config_path> -t <task_name>
   ```

   Combine with `-o` for limited samples: `nel run --config <config_path> -t <task_name> -o ++evaluation.nemo_evaluator_config.config.params.limit_samples=10`

4. **Full evaluation** (production run):

   ```bash
   nel run --config <config_path>
   ```

After the dry-run, check the output from `nel` for any problems with the config. If there are none, propose executing the test run with limited samples first and then the full evaluation. If there are problems, resolve them before executing the full evaluation.

**Monitoring Progress**

After job submission, you can monitor progress using:

1. **Check job status:**

   ```bash
   nel status <invocation_id>
   nel info <invocation_id>
   ```

2. **Stream logs** (local execution only):

   ```bash
   nel logs <invocation_id>
   ```

   Note: `nel logs` is not supported for SLURM execution.

3. **Inspect logs via SSH** (SLURM workaround):

   When `nel logs` is unavailable (SLURM), use SSH to inspect logs directly.

   First, get the log locations (referred to as `<log_path>` below):

   ```bash
   nel info <invocation_id> --logs
   ```

   Then, use SSH to view the logs.

   **Check server deployment logs:**

   ```bash
   ssh <username>@<hostname> "tail -100 <log_path>/server-<slurm_job_id>-*.log"
   ```

   Shows vLLM server startup, model loading, and deployment errors (e.g., missing wget/curl).

   **Check evaluation client logs:**

   ```bash
   ssh <username>@<hostname> "tail -100 <log_path>/client-<slurm_job_id>.log"
   ```

   Shows evaluation progress, task execution, and results.

   **Check SLURM scheduler logs:**

   ```bash
   ssh <username>@<hostname> "tail -100 <log_path>/slurm-<slurm_job_id>.log"
   ```

   Shows job scheduling, health checks, and overall execution flow.

   **Search for errors:**

   ```bash
   ssh <username>@<hostname> "grep -i 'error\|warning\|failed' <log_path>/*.log"
   ```

---

Direct users with issues to:

- **GitHub Issues:** <https://github.com/NVIDIA-NeMo/Evaluator/issues>
- **GitHub Discussions:** <https://github.com/NVIDIA-NeMo/Evaluator/discussions>

Now, copy this checklist and track your progress:

```text
Config Generation Progress:
- [ ] Step 0: Check workspace (if MODELOPT_WORKSPACE_ROOT is set)
- [ ] Step 1: Check if nel is installed and if user has existing config
- [ ] Step 2: Build the base config file
- [ ] Step 3: Configure model path and parameters
- [ ] Step 4: Fill in remaining missing values
- [ ] Step 5: Confirm tasks (iterative)
- [ ] Step 6: Advanced - Multi-node (Data Parallel)
- [ ] Step 7: Advanced - Interceptors
- [ ] Step 8: Run the evaluation
```