---
name: launching-evals
description: Run, monitor, analyze, and debug LLM evaluations via nemo-evaluator-launcher. Covers running evaluations, checking status and live progress, debugging failed runs, exporting artifacts and logs, and analyzing results. ALWAYS triggers on mentions of running evaluations, checking progress, debugging failed evals, analyzing or analysing runs or results, run directories or artifact paths on clusters, Slurm job issues, invocation IDs, or inspecting logs (client logs, server logs, SSH to cluster, tail logs, grep logs). Do NOT use for creating or modifying evaluation configs.
license: Apache-2.0
# Vendored verbatim from NVIDIA NeMo Evaluator (commit 01899f8)
# https://github.com/NVIDIA-NeMo/Evaluator/tree/01899f89e8f31116efbca56e8f87fbd8513e24ac/packages/nemo-evaluator-launcher/.claude/skills/launching-evals
# To re-sync: scripts/sync-upstream-skills.sh
---

# NeMo Evaluator Skill

## Quick Reference

### nemo-evaluator-launcher CLI

```bash
# Run evaluation
uv run nemo-evaluator-launcher run --config <path.yaml>
uv run nemo-evaluator-launcher run --config <path.yaml> -t <a_single_task_to_be_run_by_name>
uv run nemo-evaluator-launcher run --config <path.yaml> -t <task_name_1> -t <task_name_2> ...
uv run nemo-evaluator-launcher run --config <path.yaml> -o evaluation.nemo_evaluator_config.config.params.limit_samples=10 ...

# Preview the resolved config and the sbatch script without running the evaluation
uv run nemo-evaluator-launcher run --config <path.yaml> --dry-run

# Check status (--json for machine-readable output)
uv run nemo-evaluator-launcher status <invocation_id> --json

# Get evaluation run info (output paths, slurm job IDs, cluster hostname, etc.)
uv run nemo-evaluator-launcher info <invocation_id>

# Copy just the logs (quick — good for debugging)
uv run nemo-evaluator-launcher info <invocation_id> --copy-logs ./evaluation-results/

# For artifacts: use `nel info` to discover paths. If remote, SSH to explore and rsync what you need.
# If local, just read directly from the paths shown by `nel info`.
# ssh <user>@<hostname> "ls <artifacts_path>/"
# rsync -avzP <user>@<hostname>:<artifacts_path>/{results.yml,eval_factory_metrics.json,config.yml} ./evaluation-results/<invocation_id>.<job_index>/artifacts/

# List past runs
uv run nemo-evaluator-launcher ls runs --since 1d

# List available evaluation tasks (by default, only shows tasks from the latest released containers)
uv run nemo-evaluator-launcher ls tasks
uv run nemo-evaluator-launcher ls tasks --from_container gitlab-master.nvidia.com/dl/joc/competitive_evaluation/nvidia-core-evals/ci-llm/long-context-eval:dev-2025-12-16T14-37-1693de28-amd64
```

## Workflow

The complete evaluation workflow is divided into the following steps, which you should follow IN ORDER.

1. Create or modify a config using the `nel-assistant` skill. If the user provides a past run, use its `config.yml` artifact as a starting point.
2. Run the evaluation. See `references/run-evaluation.md` when executing this step.
3. Check progress (while RUNNING). See `references/check-progress.md` when executing this step.
4. Post-run actions (when a terminal state is reached):
   1. When the evaluation status is `SUCCESS`, analyze the results. See `references/analyze-results.md` when executing this step.
   2. When the evaluation status is `FAILED`, debug the failed run. See `references/debug-failed-runs.md` when executing this step.
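Step 3 (checking progress) can be scripted as a simple poll loop. A minimal sketch, assuming the `status --json` output exposes a top-level status string; the `.status` jq path is a placeholder, so verify the real field name against your launcher version's actual JSON schema:

```shell
#!/usr/bin/env bash
# Poll a run until it reaches a terminal state.
# ASSUMPTION: `status --json` contains a status string reachable via `.status`;
# adjust the jq path to match the schema your launcher version emits.
is_terminal() {
  case "$1" in
    SUCCESS|FAILED) return 0 ;;  # terminal states named in this skill
    *) return 1 ;;               # anything else (e.g. RUNNING) keeps polling
  esac
}

poll_run() {
  local invocation_id="$1"
  while :; do
    state=$(uv run nemo-evaluator-launcher status "$invocation_id" --json | jq -r '.status')
    echo "$(date +%H:%M:%S) status: $state"
    if is_terminal "$state"; then break; fi
    sleep 60
  done
}
```

Once `poll_run <invocation_id>` returns, branch on the final state as in step 4: analyze on `SUCCESS`, debug on `FAILED`.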

## Key Facts

- Benchmark-specific info learned during launching/analyzing evals should be added to `references/benchmarks/`
- **PPP** = Slurm account (the `account` field in cluster_config.yaml). When the user says "change PPP to X", update the account value (e.g., `coreai_dlalgo_compeval` → `coreai_dlalgo_llm`).
- **Slurm job pairs**: NEL (nemo-evaluator-launcher) submits paired Slurm jobs — a RUNNING job + a PENDING restart job (for when the 4h walltime expires). Never cancel the pending restart jobs — they are expected and necessary.
- **HF cache requirement**: For configs with `HF_HUB_OFFLINE=1`, models must be pre-downloaded to the HF cache on each cluster before launching. **Before running a model on a new cluster, always ask the user if the model is already cached there.** If not, on the cluster login node: `python3 -m venv hf_cli && source hf_cli/bin/activate && pip install huggingface_hub` then `HF_HOME=/lustre/fsw/portfolios/coreai/users/<username>/cache/huggingface hf download <model>`. Without this, vLLM will fail with `LocalEntryNotFoundError`.
- **`data_parallel_size` is per node**: `dp_size=1` with `num_nodes=8` means 8 model instances total (one per node), load-balanced by haproxy. Do NOT interpret `dp_size` as the global replica count.
- **`payload_modifier` interceptor**: The `params_to_remove` list (e.g. `[max_tokens, max_completion_tokens]`) strips those fields from the outgoing payload, intentionally lifting output length limits so reasoning models can think as long as they need.
- **Auto-export git workaround**: The export container (`python:3.12-slim`) lacks `git`. When installing the launcher from a git URL, set `auto_export.launcher_install_cmd` to install git first (e.g., `apt-get update -qq && apt-get install -qq -y git && pip install "nemo-evaluator-launcher[all] @ git+...#subdirectory=packages/nemo-evaluator-launcher"`).
- **Do NOT use `nemo-evaluator-launcher export --dest local`** — it only writes a summary JSON (`processed_results.json`), it does NOT copy actual logs or artifacts despite accepting `--copy_logs` and `--copy-artifacts` flags. `nel info --copy-artifacts` works but copies everything (very slow for large benchmarks). Preferred approach: use `nel info` to discover paths — if local, read directly; if remote, SSH to explore and rsync only what you need. Note that `nel info` prints standard artifacts but benchmarks produce additional artifacts in subdirs — explore to find them.
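To inspect the Slurm job pairs described above without touching them, a small filter over `squeue` output helps. A sketch, assuming output formatted as `<jobid> <state> <name>` lines (e.g. via `squeue -u $USER -h -o "%i %T %j"`):

```shell
#!/usr/bin/env bash
# List PENDING jobs (the restart jobs you must NOT cancel) from squeue-style
# input lines of the form "<jobid> <state> <name>".
pending_jobs() { awk '$2 == "PENDING" { print $1, $3 }'; }
```

Usage: `squeue -u $USER -h -o "%i %T %j" | pending_jobs` lists each pending restart job so you can confirm it is the expected partner of a RUNNING job rather than a stray submission.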
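The HF cache question above can also be pre-checked on the cluster instead of guessed. A sketch, assuming the standard Hugging Face hub cache layout (`$HF_HOME/hub/models--<org>--<name>/snapshots/<revision>/`):

```shell
#!/usr/bin/env bash
# Return success iff the model has at least one snapshot in the HF hub cache.
# ASSUMPTION: standard hub cache layout under $HF_HOME/hub/, where the model
# directory name is the repo id with "/" replaced by "--".
model_cached() {
  local hf_home="$1" model="$2"
  local dir="$hf_home/hub/models--${model//\//--}/snapshots"
  [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]
}
```

For example, `model_cached "$HF_HOME" <model> || echo "not cached, download first"` on the login node before launching an `HF_HUB_OFFLINE=1` config.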
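What the `payload_modifier` bullet describes can be illustrated outside the launcher. A hypothetical stand-in (not the interceptor's actual code) that drops named params from a JSON payload on stdin:

```shell
#!/usr/bin/env bash
# Hypothetical illustration of params_to_remove: delete the named keys from a
# JSON payload. The real interceptor does the equivalent inside the launcher.
strip_params() {
  python3 -c '
import json, sys
payload = json.load(sys.stdin)
for key in sys.argv[1:]:
    payload.pop(key, None)  # remove if present, ignore if absent
print(json.dumps(payload))
' "$@"
}
```

So with `params_to_remove: [max_tokens, max_completion_tokens]`, a request payload leaves the interceptor with no output-length caps, which is the intended behavior for reasoning models.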