
Commit f746623

Merge branch 'main' into ajarasane/resnet_support
2 parents: 84b4ec8 + 92622a9

395 files changed

Lines changed: 39342 additions & 1567 deletions


.claude/skills/common/slurm-setup.md

Lines changed: 125 additions & 0 deletions
@@ -192,3 +192,128 @@ chmod -R g+rwX /path/to/.hf_cache/
Scope `chmod` to only the directories the job needs — avoid world-writable paths on shared clusters.
---

## 6. Container Registry Authentication

**Before submitting any SLURM job that pulls a container image**, check that the cluster has credentials for the image's registry. Missing auth causes jobs to fail after waiting in the queue — a costly mistake.

### Step 1: Detect the container runtime

Different clusters use different container runtimes. Detect which is available:

```bash
# On the cluster (or via ssh):
which enroot 2>/dev/null && echo "RUNTIME=enroot"
which docker 2>/dev/null && echo "RUNTIME=docker"
```

| Runtime | Typical clusters | SLURM integration |
| --- | --- | --- |
| **enroot/pyxis** | NVIDIA internal (DGX Cloud, EOS, Selene, GCP-NRT) | `srun --container-image` |
| **Docker** | Bare-metal / on-prem with GPU | `docker run` inside job script |
### Step 2: Check credentials for the image's registry

Determine the registry from the image URI:

| Image pattern | Registry |
| --- | --- |
| `nvcr.io/nvidia/...` | NGC |
| `vllm/vllm-openai:...`, `lmsysorg/sglang:...`, or no registry prefix | DockerHub |
| `ghcr.io/...` | GitHub Container Registry |
| `docker.io/...` | DockerHub (explicit) |
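The mapping in the table above can be scripted. A minimal sketch (the helper name is hypothetical), using the Docker convention that a first path component containing a dot or colon is a registry host:

```bash
# Hypothetical helper mapping an image URI to its registry, per the table above.
registry_for() {
  case "$1" in
    */*) first=${1%%/*} ;;
    *)   echo "DockerHub"; return ;;     # bare image like ubuntu:22.04
  esac
  case "$first" in
    nvcr.io)   echo "NGC" ;;
    ghcr.io)   echo "GitHub Container Registry" ;;
    docker.io) echo "DockerHub" ;;
    *.*|*:*)   echo "other ($first)" ;;  # some other registry host
    *)         echo "DockerHub" ;;       # no registry prefix
  esac
}

registry_for "nvcr.io/nvidia/vllm"        # -> NGC
registry_for "vllm/vllm-openai:latest"    # -> DockerHub
```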
Then check credentials based on the runtime:

#### enroot/pyxis

```bash
grep -E '^\s*machine\s+' ~/.config/enroot/.credentials 2>/dev/null
```

Look for `machine <registry>` lines:

- NGC → `machine nvcr.io`
- DockerHub → `machine auth.docker.io`
- GHCR → `machine ghcr.io`
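To check for one specific registry rather than eyeballing the list, the same grep can be wrapped in a small helper (the function name and optional file argument are hypothetical, for illustration):

```bash
# Hypothetical helper: does the enroot credentials file have a "machine"
# entry for a given registry? (File argument defaults to the real path.)
has_enroot_auth() {  # usage: has_enroot_auth <registry> [credentials-file]
  grep -qE "^[[:space:]]*machine[[:space:]]+$1" \
    "${2:-$HOME/.config/enroot/.credentials}" 2>/dev/null
}

has_enroot_auth nvcr.io && echo "NGC auth present" || echo "NGC auth MISSING"
```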
#### Docker

```bash
cat ~/.docker/config.json 2>/dev/null | python3 -c "import json,sys; print('\n'.join(json.load(sys.stdin).get('auths', {}).keys()))"
```

Look for registry keys (`https://index.docker.io/v1/`, `nvcr.io`, `ghcr.io`).
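The same check can be scripted for Docker. A sketch (the helper name is hypothetical; it uses a substring match because DockerHub's auth key is the legacy URL `https://index.docker.io/v1/`):

```bash
# Hypothetical helper: check ~/.docker/config.json (or a given file) for an
# auth entry matching a registry. Exits nonzero when no entry matches.
has_docker_auth() {  # usage: has_docker_auth <registry> [config-file]
  python3 -c '
import json, sys
try:
    auths = json.load(open(sys.argv[1])).get("auths", {})
except Exception:
    sys.exit(1)
sys.exit(0 if any(sys.argv[2] in key for key in auths) else 1)
' "${2:-$HOME/.docker/config.json}" "$1"
}

has_docker_auth nvcr.io && echo "NGC auth present" || echo "NGC auth MISSING"
```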
### Step 3: If credentials are missing

**Do not submit the job.** Instead:

1. Tell the user which registry and runtime need authentication
2. Show the fix for their runtime:

**enroot/pyxis:**

```bash
mkdir -p ~/.config/enroot

# DockerHub (get token from https://hub.docker.com/settings/security)
cat >> ~/.config/enroot/.credentials << 'EOF'
machine auth.docker.io
login <dockerhub_username>
password <access_token>
EOF

# NGC (get API key from https://org.ngc.nvidia.com/setup/api-keys)
cat >> ~/.config/enroot/.credentials << 'EOF'
machine nvcr.io
login $oauthtoken
password <ngc_api_key>
EOF
```
**Docker:**

```bash
# DockerHub (interactive prompt)
docker login

# NGC (use --password-stdin to avoid exposing secrets in process list)
echo "$NGC_API_KEY" | docker login nvcr.io -u '$oauthtoken' --password-stdin
```
3. **Suggest an alternative image** on an authenticated registry. NVIDIA clusters typically have NGC auth pre-configured, so prefer NGC-hosted images:

| DockerHub image | NGC alternative |
| --- | --- |
| `vllm/vllm-openai:latest` | `nvcr.io/nvidia/vllm:<YY.MM>-py3` (check [NGC catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/vllm) for latest tag) |
| `nvcr.io/nvidia/tensorrt-llm/release:<tag>` | Already NGC |

> **Note:** NGC image tags follow the `YY.MM-py3` format (e.g., `26.03-py3`). Not all DockerHub images have NGC equivalents. If no NGC alternative exists and DockerHub auth is missing, the user must add DockerHub credentials or pre-cache the image as a `.sqsh` file.
4. After the user fixes auth or switches images, verify the image is **actually pullable** before submitting (credentials alone don't guarantee the image exists):

```bash
# enroot — test pull (aborts after manifest fetch)
enroot import --output /dev/null docker://<registry>#<image> 2>&1 | head -10
# Success: shows "Fetching image manifest" + layer info
# Failure: shows "401 Unauthorized" or "404 Not Found"

# docker
docker manifest inspect <image> 2>&1 | head -5

# singularity
singularity pull --dry-run docker://<image> 2>&1 | head -5
```

> **Important**: Credentials existing for a registry does NOT mean a specific image is accessible. The image may not exist, or the credentials may lack permissions for that repository. Always verify the specific image before submitting.
### Common failure modes

| Symptom | Runtime | Cause | Fix |
| --- | --- | --- | --- |
| `curl: (22) ... error: 401` | enroot | No credentials for registry | Add to `~/.config/enroot/.credentials` |
| `pyxis: failed to import docker image` | enroot | Auth failed or rate limit | Check credentials; DockerHub free: 100 pulls/6h per IP |
| `unauthorized: authentication required` | docker | No `docker login` | Run `docker login [registry]` |
| Image pulls on some nodes but not others | any | Cached on one node only | Pre-cache image or ensure auth on all nodes |

.claude/skills/debug/SKILL.md

Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@

---
name: debug
description: Run commands inside a remote Docker container via the file-based command relay (tools/debugger). Use when the user says "run in Docker", "run on GPU", "debug remotely", "run test in container", "check nvidia-smi", "run pytest in Docker", or needs to execute any command inside a Docker container that shares the repo filesystem. Requires the user to have started server.sh inside the container first.
---

# Remote Docker Debugger

Execute commands inside a Docker container from the host using the file-based command relay.

**Read `tools/debugger/CLAUDE.md` for full usage details** — it has the protocol and examples.

## Quick Reference

```bash
# Check connection
bash tools/debugger/client.sh status

# Connect to server (user must start server.sh in Docker first)
bash tools/debugger/client.sh handshake

# Run a command
bash tools/debugger/client.sh run "<command>"

# Long-running command (default timeout is 600s)
bash tools/debugger/client.sh --timeout 1800 run "<command>"

# Cancel the currently running command
bash tools/debugger/client.sh cancel

# Reconnect after server restart
bash tools/debugger/client.sh flush
bash tools/debugger/client.sh handshake
```

.claude/skills/deployment/SKILL.md

Lines changed: 6 additions & 0 deletions
@@ -174,6 +174,8 @@ All checks must pass before reporting success to the user.

If a cluster config exists (`~/.config/modelopt/clusters.yaml` or `.claude/clusters.yaml`), or the user mentions running on a remote machine:

0. **Check container registry auth** — before submitting any SLURM job with a container image, verify credentials exist on the cluster per `skills/common/slurm-setup.md` section 6. If credentials are missing for the image's registry, ask the user to fix auth or switch to an image on an authenticated registry (e.g., NGC). **Do not submit until auth is confirmed.**

1. **Source remote utilities:**
@@ -222,6 +224,10 @@ For NEL-managed deployment (evaluation with self-deployment), use the evaluation

| `Connection refused` on health check | Server still starting | Wait 30-60s for large models; check logs for errors |
| `modelopt_fp4 not supported` | Framework doesn't support FP4 for this model | Check support matrix in `references/support-matrix.md` |

## Unsupported Models

If the model is not in the validated support matrix (`references/support-matrix.md`), deployment may fail due to weight key mismatches, missing architecture mappings, or quantized/unquantized layer confusion. Read `references/unsupported-models.md` for the iterative debug loop: **run → read error → diagnose → patch framework source → re-run**. For kernel-level issues, escalate to the framework team rather than attempting fixes.

## Success Criteria

1. Server process is running and healthy (`/health` returns 200)
Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
# Deploying Unsupported Models

When deploying a model not in the validated support matrix (`support-matrix.md`), expect failures. This guide covers the iterative debug loop for getting unsupported models running on vLLM, SGLang, or TRT-LLM.

## Step 1 — Run and collect the error

Submit the deployment job. When it fails, read the full log — focus on the **first** error traceback (not "See root cause above" wrappers). Identify the file and line number in the framework source.
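One way to surface that first traceback quickly (the log filename is a placeholder for your scheduler's log file):

```bash
# Print the log from the FIRST Python traceback onward, capped at 40 lines.
# (job.log is a placeholder; errors are silenced so a missing file is harmless.)
awk '/Traceback \(most recent call last\)/{p=1} p' job.log 2>/dev/null | head -40
```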
## Step 2 — Diagnose the root cause

Fetch the framework source at the failing line (use `gh api` for the tagged version, or `find` inside the container). Common error categories:

| Category | Symptoms | Examples |
|----------|----------|----------|
| **Weight key mismatch** | `KeyError`, `Unexpected key`, `Missing key` during weight loading | Checkpoint uses `model.language_model.layers.*` but framework expects `model.layers.*`. See [vllm#39406](https://github.com/vllm-project/vllm/pull/39406) |
| **Quantized/unquantized layer confusion** | Wrong layer type loaded, dtype errors, shape mismatches | Framework tries to load unquantized layers with FP4 kernel due to overly broad `quantization_config.ignore` patterns or missing ignore entries. See [sglang#18937](https://github.com/sgl-project/sglang/pull/18937) |
| **Missing architecture support** | `NoneType is not iterable`, `KeyError` on model type, unknown architecture | Framework's model handler doesn't recognize the text backbone type (e.g., `ministral3` not handled in vLLM's `mistral3.py` init). Fix: extend the model type mapping |
| **Transformers version mismatch** | `ImportError`, `KeyError` on config fields | Framework ships with older transformers that doesn't know the model type. Fix: upgrade transformers after installing the framework |
| **Kernel-level issues** | CUDA errors, `triton` import failures, unsupported ops | Framework lacks kernel support for this model + quantization combo |
## Step 3 — Apply a targeted fix

Focus on **small, targeted patches** to the framework source. Do not modify `config.json` or the checkpoint — fix the framework's handling instead.

### Weight key mismatches and architecture mapping gaps

Patch the framework source in the run script using `sed` or a Python one-liner. Keep patches minimal — change only what's needed to unblock the current error.

```bash
# Example: extend model type mapping in vLLM mistral3.py
FRAMEWORK_FILE=$(find /usr/local/lib -path "*/vllm/model_executor/models/mistral3.py" 2>/dev/null | head -1)
sed -i 's/old_pattern/new_pattern/' "${FRAMEWORK_FILE}"
```

> **Tip**: when locating framework source files inside containers, use `find` instead of Python import — some frameworks print log messages to stdout during import that can corrupt captured paths.
### Speeding up debug iterations (vLLM)

When iterating on fixes, use these flags to shorten the feedback loop:

- **`--load-format dummy`** — skip loading actual model weights. Useful for testing whether the model initializes, config is parsed correctly, and weight keys match without waiting for the full checkpoint load.
- **`VLLM_USE_PRECOMPILED=1 pip install --editable .`** — when patching vLLM source directly (instead of `sed`), this rebuilds only Python code without recompiling C++/CUDA extensions.
### Quantized/unquantized layer confusion

Check `hf_quant_config.json` ignore patterns against the framework's weight loading logic. The framework may try to load layers listed in `ignore` with quantized kernels, or vice versa. Fix by adjusting the framework's layer filtering logic.

### Kernel-level issues

These require framework kernel team involvement. Do NOT attempt to patch kernels. Instead:

1. Document the exact error (model, format, framework version, GPU type)
2. Inform the user: *"This model + quantization combination requires kernel support that isn't available in {framework} v{version}. I'd suggest reaching out to the {framework} kernel team or trying a different framework."*
3. Suggest trying an alternative framework (vLLM → SGLang → TRT-LLM)

## Step 4 — Re-run and iterate

After applying a fix, resubmit the job. Each iteration may reveal a new error (e.g., fixing the init error exposes a weight loading error). Continue the loop: **run → read error → diagnose → patch → re-run**.

Typical iteration count: 1-3 for straightforward fixes, 3-5 for models requiring multiple patches.

## Step 5 — Know when to stop

**Stop patching and escalate** when:

- The error is in compiled CUDA kernels or triton ops (not Python-level)
- The fix requires changes to core framework abstractions (not just model handlers)
- You've done 5+ iterations without the server starting

In these cases, inform the user and suggest: trying a different framework, checking for a newer framework version, or filing an issue with the framework team.

.claude/skills/evaluation/SKILL.md

Lines changed: 34 additions & 2 deletions
@@ -28,6 +28,7 @@ Config Generation Progress:
- [ ] Step 5: Confirm tasks (iterative)
- [ ] Step 6: Advanced - Multi-node (Data Parallel)
- [ ] Step 7: Advanced - Interceptors
- [ ] Step 7.5: Check container registry auth (SLURM only)
- [ ] Step 8: Run the evaluation

@@ -74,9 +75,9 @@ Prompt the user with "I'll ask you 5 questions to build the base config we'll ad

4. Safety & Security (like Garak and Safety Harness)
5. Multilingual (like MMATH, Global MMLU, MMLU-Prox)

-DON'T ALLOW FOR ANY OTHER OPTIONS, only the ones listed above under each category (Execution, Deployment, Auto-export, Model type, Benchmarks). YOU HAVE TO GATHER THE ANSWERS for the 5 questions before you can build the base config.
+Only accept options from the categories listed above (Execution, Deployment, Auto-export, Model type, Benchmarks). YOU HAVE TO GATHER THE ANSWERS for the 5 questions before you can build the base config.

-> **Note:** These categories come from NEL's `build-config` CLI. If `nel skills build-config --help` shows different options than listed above, use the CLI's current options instead.
+> **Note:** These categories come from NEL's `build-config` CLI. **Always run `nel skills build-config --help` first** to get the current options — they may differ from this list (e.g., `chat_reasoning` instead of separate `chat`/`reasoning`, `general_knowledge` instead of `standard`). When the CLI's current options differ from this list, prefer the CLI's options.

When you have all the answers, run the script to build the base config:
@@ -181,6 +182,36 @@ If the user needs multi-node evaluation (model >120B, or more throughput), read

- The docs may show incorrect parameter names for logging. Use `max_logged_requests` and `max_logged_responses` (NOT `max_saved_*` or `max_*`).

**Step 7.5: Check container registry authentication (SLURM only)**

NEL's default deployment images by framework:

| Framework | Default image | Registry |
| --- | --- | --- |
| vLLM | `vllm/vllm-openai:latest` | DockerHub |
| SGLang | `lmsysorg/sglang:latest` | DockerHub |
| TRT-LLM | `nvcr.io/nvidia/tensorrt-llm/release:...` | NGC |
| Evaluation tasks | `nvcr.io/nvidia/eval-factory/*:26.03` | NGC |

Before submitting, verify the cluster has credentials for the deployment image. See `skills/common/slurm-setup.md` section 6 for the full procedure.

```bash
ssh <host> "grep -E '^\s*machine\s+' ~/.config/enroot/.credentials 2>/dev/null"
```

**Decision flow (check before submitting):**

1. Check if the cluster has credentials for the default DockerHub image (see command above)
2. If DockerHub credentials exist → use the default image and submit
3. If DockerHub credentials are missing but can be added → add them (see `slurm-setup.md` section 6), then submit
4. If DockerHub credentials cannot be added → override `deployment.image` to the NGC alternative and submit:

   ```yaml
   deployment:
     image: nvcr.io/nvidia/vllm:<YY.MM>-py3  # check https://catalog.ngc.nvidia.com/orgs/nvidia/containers/vllm for latest tag
   ```

5. **Do not retry more than once** without fixing the auth issue
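The decision flow above can be sketched as a small helper. This is a hypothetical illustration (function name and structure are not part of NEL): given the contents of the cluster's enroot credentials file, it prints which image to use, with the NGC tag left as a deliberate placeholder:

```bash
# Hypothetical sketch of steps 1-4: pick the default DockerHub image when
# DockerHub auth exists in the credentials text, else the NGC fallback.
pick_vllm_image() {  # usage: pick_vllm_image <credentials-text>
  if printf '%s\n' "$1" | grep -qE '^[[:space:]]*machine[[:space:]]+auth\.docker\.io'; then
    echo "vllm/vllm-openai:latest"          # DockerHub auth exists -> default image
  else
    echo "nvcr.io/nvidia/vllm:<YY.MM>-py3"  # no DockerHub auth -> NGC alternative
  fi
}

# e.g. creds=$(ssh "$CLUSTER" "cat ~/.config/enroot/.credentials 2>/dev/null")
pick_vllm_image "machine auth.docker.io"   # -> vllm/vllm-openai:latest
```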
**Step 8: Run the evaluation**

Print the following commands to the user. Propose to execute them in order to confirm the config works as expected before the full run.
@@ -303,5 +334,6 @@ Config Generation Progress:
- [ ] Step 5: Confirm tasks (iterative)
- [ ] Step 6: Advanced - Multi-node (Data Parallel)
- [ ] Step 7: Advanced - Interceptors
- [ ] Step 7.5: Check container registry auth (SLURM only)
- [ ] Step 8: Run the evaluation

.claude/skills/ptq/SKILL.md

Lines changed: 24 additions & 0 deletions
@@ -24,6 +24,24 @@ Check the support table in `examples/llm_ptq/README.md` for verified HF models.

- **Listed** → supported, use `hf_ptq.py` (step 4A/4B)
- **Not listed** → read `references/unsupported-models.md` to determine if `hf_ptq.py` can still work or if a custom script is needed (step 4C)

## Step 2.5 — Check for model-specific dependencies

If the model uses `trust_remote_code` (check `config.json` for `auto_map`), inspect its custom Python files for imports not present in the container:

```bash
grep -h "^from \|^import " <model_path>/modeling_*.py | sort -u
```

**Known dependency patterns:**

| Import found | Packages to install |
| --- | --- |
| `from mamba_ssm` / `from causal_conv1d` | `mamba-ssm causal-conv1d` (Mamba/hybrid models: NemotronH, Jamba) |

If extra deps are needed:

- **Launcher (4B)**: set `EXTRA_PIP_DEPS` in the task's `environment` section — `ptq.sh` installs them automatically
- **Manual (4A)**: `unset PIP_CONSTRAINT && pip install <deps>` before running `hf_ptq.py`
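For the launcher path, that can look like the fragment below. This is illustrative only: the `EXTRA_PIP_DEPS` key under `environment` comes from the text above, but the surrounding structure depends on your launcher config (see `references/launcher-guide.md`):

```yaml
# Illustrative launcher task fragment — adapt to your actual task schema.
environment:
  EXTRA_PIP_DEPS: "mamba-ssm causal-conv1d"
```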
## Step 3 — Choose quantization format

**First**, check for a model-specific recipe:
@@ -113,6 +131,10 @@ ls -lh <output_path>/
Report the path and size to the user.

### Post-quantization validation

Validate that the exported checkpoint's quantization pattern matches the recipe. Quantization config patterns can silently miss layers if the model uses non-standard naming (e.g., Gemma4 `experts.*` missed by `*mlp*` patterns) — this only surfaces later as deployment failures. Read `references/checkpoint-validation.md` for the validation script, expected patterns per recipe, and common pattern gaps.
## Key API Rules
- `mtq.register()` classes **must** define `_setup()` and call it from `__init__`
@@ -124,6 +146,7 @@ Report the path and size to the user.
## Common Pitfalls

- **Model-specific dependencies**: Models with `trust_remote_code` may import packages not in the container (e.g., `mamba-ssm` for hybrid Mamba models). See Step 2.5. Use the `EXTRA_PIP_DEPS` env var with the launcher, or install manually before running `hf_ptq.py`
- **Transformers version**: New models may need a newer version of transformers than what's installed. Check `config.json` for `transformers_version`. In containers, beware of `PIP_CONSTRAINT` blocking upgrades — see `references/slurm-setup-ptq.md` for workarounds
- **Gated datasets**: Some calibration datasets require HF authentication. Ensure `HF_TOKEN` is set in the job environment, or use `--dataset cnn_dailymail` as a non-gated alternative
- **NFS root_squash + Docker**: See `skills/common/slurm-setup.md` section 5
@@ -137,6 +160,7 @@ Report the path and size to the user.
| `references/launcher-guide.md` | Step 4B only (launcher path) |
| `tools/launcher/CLAUDE.md` | Step 4B only, if you need more launcher detail |
| `references/unsupported-models.md` | Step 4C only (unlisted model) |
| `references/checkpoint-validation.md` | Step 5: validate quantization pattern matches recipe |
| `skills/common/remote-execution.md` | Step 4A/4C only, if target is remote |
| `skills/common/slurm-setup.md` | Step 4A/4C only, if using SLURM manually (not launcher) |
| `references/slurm-setup-ptq.md` | Step 4A/4C only, PTQ-specific SLURM (container, GPU sizing, FSDP2) |
