Commit 9f8188d

[1/N] Polish deployment skills - Add a debug loop for unsupported models (#1236)
### What does this PR do?

**Type of change:** Skills update

Add a debug loop guide for deploying unsupported models to the deployment skill. When deploying models not in the validated support matrix (e.g., newly quantized VLMs or models with new architectures like Devstral/ministral3), the inference framework (vLLM, SGLang, TRT-LLM) often fails during model init or weight loading.

This PR adds:

- `references/unsupported-models.md` — a 5-step iterative debug workflow: **run → read error → diagnose → patch framework source → re-run**
- A short pointer in `SKILL.md` under "Unsupported Models" (keeps SKILL.md concise, matching the PTQ skill's pattern)

The guide covers five common error categories with real-world examples:

- **Weight key mismatches** (e.g., [vllm#39406](https://github.com/vllm-project/vllm/pull/39406))
- **Quantized/unquantized layer confusion** (e.g., [sglang#18937](https://github.com/sgl-project/sglang/pull/18937))
- **Missing architecture support** (e.g., `ministral3` not handled in vLLM's `mistral3.py`)
- **Transformers version mismatches**
- **Kernel-level issues** (escalate to framework team)

Motivated by deploying a Devstral-Small-2-24B NVFP4 checkpoint on vLLM, where vLLM's `mistral3.py` didn't handle `ministral3` as a text backbone model type.

### Testing

Validated end-to-end: NVFP4 quantization of Devstral-Small-2-24B → vLLM deployment on B100 GPUs with the debug loop (3 iterations to get the server running).
### Before your PR is "*Ready for review*"

- Is this change backward compatible?: N/A (documentation only)
- If you copied code from any other sources or added a new PIP dependency, did you follow guidance in `CONTRIBUTING.md`?: N/A
- Did you write any new necessary tests?: N/A (skill documentation)
- Did you update [Changelog](https://github.com/NVIDIA/Model-Optimizer/blob/main/CHANGELOG.rst)?: N/A

## Summary by CodeRabbit

* **Documentation**
  * Added a deployment guide for unsupported models with an iterative "run → read error → diagnose → patch → re-run" troubleshooting workflow, common failure categories, escalation criteria, and practical remediation tips.
  * Added post-quantization validation guidance and a lightweight script to verify which layers are quantized vs excluded, plus recommendations for addressing unexpected layers and MoE/VLM naming gaps.

Signed-off-by: Zhiyu Cheng <zhiyuc@nvidia.com>
1 parent d45219b commit 9f8188d

File tree

5 files changed

+166
-0
lines changed


.claude/skills/deployment/SKILL.md

Lines changed: 4 additions & 0 deletions
@@ -222,6 +222,10 @@ For NEL-managed deployment (evaluation with self-deployment), use the evaluation
| `Connection refused` on health check | Server still starting | Wait 30-60s for large models; check logs for errors |
| `modelopt_fp4 not supported` | Framework doesn't support FP4 for this model | Check support matrix in `references/support-matrix.md` |

## Unsupported Models

If the model is not in the validated support matrix (`references/support-matrix.md`), deployment may fail due to weight key mismatches, missing architecture mappings, or quantized/unquantized layer confusion. Read `references/unsupported-models.md` for the iterative debug loop: **run → read error → diagnose → patch framework source → re-run**. For kernel-level issues, escalate to the framework team rather than attempting fixes.

## Success Criteria

1. Server process is running and healthy (`/health` returns 200)
Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
# Deploying Unsupported Models

When deploying a model not in the validated support matrix (`support-matrix.md`), expect failures. This guide covers the iterative debug loop for getting unsupported models running on vLLM, SGLang, or TRT-LLM.

## Step 1 — Run and collect the error

Submit the deployment job. When it fails, read the full log — focus on the **first** error traceback (not "See root cause above" wrappers). Identify the failing file and line number in the framework source.
## Step 2 — Diagnose the root cause

Fetch the framework source at the failing line (use `gh api` for the tagged version, or `find` inside the container). Common error categories:

| Category | Symptoms | Examples |
|----------|----------|----------|
| **Weight key mismatch** | `KeyError`, `Unexpected key`, `Missing key` during weight loading | Checkpoint uses `model.language_model.layers.*` but framework expects `model.layers.*`. See [vllm#39406](https://github.com/vllm-project/vllm/pull/39406) |
| **Quantized/unquantized layer confusion** | Wrong layer type loaded, dtype errors, shape mismatches | Framework tries to load unquantized layers with the FP4 kernel due to overly broad `quantization_config.ignore` patterns or missing ignore entries. See [sglang#18937](https://github.com/sgl-project/sglang/pull/18937) |
| **Missing architecture support** | `NoneType is not iterable`, `KeyError` on model type, unknown architecture | Framework's model handler doesn't recognize the text backbone type (e.g., `ministral3` not handled in vLLM's `mistral3.py` init). Fix: extend the model type mapping |
| **Transformers version mismatch** | `ImportError`, `KeyError` on config fields | Framework ships with an older transformers version that doesn't know the model type. Fix: upgrade transformers after installing the framework |
| **Kernel-level issues** | CUDA errors, `triton` import failures, unsupported ops | Framework lacks kernel support for this model + quantization combo |
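To make the weight-key-mismatch row concrete, a hypothetical remapping helper shows the shape of the fix a framework patch typically makes (the prefixes and function name are illustrative, not taken from vLLM's actual loader):

```python
def remap_weight_keys(state_dict_keys, prefix_map):
    """Map each checkpoint key to the name the framework expects.

    prefix_map: {checkpoint_prefix: framework_prefix}. Only the first
    matching prefix is rewritten; unmatched keys pass through unchanged.
    """
    remapped = {}
    for key in state_dict_keys:
        new_key = key
        for old, new in prefix_map.items():
            if key.startswith(old):
                new_key = new + key[len(old):]
                break
        remapped[key] = new_key
    return remapped
```

For example, a VLM checkpoint that nests the text backbone under `model.language_model.*` can be mapped onto a framework expecting `model.*` with `{"model.language_model.": "model."}`.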
## Step 3 — Apply a targeted fix

Focus on **small, targeted patches** to the framework source. Do not modify `config.json` or the checkpoint — fix the framework's handling instead.

### Weight key mismatches and architecture mapping gaps

Patch the framework source in the run script using `sed` or a Python one-liner. Keep patches minimal — change only what's needed to unblock the current error.

```bash
# Example: extend the model type mapping in vLLM's mistral3.py
FRAMEWORK_FILE=$(find /usr/local/lib -path "*/vllm/model_executor/models/mistral3.py" 2>/dev/null | head -1)
sed -i 's/old_pattern/new_pattern/' "${FRAMEWORK_FILE}"
```
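Where `sed` gets awkward (multi-line or quote-heavy patterns), the Python one-liner route does the same job. A hypothetical sketch of such a patch helper, with a fail-loudly check so a pattern that no longer exists in a newer framework version can't silently no-op:

```python
from pathlib import Path

def patch_source(path, old, new):
    """Replace a literal snippet in a framework source file.

    Raises if the snippet is absent, so a stale patch surfaces
    immediately instead of silently leaving the file unmodified.
    """
    src = Path(path).read_text()
    if old not in src:
        raise RuntimeError(f"pattern not found in {path}: {old!r}")
    Path(path).write_text(src.replace(old, new))
```

The same fail-loudly property is worth adding to `sed`-based patches too (e.g., `grep -q` the pattern first), since `sed` exits 0 even when nothing matched.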
> **Tip**: when locating framework source files inside containers, use `find` instead of Python import — some frameworks print log messages to stdout during import that can corrupt captured paths.

### Speeding up debug iterations (vLLM)

When iterating on fixes, use these flags to shorten the feedback loop:

- **`--load-format dummy`** — skip loading actual model weights. Useful for testing whether the model initializes, the config is parsed correctly, and weight keys match, without waiting for the full checkpoint load.
- **`VLLM_USE_PRECOMPILED=1 pip install --editable .`** — when patching vLLM source directly (instead of `sed`), this rebuilds only Python code without recompiling C++/CUDA extensions.
### Quantized/unquantized layer confusion

Check `hf_quant_config.json` ignore patterns against the framework's weight loading logic. The framework may try to load layers listed in `ignore` with quantized kernels, or vice versa. Fix by adjusting the framework's layer filtering logic.
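A rough sketch of how such a filter behaves. This mirrors the general idea only, not any framework's actual loading code; `classify_layer` is a hypothetical name:

```python
import fnmatch

def classify_layer(name, ignore_patterns):
    """Decide whether a layer should be loaded quantized or not,
    based on glob-style ignore patterns from the quant config.
    Returns (state, matched_patterns)."""
    matched = [p for p in ignore_patterns if fnmatch.fnmatch(name, p)]
    return ("unquantized", matched) if matched else ("quantized", [])
```

The failure mode to look for: an overly broad pattern (say, `*proj*`) sweeps layers that actually have quantized weights into the unquantized path, producing dtype or shape mismatches at load time.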
### Kernel-level issues

These require framework kernel team involvement. Do NOT attempt to patch kernels. Instead:

1. Document the exact error (model, format, framework version, GPU type)
2. Inform the user: *"This model + quantization combination requires kernel support that isn't available in {framework} v{version}. I'd suggest reaching out to the {framework} kernel team or trying a different framework."*
3. Suggest trying an alternative framework (vLLM → SGLang → TRT-LLM)

## Step 4 — Re-run and iterate

After applying a fix, resubmit the job. Each iteration may reveal a new error (e.g., fixing the init error exposes a weight-loading error). Continue the loop: **run → read error → diagnose → patch → re-run**.

Typical iteration count: 1-3 for straightforward fixes, 3-5 for models requiring multiple patches.
## Step 5 — Know when to stop

**Stop patching and escalate** when:

- The error is in compiled CUDA kernels or triton ops (not Python-level)
- The fix requires changes to core framework abstractions (not just model handlers)
- You've done 5+ iterations without the server starting

In these cases, inform the user and suggest: trying a different framework, checking for a newer framework version, or filing an issue with the framework team.

.claude/skills/ptq/SKILL.md

Lines changed: 5 additions & 0 deletions
@@ -113,6 +113,10 @@ ls -lh <output_path>/

Report the path and size to the user.

### Post-quantization validation

Validate that the exported checkpoint's quantization pattern matches the recipe. Quantization config patterns can silently miss layers if the model uses non-standard naming (e.g., Gemma4 `experts.*` missed by `*mlp*` patterns) — this only surfaces later as deployment failures. Read `references/checkpoint-validation.md` for the validation script, expected patterns per recipe, and common pattern gaps.

## Key API Rules

- `mtq.register()` classes **must** define `_setup()` and call it from `__init__`

@@ -137,6 +141,7 @@ Report the path and size to the user.
| `references/launcher-guide.md` | Step 4B only (launcher path) |
| `tools/launcher/CLAUDE.md` | Step 4B only, if you need more launcher detail |
| `references/unsupported-models.md` | Step 4C only (unlisted model) |
| `references/checkpoint-validation.md` | Step 5: validate quantization pattern matches recipe |
| `skills/common/remote-execution.md` | Step 4A/4C only, if target is remote |
| `skills/common/slurm-setup.md` | Step 4A/4C only, if using SLURM manually (not launcher) |
| `references/slurm-setup-ptq.md` | Step 4A/4C only, PTQ-specific SLURM (container, GPU sizing, FSDP2) |
Lines changed: 86 additions & 0 deletions
@@ -0,0 +1,86 @@
# Post-Quantization Checkpoint Validation

Verify that the exported checkpoint's quantization pattern matches the recipe used. Quantization config patterns may silently miss layers if the model uses non-standard naming — this only surfaces later as deployment failures, when the serving framework tries to load unquantized weights as quantized.

## Expected quantization patterns by recipe

| Recipe (`--qformat`) | What should be quantized | What should be excluded |
|----------------------|--------------------------|-------------------------|
| `nvfp4` | All linear layers | lm_head, routers, norms, embeddings |
| `nvfp4_mlp_only` | MLP layers (including MoE experts) | Attention layers, lm_head, routers |
| `nvfp4_experts_only` | MoE expert layers only | Dense MLP, attention, lm_head, routers |
| `nvfp4_omlp_only` | MLP + o_proj layers | Other attention layers, lm_head, routers |
| `fp8` | All linear layers | lm_head, norms, embeddings |
| `int4_awq` | All linear layers | lm_head, norms, embeddings |
## Validation script

Run against the exported checkpoint to check that every linear layer is either quantized (has scale params) or explicitly excluded:

```bash
python3 -c "
import json, fnmatch

output = '<output_path>'
idx = json.load(open(f'{output}/model.safetensors.index.json'))
cfg = json.load(open(f'{output}/hf_quant_config.json'))
excludes = cfg['quantization']['exclude_modules']

all_keys = set(idx['weight_map'].keys())
# Identify linear weight params (skip norms, embeddings, scalars, scales)
skip_suffixes = ('_scale', '_scale_2', 'layernorm', 'layer_norm', 'norm.weight', 'embed', 'scalar')
linear_weights = sorted(k for k in all_keys
                        if k.endswith('.weight') and not any(s in k.lower() for s in skip_suffixes))

# Check which have quantization scales
quantized, excluded, unexpected = [], [], []
for w in linear_weights:
    base = w.rsplit('.weight', 1)[0]
    has_scales = any(f'{base}.{s}' in all_keys for s in ['weight_scale', 'input_scale'])
    is_excluded = any(fnmatch.fnmatch(w, p) or fnmatch.fnmatch(base, p) for p in excludes)

    if has_scales:
        quantized.append(w)
    elif is_excluded:
        excluded.append(w)
    else:
        unexpected.append(w)

print(f'Quantized layers: {len(quantized)}')
print(f'Excluded layers (in exclude_modules): {len(excluded)}')
if unexpected:
    print(f'\nWARNING: {len(unexpected)} layers have NO scales and are NOT in exclude list:')
    # Group by module type for readability
    groups = {}
    for w in unexpected:
        parts = w.split('.')
        module_type = next((p for p in parts if p in
                            ('self_attn', 'mlp', 'experts', 'router', 'lm_head', 'embed_tokens', 'vision_tower')), 'other')
        groups.setdefault(module_type, []).append(w)
    for mtype, weights in sorted(groups.items()):
        print(f'  {mtype}: {len(weights)} weights (e.g., {weights[0]})')
    print()
    print('These layers were silently skipped during quantization.')
    print('Likely cause: quantization config patterns did not match these module names.')
    print('This WILL cause deployment failures (framework loads them as quantized but they are BF16).')
    print('Fix: add missing patterns to the config, or add to exclude_modules if intentionally unquantized.')
else:
    print('\nAll layers are either quantized or explicitly excluded. Checkpoint is consistent.')
"
```
## Common pattern gaps

Layers are silently skipped when the quantization config patterns don't match the model's naming:

| Model | Module path | Missed by pattern | Fix |
|-------|-------------|-------------------|-----|
| Gemma4 MoE | `layers.N.experts.*` | `*mlp*`, `*block_sparse_moe*` | Add `*.experts.*` (PR #1219) |
| Custom MoE | `layers.N.moe_block.experts.*` | `*mlp*` | Add matching pattern |
| VLM projector | `multi_modal_projector.*` | N/A | Usually excluded; verify |
## What to do when warnings appear

- **Layers should have been quantized** (e.g., MoE experts with `nvfp4_mlp_only`): the quantization config patterns missed them. Fix by adding the missing pattern to the config and re-running PTQ. Check whether ModelOpt already has a plugin for the model in `modelopt/torch/quantization/plugins/huggingface.py`.

- **Layers are intentionally unquantized** (e.g., attention layers with `nvfp4_mlp_only`): they should be in the `exclude_modules` list, but the export didn't add them. Add them manually to both `hf_quant_config.json` and the `quantization_config.ignore` list in the checkpoint's `config.json` to prevent deployment failures.
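A hypothetical helper for that manual edit, assuming the config layouts read by the validation script (`add_excludes` is an illustrative name, not a ModelOpt API):

```python
import json
from pathlib import Path

def add_excludes(checkpoint_dir, layers):
    """Append intentionally-unquantized layers to both config files
    so serving frameworks skip them during quantized weight loading."""
    ckpt = Path(checkpoint_dir)

    # hf_quant_config.json: quantization.exclude_modules
    hf_quant = json.loads((ckpt / "hf_quant_config.json").read_text())
    excl = hf_quant["quantization"].setdefault("exclude_modules", [])
    excl.extend(l for l in layers if l not in excl)
    (ckpt / "hf_quant_config.json").write_text(json.dumps(hf_quant, indent=2))

    # config.json: quantization_config.ignore
    cfg = json.loads((ckpt / "config.json").read_text())
    ignore = cfg.setdefault("quantization_config", {}).setdefault("ignore", [])
    ignore.extend(l for l in layers if l not in ignore)
    (ckpt / "config.json").write_text(json.dumps(cfg, indent=2))
```

Keeping the two files in sync matters because different frameworks read different ones; a layer excluded in only one file can still trigger a load failure in the other path.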

.claude/skills/ptq/references/unsupported-models.md

Lines changed: 1 addition & 0 deletions
@@ -347,4 +347,5 @@ tokenizer.save_pretrained(output_path)
- **Check quantizer summary**: `mtq.print_quant_summary(model)` shows which quantizers are enabled/disabled
- **Inspect dtypes**: After loading, iterate `model.named_parameters()` and check for unexpected FP8 tensors
- **Watch for silent disabling**: A misconfigured wildcard pattern can silently disable quantizers — always verify the summary
- **Validate quantization pattern after export**: Run the validation script from SKILL.md Step 5 on the exported checkpoint. It checks that every linear layer is either quantized (has scale params) or explicitly excluded. Layers that are neither were silently skipped — common for models with non-standard naming (e.g., Gemma4 `experts.*` missed by `*mlp*` patterns). This causes deployment failures when the framework tries to load BF16 weights as quantized
- **Read pip errors carefully**: `ResolutionImpossible` means dependency conflict (try `--no-deps`), NOT network failure. Check for `Connection refused`/`Name resolution failed` before concluding network is down
