
Commit d94e64e

Refactor llm_qat: decouple dataset blend size from training runtime params
Dataset configs are now self-contained with blend_size and splits ratios. train_samples/eval_samples in train configs become runtime caps that don't invalidate the dataset cache. quantize.py only needs --dataset_config and --recipe instead of a full train config. Convert the int4_blockwise_weight_only recipe to the new list-of-dicts quant_cfg format.

Route simple_qat_train through the blend dataset:

- simple_qat_train.py: drop the inline Daring-Anteater loader; load configs/dataset/blend.yaml via build_blend_dataset and use the "eval" split for calibration
- configs/dataset/blend.yaml: 100K -> 20K samples, 90/5/5 splits
- dataset_utils.py: widen the _stream_samples try/except to skip sources that fail (e.g. pyarrow schema errors); simplify chat-template fallback detection

Address PR review:

- Fix recipe.ptq_cfg -> recipe.quantize in simple_qat_train.py, README, and transformers_trainer.py (ModelOptPTQRecipe has no ptq_cfg attr)
- Add a --config bounds check in ModelOptArgParser
- Narrow broad exception handlers in dataset_utils.py
- Fix the <|im_end|> assistant mask to exclude the boundary token
- Warn when --recipe is passed to train.py (it is ignored there; use quantize.py)
- Hyphenate "right-padded" in arguments.py help text
- Move HF TrainingArguments default overrides from arguments.py to the YAML configs; drop redundant eval_accumulation_steps and learning_rate
- Rename .yml -> .yaml for the normalize-yaml-ext hook
- Update ARGUMENTS.md help text to say "additional arguments"

Signed-off-by: realAsma <akuriparambi@nvidia.com>
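The commit message above says dataset configs now carry their own `blend_size` and split ratios (20K samples, 90/5/5 after this change). A minimal sketch of what such a self-contained `configs/dataset/blend.yaml` could look like; the exact field names other than `blend_size` and the split ratios are illustrative assumptions, not taken from the repository:

```yaml
# Illustrative sketch only. Field names besides blend_size and the split
# ratios are assumptions; see configs/dataset/blend.yaml for the real schema.
blend_size: 20000        # total samples drawn across all sources (was 100K)
splits:                  # 90/5/5 train/val/eval ratios from the commit message
  train: 0.90
  val: 0.05
  eval: 0.05
sources:                 # hypothetical weighted multi-source blend entry
  - name: nvidia/Daring-Anteater
    weight: 1.0
```

Because the blend definition lives entirely in this file, runtime caps such as `--train_samples` can change without invalidating the tokenized dataset cache.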
1 parent 4e33368 commit d94e64e


41 files changed (+2751, −1010 lines)

.pre-commit-config.yaml

Lines changed: 16 additions & 1 deletion
```diff
@@ -108,7 +108,7 @@ repos:
           examples/llm_eval/lm_eval_hf.py|
           examples/llm_eval/mmlu.py|
           examples/llm_eval/modeling.py|
-          examples/llm_qat/main.py|
+          examples/llm_qat/train.py|
           examples/llm_sparsity/weight_sparsity/finetune.py|
           examples/specdec_bench/specdec_bench/models/specbench_medusa.py|
           examples/speculative_decoding/main.py|
@@ -136,6 +136,21 @@ repos:
         args: ["-c", "pyproject.toml", "-q"]
         additional_dependencies: ["bandit[toml]"]
 
+  - repo: local
+    hooks:
+      - id: generate-arguments-md
+        name: Regenerate examples/llm_qat/ARGUMENTS.md
+        entry: bash -c 'python examples/llm_qat/train.py --generate_docs examples/llm_qat/ARGUMENTS.md'
+        language: system
+        files: >-
+          (?x)^(
+            examples/llm_qat/arguments\.py|
+            examples/llm_qat/train\.py|
+            modelopt/torch/opt/plugins/transformers\.py|
+            modelopt/torch/quantization/plugins/transformers_trainer\.py
+          )$
+        pass_filenames: false
+
  - repo: https://github.com/DavidAnson/markdownlint-cli2
    rev: v0.18.1
    hooks:
```

CHANGELOG.rst

Lines changed: 3 additions & 0 deletions
```diff
@@ -15,6 +15,9 @@ Changelog
 - Enable PTQ workflow for the Step3.5-Flash MoE model with NVFP4 W4A4 + FP8 KV cache quantization. See `modelopt_recipes/models/Step3.5-Flash/nvfp4-mlp-only.yaml <https://github.com/NVIDIA/Model-Optimizer/blob/main/modelopt_recipes/models/Step3.5-Flash/nvfp4-mlp-only.yaml>`_ for more details.
 - Add support for vLLM fakequant reload using ModelOpt state for HF models. See `examples/vllm_serve/README.md <https://github.com/NVIDIA/Model-Optimizer/tree/main/examples/vllm_serve#load-qatptq-model-and-serve-in-vllm-wip>`_ for more details.
 - [Early Testing] Add Claude Code PTQ skill (``.claude/skills/ptq/``) for agent-assisted post-training quantization. The skill guides the agent through environment detection, model support checking, format selection, and execution via the launcher or manual SLURM/Docker/bare GPU paths. Includes handling for unlisted models with custom module patching. This feature is in early testing — use with caution.
+- Refactor ``llm_qat`` example with unified YAML-based configuration and flexible dataset blending.
+  ``ModelOptArgParser`` adds ``--config`` YAML support with CLI overrides and auto-generates ``ARGUMENTS.md`` from dataclass definitions.
+  Dataset blending (``configs/dataset/blend.yaml``) supports HuggingFace datasets, local JSON/JSONL/Parquet files, and weighted multi-source blends.
 
 **Backward Breaking Changes**
 
```
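The changelog entry above describes `--config` YAML support with CLI overrides. The precedence it implies (dataclass defaults, then YAML values, then explicit CLI flags) can be sketched generically; this is not the actual `ModelOptArgParser` implementation, just an illustration of the merge order:

```python
# Sketch of "defaults < YAML config < explicit CLI flags" precedence.
# NOT the real ModelOptArgParser; function and variable names are illustrative.
import argparse


def parse_with_config(argv, defaults):
    """Merge default values, an optional YAML config, and CLI overrides."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", default=None)
    for key, val in defaults.items():
        # default=None lets us distinguish "flag given" from "flag omitted"
        parser.add_argument(f"--{key}", type=type(val), default=None)
    args = parser.parse_args(argv)

    merged = dict(defaults)
    if args.config is not None:
        import yaml  # pyyaml; imported lazily so the sketch runs without it

        with open(args.config) as f:
            merged.update(yaml.safe_load(f) or {})
    for key in defaults:
        cli_val = getattr(args, key)
        if cli_val is not None:  # an explicit CLI flag wins over the YAML value
            merged[key] = cli_val
    return merged
```

For example, `parse_with_config(["--train_samples", "100"], {"train_samples": 20000, "calib_size": 512})` keeps `calib_size` at its default while the CLI value overrides `train_samples`.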

examples/llm_qad/README.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -2,6 +2,8 @@
 
 Quantization-Aware Distillation (QAD) training scripts for language models using Megatron-LM. These scripts enable training quantized (e.g., NVFP4) student models with knowledge distillation from full-precision teacher models.
 
+> **Note:** For Hugging Face LLM QAD, see the [LLM QAT QAD section](../llm_qat/README.md#end-to-end-qad-example).
+
 ## Overview
 
 | Script | Purpose |
```

examples/llm_qat/.gitignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -0,0 +1 @@
+.cache/
```

examples/llm_qat/ARGUMENTS.md

Lines changed: 50 additions & 0 deletions
```diff
@@ -0,0 +1,50 @@
+# Argument Reference
+
+_Auto-generated — do not edit by hand._
+
+## DistillArguments
+
+| Argument | Type | Default | Description |
+|----------|------|---------|-------------|
+| `--distill` | `bool` | `False` | Enable training with knowledge distillation. |
+| `--teacher_model` | `str` | `None` | The name or path of the teacher model to use for distillation. |
+| `--criterion` | `str` | `"logits_loss"` | Distillation loss criterion. Currently only 'logits_loss' is supported. |
+
+## DataArguments
+
+| Argument | Type | Default | Description |
+|----------|------|---------|-------------|
+| `--dataset_config` | `str` | `"configs/dataset/blend.yaml"` | Path to a dataset blend YAML config file. |
+| `--train_samples` | `int` | `20000` | Number of training samples to use. |
+| `--eval_samples` | `int` | `2000` | Number of evaluation samples to use. |
+| `--dataset_seed` | `int` | `42` | Random seed for dataset shuffling. |
+| `--dataset_cache_dir` | `str` | `".dataset_cache/tokenized"` | Directory for caching tokenized datasets. |
+| `--shuffle` | `bool` | `True` | Whether to shuffle dataset sources (reservoir sampling). |
+| `--shuffle_buffer` | `int` | `10000` | Buffer size for streaming shuffle. |
+| `--num_proc` | `int` | `16` | Number of CPU workers for tokenization. |
+
+## ModelArguments
+
+| Argument | Type | Default | Description |
+|----------|------|---------|-------------|
+| `--model_name_or_path` | `str` | `"meta-llama/Llama-2-7b-hf"` | HuggingFace model name or local path to the base model to quantize/train. |
+| `--model_max_length` | `int` | `4096` | Maximum sequence length. Sequences will be right-padded (and possibly truncated). |
+
+## QuantizeArguments
+
+| Argument | Type | Default | Description |
+|----------|------|---------|-------------|
+| `--recipe` | `str` | `None` | Path to a quantization recipe YAML file (built-in or custom). Built-in recipes can be specified by relative path, e.g. 'general/ptq/nvfp4_default-fp8_kv'. |
+| `--calib_size` | `int` | `512` | Specify the calibration size for quantization. The calibration dataset is used to setup the quantization scale parameters. |
+| `--calib_batch_size` | `int` | `1` | Batch size for calibration data during quantization. |
+| `--compress` | `bool` | `False` | Whether to compress the model weights after quantization for QLoRA. This is useful for reducing the model size. |
+| `--quantize_output_dir` | `str` | `"quantized_model"` | Directory to save the quantized model checkpoint. |
+
+## TrainingArguments
+
+Extends [HuggingFace TrainingArguments](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments). Only additional arguments are shown below.
+
+| Argument | Type | Default | Description |
+|----------|------|---------|-------------|
+| `--cache_dir` | `str` | `None` | |
+| `--lora` | `bool` | `False` | Whether to add LoRA (Low-Rank Adaptation) adapter before training. When using real quantization, the LoRA adapter must be set, as quantized weights will be frozen during training. |
```
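ARGUMENTS.md above is auto-generated from the argument dataclasses (via the `--generate_docs` pre-commit hook added in this commit). The core idea can be sketched in a few lines; the dataclass and function below are illustrative stand-ins, not the repository's actual generator:

```python
# Minimal sketch of generating an ARGUMENTS.md-style table from a dataclass.
# Illustrative only; the real generator is invoked via train.py --generate_docs.
from dataclasses import dataclass, field, fields


@dataclass
class DataArguments:  # hypothetical stand-in mirroring two real arguments
    train_samples: int = field(
        default=20000, metadata={"help": "Number of training samples to use."}
    )
    dataset_seed: int = field(
        default=42, metadata={"help": "Random seed for dataset shuffling."}
    )


def to_markdown_table(cls):
    """Render one dataclass as a markdown argument table."""
    rows = [
        "| Argument | Type | Default | Description |",
        "|----------|------|---------|-------------|",
    ]
    for f in fields(cls):
        type_name = f.type.__name__ if hasattr(f.type, "__name__") else str(f.type)
        rows.append(
            f"| `--{f.name}` | `{type_name}` | `{f.default!r}` "
            f"| {f.metadata.get('help', '')} |"
        )
    return "\n".join(rows)
```

Regenerating the file from the dataclasses on every relevant source change (as the new pre-commit hook does) keeps the documented defaults from drifting out of sync with the code.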
