Add --freeze-base-for-mtp to train MTP heads on frozen quantized base#4785
Draft
yeyu-nvidia wants to merge 2 commits into
Conversation
After NVFP4 QAD, this flag allows loading the quantized checkpoint, adding randomly-initialized MTP heads, and training only the MTP parameters with lm_loss while keeping the base model frozen.

Changes:
- arguments.py: add the `--freeze-base-for-mtp` argument
- model_builder.py: construct `mtp_block_spec` in the modelopt builder; add `_freeze_base_for_mtp()` to freeze all non-MTP parameters

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tests verify that _freeze_base_for_mtp correctly freezes all base model parameters while keeping MTP layer parameters trainable. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
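A minimal sketch of what such a test might check, assuming MTP parameters sit under an `mtp.layers` prefix (the real test fixtures and names in the PR may differ):

```python
# Hypothetical check, not the PR's test code; the "mtp.layers" prefix is assumed.
import torch.nn as nn


def check_freeze_pattern(model: nn.Module) -> bool:
    """Return True iff exactly the mtp.layers.* parameters are trainable."""
    for name, param in model.named_parameters():
        expected = name.startswith("mtp.layers")
        if param.requires_grad != expected:
            return False
    return True
```

Running this against a model after the freeze should return True, and flipping any base parameter back to trainable should make it fail.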
Summary
- Add `--freeze-base-for-mtp` argument to freeze all base model parameters and train only MTP heads
- Construct `mtp_block_spec` in `modelopt_gpt_hybrid_builder()` so MTP heads can be added to ModelOpt-quantized models
- Add `_freeze_base_for_mtp()` helper that sets `requires_grad=False` on all non-MTP parameters

Use case: After NVFP4 QAD, load the quantized checkpoint with `--mtp-num-layers 1 --freeze-base-for-mtp`, which adds randomly-initialized MTP heads and trains them with lm_loss while keeping the quantized base frozen.

Test plan
- With `freeze_base_for_mtp=True`, verify only `mtp.layers.*` params have `requires_grad=True`
- With `--mtp-num-layers 1 --freeze-base-for-mtp`, verify the model builds and MTP params are randomly initialized

🤖 Generated with Claude Code