Expose fully_parallel_save in save_megatron_model #3207
nic-nvidia wants to merge 2 commits into NVIDIA-NeMo:main from
Conversation
Add fully_parallel_save parameter to AutoBridge.save_megatron_model() and model_load_save.save_megatron_model(), forwarded to CheckpointConfig. Defaults to True (no behavior change). Callers can pass False to disable FullyParallelSaveStrategyWrapper, which deadlocks when the distributed world includes ranks that do not participate in the save (e.g., vLLM inference workers in NeMo RL non-colocated setups). Needed by NVIDIA-NeMo/RL#2226.
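A minimal usage sketch, assuming `bridge` is a constructed `AutoBridge` and `model` is the Megatron model to save (the checkpoint path is illustrative):

```python
# Default: fully parallel save, identical to the previous behavior.
bridge.save_megatron_model(model, "/checkpoints/llama-megatron")

# Non-colocated setups: some ranks in the world (e.g., vLLM inference
# workers) never enter the save path, so FullyParallelSaveStrategyWrapper's
# DP-group collectives would hang. Disable it explicitly:
bridge.save_megatron_model(
    model,
    "/checkpoints/llama-megatron",
    fully_parallel_save=False,
)
```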
…_megatron_model

The `all_gather_object` in `determine_global_metadata` (validation.py:518) uses the default process group. When some ranks take longer to build their state dict (e.g., due to expert parallelism), the collective times out. For one-time conversion saves, validation is unnecessary and can be safely skipped. This commit also adds `distributed_timeout_minutes` for callers that need longer timeouts during large model saves.
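A hedged sketch of how these additions might be invoked; the `validate_access_integrity` keyword is an assumption inferred from this commit message (it is the validation switch this commit describes skipping), and the timeout value is illustrative:

```python
# Sketch: assumes `bridge` and `model` are constructed elsewhere, and that
# the commit threads both options through save_megatron_model.
bridge.save_megatron_model(
    model,
    "/checkpoints/llama-megatron",
    fully_parallel_save=False,
    # Skip determine_global_metadata's default-group all_gather_object,
    # which can time out when ranks build state dicts at different speeds.
    # (Keyword name assumed from the commit intent.)
    validate_access_integrity=False,
    # Give large saves more headroom before collectives time out
    # (value illustrative).
    distributed_timeout_minutes=45,
)
```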
/ok to test 664a8a8
@nic-nvidia please fix the test failure
Hi @nic-nvidia, |
Summary
Add `fully_parallel_save` parameter to `AutoBridge.save_megatron_model()` and the underlying `model_load_save.save_megatron_model()`, forwarded to `CheckpointConfig`. Defaults to `True`: no behavior change for existing callers.

Problem
`save_megatron_model()` always creates a `CheckpointConfig` with `fully_parallel_save=True` (the dataclass default). This activates `FullyParallelSaveStrategyWrapper`, which calls `all_gather_object` on DP sub-groups. When the distributed world includes ranks that never enter the save path (e.g., vLLM inference workers in NeMo RL non-colocated setups), these collectives deadlock permanently.

Callers have no way to disable this behavior through the public API.
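For intuition, a minimal, self-contained reproduction of the failure mode (pure `torch.distributed` on CPU; the worker roles and port are illustrative, not NeMo RL code):

```python
import os
import time

import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int) -> None:
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29501"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    if rank == world_size - 1:
        # Stands in for a vLLM inference worker: alive in the default
        # process group, but it never reaches the save path and never
        # calls the collective.
        time.sleep(3600)
        return

    # Every other rank blocks here until the process-group timeout
    # expires: all_gather_object is a collective over the whole default
    # group, so one absent rank stalls all participants. This mirrors
    # the hang inside the fully-parallel save path.
    gathered = [None] * world_size
    dist.all_gather_object(gathered, {"rank": rank})


if __name__ == "__main__":
    # WARNING: hangs by design (it demonstrates the deadlock).
    mp.spawn(worker, args=(2,), nprocs=2)
```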
Fix
Thread `fully_parallel_save: bool = True` through both layers: `AutoBridge.save_megatron_model()` → `model_load_save.save_megatron_model()` → `CheckpointConfig(..., fully_parallel_save=...)`, as sketched below.
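A runnable sketch of the threading; the `CheckpointConfig` stub and module layout are assumptions based on this description, not the repo's exact code:

```python
from dataclasses import dataclass


@dataclass
class CheckpointConfig:
    # Stub of the real dataclass; fully_parallel_save=True is the
    # dataclass default, as described above.
    save: str = ""
    fully_parallel_save: bool = True


def save_megatron_model(model, path, fully_parallel_save: bool = True):
    """model_load_save layer: forwards the flag into CheckpointConfig."""
    ckpt_cfg = CheckpointConfig(
        save=str(path),
        fully_parallel_save=fully_parallel_save,  # previously always True
    )
    return ckpt_cfg  # the real code hands this to the dist-ckpt save


class AutoBridge:
    def save_megatron_model(self, model, path, fully_parallel_save: bool = True):
        """Public layer: forwards the flag to model_load_save."""
        return save_megatron_model(
            model, path, fully_parallel_save=fully_parallel_save
        )
```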
Motivation

Needed by NVIDIA-NeMo/RL#2226, which fixes a deadlock during HF-to-Megatron checkpoint conversion in non-colocated training/inference setups.
Test plan
- Existing callers unaffected: default unchanged (`fully_parallel_save=True`)
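A hypothetical unit test sketching how the forwarding could be verified; the patch target and the `bridge`/`megatron_model` fixtures are assumptions, not the PR's actual tests:

```python
from unittest.mock import patch

# Assumed module path; adjust to wherever CheckpointConfig is imported.
TARGET = "megatron.bridge.models.conversion.model_load_save.CheckpointConfig"


def test_fully_parallel_save_forwarded(bridge, megatron_model, tmp_path):
    # Intercept CheckpointConfig construction and check the flag arrives.
    with patch(TARGET) as cfg_cls:
        bridge.save_megatron_model(
            megatron_model, tmp_path, fully_parallel_save=False
        )
    assert cfg_cls.call_args.kwargs["fully_parallel_save"] is False
```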