
Fix MIMO optimizer setup for frozen modules #4790

Draft

liding-nv wants to merge 1 commit into NVIDIA:main from liding-nv:mimo-frozen-modules

Conversation

@liding-nv

What does this PR do?

Problem

get_mimo_optimizer builds an inner optimizer for every non-None module on the local rank, even when that module has zero trainable parameters (e.g. a fully frozen vision encoder during projector-only training). The resulting placeholder optimizers crash later in setup. The wider _get_param_groups path also all-gathers param-group keys over the WORLD group, which doesn't match the per-module optimizer group MIMO operates on.
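
For context, a minimal illustration of the failure mode (the module and shapes here are hypothetical): a fully frozen module reports zero trainable parameters, so any inner optimizer built over it ends up with empty param groups.

```python
import torch.nn as nn

# Hypothetical projector-only setup: the vision encoder is fully frozen.
vision_encoder = nn.Linear(1024, 4096)
vision_encoder.requires_grad_(False)

trainable = [p for p in vision_encoder.parameters() if p.requires_grad]
assert not trainable  # zero trainable params -> an inner optimizer built
                      # over this module has nothing to optimize, which is
                      # what crashes later during setup
```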

Changes

  • megatron/core/models/mimo/optimizer.py: add _module_has_any_trainable_parameters(module, pg_collection.intra_dist_opt), which all-reduces (MAX) the local trainable-parameter count over the module's optimizer group. The optimizer build is skipped when no rank in the group has trainable parameters (see the first sketch after this list).

  • megatron/core/optimizer/__init__.py: add an optional process_group=None kwarg to _get_param_groups / _get_param_groups_and_buffers, plumbed through _get_megatron_emerging_optimizer and get_megatron_optimizer, so the cross-rank all_gather_object(params_key, …) can target a specific group (MIMO passes intra_dist_opt). The default of None preserves current behavior for every existing caller (see the second sketch after this list).
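
A minimal sketch of the new helper, assuming the module exposes standard `torch.nn.Module` parameters; the actual code in `megatron/core/models/mimo/optimizer.py` may differ in detail:

```python
import torch
import torch.distributed as dist

def _module_has_any_trainable_parameters(module, process_group):
    """Return True if any rank in `process_group` holds a trainable
    parameter of `module` (all-reduce-MAX of the local count)."""
    local_count = sum(p.requires_grad for p in module.parameters())
    # With the NCCL backend this tensor must live on the current CUDA device.
    count = torch.tensor([local_count], dtype=torch.long)
    dist.all_reduce(count, op=dist.ReduceOp.MAX, group=process_group)
    return int(count.item()) > 0
```

`get_mimo_optimizer` would call this with `pg_collection.intra_dist_opt` and skip the inner-optimizer build when it returns False. And a sketch of the `process_group` plumbing, reduced here to a hypothetical standalone helper (the real change threads the kwarg through the existing `_get_param_groups` signature):

```python
def _gather_param_group_keys(params_key, process_group=None):
    # Default None preserves the current WORLD-group behavior for every
    # existing caller; MIMO passes pg_collection.intra_dist_opt so the
    # gather stays within the per-module optimizer group.
    gathered_keys = [None] * dist.get_world_size(group=process_group)
    dist.all_gather_object(gathered_keys, params_key, group=process_group)
    return gathered_keys
```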

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark the PR as ready once merge conflicts are resolved and CI is passing; Final Review may be declined if these requirements are not met.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch, the proposed review process is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Signed-off-by: Li Ding <liding@nvidia.com>
@copy-pr-bot

copy-pr-bot Bot commented May 14, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.
