
fix mimo optimizer checkpoint metadata restore #4791

Draft

liding-nv wants to merge 1 commit into NVIDIA:main from liding-nv:mimo-ckpt-metadata

Conversation

@liding-nv

Problem

Distributed-checkpoint load for MimoOptimizer can return the sharded tensor state plus the extracted MIMO per-module param-groups, but without the original nested "optimizer" common-state wrapper; the inner optimizer's load_state_dict then raises a KeyError on the missing wrapper. Separately, param_state_sharding_type is not extracted and restored across save+load, which is a real divergence when module ownership is not colocated across ranks, e.g. rank 0 owns the language module while rank 1 owns the vision module.
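
To make the failure mode concrete, here is a minimal sketch of the state-dict shapes involved; everything except the "optimizer" and "param_groups" keys is illustrative, not taken from the actual MimoOptimizer code:

```python
# What the inner optimizer's load_state_dict expects: the common state
# wrapped under the nested "optimizer" key.
expected = {
    "optimizer": {
        "param_groups": [{"lr": 1e-4, "params": [0, 1]}],  # illustrative values
        "state": {},
    }
}

# What the distributed-checkpoint load can hand back instead: the sharded
# tensor state plus the extracted per-module param_groups, with the
# "optimizer" wrapper stripped away.
loaded = {
    "param_groups": [{"lr": 1e-4, "params": [0, 1]}],
    "state": {},  # per-rank resolved tensor state
}

# The inner optimizer then fails on the missing wrapper:
# loaded["optimizer"]  -> KeyError: 'optimizer'
```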

Changes

Save side: extract param_state_sharding_type into a ShardedObject keyed as `optimizer.mimo.<module>.<suffix>.param_state_sharding_type` so it round-trips through DistCkpt.
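
A minimal sketch of the save-side extraction. It assumes the ShardedObject dataclass from megatron.core.dist_checkpointing.mapping with fields (key, data, global_shape, global_offset, replica_id); the helper name, singleton shape/offset, and replica_id value are assumptions for illustration, not the PR's actual code:

```python
from megatron.core.dist_checkpointing.mapping import ShardedObject


def extract_sharding_type(module_name: str, suffix: str, param_state_sharding_type: str):
    # Key format described in the PR:
    #   optimizer.mimo.<module>.<suffix>.param_state_sharding_type
    key = f"optimizer.mimo.{module_name}.{suffix}.param_state_sharding_type"
    # Store the plain string as a singleton object so it round-trips
    # through DistCkpt alongside the tensor shards.
    return ShardedObject(key, param_state_sharding_type, (1,), (0,), replica_id=0)
```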

Load side: restore param_state_sharding_type from the `_mimo_param_state_sharding_type*` keys, and reconstruct `{'optimizer': {'param_groups': ...}}` via setdefault so the inner optimizer's load_state_dict sees the keys it expects.
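
A minimal sketch of the load-side reconstruction via setdefault; the function name and the exact dict layout are illustrative assumptions, only the `_mimo_param_state_sharding_type` prefix and the `optimizer`/`param_groups` nesting come from the PR description:

```python
def restore_common_state(state_dict: dict):
    # Recover the sharding type from its dist-ckpt keys (per-module prefix,
    # hence the startswith check).
    sharding_type = {
        k: state_dict.pop(k)
        for k in list(state_dict)
        if k.startswith("_mimo_param_state_sharding_type")
    }

    # Re-wrap the extracted param_groups under the nested "optimizer" key that
    # the inner optimizer's load_state_dict expects. setdefault leaves an
    # already-present wrapper untouched.
    inner = state_dict.setdefault("optimizer", {})
    if "param_groups" in state_dict:
        inner.setdefault("param_groups", state_dict.pop("param_groups"))

    return sharding_type, state_dict
```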

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge conflicts are resolved and CI is passing.
Final Review may be declined if these requirements are not met.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Signed-off-by: Li Ding <liding@nvidia.com>
@copy-pr-bot

copy-pr-bot Bot commented May 14, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.
