Commit 11f4b16

kajalj22 and claude committed
Add nvidia-modelopt to mcore extra dependencies
megatron-bridge unconditionally imports modelopt at module load time (auto_bridge.py imports is_quantized). The old MLM workspace's setup.py bundled nvidia-modelopt as a direct dependency; with that workspace removed, modelopt is no longer available under --extra mcore.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Kajal Jain <kajalj@nvidia.com>
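To illustrate the failure mode described above: an unconditional top-level import makes the whole package unimportable when the dependency is missing. A minimal sketch of a lazy-import guard that avoids this, assuming a hypothetical helper (this is not megatron-bridge's actual code, and the modelopt call path shown in the comment is an assumption):

```python
import importlib
import importlib.util

def optional_import(name):
    """Import a module if it is installed; return None instead of raising."""
    if importlib.util.find_spec(name) is None:
        return None
    return importlib.import_module(name)

# Resolve modelopt lazily, so merely importing this module never raises
# ModuleNotFoundError when nvidia-modelopt is absent from the environment.
modelopt = optional_import("modelopt")

def is_quantized_safe(module):
    # Fallback: without modelopt installed, report that nothing is quantized.
    if modelopt is None:
        return False
    # Assumed call path for illustration only; the real is_quantized helper
    # lives somewhere under modelopt's torch quantization utilities.
    return modelopt.torch.quantization.utils.is_quantized(module)
```

The commit instead takes the packaging-side fix, declaring nvidia-modelopt in the mcore extra so the unconditional import succeeds.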
1 parent 7ead535 commit 11f4b16

2 files changed: 3 additions & 0 deletions

File tree

pyproject.toml

Lines changed: 1 addition & 0 deletions
@@ -123,6 +123,7 @@ mcore = [
     "transformer-engine[pytorch,core_cu13] @ git+https://github.com/NVIDIA/TransformerEngine.git@v2.14.1",
     "megatron-core",
     "megatron-bridge",
+    "nvidia-modelopt[torch]; sys_platform != 'darwin'",
     "onnxscript",
     # Flash-attn version should be selected to satisfy both TE + vLLM requirements (xformers in particular)
     # https://github.com/NVIDIA/TransformerEngine/blob/v2.3/transformer_engine/pytorch/attention/dot_product_attention/utils.py#L108
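The added line uses a PEP 508 environment marker, `sys_platform != 'darwin'`, so installers skip nvidia-modelopt on macOS. A toy sketch of how such a marker evaluates (pip evaluates the real marker against `sys.platform`; this helper just mirrors the comparison):

```python
import sys

def dependency_applies(excluded_platform, platform=None):
    """Mirror of the marker "sys_platform != '<excluded_platform>'"."""
    platform = platform or sys.platform  # e.g. "linux" on Linux, "darwin" on macOS
    return platform != excluded_platform

# nvidia-modelopt would be installed on Linux but skipped on macOS:
print(dependency_applies("darwin", platform="linux"))   # True
print(dependency_applies("darwin", platform="darwin"))  # False
```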

uv.lock

Lines changed: 2 additions & 0 deletions
