Commit 52f8783
Update news: Nemotron-3-Super is supported on Megatron-Bridge (#1025)
### What does this PR do?
Type of change: Documentation
Add a Nemotron-3-Super launch news entry to the README "Latest News"
section: a new entry highlighting that [NeMo Megatron
Bridge](https://github.com/NVIDIA-NeMo/Megatron-Bridge) now supports
Nemotron-3-Super quantization (PTQ) and export workflows using the Model
Optimizer library, with a link to the [Quantization (PTQ and QAT)
guide](https://github.com/NVIDIA-NeMo/Megatron-Bridge/blob/super-v3/docs/models/llm/nemotron3-super.md#quantization-ptq-and-qat).
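Illustratively, a news entry of this shape could be added (hypothetical wording and date drawn from the description above; the exact README line is not shown in this page extract):

```markdown
- [2026/03] [NeMo Megatron Bridge](https://github.com/NVIDIA-NeMo/Megatron-Bridge)
  now supports Nemotron-3-Super quantization (PTQ) and export via Model Optimizer.
  See the [Quantization (PTQ and QAT) guide](https://github.com/NVIDIA-NeMo/Megatron-Bridge/blob/super-v3/docs/models/llm/nemotron3-super.md#quantization-ptq-and-qat).
```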
### Usage
```markdown
N/A — documentation-only change (README.md update).
```
### Testing
No testing required; this is a documentation-only change to README.md.
### Before your PR is "*Ready for review*"
Make sure you read and follow [Contributor
guidelines](https://github.com/NVIDIA/Model-Optimizer/blob/main/CONTRIBUTING.md)
and your commits are signed (`git commit -s -S`).
Make sure you read and follow the [Security Best
Practices](https://github.com/NVIDIA/Model-Optimizer/blob/main/SECURITY.md#security-coding-practices-for-contributors)
(e.g. avoiding hardcoded `trust_remote_code=True`, `torch.load(...,
weights_only=False)`, `pickle`, etc.).
- Is this change backward compatible?: N/A
- If you copied code from any other source or added a new pip
dependency, did you follow the guidance in `CONTRIBUTING.md`?: N/A
- Did you write any new necessary tests?: N/A
- Did you update
[Changelog](https://github.com/NVIDIA/Model-Optimizer/blob/main/CHANGELOG.rst)?:
N/A
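As background on the security checklist item above: unpickling untrusted data (including `torch.load(..., weights_only=False)` on untrusted checkpoints) can execute arbitrary code. A minimal stdlib-only sketch of why, with a purely illustrative `Payload` class:

```python
import pickle


class Payload:
    # __reduce__ lets a pickled object name an arbitrary callable to be
    # invoked at load time. Here it is a harmless eval, but a malicious
    # file could name os.system instead -- this is why loading untrusted
    # pickles (or torch.load with weights_only=False) is unsafe.
    def __reduce__(self):
        return (eval, ("6 * 7",))


blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # runs eval("6 * 7") during "data" loading
print(result)  # 42
```

With `weights_only=True`, `torch.load` restricts unpickling to tensor and container types rather than arbitrary callables, which is why the guidelines flag `weights_only=False`.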
### Additional Information
Related links:
- Megatron Bridge Nemotron 3 Super docs:
https://github.com/NVIDIA-NeMo/Megatron-Bridge/blob/super-v3/docs/models/llm/nemotron3-super.md
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Documentation**
* Updated latest news section with announcement of March 2026 release:
NeMo Megatron Bridge now provides full support for Nemotron-3-Super
quantization capabilities, supporting both Post-Training Quantization
(PTQ) and Quantization-Aware Training (QAT) approaches
* Added detailed documentation covering export workflows via the Model
Optimizer library with direct reference links to comprehensive
quantization guides
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: James Shen <yueshen@nvidia.com>
Parent: 34a9fc7
1 file changed
Lines changed: 1 addition & 0 deletions
[Diff: README.md, one line added at line 30 (context lines 27–32 unchanged); the added line's content is not rendered in this extract.]