
[#12477][feat] AutoDeploy: Mistral4 Eagle Support #12759

Draft
govind-ramnarayan wants to merge 1 commit into NVIDIA:main from nv-auto-deploy:gramnarayan/mistral4-eagle-v2

Conversation

@govind-ramnarayan
Collaborator

@coderabbitai summary

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Add Eagle one-model speculative decoding support for Mistral-Small-4-119B:

- Mistral4-specific Eagle layer (Mistral4EagleMLA, Mistral4EagleMLP,
  Mistral4EagleDecoderLayer) with dispatch table entry
- EagleOneModelFactory gains TargetModelExportInfo/DraftModelExportInfo
  with scalar-sentinel DCE prevention matching hf.py pattern
- Factory delegation so Mistral4's custom target factory is used
- Mistral3 wrapper delegation methods (get_input/output_embeddings, config)
- FP8 checkpoint loading improvements for Mistral4
- Hidden-state capture guard against double-apply
- MLA RoPE deinterleave hook ordering fix for Eagle path
- LlmArgs Eagle config defaults and MTP one-model routing
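The "dispatch table entry" mentioned in the first bullet can be sketched as a small registry that maps a model architecture string to its Eagle draft-layer class. This is a hypothetical illustration, not the actual TRT-LLM API: the registry name `EAGLE_LAYER_DISPATCH`, the decorator `register_eagle_layer`, and `build_eagle_layer` are assumptions; only the class name `Mistral4EagleDecoderLayer` and the arch string `mistral_large3` come from this PR.

```python
# Hypothetical sketch of the per-architecture dispatch-table pattern
# described above. A model-specific Eagle draft layer registers itself
# under its architecture key; unknown architectures fall back to a
# generic layer. All names here are illustrative, not TRT-LLM's API.

EAGLE_LAYER_DISPATCH = {}


def register_eagle_layer(model_arch):
    """Decorator: record an Eagle layer class for a given architecture."""
    def wrap(cls):
        EAGLE_LAYER_DISPATCH[model_arch] = cls
        return cls
    return wrap


class EagleDecoderLayer:
    """Generic fallback Eagle draft decoder layer."""
    def __init__(self, config):
        self.config = config


@register_eagle_layer("mistral_large3")
class Mistral4EagleDecoderLayer(EagleDecoderLayer):
    """Mistral4-specific draft layer (would wrap MLA attention + MLP)."""


def build_eagle_layer(model_arch, config):
    """Look up the architecture-specific layer, defaulting to generic."""
    cls = EAGLE_LAYER_DISPATCH.get(model_arch, EagleDecoderLayer)
    return cls(config)


layer = build_eagle_layer("mistral_large3", config={"num_hidden_layers": 2})
print(type(layer).__name__)  # Mistral4EagleDecoderLayer
```

The advantage of a registry like this is that adding support for another model family is a single decorated class, with no changes to the factory code that consumes the table.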

Configs: mistral4_eagle_119b.yaml (8-GPU), mistral_small_4_119b_eagle.yaml,
mistral_small_4_119b_lite.yaml, mistral_small_4_119b_torch_mla.yaml

Tests: hierarchical unit tests (AD ops vs PyTorch reference), hidden-state
capture detection, torch.export + AD pipeline integration, 1-layer and
3-layer Eagle one-model E2E smoke, layer subgraph debug, framework-level
spec-dec config/KV-cache tests.
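The "hidden-state capture guard against double-apply" and the "hidden-state capture detection" test above can be sketched as an idempotent hook installer: applying capture twice must install only one hook. This is a minimal sketch under assumed names; the sentinel attribute `_ad_capture_applied` and the function `apply_hidden_state_capture` are illustrative, not the PR's actual identifiers.

```python
# Hypothetical sketch of guarding hidden-state capture against being
# applied twice (e.g. if the Eagle path re-enters model preparation).
# The attribute and function names are assumptions for illustration.

class Layer:
    """Stand-in for a decoder layer that supports output hooks."""
    def __init__(self):
        self.hooks = []


def apply_hidden_state_capture(layer, buffer):
    """Install a capture hook once; return False if already installed."""
    if getattr(layer, "_ad_capture_applied", False):
        return False  # guard: capture already applied to this layer

    def hook(output):
        buffer.append(output)

    layer.hooks.append(hook)
    layer._ad_capture_applied = True  # sentinel for the double-apply check
    return True


layer = Layer()
captured = []
assert apply_hidden_state_capture(layer, captured) is True
assert apply_hidden_state_capture(layer, captured) is False  # second apply rejected
assert len(layer.hooks) == 1  # exactly one hook despite two calls
```

A unit test for "capture detection" then only needs to call the installer twice and assert a single hook, which matches the hierarchical test style the PR describes.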

Signed-off-by: Govind Ramnarayan <gramnarayan@nvidia.com>
Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com>
eagle3_model_arch: mistral_large3
eagle3_layers_to_capture: [-1]
speculative_model_kwargs:
  num_hidden_layers: 2
Collaborator Author


Should not be overridden
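For orientation, a minimal sketch of how the fragments quoted in this thread might combine into a single config file. Only `runtime: trtllm` and the four `eagle3_*`/`speculative_model_kwargs` fields appear in this PR; the nesting under a `speculative_config` key is an assumption, not the verified schema.

```
# Sketch only: combines fragments visible in this PR; the
# speculative_config nesting is assumed, not confirmed.
runtime: trtllm
speculative_config:
  eagle3_model_arch: mistral_large3
  eagle3_layers_to_capture: [-1]
  speculative_model_kwargs:
    num_hidden_layers: 2
```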

@@ -0,0 +1,15 @@
# Config for Mistral Small 4 119B with Eagle3 speculative decoding.
Collaborator Author


Wait, which one is used? This one or the other configs?

@@ -0,0 +1,12 @@
runtime: trtllm
Collaborator Author


This shouldn't be here; it's for tests only.



1 participant