
ExecuTorch MLX delegate #16718

Closed
metascroy wants to merge 92 commits into main from mlx-delegate

Conversation

@metascroy
Contributor

@metascroy metascroy commented Jan 20, 2026

Summary

This PR adds an MLX backend for ExecuTorch, enabling Metal-accelerated inference on Apple Silicon. It runs Llama, Qwen, Gemma, Whisper, Voxtral, and Parakeet models end-to-end, with 637 passing op tests and multithreaded execution support. For many models it delivers the best performance of any ExecuTorch backend on Apple Silicon, with 2-6x speedups over what was previously possible in ExecuTorch, and up to 30% smaller model sizes than XNNPACK thanks to BF16 support and tied quantized embeddings.

The PR is large due to extensive op coverage, testing, and documentation, but almost all changes are confined to backends/mlx/. The design is described in backends/mlx/README.md.

Suggested review approach:

  1. Review files outside backends/mlx/ carefully — these integrate with ExecuTorch's build system and are the most likely to need changes.
  2. For backends/mlx/, focus on structural design (see README) and test coverage (CI job is .github/workflows/mlx.yml)

Prerequisite PRs

These fixes were developed alongside the MLX backend. Once merged, this PR can be rebased to remove the duplicated changes:

  • #17257 — Improve lowering time with NamedDataMap
  • #17679 — Allow transform passes in etLLM
  • #17678 — Fix dynamic shape bug in remove_noop_pass
  • #17378 — Fix pocketfft intermittent bus errors on macOS (upstream fix)

Tests

CI is defined in .github/workflows/mlx.yml:

  • test_ops.py: 637 passing op tests
  • Multithreading: launches models on 50 threads, verifies correctness
  • GenAI E2E: parakeet, voxtral, etLLM (stories110m), HF LLMs (llama, qwen, gemma)
  • backend-tester: 380 passed, 1 failed, 86 skipped operator tests; 34 passed, 3 failed, 5 skipped model tests

The 1 failing operator test is a test-side issue being fixed in #17539. The 3 failing model tests will be addressed in follow-ups — they are not an initial focus compared to the GenAI models above.

@pytorch-bot

pytorch-bot Bot commented Jan 20, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16718

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures, 2 Cancelled Jobs, 1 Unrelated Failure

As of commit d8ee9d2 with merge base 25f2a3f:

NEW FAILURES - The following jobs have failed:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

BROKEN TRUNK - The following job failed but was already failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla Bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Jan 20, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@metascroy metascroy force-pushed the mlx-delegate branch 7 times, most recently from e0e015c to 240b241 on February 25, 2026 01:06
@metascroy metascroy changed the title from [draft] MLX delegate to ExecuTorch MLX delegate Feb 25, 2026
Comment thread backends/mlx/examples/llm/export_llm_hf.py
@@ -451,7 +451,9 @@ def to_backend(self, partitioners: Optional[List[Partitioner]]) -> "LLMEdgeManag
return self

     def to_edge_transform_and_lower(
-        self, partitioners: Optional[List[Partitioner]]
+        self,
Contributor Author

Will remove once #17679 lands

# Only do this check if all the dims are static.
if all(isinstance(dim, int) for dim in orig_tensor.size()):
    if orig_tensor.shape == node.meta["val"].shape:
        output_tensor = node.meta["val"]
Contributor Author

Will remove once #17678 lands

Contributor

Approved that PR

metascroy added a commit that referenced this pull request Feb 27, 2026
This introduces a CSE pass to ExecuTorch, which eliminates common
subexpressions that occur in exported programs.

This pass was first developed as part of the MLX delegate
(#16718) to optimize
transformers, but I'm introducing it to ExecuTorch more generally
because I believe it could benefit many other backends.

Examples of common subexpressions that occur in transformers:

* Repeated mask constructions per layer (only needs to be done once)
* Repeated extraction of symints from 1d tensors for cache position
(emits .item calls, which cause tensor materialization)

This pass eliminates these inefficiencies without having to rewrite the
model.
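A minimal sketch of what such a CSE pass can look like over a torch.fx graph. This is an illustration of the general technique, assuming purely functional call_function nodes; it is not the actual pass in this PR:

```python
import torch
import torch.fx as fx

def cse(gm: fx.GraphModule) -> fx.GraphModule:
    """Deduplicate call_function nodes with identical targets and inputs."""
    seen = {}  # (target, args, kwargs) -> first node computing that value
    for node in list(gm.graph.nodes):
        if node.op != "call_function":
            continue  # this sketch only dedupes pure functional ops
        key = (node.target, node.args, tuple(sorted(node.kwargs.items(), key=str)))
        try:
            hash(key)
        except TypeError:
            continue  # unhashable args (e.g. list inputs): leave untouched
        if key in seen:
            # Reuse the earlier, identical computation and drop this node.
            node.replace_all_uses_with(seen[key])
            gm.graph.erase_node(node)
        else:
            seen[key] = node
    gm.graph.lint()
    gm.recompile()
    return gm

class M(torch.nn.Module):
    def forward(self, x):
        return (x + 1) * (x + 1)  # x + 1 appears twice in the traced graph

gm = fx.symbolic_trace(M())
n_before = len(gm.graph.nodes)
cse(gm)
n_after = len(gm.graph.nodes)  # one duplicate add eliminated
```

Because the first occurrence dominates all later identical ones in a topologically ordered graph, replacing uses in a single forward walk is safe for pure ops; ops with side effects or data-dependent outputs would need to be excluded.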


@dataclass
class PatternMatch:
Contributor

XNNPack has their own pattern matching util iirc. Could we just offer this in Exir cc @GregoryComer

Contributor Author

Happy to reuse any pre-existing utilities if they're in exir.

If the XNNPACK one is based on the torch.fx pattern-finding util, I don't know if it'll work, though. I recall it being very bad at fuzzy matching of args/kwargs: you needed exact matches, so you end up writing 3-4 pattern matchers per op, which is error-prone.
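To illustrate the brittleness being described (a hedged sketch; `normalized_key` is an invented helper, not an exir or XNNPACK utility): the same op traced once with a positional argument and once with a keyword argument produces nodes that an exact (args, kwargs) comparison treats as different patterns, while a schema-aware normalization does not.

```python
import torch
import torch.fx as fx

class M(torch.nn.Module):
    def forward(self, x):
        # Same computation twice: `other` positional vs. as a kwarg
        return torch.add(x, 1) + torch.add(x, other=1)

gm = fx.symbolic_trace(M())
adds = [n for n in gm.graph.nodes if n.target is torch.add]

def exact_key(n: fx.Node):
    # What a strict matcher compares: these two nodes look different.
    return (n.target, n.args, tuple(sorted(n.kwargs.items(), key=str)))

def normalized_key(n: fx.Node):
    # Invented helper: fold torch.add's `other` kwarg back into position,
    # so one pattern covers both calling conventions.
    args = list(n.args)
    if "other" in n.kwargs:
        args.append(n.kwargs["other"])
    return (n.target, tuple(args))

same_exact = exact_key(adds[0]) == exact_key(adds[1])        # False
same_normalized = normalized_key(adds[0]) == normalized_key(adds[1])  # True
```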

Comment thread backends/mlx/examples/voxtral/export_voxtral_hf.py

logger.info("Exporting audio preprocessor with MLX backend...")

model = WhisperAudioProcessor(
Contributor

Rename if this is used in models other than Whisper (like here in Voxtral)

@metascroy metascroy mentioned this pull request Mar 3, 2026
@metascroy
Contributor Author

  • In the runtime unit test, can we compile with -Wall, -Werror, -Wconversion, -Wsign-conversion, and -Wshorten-64-to-32, plus ASan and UBSan, to uncover any security issues?
  • Use overflow-safe arithmetic for all bounds checks. Use __builtin_add_overflow / __builtin_mul_overflow or c10's safe arithmetic utilities.
  • Check for null pointer dereferences

I added a strict compile test, addressed the security issues, and fixed the null pointer dereferences.
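The overflow-safe bounds-check pattern requested above can be sketched as follows. This is shown in Python for illustration only (Python ints are arbitrary precision, so the 64-bit limits are simulated explicitly); the actual runtime checks would use __builtin_add_overflow or c10's safe arithmetic in C++:

```python
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def add_overflows(a: int, b: int) -> bool:
    # Mirrors __builtin_add_overflow for signed 64-bit values: because
    # Python ints never wrap, we can test the wide result directly.
    return not (INT64_MIN <= a + b <= INT64_MAX)

def in_bounds(offset: int, size: int, limit: int) -> bool:
    # Overflow-safe bounds check: rearranged as `size <= limit - offset`
    # so no intermediate sum can wrap, matching the pattern C++ code would
    # use instead of comparing `offset + size <= limit` directly.
    return 0 <= offset <= limit and 0 <= size <= limit - offset
```

The rearranged comparison matters because in fixed-width C++ arithmetic `offset + size` can wrap around and pass a naive check even when the range is far out of bounds.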

@metascroy
Contributor Author

Continuing development here: #17803

@metascroy metascroy closed this Mar 24, 2026
metascroy added a commit that referenced this pull request Apr 7, 2026
Takes changes from #16718, but
strips all ops (except addmm) and examples.

Part 2 will add back ops, and Part 3 will add back examples.
jpiat added a commit to jpiat/executorch that referenced this pull request Apr 14, 2026
Takes changes from pytorch#16718, but
strips all ops (except addmm) and examples.

Part 2 will add back ops, and Part 3 will add back examples.
jpiat pushed a commit to jpiat/executorch that referenced this pull request Apr 14, 2026
Takes changes from pytorch#16718, but
strips all ops (except addmm) and examples.

Part 2 will add back ops, and Part 3 will add back examples.

5 participants