
[Executorch] Add non-flash SDPA for decode #18648

Open
kimishpatel wants to merge 2 commits into gh/kimishpatel/221/base from gh/kimishpatel/221/head

Conversation

Contributor

@kimishpatel kimishpatel commented Apr 1, 2026

Stack from ghstack (oldest at bottom):

Add a cpu_sdpa template function in op_sdpa_impl.h that provides a
simpler SDPA implementation using standard GEMM (no tiling). This is
useful as a baseline and for cases where flash attention is not optimal.

The implementation uses a single SeqDim parameter for all tensors and
supports causal masking, attention masks, GQA, and multi-threading.

During decode (seq_len == 1), the tiled flash attention implementation
has unnecessary overhead from its blocking/tiling logic. The simpler
unfused SDPA path, which uses direct GEMM, is more efficient for
single-query attention, yielding a ~25-30% decode throughput improvement
on S25 (41 -> 53 tok/s for a 1.4B-parameter model).
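
For intuition, here is a minimal sketch of what such an unfused single-query path computes. This is not the op_sdpa_impl.h code: the tensor layouts and names are illustrative assumptions, plain loops stand in for the GEMM calls, and causal masking is omitted because it is a no-op for a single query attending to the existing KV cache.

```cpp
// Sketch only: unfused SDPA for one decode step (seq_len == 1).
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// q:    [num_q_heads, head_dim]            single query token
// k, v: [num_kv_heads, kv_len, head_dim]   KV cache contents
// out:  [num_q_heads, head_dim]
void single_query_sdpa(
    const float* q, const float* k, const float* v, float* out,
    std::size_t num_q_heads, std::size_t num_kv_heads,
    std::size_t kv_len, std::size_t head_dim) {
  const float scale = 1.0f / std::sqrt(static_cast<float>(head_dim));
  // GQA: several query heads share one KV head.
  const std::size_t group = num_q_heads / num_kv_heads;

  std::vector<float> scores(kv_len);
  for (std::size_t h = 0; h < num_q_heads; ++h) {
    const float* qh = q + h * head_dim;
    const float* kh = k + (h / group) * kv_len * head_dim;
    const float* vh = v + (h / group) * kv_len * head_dim;

    // scores = q . K^T * scale (one GEMV in a GEMM-based kernel)
    float max_score = -std::numeric_limits<float>::infinity();
    for (std::size_t t = 0; t < kv_len; ++t) {
      float s = 0.0f;
      for (std::size_t d = 0; d < head_dim; ++d) {
        s += qh[d] * kh[t * head_dim + d];
      }
      scores[t] = s * scale;
      max_score = std::max(max_score, scores[t]);
    }

    // Numerically stable softmax over the cached positions.
    float denom = 0.0f;
    for (std::size_t t = 0; t < kv_len; ++t) {
      scores[t] = std::exp(scores[t] - max_score);
      denom += scores[t];
    }

    // out = softmax(scores) . V (second GEMV in a GEMM-based kernel)
    float* oh = out + h * head_dim;
    for (std::size_t d = 0; d < head_dim; ++d) {
      float acc = 0.0f;
      for (std::size_t t = 0; t < kv_len; ++t) {
        acc += scores[t] * vh[t * head_dim + d];
      }
      oh[d] = acc / denom;
    }
  }
}
```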

This makes cpu_sdpa always available (previously gated behind
ET_USE_UNFUSED_SDPA) and dispatches to it when seq_len == 1 and
inputs are not quantized. Prefill continues to use flash attention.
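
The dispatch intent can be summarized roughly as follows; the names below are placeholders, not the actual ExecuTorch code:

```cpp
// Rough sketch of the dispatch decision; names are hypothetical.
enum class SdpaKernel { Unfused, FlashAttention };

inline SdpaKernel choose_sdpa_kernel(long q_seq_len, bool inputs_quantized) {
  if (q_seq_len == 1 && !inputs_quantized) {
    return SdpaKernel::Unfused; // decode: cpu_sdpa, plain GEMM, no tiling
  }
  return SdpaKernel::FlashAttention; // prefill and quantized inputs
}
```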

Differential Revision: [D96044318](https://our.internmc.facebook.com/intern/diff/D96044318/)


pytorch-bot bot commented Apr 1, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18648

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 2 Cancelled Jobs

As of commit beb7b11 with merge base fb1618e:

NEW FAILURE - The following job has failed:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.


github-actions bot commented Apr 1, 2026

This PR needs a `release notes:` label

If your change should be included in the release notes (i.e., would users of this library care about this change?), please use a label starting with `release notes:`. This helps us keep track of your work and include it in the next release notes.

To add a label, you can comment to pytorchbot, for example:
`@pytorchbot label "release notes: none"`

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

@digantdesai digantdesai left a comment


Review automatically exported from Phabricator review in Meta.


Labels

CLA Signed · fb-exported · meta-exported


2 participants