
Support separate K/V seq dim in custom_sdpa op #18714

Open

kimishpatel wants to merge 1 commit into gh/kimishpatel/234/base from gh/kimishpatel/234/head

Conversation

@kimishpatel (Contributor) commented Apr 6, 2026

Stack from ghstack (oldest at bottom):

Previously, custom_sdpa used a single is_seq_at_dim_2 flag for all three
tensors. This meant that the v_only transpose configuration required a
runtime transpose copy for K (converting from [B,H,S,D] to [B,S,H,D]),
which caused a 2.3x decode slowdown (15.35 vs. 35.63 tok/s).
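
For context, a minimal PyTorch illustration (shapes assumed, not taken from the PR's code) of why forcing one layout costs a copy: transposing is a free stride-only view, but materializing the transposed layout copies every element on each decode step.

```python
import torch

B, H, S, D = 1, 8, 512, 64
k = torch.randn(B, H, S, D)   # K held with the sequence axis at dim 2

# transpose() is a stride-only view: same storage, no data movement.
k_view = k.transpose(1, 2)    # logically [B, S, H, D]
assert k_view.data_ptr() == k.data_ptr()

# A kernel hard-wired to a single layout needs the data physically
# rearranged, so the old single-flag op paid this copy on every decode step:
k_copy = k_view.contiguous()  # full B*H*S*D element copy
assert k_copy.data_ptr() != k.data_ptr()
```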

Now the C++ op accepts separate is_seq_dim_2, is_k_seq_dim_2, and
is_v_seq_dim_2 flags, so Q, K, and V can each have an independent layout.
The Python layer passes K and V in their native cache layout without any
transpose, and the flash attention kernel handles the mixed strides
directly.
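
As an illustrative analogue in Python (the kv_row helper is hypothetical; the real kernel does the equivalent with raw strides in op_sdpa_impl.h), per-tensor layout flags let one attention call index K and V differently:

```python
import torch

def kv_row(t, b, h, s, seq_at_dim_2):
    # Fetch the length-D row for (batch b, head h, position s), indexing the
    # tensor in whichever layout it natively uses. A hypothetical stand-in
    # for the kernel's stride handling, not the actual C++ code.
    return t[b, h, s] if seq_at_dim_2 else t[b, s, h]

B, H, S, D = 1, 2, 4, 8
k = torch.randn(B, H, S, D)  # K kept with seq at dim 2
v = torch.randn(B, S, H, D)  # V kept in its native cache layout, seq at dim 1

# One attention step can read both layouts with no transpose copy:
assert kv_row(k, 0, 1, 3, seq_at_dim_2=True).shape == (D,)
assert kv_row(v, 0, 1, 3, seq_at_dim_2=False).shape == (D,)
```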

Changes:

  • op_sdpa_impl.h: cpu_flash_attention takes q_seq_dim, k_seq_dim,
    v_seq_dim instead of a single seq_dim
  • op_sdpa.cpp/h: custom_sdpa_out takes 3 bool params
  • op_sdpa_aot.cpp: updated schema strings and wrappers
  • sdpa.py: SDPACustom uses is_k_seq_at_dim_2 / is_v_seq_at_dim_2,
    Q always at dim 2, no input transposes (see the sketch after this list)
  • custom_kv_cache.py: update() returns the native cache layout,
    added an is_seq_at_dim_2 compat property
  • export_llama_lib.py: passes separate K/V flags
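
To make the new contract concrete, here is a hedged pure-PyTorch sketch: custom_sdpa_ref is a hypothetical reference, not the real op; the flag names are taken from the summary above, and F.scaled_dot_product_attention stands in for the stride-aware custom kernel.

```python
import torch
import torch.nn.functional as F

def custom_sdpa_ref(q, k, v, is_k_seq_at_dim_2, is_v_seq_at_dim_2):
    # Reference sketch of the new contract (not the C++ op): normalize K/V
    # to [B, H, S, D] *views* (stride-only, no copy) and let a stride-aware
    # attention consume them.
    if not is_k_seq_at_dim_2:
        k = k.transpose(1, 2)  # [B, S, H, D] -> [B, H, S, D] view, no copy
    if not is_v_seq_at_dim_2:
        v = v.transpose(1, 2)
    return F.scaled_dot_product_attention(q, k, v)

B, H, S_q, S_kv, D = 1, 8, 1, 512, 64
q = torch.randn(B, H, S_q, D)         # Q is always [B, H, S, D] (seq at dim 2)
k_cache = torch.randn(B, S_kv, H, D)  # native cache layout, seq at dim 1
v_cache = torch.randn(B, S_kv, H, D)

out = custom_sdpa_ref(q, k_cache, v_cache,
                      is_k_seq_at_dim_2=False, is_v_seq_at_dim_2=False)
assert out.shape == (B, H, S_q, D)
```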

Differential Revision: [D99677678](https://our.internmc.facebook.com/intern/diff/D99677678/)


pytorch-bot bot commented Apr 6, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18714

Note: Links to docs will display an error until the docs builds have been completed.

❌ 126 New Failures, 2 Cancelled Jobs

As of commit 45468e1 with merge base fb1618e:


This comment was automatically generated by Dr. CI and updates every 15 minutes.


github-actions bot commented Apr 6, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@kimishpatel (Contributor, Author)

Submitted by accident; not meant to land immediately.


Labels: CLA Signed, fb-exported, meta-exported
