
Plumb transposed cache config through export pipeline#18712

Open
kimishpatel wants to merge 1 commit into gh/kimishpatel/232/base from gh/kimishpatel/232/head

Conversation


kimishpatel (Contributor) commented Apr 6, 2026

Stack from ghstack (oldest at bottom):

Benchmarking shows that the transposed KV cache layout [B, H, S, D] significantly
outperforms the standard layout [B, S, H, D] in custom_sdpa, especially at longer
cache fills: 1.64x at start_pos=1024, 1.14x at start_pos=512, and 1.13x for prefill
with seq_len=512 (Llama 3 8B config, Apple M-series). The improvement comes from
better memory locality in the attn_score @ V GEMM, where the stride of V along
S_kv drops from H*D to D.
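
For illustration only (not code from this PR): the stride difference is easy to see on contiguous tensors in PyTorch. The shapes below are assumptions in the spirit of a Llama 3 8B config.

```python
import torch

B, H, S, D = 1, 8, 2048, 128  # batch, KV heads, max seq len, head dim (assumed)

standard = torch.empty(B, S, H, D)    # [B, S, H, D]
transposed = torch.empty(B, H, S, D)  # [B, H, S, D]

# Advancing one position along the sequence axis skips H*D elements in
# the standard layout but only D in the transposed one, so the
# attn_score @ V GEMM reads V almost contiguously.
print(standard.stride(1))    # 1024 (= H * D)
print(transposed.stride(2))  # 128  (= D)
```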

This commit replaces the hardcoded `is_seq_at_dim_2=True  # hacking temporarily`
values in sdpa.py and custom_kv_cache.py with a proper configurable parameter
threaded through the export pipeline (a minimal sketch follows the list):

- Add `use_transposed_cache: bool = True` to ModelConfig in llm_config.py
- Thread it through `_get_source_transforms` in export_llama_lib.py
- Add an `is_seq_at_dim_2` parameter to `replace_kv_cache_with_custom_kv_cache`
  and `replace_sdpa_with_custom_op` (defaulting to True for backward compatibility)
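
A minimal sketch of the plumbing, assuming simplified signatures (the real executorch functions take more arguments; only the names and the new flag come from this PR):

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    # True selects the transposed [B, H, S, D] cache layout.
    use_transposed_cache: bool = True

def replace_kv_cache_with_custom_kv_cache(module, is_seq_at_dim_2: bool = True):
    ...  # swap in the custom KV cache with the chosen layout

def replace_sdpa_with_custom_op(module, is_seq_at_dim_2: bool = True):
    ...  # custom_sdpa must agree with the cache layout

def _get_source_transforms(config: ModelConfig):
    # One config knob feeds both transforms, so the cache and the SDPA
    # op can never disagree about the layout.
    flag = config.use_transposed_cache
    return [
        lambda m: replace_kv_cache_with_custom_kv_cache(m, is_seq_at_dim_2=flag),
        lambda m: replace_sdpa_with_custom_op(m, is_seq_at_dim_2=flag),
    ]
```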

Also fixes:

- torchao aarch64:matmul BUCK: `deps` -> `exported_deps` for `:macro`, fixing
  transitive header visibility on arm64 (see the sketch below)
- op_update_cache.cpp: `%zd` -> `PRId64` for int64_t format strings
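
A hypothetical BUCK sketch of the visibility fix; the srcs and other attributes are made up, and only the `deps` -> `exported_deps` change mirrors the description above:

```python
# exported_deps re-exports :macro's headers to anything that depends on
# :matmul; plain deps keeps them private, which broke transitive
# includes on arm64.
cxx_library(
    name = "matmul",
    srcs = ["matmul.cpp"],       # placeholder
    exported_deps = [":macro"],  # was: deps = [":macro"]
)
```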

Authored with Claude.

Differential Revision: [D99677679](https://our.internmc.facebook.com/intern/diff/D99677679/)


pytorch-bot bot commented Apr 6, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18712

Note: Links to docs will display an error until the docs builds have been completed.

❌ 9 New Failures, 2 Cancelled Jobs

As of commit 6f31ee3 with merge base fb1618e:


This comment was automatically generated by Dr. CI and updates every 15 minutes.


github-actions bot commented Apr 6, 2026

This PR needs a `release notes:` label

If your change should be included in the release notes (i.e., would users of this library care about this change?), please add a label starting with `release notes:`. This helps us keep track of your change and include it in the next release notes.

To add a label, you can comment to pytorchbot, for example:

@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

kimishpatel (Contributor, Author) commented:

Submitted by accident; not meant to land immediately.


Labels: CLA Signed, fb-exported, meta-exported
