
Metal backend: Add gated delta rule kernel for linear attention#18878

Merged
manuelcandales merged 29 commits into main from gh/manuelcandales/173/head on Apr 21, 2026

Conversation

@manuelcandales (Contributor)

Adds a Metal kernel for the gated delta rule recurrence used by Qwen 3.5
MoE's GatedDeltaNet linear attention layers, ported from the Metal shader
in the MLX delegate PR (#18785). The kernel processes the full sequence
sequentially within a single GPU dispatch, keeping the recurrent state in
per-thread registers.
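
For concreteness, a minimal PyTorch sketch of that recurrence, assuming Hk == Hv, the standard decay-then-delta ordering, and illustrative tensor names (not the op's actual signature):

import torch

def gated_delta_rule_ref(q, k, v, g, beta, state):
    # q, k: [B, H, T, Dk]; v: [B, H, T, Dv]; g, beta: [B, H, T]
    # state: [B, H, Dv, Dk], mutated in place across the sequence
    B, H, T, Dk = k.shape
    y = torch.empty_like(v)
    for t in range(T):
        q_t, k_t, v_t = q[:, :, t], k[:, :, t], v[:, :, t]
        state *= g[:, :, t, None, None]                    # decay; g is already exp'd
        sk = (state * k_t[:, :, None, :]).sum(-1)          # S @ k_t -> [B, H, Dv]
        delta = beta[:, :, t, None] * (v_t - sk)           # delta-rule error term
        state += delta[..., None] * k_t[:, :, None, :]     # rank-1 state update
        y[:, :, t] = (state * q_t[:, :, None, :]).sum(-1)  # read-out: S @ q_t
    return y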

Grid: [32, Dv, B*Hv]; threadgroup: [32, 4, 1]. Within each 32-thread
simdgroup, every lane handles Dk/32 elements of the key dimension, with
SIMD reductions for the dot products.
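
A hypothetical Python emulation of that per-lane work split (the strided lane-to-index mapping is an assumption, and sum() stands in for the hardware simd_sum):

def simd_dot(state_row, k_t, Dk, lanes=32):
    # Each of the 32 lanes owns Dk/32 strided key indices and accumulates
    # a partial dot product; the partials are then combined across lanes.
    partial = [0.0] * lanes
    for lane in range(lanes):
        for i in range(lane, Dk, lanes):
            partial[lane] += state_row[i] * k_t[i]
    return sum(partial)  # emulates the cross-lane SIMD reduction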

The op mutates the recurrent state buffer in-place (mutates_args).
Instantiated for both real model (Dk=128, Dv=128, Hk=32, Hv=32) and
tiny test (Dk=64, Dv=64, Hk=4, Hv=4) dimensions.

Includes: Metal shader + C++ host dispatch, Python custom op definition
(metal::gated_delta_rule) with reference CPU impl and Meta impl, C shim
dict, fallback kernel registration, CMakeLists entry, and test module.

Authored with Claude.

@pytorch-bot

pytorch-bot Bot commented Apr 14, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18878

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@manuelcandales manuelcandales marked this pull request as draft April 14, 2026 16:27
@manuelcandales manuelcandales marked this pull request as ready for review April 14, 2026 22:24
@metascroy (Contributor)

@claude review this code

@claude

claude Bot commented Apr 17, 2026

Claude finished @metascroy's task in 2m 37s


  • Read all changed files
  • Analyze Metal shader and C++ host dispatch
  • Analyze Python custom op and reference implementation
  • Analyze tests
  • Post review

Review: Metal gated delta rule kernel

Overall this is a well-structured addition that follows existing Metal backend patterns. The kernel logic is correct and the in-place state mutation approach is sound. A few issues worth addressing:


Bug: Missing error check on output tensor creation (op_gated_delta_rule.mm:253-270)

The aoti_torch_create_tensor_from_blob_v2 return value is not checked. Compare with op_gather_qmv.mm:492-500 which checks both the return code and handle:

// op_gather_qmv.mm pattern:
AOTITorchError create_result = aoti_torch_create_tensor_from_blob_v2(
    out_contents_ptr, 2, output_sizes.data(), output_strides.data(),
    0, dtype, 13, 0, &out_tensor_handle, 0, nullptr, 0);

if (create_result != Error::Ok || !out_tensor_handle) {
  ET_LOG(Error, "...: Failed to create output tensor");
  aoti_torch_mps_free(out_contents_ptr);
  return Error::Internal;
}

In op_gated_delta_rule.mm:261-268, the return value is discarded and only the handle is null-checked. If aoti_torch_create_tensor_from_blob_v2 returns a non-Ok error but still sets a non-null handle, the error is silently ignored. Worse, even the null-handle path calls aoti_torch_mps_free without logging or returning a proper error code, and control then falls through to memory_to_n_tensor[y_ptr] = 1 on the freed pointer.


Bug: Python reference impl silently assumes Hk == Hv (gated_delta_rule.py:46)

The reference implementation broadcasts k_t[:, :, None, :] as [B, Hk, 1, Dk] against state [B, Hv, Dv, Dk]. This broadcasting only works when Hk == Hv. The Metal kernel correctly handles Hk != Hv via hk_idx = hv_idx / (Hv / Hk), but the Python reference would produce wrong results (or crash) if someone ever instantiates with Hk != Hv.

Currently all instantiated kernels use Hk == Hv so this isn't a live bug, but it creates a correctness trap. Consider either:

  • Adding an assert: assert Hk == Hv, "Reference impl requires Hk == Hv"
  • Or implementing proper GQA key expansion in the reference: k_t = k_t.repeat_interleave(Hv // Hk, dim=1) before the state operations (see the sketch below)
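
A small illustration of that expansion (shapes are hypothetical; it mirrors the kernel's hk_idx = hv_idx / (Hv / Hk) mapping):

import torch

B, Hk, Hv, Dk = 2, 4, 8, 16
k_t = torch.randn(B, Hk, Dk)
k_t = k_t.repeat_interleave(Hv // Hk, dim=1)  # [B, Hk, Dk] -> [B, Hv, Dk]
# k_t[:, :, None, :] now broadcasts cleanly against state [B, Hv, Dv, Dk];
# expanded head hv reads original key head hv // (Hv // Hk).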

Nit: Stale comment in instantiation block (op_gated_delta_rule.mm:124)

Line 124 says "Qwen 3.5 MoE dimensions: Dk=128, Dv=128, Hk=16, Hv=32" but line 143 instantiates Hk=32, Hv=32. The comment on line 142 explains this ("Hk=16 after repeat_interleave → 32") but the macro comment on line 124 is potentially confusing since the template instantiation itself uses the post-expansion values.


Nit: Tolerance relaxation for gather_qmv bfloat16 (test_modules.py:736-737)

The bfloat16 tolerances for gather_qmv were relaxed from atol=1e-1, rtol=1e-1 to atol=5.0, rtol=2e-1. An absolute tolerance of 5.0 is extremely loose — it means outputs can differ by up to 5.0 in absolute value. If this was needed to make tests pass, it may indicate a real precision issue in the gather_qmv kernel for bfloat16. Was this change intentional for this PR, or is it papering over a separate issue? It seems unrelated to the gated delta rule addition.


Minor observations (non-blocking):

  • The decay gate: line 93 of the Metal kernel applies g_[hv_idx] directly (no exp()) — the PR description's pseudocode says exp(g_t), but the op signature says the gate is "already exp'd". These are consistent, but worth documenting for clarity: callers must pre-apply exp() to the gate.
  • The kernel's in-place state mutation (passing state_tensor as both arg 5 and arg 7) is thread-safe because each (b, hv, dv, dk) element is owned by exactly one thread — no cross-thread data races (see the sketch after this list).
  • Test dimensions (Hk=4, Hv=4, Dk=64, Dv=64) only cover the Hk == Hv case. If GQA support (Hk < Hv) is intended to be supported by the Metal kernel, it would be valuable to add a test configuration for it (though this would also require fixing the Python reference).
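
A hypothetical Python rendering of that ownership argument (the bh = b*Hv + hv index decomposition and the strided lane-to-key mapping are assumptions, not taken from the kernel):

def owned_elements(lane, dv, bh, Hv, Dk, lanes=32):
    # Grid [32, Dv, B*Hv]: thread (lane, dv, bh) owns the state elements
    # (b, hv, dv, k) for k in {lane, lane + 32, ...}; these sets are disjoint
    # across threads, so the in-place state update has no cross-thread races.
    b, hv = divmod(bh, Hv)
    return [(b, hv, dv, k) for k in range(lane, Dk, lanes)]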

Base automatically changed from gh/manuelcandales/172/head to main April 21, 2026 17:53
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@manuelcandales manuelcandales merged commit d408a10 into main Apr 21, 2026
175 of 181 checks passed
@manuelcandales manuelcandales deleted the gh/manuelcandales/173/head branch April 21, 2026 18:01

Labels

CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
