
[ET-VK] Fix use-after-free in PrepackNode when TensorRefs are shared #18914

Merged

SS-JIA merged 1 commit into main from gh/SS-JIA/520/orig on Apr 15, 2026

Conversation

@pytorchbot (Collaborator)

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #18906 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/520/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/520/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/520/orig
Differential Revision: D101009402
@diff-train-skip-merge

Pull Request resolved: #18906

When a model has shared/tied weights (e.g. tied embeddings in transformers), the serialization deduplicates them into a single TensorRef that multiple PrepackNodes reference. Previously, `PrepackNode::create_staging_buffer()` called `tref->free_buffer()` unconditionally after copying weight data to a GPU staging buffer. This meant the first PrepackNode to execute would free the underlying host memory, and subsequent PrepackNodes sharing the same TensorRef would read from a dangling pointer — producing garbage/NaN values in prepacked weight and bias tensors on the GPU.
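
For illustration, a minimal self-contained C++ sketch of the old behavior described above; the type layout and function signatures here are simplified stand-ins, not the actual ExecuTorch Vulkan definitions:

```cpp
#include <cstddef>
#include <vector>

// Simplified stand-in for TensorRef: points at host-side weight data.
struct TensorRef {
  const float* data = nullptr;
  size_t numel = 0;
  void free_buffer() { delete[] data; }  // leaves `data` dangling
};

// Old behavior (simplified): copy to staging memory, then free the host
// buffer unconditionally.
std::vector<float> create_staging_buffer(TensorRef* tref) {
  std::vector<float> staging(tref->data, tref->data + tref->numel);
  tref->free_buffer();  // BUG: the first consumer frees the host memory;
                        // any later PrepackNode sharing this TensorRef
                        // copies from a dangling pointer.
  return staging;
}
```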

The fix adds a `prepack_use_count` field to `TensorRef` that tracks how many PrepackNodes still need to read from it. Each PrepackNode increments the count in its constructor and decrements it after copying data. The buffer is only freed when the count reaches zero. This preserves the original eager-free behavior for non-shared weights (freeing immediately after the single consumer copies) while correctly deferring the free for shared weights until the last consumer is done — avoiding both the use-after-free and unnecessary peak memory increase.
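
A hedged sketch of the counting approach, again with simplified stand-in types: the `prepack_use_count` field and the increment/decrement points match the description above, but the surrounding class structure is assumed for illustration:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified stand-in for TensorRef with the new use count.
struct TensorRef {
  const float* data = nullptr;
  size_t numel = 0;
  uint32_t prepack_use_count = 0;  // PrepackNodes that still need this data

  void free_buffer() { delete[] data; data = nullptr; numel = 0; }
};

// Simplified stand-in for PrepackNode.
struct PrepackNode {
  TensorRef* tref;

  explicit PrepackNode(TensorRef* t) : tref(t) {
    // The constructor registers this node as a pending consumer.
    tref->prepack_use_count++;
  }

  std::vector<float> create_staging_buffer() {
    std::vector<float> staging(tref->data, tref->data + tref->numel);
    // Only the last consumer frees the host buffer: non-shared weights are
    // still freed immediately, shared weights stay alive until every
    // PrepackNode has copied its data.
    if (--tref->prepack_use_count == 0) {
      tref->free_buffer();
    }
    return staging;
  }
};
```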
ghstack-source-id: 367726483
@exported-using-ghexport

Differential Revision: [D101009402](https://our.internmc.facebook.com/intern/diff/D101009402/)
@pytorchbot requested a review from SS-JIA as a code owner on April 15, 2026 at 20:35
@pytorch-bot Bot commented Apr 15, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18914

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 2 Pending, 1 Unrelated Failure

As of commit 21a359e with merge base 89e6416:

NEW FAILURE - The following job has failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla Bot added the CLA Signed label on Apr 15, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@SS-JIA merged commit f84a17c into main on Apr 15, 2026
164 of 170 checks passed
@SS-JIA deleted the gh/SS-JIA/520/orig branch on April 15, 2026 at 22:27
