
Fix #18562: Method.execute() silently produces wrong results for no...#18935

Open
JiwaniZakir wants to merge 1 commit into
pytorch:mainfrom
JiwaniZakir:fix/18562-method-execute-silently-produces-wrong

Conversation

@JiwaniZakir

Closes #18562

Summary

Method.execute() in runtime/__init__.py passed input tensors directly to the underlying C++ runtime without checking memory contiguity. The runtime reads from data_ptr assuming a contiguous layout, so non-contiguous tensors — most commonly produced by .permute(), .transpose(), or .expand() — caused silently wrong outputs with no error or warning.

Fix: in Method.execute(), normalize any non-contiguous torch.Tensor input to a contiguous copy before dispatch. This is a one-time copy per non-contiguous input and has no effect on already-contiguous tensors.

# runtime/__init__.py, Method.execute()
inputs = [
    x.contiguous() if isinstance(x, torch.Tensor) and not x.is_contiguous() else x
    for x in inputs
]
return self._method(inputs)
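The failure mode is easiest to see outside the runtime. The sketch below uses NumPy, which shares torch's strided-memory model; `toy_runtime` is a hypothetical stand-in that reads the underlying buffer as if it were C-contiguous, the way `data_ptr`-based code would. It is an illustration of the bug, not ExecuTorch code:

```python
import numpy as np

def toy_runtime(x):
    # Stand-in for a runtime that reads the underlying buffer assuming a
    # C-contiguous layout, the way data_ptr-based C++ code would.
    base = x if x.base is None else x.base
    flat = base.ravel()[: x.size]
    return (flat * 2.0).reshape(x.shape)

a = np.arange(6.0).reshape(2, 3)
nc = a.T                                       # non-contiguous view of a's buffer
wrong = toy_runtime(nc)                        # reads a's element order, not nc's
fixed = toy_runtime(np.ascontiguousarray(nc))  # normalized first, as the patch does
```

Reading `nc` through its base buffer yields `a`'s element order, so `wrong` silently differs from `nc * 2`, while normalizing to a contiguous copy first gives the correct result.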

Test plan

Added test_execute_non_contiguous_inputs in runtime/test/test_runtime.py. The test builds a non-contiguous 2-D tensor from the existing ModuleAdd fixture input by expanding and permuting (unsqueeze(0).expand(3, -1, -1).permute(1, 2, 0)[:, :, 0]), and first verifies that is_contiguous() is False while torch.equal with the original is True. It then asserts torch.allclose between the output produced from the non-contiguous input and the output produced from the original contiguous input.

python -m pytest runtime/test/test_runtime.py::RuntimeTest::test_execute_non_contiguous_inputs -v
# PASSED

This PR was created with AI assistance (Claude). The changes were reviewed by quality gates and a critic model before submission.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 16, 2026 04:54
@pytorch-bot

pytorch-bot Bot commented Apr 16, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18935

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 7 New Failures, 2 Unrelated Failures

As of commit 531ef37 with merge base ec8d70b:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 16, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

Copilot AI left a comment


Pull request overview

Note

Copilot was unable to run its full agentic suite in this review.

Fixes Method.execute() producing incorrect results when given non-contiguous torch.Tensor inputs by normalizing them to contiguous tensors before dispatching to the underlying runtime.

Changes:

  • Normalize non-contiguous torch.Tensor inputs via .contiguous() inside Method.execute().
  • Add a regression test covering non-contiguous inputs producing identical outputs to contiguous equivalents.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File Description
runtime/__init__.py Ensures Method.execute() passes contiguous tensor data to the C++ runtime by copying non-contiguous tensors.
runtime/test/test_runtime.py Adds a regression test validating correctness for non-contiguous tensor inputs.


Comment thread runtime/__init__.py

    Returns:
        A list of output values, typically torch.Tensor objects.
    """
    import torch

Copilot AI Apr 16, 2026


Importing torch inside execute() adds overhead on every call and also changes when ImportError would surface (now at runtime call time). Prefer a module-level import, or at least import lazily only when a torch.Tensor is actually present (e.g., scan inputs first), optionally with a clear error if torch isn't available.

Comment thread runtime/__init__.py
Comment on lines +149 to 153
    inputs = [
        x.contiguous() if isinstance(x, torch.Tensor) and not x.is_contiguous() else x
        for x in inputs
    ]
    return self._method(inputs)

Copilot AI Apr 16, 2026


This always allocates a new Python list and iterates all inputs even when all tensors are already contiguous (or when there are no tensors). For a hot path like execute(), consider only creating a copied container if at least one replacement is needed; otherwise, pass through the original inputs unchanged to avoid unnecessary allocations.

Suggested change

Replace:

    inputs = [
        x.contiguous() if isinstance(x, torch.Tensor) and not x.is_contiguous() else x
        for x in inputs
    ]
    return self._method(inputs)

with:

    converted_inputs = None
    for i, x in enumerate(inputs):
        if isinstance(x, torch.Tensor) and not x.is_contiguous():
            if converted_inputs is None:
                converted_inputs = list(inputs)
            converted_inputs[i] = x.contiguous()
    return self._method(converted_inputs if converted_inputs is not None else inputs)

runtime = Runtime.get()
program = runtime.load_program(ep.buffer, verification=Verification.Minimal)

# Make a non-contiguous version of the first input via transpose.

Copilot AI Apr 16, 2026


The comment says the non-contiguous tensor is created 'via transpose', but the code uses unsqueeze/expand/permute and slicing. Update the comment to match the actual approach to avoid confusion during future maintenance.

Suggested change

Replace:

    # Make a non-contiguous version of the first input via transpose.

with:

    # Make a non-contiguous version of the first input via
    # unsqueeze/expand/permute followed by slicing.

@kirklandsign
Contributor

@claude review

@claude

claude Bot commented Apr 16, 2026

Claude finished @kirklandsign's task in 3m 21s.


Review of PR #18935

  • Read changed files and surrounding context
  • Analyze pybindings execution path for contiguity handling
  • Review runtime/__init__.py changes
  • Review test changes

Summary

This PR adds a .contiguous() call for non-contiguous tensor inputs in Method.execute(). The fix itself is reasonable, but there's a significant issue with the PR description accuracy and a few code concerns.

Issue: PR description is misleading

The PR description and issue #18562 state that non-contiguous tensors cause "silently wrong outputs with no error or warning." This is only partially accurate:

  • In portable mode (the common build), the C++ pybindings already validate contiguity at pybindings.cpp:1107-1119 and throw a runtime_error if the tensor is neither contiguous nor ChannelsLast. It does not silently produce wrong results — it crashes.
  • In ATen mode (USE_ATEN_LIB), the tensor is passed directly as an EValue at pybindings.cpp:1091 with no contiguity check. This path could produce silently wrong results depending on how the ATen kernels handle the tensor layout.

The fix is still useful as a defensive measure and for improving UX (converting automatically rather than throwing a confusing error), but the framing of "silently wrong results" applies only in certain build configurations.

Code concerns

1. import torch inside execute() (runtime/__init__.py:147)

This adds an import lookup on every execute() call. While Python caches module imports in sys.modules, it's still unnecessary overhead on a hot path. torch is already imported at the top of the test file and is effectively a required dependency for any user calling execute() with tensors. Consider moving it to the top of the file or at minimum doing a lazy import that's cached (e.g., at the class level).

However, I notice this module currently avoids importing torch at the top level — likely intentionally, so the module can be imported without torch being available. If that's the design constraint, the in-function import is acceptable but should have a brief comment explaining why.

2. Unnecessary list allocation on every call (runtime/__init__.py:149-152)

The list comprehension creates a new list on every execute() call even when all inputs are already contiguous (which is the common case). Copilot's review suggested an optimization that only allocates when needed — worth considering, though the overhead is minor for typical input counts.

3. Test comment mismatch (runtime/test/test_runtime.py:88)

The comment says "via transpose" but the code uses unsqueeze/expand/permute and slicing. Minor but should be fixed.

# Current (inaccurate):
# Make a non-contiguous version of the first input via transpose.

# Should be:
# Make a non-contiguous version of the first input via expand/permute.

4. Consider whether this is the right layer for the fix

The contiguity check already exists in the C++ pybindings for portable mode. Adding it again in the Python wrapper means there's now redundant validation in the portable mode path (Python converts to contiguous, then C++ checks contiguity again). This isn't harmful but adds conceptual overhead. An argument could be made that the fix should be in the C++ pybindings layer instead (calling .contiguous() on the at::Tensor before proceeding), which would fix both the PyMethod and PyModule paths uniformly. But the Python-level fix is simpler and adequate for the runtime API.

Minor nits

  • The test could use assertEqual / assertFalse with a message argument for clearer failure diagnostics.
  • The test only covers the case where the first input is non-contiguous. A more thorough test might also check when the second input (or both) are non-contiguous.
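A sketch of that broader coverage, using NumPy transposes as stand-ins for the non-contiguous torch inputs; `checks`, `make_nc`, and the arrays are illustrative, not the fixture's actual data:

```python
import itertools
import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.arange(6.0).reshape(2, 3) + 10.0

checks = []
# Exercise every combination: neither, first, second, or both inputs non-contiguous.
for make_nc in itertools.product([False, True], repeat=2):
    xs = [t.T if flag else t for flag, t in zip(make_nc, (a, b))]
    # Normalize the way the patch does, then record whether the layout is
    # fixed and the values are preserved.
    norm = [x if x.flags['C_CONTIGUOUS'] else np.ascontiguousarray(x) for x in xs]
    checks.append(all(n.flags['C_CONTIGUOUS'] for n in norm)
                  and all(np.array_equal(n, x) for n, x in zip(norm, xs)))
```

The torch version of the test would run the same four combinations through Method.execute() and compare each output against the all-contiguous baseline.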

Verdict

The fix is correct and the test is reasonable. The main concerns are:

  1. The PR description overstates the severity ("silently wrong" vs. actually throwing an error in portable mode)
  2. The comment in the test is inaccurate
  3. Minor performance considerations on the hot path

@mergennachin mergennachin requested a review from Gasoonjia May 11, 2026 19:45
Contributor

@mergennachin mergennachin left a comment


We should look at dim order instead of adding torch dependency to the runtime.

Cc @Gasoonjia
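For illustration only, a torch-free check in the spirit of the suggestion above could compare strides directly; `is_c_contiguous` is a hypothetical helper, and a real implementation would also need to handle size-0/size-1 dimensions and ExecuTorch's dim-order metadata:

```python
def is_c_contiguous(shape, strides, itemsize):
    # Compare the given strides (in bytes) against the strides a dense
    # row-major (C-order) layout of `shape` would have. Sketch only:
    # degenerate dimensions would need extra care in production code.
    expected = []
    running = itemsize
    for dim in reversed(shape):
        expected.append(running)
        running *= dim
    return list(strides) == list(reversed(expected))
```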


Labels

CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Method.execute() silently produces wrong results for non-contiguous input tensors

5 participants