
Fix SLEEF preprocessor macro name to match ATen vec headers#18645

Open
kimishpatel wants to merge 2 commits into gh/kimishpatel/218/base from gh/kimishpatel/218/head

Conversation

@kimishpatel
Contributor

@kimishpatel kimishpatel commented Apr 1, 2026

Stack from ghstack (oldest at bottom):

The ATen NEON vectorized math headers (vec128_float_neon.h) check for
AT_BUILD_ARM_VEC256_WITH_SLEEF to enable SLEEF intrinsics for exp(),
log(), etc. ExecuTorch's get_vec_preprocessor_flags() was defining
ET_BUILD_ARM_VEC256_WITH_SLEEF (wrong prefix), so the USE_SLEEF macro
always took the fallback path: map(std::exp) — scalar exp called
per-element with full vector load/store overhead wrapping it.

With this fix, Vectorized<float>::exp() correctly dispatches to
Sleef_expf4_u10 on ARM, which is the intended behavior.
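The build-flag side of the fix can be sketched in Python. This is a standalone illustration, not ExecuTorch's real build code: only the flag spelling is taken from the PR, and the `fixed` parameter exists purely to contrast the buggy and corrected outputs of a hypothetical get_vec_preprocessor_flags():

```python
def get_vec_preprocessor_flags(fixed=True):
    """Illustrative stand-in for ExecuTorch's build helper.

    ATen's vec128_float_neon.h checks AT_BUILD_ARM_VEC256_WITH_SLEEF;
    the bug was emitting the ET_ prefix, which that header never reads.
    """
    prefix = "AT" if fixed else "ET"  # buggy version used "ET"
    return ["-D{}_BUILD_ARM_VEC256_WITH_SLEEF".format(prefix)]

# Corrected flag matches what the ATen headers actually check:
assert get_vec_preprocessor_flags(fixed=True) == [
    "-DAT_BUILD_ARM_VEC256_WITH_SLEEF"
]
# The pre-fix flag was a no-op from the headers' point of view:
assert get_vec_preprocessor_flags(fixed=False) == [
    "-DET_BUILD_ARM_VEC256_WITH_SLEEF"
]
```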

Differential Revision: [D96044314](https://our.internmc.facebook.com/intern/diff/D96044314/)

@pytorch-bot

pytorch-bot bot commented Apr 1, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18645

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 Cancelled Jobs

As of commit 5fa0d58 with merge base fb1618e:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions

github-actions bot commented Apr 1, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

@digantdesai digantdesai left a comment


Review automatically exported from Phabricator review in Meta.


Labels

CLA Signed: This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
fb-exported
meta-exported

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants