
Add xsimd::get<>() for optimized compile-time element extraction #1294

Draft
DiamonDinoia wants to merge 1 commit into xtensor-stack:master from DiamonDinoia:feat/optimize-elem-extraction

Conversation

@DiamonDinoia
Contributor

Add a free function xsimd::get<I>(batch) API mirroring std::get<I>(tuple) for fast compile-time element extraction from SIMD batches.

Per-architecture optimized kernel::get overloads using the fastest available intrinsics:

  • SSE2: shuffle/shift + scalar convert
  • SSE4.1: pextrd/pextrq/pextrb/pextrw, bitcast + pextrd for float
  • AVX: vextractf128/vextracti128 + SSE4.1 delegate
  • AVX-512: vextracti64x4/vextractf32x4 + AVX delegate
  • NEON: vgetq_lane_* (single instruction for all types)
  • NEON64: vgetq_lane_f64

Also fixes a latent bug in the common fallback for complex batch compile-time get (wrong buffer type).

