
Commit 1ab1d0e

perf(geotiff): vectorise unpack_bits for bps=2/4/12 (#1713) (#1721)
Sub-byte pixel decoding for BitsPerSample in {2, 4, 12} was running Python loops over packed bytes, which was 100-200x slower than the numpy-strided equivalent on realistic tile sizes (165 ms vs 3 ms for 2M bps=4 pixels on this machine). Vectorise the three branches with masked-and-shifted slices. The written-position contract is preserved bit for bit: positions covered by the input buffer match the prior loop, and positions past the buffer are left as np.empty initial values exactly as before. Includes the deep-sweep state CSV update for Pass 7 of geotiff.
1 parent 7720ea9 commit 1ab1d0e
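The masked-and-shifted slice pattern the commit message describes can be sketched for the bps=4 case. This is a minimal, self-contained illustration, not the library's code: the function names are hypothetical, and it uses `np.zeros` instead of the commit's `np.empty` so the loop and vectorised variants compare equal even on positions past the input buffer.

```python
import numpy as np

def unpack_bps4_loop(data, pixel_count):
    # Reference per-byte loop (the pre-#1713 shape): 2 pixels per
    # byte, high nibble first.
    out = np.zeros(pixel_count, dtype=np.uint8)
    for i in range(min(len(data), (pixel_count + 1) // 2)):
        b = data[i]
        base = i * 2
        if base < pixel_count:
            out[base] = (b >> 4) & 0x0F
        if base + 1 < pixel_count:
            out[base + 1] = b & 0x0F
    return out

def unpack_bps4_vec(data, pixel_count):
    # Vectorised equivalent: compute both nibbles for every covered
    # byte at once, then write them with strided slice assignments.
    out = np.zeros(pixel_count, dtype=np.uint8)
    n = min(len(data), (pixel_count + 1) // 2)
    if n > 0:
        raw = data[:n]
        limit = min(n * 2, pixel_count)          # pixels actually covered
        hi = (raw >> 4) & np.uint8(0x0F)
        lo = raw & np.uint8(0x0F)
        full = limit - (limit % 2)               # last complete pair
        out[0:full:2] = hi[:full // 2]
        out[1:full:2] = lo[:full // 2]
        if limit - full >= 1:                    # odd trailing pixel
            out[full] = hi[full // 2]
    return out

rng = np.random.default_rng(0)
packed = rng.integers(0, 256, size=1000, dtype=np.uint8)
# Exercise exact fit, odd counts, and a count larger than the buffer covers.
for count in (1999, 2000, 1995, 3):
    assert np.array_equal(unpack_bps4_loop(packed, count),
                          unpack_bps4_vec(packed, count))
```

The vectorised variant touches the packed array a constant number of times regardless of pixel count, which is where the order-of-magnitude speedup quoted above comes from.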

3 files changed

Lines changed: 272 additions & 30 deletions

File tree

.claude/sweep-performance-state.csv

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ fire,2026-03-31T18:00:00Z,SAFE,compute-bound,0,,
 flood,2026-03-31T18:00:00Z,SAFE,compute-bound,0,,
 focal,2026-03-31T18:00:00Z,SAFE,compute-bound,0,,
 geodesic,2026-03-31T18:00:00Z,N/A,compute-bound,0,,
-geotiff,2026-05-12,SAFE,IO-bound,1,1688,"Pass 4 (2026-05-10): re-audit after #1559 (centralise attrs across all read backends). New _populate_attrs_from_geo_info helper at __init__.py:301 runs once per read, not per-chunk -- no perf impact. Probe: 2560x2560 deflate-tiled file opened via read_geotiff_dask yields 400 tasks (4 tasks/chunk for 100 chunks), well under 1M cap. read_geotiff_gpu(1024x1024) returns cupy.ndarray end-to-end with no host round-trip (226ms incl. write+decode). No new HIGH/MEDIUM findings. SAFE/IO-bound holds. | Pass 3 (2026-05-10): SAFE/IO-bound. Audited 4 perf commits: #1558 (in-place NaN writes on uniquely-owned buffers correct), #1556 (fp-predictor ngjit ~297us/tile for 256x256 float32), #1552 (single cupy.concatenate + one .get() for batched D2H at _gpu_decode.py:870-913), #1551 (parallel decode threshold >=65536px engages 256x256 default at _reader.py:1121). Bench: 8192x8192 f32 deflate+pred2 256-tile write 782ms; 4096x4096 f32 deflate read 83ms with parallel decode. Deferred LOW (none filed, all <10% MEDIUM threshold): _writer.py:459/1109 redundant .copy() before predictor encode (~1% per tile), _compression.py:280 lzw_decompress dst[:n].copy() (~2% per LZW tile decode), _writer.py:1419 seg_np.copy() before in-place NaN substitution (negligible, conditional path), _CloudSource.read_range opens fresh fsspec handle per range (pre-existing, predates audit scope). nvCOMP per-tile D2H batching break-even confirmed (variable sizes need staging buffer, no win). | Pass 3 (2026-05-10): audited f157746,39322c3,f23ec8f,1aac3b7. All 5 commits correct. Redundant .copy() in _writer.py:459,1109 and _compression.py:280 (1-2% overhead, LOW). _CloudSource.read_range() per-call open is pre-existing arch issue. No HIGH/MEDIUM regressions. SAFE. | re-audit 2026-05-02: 6 commits since 2026-04-16 (predictor=3 CPU encode/decode, GPU predictor stride fix, validate_tile_layout, BigTIFF LONG8 offsets, AREA_OR_POINT VRT, per-tile alloc guard). 1M dask chunk cap intact at __init__.py:948; adler32 batch transfer intact at _gpu_decode.py:1825. New code is metadata validation and dispatcher logic with no extra materialization or per-tile sync points. No HIGH/MEDIUM regressions. | Pass 5 (2026-05-12): re-audit identified MEDIUM in _gpu_decode.py:1577 _try_nvcomp_from_device_bufs: per-tile cupy.empty + trailing cupy.concatenate doubled peak VRAM and added serial concat. Filed #1659 and fixed to single-buffer + pointer offsets (matches LZW/deflate/host-buffer patterns at L1847/L1878/L1114). Microbench (alloc+concat overhead only, not full nvCOMP latency): n=256 tile_bytes=65536 drops 3.66ms->0.69ms, n=256 tile_bytes=262144 drops 8.18ms->0.13ms. Tests: 5 new tests in test_nvcomp_from_device_bufs_single_alloc_1659.py (codec short-circuit, no-lib short-circuit, memory-guard contract, real ZSTD round-trip via nvCOMP, structural single-buffer check). 1458 existing geotiff tests pass, 3 unrelated matplotlib/py3.14 failures pre-existing. SAFE/IO-bound verdict holds. | Pass 6 (2026-05-12): re-audit on top of #1659. New HIGH in _try_kvikio_read_tiles at _gpu_decode.py:941: per-tile cupy.empty() + blocking IOFuture.get() inside loop serialised GDS reads to ~1 outstanding pread, missed parallelism the kvikio worker pool was designed for, paid per-tile cupy.empty setup (matches #1659 anti-pattern in nvCOMP path), and lacked _check_gpu_memory guard. Filed #1688 and fixed to single contiguous buffer + batched submit + guard. Microbench with 8-worker pool simulation: 256 tiles@1ms latency drops 256ms->38.7ms (~6.6x); single-thread simulation 256ms->28.5ms (9x). Tests: 9 new tests in test_kvikio_batched_pread_1688.py (kvikio-absent path, single-buffer pointer arithmetic, submit-before-get ordering, memory guard, partial-read fallback, round-trip data, zero-size/all-sparse tiles). All 1577 geotiff tests pass except pre-existing matplotlib/py3.14 failures."
+geotiff,2026-05-12,SAFE,IO-bound,0,1713,"Pass 7 (2026-05-12): re-audit identified 4 MEDIUM findings, all real, all backed by microbenches. (1) unpack_bits sub-byte loops for bps=2/4/12 in _compression.py:836-878 were 100-200x slower than vectorised numpy (filed #1713, fixed in this branch: bps=4 2M pixels drops from 165ms to 3ms = 55x; bps=2/12 similar). (2) _write_vrt_tiled at __init__.py:1708 uses scheduler='synchronous' on independent tile writes; measured 33% slowdown on 256-tile zstd write vs threads scheduler (filed #1714, no fix yet). (3) _nvcomp_batch_compress at _gpu_decode.py:2522-2526 still does per-tile cupy.get().tobytes() despite #1552 / #1659 fixing the same pattern elsewhere; measured 45% reduction with concat+single get on n=1024 (filed #1712, no fix yet). (4) _nvcomp_batch_compress at _gpu_decode.py:2457 uses per-tile cupy.empty allocations; 1024 tiles 16KB drops from 4.7ms to 1.0ms with single contiguous + views (bundled into #1712). Cat 6 OOM verdict: SAFE/IO-bound holds -- read_geotiff_dask caps task count at _MAX_DASK_CHUNKS=50_000 and per-chunk memory is bounded by chunk size. _inflate_tiles_kernel resource usage on Ampere: 67 regs/thread, 2896B local/thread, 8192B shared/block (LZW kernel: 29 regs, 24576B shared) -- register pressure under control; high local memory in inflate is unavoidable (LZ77 state) but only thread 0 in each block uses it. | Pass 4 (2026-05-10): re-audit after #1559 (centralise attrs across all read backends). New _populate_attrs_from_geo_info helper at __init__.py:301 runs once per read, not per-chunk -- no perf impact. Probe: 2560x2560 deflate-tiled file opened via read_geotiff_dask yields 400 tasks (4 tasks/chunk for 100 chunks), well under 1M cap. read_geotiff_gpu(1024x1024) returns cupy.ndarray end-to-end with no host round-trip (226ms incl. write+decode). No new HIGH/MEDIUM findings. SAFE/IO-bound holds. | Pass 3 (2026-05-10): SAFE/IO-bound. Audited 4 perf commits: #1558 (in-place NaN writes on uniquely-owned buffers correct), #1556 (fp-predictor ngjit ~297us/tile for 256x256 float32), #1552 (single cupy.concatenate + one .get() for batched D2H at _gpu_decode.py:870-913), #1551 (parallel decode threshold >=65536px engages 256x256 default at _reader.py:1121). Bench: 8192x8192 f32 deflate+pred2 256-tile write 782ms; 4096x4096 f32 deflate read 83ms with parallel decode. Deferred LOW (none filed, all <10% MEDIUM threshold): _writer.py:459/1109 redundant .copy() before predictor encode (~1% per tile), _compression.py:280 lzw_decompress dst[:n].copy() (~2% per LZW tile decode), _writer.py:1419 seg_np.copy() before in-place NaN substitution (negligible, conditional path), _CloudSource.read_range opens fresh fsspec handle per range (pre-existing, predates audit scope). nvCOMP per-tile D2H batching break-even confirmed (variable sizes need staging buffer, no win). | Pass 3 (2026-05-10): audited f157746,39322c3,f23ec8f,1aac3b7. All 5 commits correct. Redundant .copy() in _writer.py:459,1109 and _compression.py:280 (1-2% overhead, LOW). _CloudSource.read_range() per-call open is pre-existing arch issue. No HIGH/MEDIUM regressions. SAFE. | re-audit 2026-05-02: 6 commits since 2026-04-16 (predictor=3 CPU encode/decode, GPU predictor stride fix, validate_tile_layout, BigTIFF LONG8 offsets, AREA_OR_POINT VRT, per-tile alloc guard). 1M dask chunk cap intact at __init__.py:948; adler32 batch transfer intact at _gpu_decode.py:1825. New code is metadata validation and dispatcher logic with no extra materialization or per-tile sync points. No HIGH/MEDIUM regressions. | Pass 5 (2026-05-12): re-audit identified MEDIUM in _gpu_decode.py:1577 _try_nvcomp_from_device_bufs: per-tile cupy.empty + trailing cupy.concatenate doubled peak VRAM and added serial concat. Filed #1659 and fixed to single-buffer + pointer offsets (matches LZW/deflate/host-buffer patterns at L1847/L1878/L1114). Microbench (alloc+concat overhead only, not full nvCOMP latency): n=256 tile_bytes=65536 drops 3.66ms->0.69ms, n=256 tile_bytes=262144 drops 8.18ms->0.13ms. Tests: 5 new tests in test_nvcomp_from_device_bufs_single_alloc_1659.py (codec short-circuit, no-lib short-circuit, memory-guard contract, real ZSTD round-trip via nvCOMP, structural single-buffer check). 1458 existing geotiff tests pass, 3 unrelated matplotlib/py3.14 failures pre-existing. SAFE/IO-bound verdict holds. | Pass 6 (2026-05-12): re-audit on top of #1659. New HIGH in _try_kvikio_read_tiles at _gpu_decode.py:941: per-tile cupy.empty() + blocking IOFuture.get() inside loop serialised GDS reads to ~1 outstanding pread, missed parallelism the kvikio worker pool was designed for, paid per-tile cupy.empty setup (matches #1659 anti-pattern in nvCOMP path), and lacked _check_gpu_memory guard. Filed #1688 and fixed to single contiguous buffer + batched submit + guard. Microbench with 8-worker pool simulation: 256 tiles@1ms latency drops 256ms->38.7ms (~6.6x); single-thread simulation 256ms->28.5ms (9x). Tests: 9 new tests in test_kvikio_batched_pread_1688.py (kvikio-absent path, single-buffer pointer arithmetic, submit-before-get ordering, memory guard, partial-read fallback, round-trip data, zero-size/all-sparse tiles). All 1577 geotiff tests pass except pre-existing matplotlib/py3.14 failures."
 glcm,2026-03-31T18:00:00Z,SAFE,compute-bound,0,,"Downgraded to MEDIUM. da.stack without rechunk is scheduling overhead, not OOM risk."
 hillshade,2026-04-16T12:00:00Z,SAFE,compute-bound,0,,"Re-audit after Horn's method rewrite (PR 1175): clean stencil, map_overlap depth=(1,1), no materialization. Zero findings."
 hydro,2026-05-01,RISKY,memory-bound,0,1416,"Fixed-in-tree 2026-05-01: hand_mfd._hand_mfd_dask now assembles via da.map_blocks instead of eager da.block of pre-computed tiles (matches hand_dinf pattern). Remaining MEDIUM: sink_d8 CCL fully materializes labels (inherently global), flow_accumulation_mfd frac_bdry held in driver dict instead of memmap-backed BoundaryStore. D8 iterative paths (flow_accum/fill/watershed/basin/stream_*) use serial-tile sweep with memmap-backed boundary store -- per-tile RAM bounded but driver iterates O(diameter) times. flow_direction_*, flow_path/snap_pour_point/twi/hand_d8/hand_dinf are SAFE."
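Finding (4) in the Pass 7 notes (per-tile allocations replaced by one contiguous buffer plus views) is the same shape as the earlier #1659 and #1688 fixes. A minimal sketch of the pattern, with numpy standing in for cupy and hypothetical function names:

```python
import numpy as np

def per_tile_alloc(n_tiles, tile_bytes):
    # Anti-pattern flagged in finding (4): one allocator call per tile,
    # so allocation overhead scales with tile count.
    return [np.empty(tile_bytes, dtype=np.uint8) for _ in range(n_tiles)]

def single_buffer_views(n_tiles, tile_bytes):
    # The fix bundled into #1712 (as described here): one contiguous
    # allocation, then zero-copy views at fixed offsets. On the GPU the
    # equivalent is one cupy.empty plus pointer-offset views.
    buf = np.empty(n_tiles * tile_bytes, dtype=np.uint8)
    return [buf[i * tile_bytes:(i + 1) * tile_bytes] for i in range(n_tiles)]

# The workload size from the note: 1024 tiles of 16 KB each.
tiles = single_buffer_views(1024, 16 * 1024)
assert len(tiles) == 1024
assert tiles[0].base is tiles[1].base   # views share one backing buffer
```

The contiguous layout also makes a later batched device-to-host transfer a single copy instead of one per tile, which is the point of the related #1712 finding (3).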

xrspatial/geotiff/_compression.py

Lines changed: 74 additions & 29 deletions
@@ -834,44 +834,89 @@ def unpack_bits(data: np.ndarray, bps: int, pixel_count: int) -> np.ndarray:
         out = np.unpackbits(data)[:pixel_count]
         return out.astype(np.uint8)
     elif bps == 2:
-        # 4 pixels per byte, MSB-first
+        # 4 pixels per byte, MSB-first. Vectorised across the packed
+        # byte array so a large tile decodes in one pass instead of N
+        # Python iterations (issue #1713). The strided assignments
+        # below mirror the original per-byte loop's write pattern,
+        # including the contract that positions beyond
+        # ``len(data) * 4`` are left as the ``np.empty`` initial
+        # garbage value -- matching the prior behaviour for short
+        # input buffers.
         out = np.empty(pixel_count, dtype=np.uint8)
-        for i in range(min(len(data), (pixel_count + 3) // 4)):
-            b = data[i]
-            base = i * 4
-            if base < pixel_count:
-                out[base] = (b >> 6) & 0x03
-            if base + 1 < pixel_count:
-                out[base + 1] = (b >> 4) & 0x03
-            if base + 2 < pixel_count:
-                out[base + 2] = (b >> 2) & 0x03
-            if base + 3 < pixel_count:
-                out[base + 3] = b & 0x03
+        n_full_bytes = min(len(data), (pixel_count + 3) // 4)
+        if n_full_bytes > 0:
+            raw = data[:n_full_bytes]
+            written = n_full_bytes * 4
+            limit = min(written, pixel_count)
+            # Compute each of the four bit-slots for every covered byte
+            # then write the prefix that fits into ``out``.
+            slot0 = (raw >> 6) & np.uint8(0x03)
+            slot1 = (raw >> 4) & np.uint8(0x03)
+            slot2 = (raw >> 2) & np.uint8(0x03)
+            slot3 = raw & np.uint8(0x03)
+            full_idx = limit - (limit % 4)
+            if full_idx > 0:
+                quart = full_idx // 4
+                out[0:full_idx:4] = slot0[:quart]
+                out[1:full_idx:4] = slot1[:quart]
+                out[2:full_idx:4] = slot2[:quart]
+                out[3:full_idx:4] = slot3[:quart]
+            # Handle the tail (1 to 3 elements past the last full quad)
+            tail = limit - full_idx
+            tail_byte = full_idx // 4
+            if tail >= 1:
+                out[full_idx] = slot0[tail_byte]
+            if tail >= 2:
+                out[full_idx + 1] = slot1[tail_byte]
+            if tail >= 3:
+                out[full_idx + 2] = slot2[tail_byte]
         return out
     elif bps == 4:
-        # 2 pixels per byte, high nibble first
+        # 2 pixels per byte, high nibble first. Vectorised across the
+        # packed byte array (issue #1713). Positions beyond
+        # ``len(data) * 2`` keep the ``np.empty`` initial value,
+        # matching the prior per-byte loop's contract for short
+        # input buffers.
         out = np.empty(pixel_count, dtype=np.uint8)
-        for i in range(min(len(data), (pixel_count + 1) // 2)):
-            b = data[i]
-            base = i * 2
-            if base < pixel_count:
-                out[base] = (b >> 4) & 0x0F
-            if base + 1 < pixel_count:
-                out[base + 1] = b & 0x0F
+        n_full_bytes = min(len(data), (pixel_count + 1) // 2)
+        if n_full_bytes > 0:
+            raw = data[:n_full_bytes]
+            written = n_full_bytes * 2
+            limit = min(written, pixel_count)
+            hi = (raw >> 4) & np.uint8(0x0F)
+            lo = raw & np.uint8(0x0F)
+            full_idx = limit - (limit % 2)
+            if full_idx > 0:
+                pairs = full_idx // 2
+                out[0:full_idx:2] = hi[:pairs]
+                out[1:full_idx:2] = lo[:pairs]
+            if limit - full_idx >= 1:
+                out[full_idx] = hi[full_idx // 2]
         return out
     elif bps == 12:
-        # 2 pixels per 3 bytes, MSB-first
+        # 2 pixels per 3 bytes, MSB-first. Vectorised across the
+        # packed byte array (issue #1713). Pairs whose third byte is
+        # past ``len(data)`` are skipped, preserving the prior
+        # ``np.empty`` semantics for short input buffers.
         out = np.empty(pixel_count, dtype=np.uint16)
         n_pairs = pixel_count // 2
         remainder = pixel_count % 2
-        for i in range(n_pairs):
-            off = i * 3
-            if off + 2 < len(data):
-                b0 = int(data[off])
-                b1 = int(data[off + 1])
-                b2 = int(data[off + 2])
-                out[i * 2] = (b0 << 4) | (b1 >> 4)
-                out[i * 2 + 1] = ((b1 & 0x0F) << 8) | b2
+        if n_pairs > 0:
+            # Only those triples that fit entirely within ``data`` get
+            # written. The prior code used ``off + 2 < len(data)``
+            # (strict less than), which is satisfied when ``3i + 2 <
+            # len(data)``. Solving for the largest valid ``i`` gives
+            # ``i_max = (len(data) - 3) // 3``, so the pair count cap
+            # is ``i_max + 1 = len(data) // 3``.
+            max_pairs = len(data) // 3
+            usable_pairs = min(n_pairs, max_pairs)
+            if usable_pairs > 0:
+                raw = data[:usable_pairs * 3].astype(np.uint16)
+                b0 = raw[0::3]
+                b1 = raw[1::3]
+                b2 = raw[2::3]
+                out[0:usable_pairs * 2:2] = (b0 << 4) | (b1 >> 4)
+                out[1:usable_pairs * 2:2] = ((b1 & 0x0F) << 8) | b2
         if remainder and n_pairs * 3 + 1 < len(data):
             off = n_pairs * 3
             out[pixel_count - 1] = (int(data[off]) << 4) | (int(data[off + 1]) >> 4)
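The bps=12 branch is the trickiest of the three, since each pair of pixels straddles three bytes. A standalone equivalence sketch of the same technique (hypothetical function names, `np.zeros` in place of the commit's `np.empty` so uncovered positions compare equal):

```python
import numpy as np

def unpack_bps12_loop(data, pixel_count):
    # Reference per-pair loop: 2 pixels per 3 bytes, MSB-first.
    out = np.zeros(pixel_count, dtype=np.uint16)
    n_pairs = pixel_count // 2
    for i in range(n_pairs):
        off = i * 3
        if off + 2 < len(data):
            b0, b1, b2 = int(data[off]), int(data[off + 1]), int(data[off + 2])
            out[i * 2] = (b0 << 4) | (b1 >> 4)
            out[i * 2 + 1] = ((b1 & 0x0F) << 8) | b2
    if pixel_count % 2 and n_pairs * 3 + 1 < len(data):
        off = n_pairs * 3
        out[pixel_count - 1] = (int(data[off]) << 4) | (int(data[off + 1]) >> 4)
    return out

def unpack_bps12_vec(data, pixel_count):
    # Vectorised: widen to uint16, slice every third byte, then shift
    # and mask whole arrays at once.
    out = np.zeros(pixel_count, dtype=np.uint16)
    n_pairs = pixel_count // 2
    usable = min(n_pairs, len(data) // 3)   # triples fully inside data
    if usable > 0:
        raw = data[:usable * 3].astype(np.uint16)
        b0, b1, b2 = raw[0::3], raw[1::3], raw[2::3]
        out[0:usable * 2:2] = (b0 << 4) | (b1 >> 4)
        out[1:usable * 2:2] = ((b1 & 0x0F) << 8) | b2
    if pixel_count % 2 and n_pairs * 3 + 1 < len(data):
        off = n_pairs * 3
        out[pixel_count - 1] = (int(data[off]) << 4) | (int(data[off + 1]) >> 4)
    return out

rng = np.random.default_rng(1)
packed = rng.integers(0, 256, size=300, dtype=np.uint8)
# Even, odd, buffer-limited, and tiny pixel counts.
for count in (200, 199, 201, 7):
    assert np.array_equal(unpack_bps12_loop(packed, count),
                          unpack_bps12_vec(packed, count))
```

The `len(data) // 3` cap reproduces the loop's `off + 2 < len(data)` guard exactly, which is the bit-for-bit "written-position contract" the commit message refers to.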
