Merged
2 changes: 1 addition & 1 deletion .claude/sweep-security-state.csv
@@ -18,7 +18,7 @@ fire,2026-04-25,,,,,"Clean. Despite the module's size hint, fire.py is purely pe
flood,2026-05-03,1437,MEDIUM,3,,Re-audit 2026-05-03. MEDIUM Cat 3 fixed in PR #1438 (travel_time and flood_depth_vegetation now validate mannings_n DataArray values are finite and strictly positive via _validate_mannings_n_dataarray helper). No remaining unfixed findings. Other categories clean: every allocation is same-shape as input; no flat index math; NaN propagation explicit in every backend; tan_slope clamped by _TAN_MIN; no CUDA kernels; no file I/O; every public API calls _validate_raster on DataArray inputs.
focal,2026-04-27,1284,HIGH,1,,"HIGH (fixed PR #1286): apply(), focal_stats(), and hotspots() accepted unbounded user-supplied kernels via custom_kernel(), which only checks shape parity. The kernel-size guard from #1241 (_check_kernel_memory) only ran inside circle_kernel/annulus_kernel, so a (50001, 50001) custom kernel on a 10x10 raster allocated ~10 GB on the kernel itself plus a much larger padded raster before any work -- same shape as the bilateral DoS in #1236. Fixed by adding _check_kernel_vs_raster_memory in focal.py and wiring it into apply(), focal_stats(), and hotspots() after custom_kernel() validation. All 134 focal tests + 19 bilateral tests pass. No other findings: 10 CUDA kernels all have proper bounds + stencil guards; _validate_raster called on every public entry point; hotspots already raises ZeroDivisionError on constant-value rasters; _focal_variety_cuda uses a fixed-size local buffer (silent truncation but bounded); _focal_std_cuda/_focal_var_cuda clamp the catastrophic-cancellation case via if var < 0.0: var = 0.0; no file I/O."
geodesic,2026-04-27,1283,HIGH,1,,"HIGH (fixed PR #1285): slope(method='geodesic') and aspect(method='geodesic') stack a (3, H, W) float64 array (data, lat, lon) before dispatch with no memory check. A large lat/lon-tagged raster passed to either function would OOM. Fixed by adding _check_geodesic_memory(rows, cols) in xrspatial/geodesic.py (mirrors morphology._check_kernel_memory): budgets 56 bytes/cell (24 stacked float64 + 4 float32 output + 24 padded copy + slack) and raises MemoryError when > 50% of available RAM; called from slope.py and aspect.py inside the geodesic branch before dispatch. No other findings: 6 CUDA kernels all have bounds guards (e.g. _run_gpu_geodesic_aspect at geodesic.py:395), custom 16x16 thread blocks avoid register spill, no shared memory, _validate_raster runs upstream in slope/aspect, all backends cast to float32, slope_mag < 1e-7 flat threshold prevents arctan2 NaN propagation, curvature correction uses hardcoded WGS84 R."
geotiff,2026-05-11,1614,MEDIUM,5,,"MEDIUM (Cat 5 XML injection, filed #1614): _build_gdal_metadata_xml in _geotags.py used plain f-strings to embed caller-supplied keys and values into the GDALMetadata XML payload (tag 42112), so a key or value carrying XML special chars (< > & "" ') silently produced malformed XML (ParseError on read -> attrs round-tripped as {}) or let a crafted key inject attributes into <Item> (e.g. name='foo"" malicious=""bar'). Fix mirrors #1607: route every text slot through xml.sax.saxutils.escape and every attribute slot through quoteattr; sample indices are emitted from int() casts. Same bug class as #1607 but on a separate code path the earlier sweep did not cover. Other categories clean (rest of geotiff already hardened by #1607/#1579/#1584/#1219/#1196/#1189). Reachable from to_geotiff, _write_vrt_tiled, and write_geotiff_gpu whenever a caller passes attrs['gdal_metadata'] as a dict."
geotiff,2026-05-11,1625,MEDIUM,1,,"Re-audit pass 14 2026-05-11: MEDIUM Cat 1 (decompression bomb, filed #1625): lerc_decompress_with_mask and jpeg2000_decompress called lerc.decode / glymur.Jp2k[:] with no pre-decode output-size bound. The post-decode size check in _decode_strip_or_tile fired only after the external library had already materialised the full buffer. A 94-byte LERC blob can declare a 64 MiB output; a kilobyte-sized blob can request multiple GB. Fix: added _check_lerc_bomb helper (queries lerc.getLercBlobInfo for declared nCols/nRows/nBands*dtype_bytes) and Jp2k.shape check in jpeg2000_decompress; both raise ValueError when declared output exceeds expected_size*1.05+1 cap, matching the deflate/zstd/lz4/packbits pattern from #1533. Wired expected_size through decompress() and _decode_strip_or_tile and _gpu_decode CPU fallback. JPEG codec is protected at the library level via Image.MAX_IMAGE_PIXELS so no wrapper-level cap is needed. Other categories remain clean (see prior pass notes)."
glcm,2026-04-24,1257,HIGH,1,,"HIGH (fixed #1257): glcm_texture() validated window_size only as >= 3 and distance only as >= 1, with no upper bound on either. _glcm_numba_kernel iterates range(r-half, r+half+1) for every pixel, so window_size=1_000_001 on a 10x10 raster ran ~10^14 loop iterations with all neighbors failing the interior bounds check (CPU DoS). On the dask backends depth = window_size // 2 + distance drove map_overlap padding, so a huge window also caused oversize per-chunk allocations (memory DoS). Fixed by adding max_val caps in the public entrypoint: window_size <= max(3, min(rows, cols)) and distance <= max(1, window_size // 2). One cap covers every backend because cupy and dask+cupy call through to the CPU kernel after cupy.asnumpy. No other HIGH findings: levels is already capped at 256 so the per-pixel np.zeros((levels, levels)) matrix in the kernel is bounded to 512 KB. No CUDA kernels. No file I/O. Quantization clips to [0, levels-1] before the kernel and NaN maps to -1 which the kernel filters with i_val >= 0. Entropy log(p) and correlation p / (std_i * std_j) are both guarded. All four backends use _validate_raster and cast to float64 before quantizing. MEDIUM (unfixed, Cat 1): the per-pixel np.zeros((levels, levels)) allocation inside the hot loop is a perf issue (levels=256 -> 512 KB alloc+free per pixel) but not a security issue because levels is bounded. Could be hoisted out of the loop or replaced with an in-place clear, but that is an efficiency concern, not security."
gpu_rtx,2026-04-29,1308,HIGH,1,,"HIGH (fixed #1308 / PR #1310): hillshade_rtx (gpu_rtx/hillshade.py:184) and viewshed_gpu (gpu_rtx/viewshed.py:269) allocated cupy device buffers sized by raster shape with no memory check. create_triangulation (mesh_utils.py:23-24) adds verts (12 B/px) + triangles (24 B/px) = 36 B/px; hillshade_rtx adds d_rays(32) + d_hits(16) + d_aux(12) + d_output(4) = 64 B/px (100 B/px total); viewshed_gpu adds d_rays(32) + d_hits(16) + d_visgrid(4) + d_vsrays(32) = 84 B/px (120 B/px total). A 30000x30000 raster asked for 90-108 GB of VRAM before cupy surfaced an opaque allocator error. Fixed by adding gpu_rtx/_memory.py with _available_gpu_memory_bytes() and _check_gpu_memory(func_name, h, w) helpers (cost_distance #1262 / sky_view_factor #1299 pattern, 120 B/px budget covers worst case, raises MemoryError when required > 50% of free VRAM, skips silently when memGetInfo() unavailable). Wired into both entry points after the cupy.ndarray type check and before create_triangulation. 9 new tests in test_gpu_rtx_memory.py (5 helper-unit + 4 end-to-end gated on has_rtx). All 81 existing hillshade/viewshed tests still pass. Cat 4 clean: all CUDA kernels (hillshade.py:25/62/106, viewshed.py:32/74/116, mesh_utils.py:50) have bounds guards; no shared memory, no syncthreads needed. MEDIUM not fixed (Cat 6): hillshade_rtx and viewshed_gpu do not call _validate_raster directly but parent hillshade() (hillshade.py:252) and viewshed() (viewshed.py:1707) already validate, so input validation runs before the gpu_rtx entry point - defense-in-depth, not exploitable. MEDIUM not fixed (Cat 2): mesh_utils.py:64-68 cast mesh_map_index to int32 in the triangle index buffer; overflows at H*W > 2.1B vertices (~46341x46341+) but the new memory guard rejects rasters that large first - documentation/clarity item rather than exploitable. 
MEDIUM not fixed (Cat 3): mesh_utils.py:19 scale = maxDim / maxH divides by zero on an all-zero raster, propagating inf/NaN into mesh vertex z-coords; separate follow-up. LOW not fixed (Cat 5): mesh_utils.write() opens user-supplied path without canonicalization but its only call site (mesh_utils.py:38-39) sits behind if False: in create_triangulation, not reachable in production."
hillshade,2026-04-27,,,,,"Clean. Cat 1: only allocation is the output np.empty(data.shape) at line 32 (cupy at line 165) and a _pad_array with hardcoded depth=1 (line 62) -- bounded by caller, no user-controlled amplifier. Azimuth/altitude are scalars and don't drive size. Cat 2: numba kernel uses range(1, rows-1) with simple (y, x) indexing; numba range loops promote to int64. Cat 3: math.sqrt(1.0 + xx_plus_yy) is always >= 1.0 (no neg sqrt, no div-by-zero); NaN elevation propagates correctly through dz_dx/dz_dy -> shaded -> output (the shaded < 0.0 / shaded > 1.0 clamps don't fire on NaN). Azimuth validated to [0, 360], altitude to [0, 90]. Cat 4: _gpu_calc_numba (line 107) guards both grid bounds and 3x3 stencil reads via i > 0 and i < shape[0]-1 and j > 0 and j < shape[1]-1; no shared memory. Cat 5: no file I/O. Cat 6: hillshade() calls _validate_raster (line 252) and _validate_scalar for both azimuth (253) and angle_altitude (254); all four backend paths cast to float32; tests parametrize int32/int64/float32/float64."
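Several of the rows above (geodesic #1285, gpu_rtx #1308) describe the same pre-dispatch guard: budget a fixed bytes-per-cell cost and raise `MemoryError` when the projected allocation exceeds half of available memory. A minimal sketch of that pattern — the names here are illustrative, not the library's (`_check_geodesic_memory` and `gpu_rtx/_memory._check_gpu_memory` are the real helpers), and the 56 B/cell budget is the geodesic figure quoted in the row:

```python
# Illustrative sketch of the pre-dispatch memory guard the audit rows
# describe. Hypothetical names; the 56 B/cell budget mirrors the geodesic
# row: 24 B stacked float64 + 4 B float32 output + 24 B padded copy + slack.
BYTES_PER_CELL = 56

def check_dispatch_memory(rows: int, cols: int, available_bytes: int) -> int:
    """Raise MemoryError when projected cost exceeds 50% of free memory."""
    required = rows * cols * BYTES_PER_CELL
    if required > available_bytes // 2:
        raise MemoryError(
            f"operation needs ~{required:,} bytes for a {rows}x{cols} "
            f"raster; only {available_bytes:,} bytes available")
    return required
```

The 50% threshold matches the convention the rows cite for both `_check_geodesic_memory` and the gpu_rtx VRAM check; the real helpers additionally skip silently when free-memory introspection is unavailable.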
109 changes: 102 additions & 7 deletions xrspatial/geotiff/_compression.py
@@ -1112,8 +1112,17 @@ def zstd_compress(data: bytes, level: int = 3) -> bytes:


def jpeg2000_decompress(data: bytes, width: int = 0, height: int = 0,
samples: int = 1) -> bytes:
"""Decompress a JPEG 2000 codestream. Requires ``glymur``."""
samples: int = 1, expected_size: int = 0) -> bytes:
"""Decompress a JPEG 2000 codestream. Requires ``glymur``.

When ``expected_size`` > 0 the wrapper inspects the codestream's
declared ``shape`` and ``dtype`` via :class:`glymur.Jp2k` (which
parses only the SIZ marker and does not trigger pixel decoding)
and raises ``ValueError`` when ``np.prod(shape) * dtype_bytes``
exceeds ``expected_size * 1.05 + 1`` bytes. This blocks
decompression-bomb attacks where a tiny on-disk JPEG 2000 tile
declares multi-gigabyte output dimensions.
"""
if not JPEG2000_AVAILABLE:
raise ImportError(
"glymur is required to read JPEG 2000-compressed TIFFs. "
@@ -1126,6 +1135,24 @@ def jpeg2000_decompress(data: bytes, width: int = 0, height: int = 0,
os.write(fd, data)
os.close(fd)
jp2 = _glymur.Jp2k(tmp)
if expected_size > 0:
try:
shape = jp2.shape
dtype = np.dtype(getattr(jp2, 'dtype', np.uint8))
except Exception:
shape = None
dtype = None
if shape is not None and dtype is not None:
declared = int(np.prod(shape)) * dtype.itemsize
cap = _max_output_with_margin(expected_size)
if declared > cap:
raise ValueError(
f"jpeg2000 decode would exceed expected size: "
f"declared output is {declared} bytes (shape "
f"{shape}, {dtype.itemsize} B/sample), cap is "
f"{cap} (expected {expected_size}). Likely a "
f"decompression bomb."
)
arr = jp2[:]
return arr.tobytes()
finally:
@@ -1174,21 +1201,75 @@ def jpeg2000_compress(data: bytes, width: int, height: int,
_lerc = None


# LERC dataType code -> bytes/sample. Mirrors the enum in the LERC C++
# header: 0 int8, 1 uint8, 2 int16, 3 uint16, 4 int32, 5 uint32,
# 6 float32, 7 float64. Used by the decompression-bomb pre-check on
# ``lerc.getLercBlobInfo`` so the blob's declared decoded byte count can
# be validated before ``lerc.decode`` allocates the full buffer.
_LERC_DTYPE_BYTES = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 8}


def _check_lerc_bomb(data: bytes, expected_size: int) -> None:
"""Reject LERC blobs whose declared output exceeds the bomb cap.

``lerc.getLercBlobInfo`` parses the blob header without decoding,
returning ``(errCode, version, dataType, nDim, nCols, nRows,
nBands, ...)``. We compute ``nCols * nRows * nBands * dtype_bytes``
and raise ``ValueError`` when the projected output exceeds the
same margin cap (``expected_size * 1.05 + 1``) used by every other
codec wrapper. Skipping when ``expected_size <= 0`` matches the
existing convention: a zero (or unset) expected size disables the
cap so direct callers and round-trip tests still work.
"""
if expected_size <= 0:
return
try:
info = _lerc.getLercBlobInfo(data)
except Exception:
# If the header itself is malformed, hand the blob to lerc.decode
# so it produces the canonical error rather than masking it here.
return
if len(info) < 7:
return
data_type = int(info[2])
n_cols = int(info[4])
n_rows = int(info[5])
n_bands = int(info[6])
bytes_per_sample = _LERC_DTYPE_BYTES.get(data_type)
if bytes_per_sample is None:
return
declared = n_cols * n_rows * n_bands * bytes_per_sample
cap = _max_output_with_margin(expected_size)
if declared > cap:
raise ValueError(
f"lerc decode would exceed expected size: declared output is "
f"{declared} bytes ({n_cols}x{n_rows}x{n_bands}, "
f"{bytes_per_sample} B/sample), cap is {cap} "
f"(expected {expected_size}). Likely a decompression bomb."
)


def lerc_decompress(data: bytes, width: int = 0, height: int = 0,
samples: int = 1) -> bytes:
samples: int = 1, expected_size: int = 0) -> bytes:
"""Decompress LERC data. Requires the ``lerc`` package.

Returns the raw decoded pixel bytes. Any LERC valid-mask is dropped
here; masked pixels are returned as LERC's zero fill (the wire
format's default). Callers that need to honour the file's nodata
value should use :func:`lerc_decompress_with_mask` instead and apply
nodata at the array level once dtype is known.

When ``expected_size`` > 0 the wrapper queries the blob's declared
output size via :func:`lerc.getLercBlobInfo` and raises
``ValueError`` when it exceeds ``expected_size * 1.05 + 1`` bytes,
matching the bomb cap applied by every other codec wrapper.
"""
decoded_bytes, _mask = lerc_decompress_with_mask(data)
decoded_bytes, _mask = lerc_decompress_with_mask(
data, expected_size=expected_size)
return decoded_bytes


def lerc_decompress_with_mask(data: bytes):
def lerc_decompress_with_mask(data: bytes, expected_size: int = 0):
"""Decompress LERC data and return ``(bytes, valid_mask_or_None)``.

``valid_mask`` is ``None`` when LERC reports the block is fully
Expand All @@ -1198,11 +1279,21 @@ def lerc_decompress_with_mask(data: bytes):
pixels the encoder flagged as invalid. LERC zero fills masked
positions in the data array, so the returned mask is the only
signal that lets a reader restore the file's nodata sentinel.

When ``expected_size`` > 0 the wrapper queries the blob's declared
output size via :func:`lerc.getLercBlobInfo` and raises
``ValueError`` when it exceeds ``expected_size * 1.05 + 1`` bytes
(decompression-bomb guard). A 94-byte LERC blob can otherwise
request 64 MiB of host memory because LERC compresses constant
blocks at >700,000:1; without this pre-check the post-decode size
check in :func:`_decode_strip_or_tile` fires only after the bomb
has already been materialised.
"""
if not LERC_AVAILABLE:
raise ImportError(
"lerc is required to read LERC-compressed TIFFs. "
"Install it with: pip install lerc")
_check_lerc_bomb(data, expected_size)
result = _lerc.decode(data)
# lerc.decode returns (result_code, data_array, valid_mask, ...)
if result[0] != 0:
@@ -1355,13 +1446,17 @@ def decompress(data, compression: int, expected_size: int = 0,
zstd_decompress(data, expected_size), dtype=np.uint8)
elif compression == COMPRESSION_JPEG2000:
return np.frombuffer(
jpeg2000_decompress(data, width, height, samples), dtype=np.uint8)
jpeg2000_decompress(data, width, height, samples,
expected_size=expected_size),
dtype=np.uint8)
elif compression == COMPRESSION_LZ4:
return np.frombuffer(
lz4_decompress(data, expected_size), dtype=np.uint8)
elif compression == COMPRESSION_LERC:
return np.frombuffer(
lerc_decompress(data, width, height, samples), dtype=np.uint8)
lerc_decompress(data, width, height, samples,
expected_size=expected_size),
dtype=np.uint8)
else:
raise ValueError(f"Unsupported compression type: {compression}")

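All of these wrappers share one cap, `_max_output_with_margin`, whose body sits outside this diff; the docstrings state its formula as `expected_size * 1.05 + 1`. A sketch under that reading (the exact rounding in the real helper may differ):

```python
def max_output_with_margin(expected_size: int) -> int:
    # One plausible reading of the cap the docstrings quote:
    # 5% slack over the expected decoded size, plus one byte.
    return int(expected_size * 1.05 + 1)

def check_declared_output(declared: int, expected_size: int) -> None:
    # Mirrors the shared pre-check convention: expected_size <= 0
    # disables the cap; otherwise reject before allocating anything.
    if expected_size <= 0:
        return
    if declared > max_output_with_margin(expected_size):
        raise ValueError(
            f"declared output {declared} B exceeds cap for expected "
            f"{expected_size} B; likely a decompression bomb")
```

This is the same shape `_check_lerc_bomb` and the `jpeg2000_decompress` guard above instantiate: read a declared size from the blob header, compare it against the margin cap, and only then let the external decoder run.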
7 changes: 5 additions & 2 deletions xrspatial/geotiff/_gpu_decode.py
@@ -1933,7 +1933,9 @@ def gpu_decode_tiles(
for i, tile in enumerate(compressed_tiles):
start = i * tile_bytes
chunk = np.frombuffer(
jpeg2000_decompress(tile, tile_width, tile_height, samples),
jpeg2000_decompress(
tile, tile_width, tile_height, samples,
expected_size=tile_bytes),
dtype=np.uint8)
raw_host[start:start + min(len(chunk), tile_bytes)] = \
chunk[:tile_bytes] if len(chunk) >= tile_bytes else \
@@ -1953,7 +1955,8 @@ def gpu_decode_tiles(
any_lerc_mask = False
for i, tile in enumerate(compressed_tiles):
start = i * tile_bytes
decoded_bytes, valid_mask = lerc_decompress_with_mask(tile)
decoded_bytes, valid_mask = lerc_decompress_with_mask(
tile, expected_size=tile_bytes)
chunk = np.frombuffer(decoded_bytes, dtype=np.uint8)
raw_host[start:start + min(len(chunk), tile_bytes)] = \
chunk[:tile_bytes] if len(chunk) >= tile_bytes else \
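The tile-placement lines above clamp each decoded chunk to `tile_bytes`; the diff view collapses the continuation after the `else \`, so the short-chunk branch is not visible here. A hedged sketch of the pattern — the undersize handling is an assumed completion (write only the bytes we have), not the file's verbatim code:

```python
import numpy as np

# Hedged sketch of the tile-placement step in gpu_decode_tiles. The
# short-chunk branch is assumed, not verbatim: oversize chunks are
# truncated to tile_bytes, undersize chunks fill only their own length
# and leave the remainder of the tile slot untouched.
def place_tile(raw_host: np.ndarray, start: int, chunk: np.ndarray,
               tile_bytes: int) -> None:
    n = min(len(chunk), tile_bytes)
    raw_host[start:start + n] = chunk[:n]
```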
6 changes: 5 additions & 1 deletion xrspatial/geotiff/_reader.py
@@ -776,7 +776,11 @@ def _decode_strip_or_tile(data_slice, compression, width, height, samples,
# valid-mask which the generic decompress() dispatcher discards.
# We capture it here so masked pixels can be restored to nodata
# below, instead of leaking LERC's zero fill into the output.
decoded_bytes, lerc_mask = lerc_decompress_with_mask(data_slice)
# Forward ``expected`` so the wrapper rejects bombs at the
# blob-header level rather than after the full buffer is
# materialised (issue #1625).
decoded_bytes, lerc_mask = lerc_decompress_with_mask(
data_slice, expected_size=expected)
chunk = np.frombuffer(decoded_bytes, dtype=np.uint8)
else:
chunk = decompress(data_slice, compression, expected,
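The ordering this hunk fixes (issue #1625) can be shown in isolation: the declared output size must be checked from the blob header *before* the external decoder runs, because a post-decode size check fires only after the bomb has been materialised. A hypothetical sketch — `LercHeader` and `precheck_then_decode` are illustrative names, not the library's API:

```python
# Hypothetical sketch (names are mine) of the pre-check ordering issue
# #1625 fixes: validate the header's declared output size before the
# decoder is allowed to materialise the full buffer.
from dataclasses import dataclass

@dataclass
class LercHeader:
    n_cols: int
    n_rows: int
    n_bands: int
    bytes_per_sample: int

def precheck_then_decode(header, expected_size, decode):
    declared = (header.n_cols * header.n_rows
                * header.n_bands * header.bytes_per_sample)
    if expected_size > 0 and declared > int(expected_size * 1.05 + 1):
        raise ValueError("declared output exceeds cap; likely a bomb")
    return decode()
```

A 4096x4096 single-band float32 header declares 64 MiB; with a 64 KiB `expected_size` the pre-check raises and `decode()` never runs, which is the behaviour the wired-through `expected_size` parameter buys in `_decode_strip_or_tile`.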