feat(redis): Support streaming spans #6083
4 issues
find-bugs: Found 4 issues (1 medium, 3 low)
Medium
Spans leak when Redis command raises exception in async execute_command - `sentry_sdk/integrations/redis/_async_common.py:145-156`
In _sentry_execute_command, spans are entered manually via __enter__(), but if await old_execute_command() raises an exception (e.g., connection error, timeout), neither db_span.__exit__() nor cache_span.__exit__() is called. The spans are therefore never closed or sent, leading to incorrect timing data and potential memory leaks. The code should use try/finally or context managers to ensure proper cleanup (see the sketch after the locations below).
Also found at:
sentry_sdk/integrations/redis/_sync_common.py:151-152
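A minimal sketch of the try/finally variant of the fix, assuming the wrapper keeps its manual `__enter__()` calls. `db_span`, `cache_span`, and `old_execute_command` stand for the objects the existing wrapper already creates; the span-creation code above them is omitted and the signature is illustrative.

```python
import sys

async def _sentry_execute_command(self, name, *args, **kwargs):
    # ... existing code that creates db_span and (optionally) cache_span ...
    db_span.__enter__()
    if cache_span is not None:
        cache_span.__enter__()
    try:
        return await old_execute_command(self, name, *args, **kwargs)
    finally:
        # Runs on success and on exceptions (connection errors, timeouts, ...),
        # so both spans are always closed and sent.
        exc_info = sys.exc_info()
        if cache_span is not None:
            cache_span.__exit__(*exc_info)
        db_span.__exit__(*exc_info)
```

Wrapping the awaited call in `with` blocks instead would give the same guarantee without the manual `__enter__`/`__exit__` bookkeeping.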
Low
Missing capture_internal_exceptions wrapper around span data setters in async code - `sentry_sdk/integrations/redis/_async_common.py:147-148`
The async version of _sentry_execute_command does not wrap the set_db_data_fn(), _set_client_data(), and _set_cache_data() calls with capture_internal_exceptions(), unlike the sync version in _sync_common.py. If these functions raise an exception (e.g., due to unexpected Redis client state), it could propagate into the user's code instead of being handled internally. This inconsistency could cause unexpected failures in async Redis operations.
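A sketch of mirroring the sync behaviour in the async wrapper; `capture_internal_exceptions` is the context manager the sync path already uses, while the helper argument lists below are illustrative and may differ from the actual code.

```python
from sentry_sdk.utils import capture_internal_exceptions

# Inside the async _sentry_execute_command, mirroring _sync_common.py:
with capture_internal_exceptions():
    # Exceptions raised while enriching the span are reported as internal
    # SDK errors instead of propagating into user code.
    set_db_data_fn(db_span, self)
    _set_client_data(db_span, is_cluster, name, *args)

if cache_span is not None:
    with capture_internal_exceptions():
        _set_cache_data(cache_span, self, cache_properties, value)
```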
Inconsistent DB_DRIVER_NAME value between StreamedSpan and Span branches - `sentry_sdk/integrations/redis/modules/queries.py:55`
The SPANDATA.DB_DRIVER_NAME attribute is set to different values depending on span type: "redis" for StreamedSpan (line 55) vs "redis-py" for Span (line 68). This inconsistency appears to be unintentional since both branches should report the same driver name. This could cause confusion in Sentry dashboards when comparing spans from different modes, as they would show different driver names for the same underlying Redis client.
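One way to keep the branches consistent, sketched under the assumption that `SPANDATA.DB_DRIVER_NAME` is importable from `sentry_sdk.consts` as the report implies and that "redis-py" (the non-streamed value) is the intended name; the shared constant and helper below are hypothetical.

```python
from sentry_sdk.consts import SPANDATA

# Hypothetical single source of truth so both branches report the same driver.
_REDIS_DRIVER_NAME = "redis-py"

def _set_driver_name(span):
    # The StreamedSpan branch would call its own setter with the same constant;
    # set_data() is what the non-streamed Span branch uses.
    span.set_data(SPANDATA.DB_DRIVER_NAME, _REDIS_DRIVER_NAME)
```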
StreamedSpan pipeline data never set - commands list discarded silently - `sentry_sdk/integrations/redis/utils.py:115-134`
In _set_pipeline_data, when the span is a StreamedSpan, the code builds the commands list (lines 119-125) but never sets any attributes on the span. The redis.is_cluster and redis.transaction tags and the redis.commands data are all skipped for streamed spans, so the computed data is silently discarded. This loses observability data for users running in span streaming mode (a sketch of the missing assignments follows the locations below).
Also found at:
sentry_sdk/integrations/redis/utils.py:140-147
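A sketch of the assignments the StreamedSpan branch could make once `commands` is built. The tag and data keys come from the non-streamed branch described above; the setter methods shown are the plain Span ones and may need their StreamedSpan equivalents, and the exact payload shape should match whatever the non-streamed branch records.

```python
# commands, is_cluster, and is_transaction are the values _set_pipeline_data
# already has in scope when it reaches the StreamedSpan branch.
span.set_tag("redis.is_cluster", is_cluster)
span.set_tag("redis.transaction", is_transaction)
span.set_data("redis.commands", commands)
```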
Duration: 2m 9s · Tokens: 1.5M in / 14.2k out · Cost: $2.49 (+merge: $0.00, +fix_gate: $0.01)
Annotations
Check warning on line 156 in sentry_sdk/integrations/redis/_async_common.py
sentry-warden / warden: find-bugs
Spans leak when Redis command raises exception in async execute_command
In `_sentry_execute_command`, spans are entered manually via `__enter__()`, but if `await old_execute_command()` raises an exception (e.g., connection error, timeout), neither `db_span.__exit__()` nor `cache_span.__exit__()` is called. The spans are therefore never closed or sent, leading to incorrect timing data and potential memory leaks. The code should use try/finally or context managers to ensure proper cleanup.
Check warning on line 152 in sentry_sdk/integrations/redis/_sync_common.py
sentry-warden / warden: find-bugs
[F4C-LTU] Spans leak when Redis command raises exception in async execute_command (additional location)
In `_sentry_execute_command`, spans are entered manually via `__enter__()`, but if `await old_execute_command()` raises an exception (e.g., connection error, timeout), neither `db_span.__exit__()` nor `cache_span.__exit__()` is called. The spans are therefore never closed or sent, leading to incorrect timing data and potential memory leaks. The code should use try/finally or context managers to ensure proper cleanup.