[do not merge] feat: Span streaming & new span API #5317 (Closed)

Commit: more fixes (bf93c59)
GitHub Actions / warden: code-review completed Feb 25, 2026 in 17m 57s

9 issues

code-review: Found 9 issues (2 high, 5 medium, 2 low)

High

StreamedSpan not imported at runtime causing NameError - `sentry_sdk/integrations/graphene.py:167`

The `StreamedSpan` class is only imported within the `TYPE_CHECKING` block (line 30), which means it's not available at runtime. When span streaming is enabled and the code reaches line 167 (`if isinstance(_graphql_span, StreamedSpan):`), it will raise `NameError: name 'StreamedSpan' is not defined`. Other integrations like `celery/__init__.py` and `strawberry.py` correctly import `StreamedSpan` at the top level, not just for type checking.

Also found at:

  • sentry_sdk/integrations/redis/utils.py:152-160
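
A minimal, standalone reproduction of the failure mode, using `fractions.Fraction` as a stand-in for `StreamedSpan` (the real class is defined in this PR and not importable here):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Visible to type checkers only -- this branch never executes at runtime.
    from fractions import Fraction

def is_fraction(value):
    # Fraction is unbound at runtime, so calling this raises
    # NameError: name 'Fraction' is not defined.
    return isinstance(value, Fraction)

try:
    is_fraction(1)
except NameError as exc:
    print("runtime failure:", exc)

# Fix (mirrors celery/__init__.py and strawberry.py): also import at
# module level, so the name exists when isinstance() looks it up.
from fractions import Fraction

print(is_fraction(Fraction(1, 2)))  # True: globals() now has Fraction
```

Because `is_fraction` resolves `Fraction` from module globals at call time, the single module-level import fixes every call site in the file.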

StreamedSpan created but not started - spans will silently fail to send - `sentry_sdk/integrations/rust_tracing.py:216-220`

When `sentry_sdk.traces.start_span()` is called (lines 216, 231), the returned `StreamedSpan` is assigned to `scope.span` but never started via `start()` or context manager entry. When `on_close()` later calls `finish()` (which calls `end()` -> `__exit__()`), the span will fail to end properly because `_context_manager_state` was never set. The error is silently caught by `capture_internal_exceptions()`, so spans created in streaming mode will never be sent to Sentry.
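
A sketch of the lifecycle bug with a stub standing in for `StreamedSpan` (the stub's `_context_manager_state` handling approximates the behavior described above; all names are illustrative):

```python
class StreamedSpanStub:
    """Illustrative stand-in: ending requires a prior context-manager entry."""

    def __init__(self):
        self._context_manager_state = None
        self.sent = False

    def __enter__(self):
        self._context_manager_state = "entered"
        return self

    def __exit__(self, exc_type, exc, tb):
        if self._context_manager_state is None:
            # In the SDK this surfaces as an internal exception that
            # capture_internal_exceptions() swallows.
            raise RuntimeError("span was never started")
        self.sent = True

    def finish(self):
        self.__exit__(None, None, None)

# Buggy pattern (rust_tracing.py lines 216, 231): assign without starting.
leaked = StreamedSpanStub()
try:
    leaked.finish()
except RuntimeError:
    pass
assert leaked.sent is False  # span silently dropped

# Fix: enter the span (or call its start()) before storing it on the scope.
span = StreamedSpanStub().__enter__()
span.finish()
assert span.sent is True
```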

Medium

Redundant span serialization doubles serialization cost - `sentry_sdk/_span_batcher.py:78-81`

Each span is serialized via `_to_transport_format()` twice: once in `_estimate_size()` during `add()` for size tracking, and again in `_flush()` when building the envelope. For high-throughput scenarios with many spans, this doubles the serialization cost. Consider caching the serialized form or computing size from pre-cached data.
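
One possible shape for the caching fix — a self-contained sketch, not the actual `_span_batcher.py` code; `to_transport_format` and `MAX_BYTES` are illustrative stand-ins:

```python
import json

def to_transport_format(span):
    # Stand-in for the SDK's per-span serializer.
    return {"name": span["name"], "op": span.get("op")}

class CachingSpanBatcher:
    """Serializes each span exactly once: add() caches the transport
    dict and its estimated size; _flush() reuses the cached dicts."""

    MAX_BYTES = 1_000_000

    def __init__(self):
        self._items = []   # cached transport dicts
        self._size = 0     # running size estimate in bytes

    def add(self, span):
        serialized = to_transport_format(span)      # once, not twice
        self._size += len(json.dumps(serialized))
        self._items.append(serialized)
        if self._size >= self.MAX_BYTES:
            return self._flush()

    def _flush(self):
        envelope = {"items": self._items}           # no re-serialization
        self._items, self._size = [], 0
        return envelope

batcher = CachingSpanBatcher()
batcher.add({"name": "GET /users", "op": "http.server"})
batcher.add({"name": "SELECT users", "op": "db.query"})
assert len(batcher._flush()["items"]) == 2
```

The trade-off is holding the serialized dicts in memory between `add()` and `_flush()`, which the batcher already effectively does by buffering spans.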

StreamedSpan status always set to ERROR regardless of status parameter - `sentry_sdk/integrations/celery/__init__.py:104-105`

The `_set_status` function receives a `status` parameter (e.g., "aborted" or "internal_error"), but when the span is a `StreamedSpan`, it always sets `SpanStatus.ERROR` instead of using the passed `status` value. Since `StreamedSpan.set_status()` accepts `Union[SpanStatus, str]`, it can handle string values directly. This causes control flow exceptions (which should have "aborted" status) to be incorrectly marked as "error", losing the semantic distinction between expected control flow aborts and actual errors.

Spans not closed on exception in async Redis execute_command - `sentry_sdk/integrations/redis/_async_common.py:135-146`

In `_sentry_execute_command`, both `db_span` and `cache_span` are entered manually via `__enter__()` but if `await old_execute_command(self, name, *args, **kwargs)` raises an exception, the corresponding `__exit__()` calls are never executed. This causes the spans to remain open indefinitely, leading to missing telemetry data and potential memory leaks. The exception info is also not passed to `__exit__`, so the span status won't reflect the error condition.

Also found at:

  • sentry_sdk/integrations/redis/_sync_common.py:135
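
A sketch of the exception-safe shape (sync for brevity; the async version is identical apart from `await`). The stub records what `__exit__` receives:

```python
class SpanStub:
    def __init__(self):
        self.closed = False
        self.exc_type = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.closed = True
        self.exc_type = exc_type    # span status can now reflect the error

# Fix sketch: `with` (or try/finally) guarantees __exit__ runs and
# receives exception info, unlike unguarded manual __enter__/__exit__ pairs.
def execute_command(db_span, cache_span, command):
    with db_span, cache_span:
        return command()

db_span, cache_span = SpanStub(), SpanStub()
try:
    execute_command(db_span, cache_span, lambda: 1 / 0)
except ZeroDivisionError:
    pass

assert db_span.closed and cache_span.closed
assert db_span.exc_type is ZeroDivisionError
```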

Async resolve() missing set_op() and set_origin() for StreamedSpan - `sentry_sdk/integrations/strawberry.py:314-321`

In the async `resolve()` method, when `self.graphql_span` is a `StreamedSpan`, the code does not call `span.set_op(OP.GRAPHQL_RESOLVE)` or `span.set_origin(StrawberryIntegration.origin)`. However, the sync `resolve()` method (lines 357-358) correctly sets both. This inconsistency means async GraphQL resolvers will have incomplete span metadata in streaming mode, potentially affecting trace visualization and filtering in Sentry.
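
A sketch of the parity fix: the async path sets the same metadata the sync `resolve()` already sets. The constants below are illustrative stand-ins for `OP.GRAPHQL_RESOLVE` and `StrawberryIntegration.origin`:

```python
import asyncio

OP_GRAPHQL_RESOLVE = "graphql.resolve"         # stand-in for OP.GRAPHQL_RESOLVE
STRAWBERRY_ORIGIN = "auto.graphql.strawberry"  # stand-in for the integration origin

class SpanStub:
    def __init__(self):
        self.op = None
        self.origin = None

    def set_op(self, op):
        self.op = op

    def set_origin(self, origin):
        self.origin = origin

async def resolve(span):
    # Mirror the sync resolve() (lines 357-358): set op and origin before
    # running the resolver, so streamed spans carry full metadata.
    span.set_op(OP_GRAPHQL_RESOLVE)
    span.set_origin(STRAWBERRY_ORIGIN)
    return span

span = asyncio.run(resolve(SpanStub()))
assert span.op == "graphql.resolve"
assert span.origin == "auto.graphql.strawberry"
```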

Converting None sample_rate to string produces "None" literal in baggage - `sentry_sdk/scope.py:1306`

`_update_sample_rate_from_segment` converts `span.sample_rate` to string without checking for None. When `_set_sampling_decision` returns early (e.g., tracing disabled, invalid sample rate, or rate is 0), `sample_rate` remains None. This causes `str(None)` = `"None"` to be stored in baggage's `sentry_items["sample_rate"]`, which propagates an invalid value through trace context that downstream services may fail to parse correctly.
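
The guard is a one-liner; a sketch of the corrected behavior (the function below follows the description above, not the exact SDK signature):

```python
def update_sample_rate_from_segment(sentry_items, sample_rate):
    # Skip stringification when sampling never produced a rate; otherwise
    # the literal "None" leaks into the outgoing baggage header.
    if sample_rate is not None:
        sentry_items["sample_rate"] = str(sample_rate)

items = {}
update_sample_rate_from_segment(items, None)
assert "sample_rate" not in items

update_sample_rate_from_segment(items, 0.25)
assert items["sample_rate"] == "0.25"
```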

Low

Unused 'json' import - `sentry_sdk/_span_batcher.py:1`

The `json` module is imported on line 1 but never used in the file. The size estimation uses `str(span_dict)` and the flush uses `PayloadRef(json={...})`, which doesn't require the `json` module directly. This is a minor code quality issue with no runtime impact.

Missing name check in populate_from_segment may add empty transaction to baggage - `sentry_sdk/tracing_utils.py:816-820`

The `populate_from_segment` method doesn't check if `segment._name` is truthy before adding it to `sentry_items['transaction']`, unlike `populate_from_transaction`, which checks `if transaction.name`. If an empty string is passed as the span name, the baggage will include an empty `transaction` value, which differs from the original behavior that would skip setting the transaction key entirely.
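
A sketch of the truthiness check that restores the original behavior (simplified signature; the real method takes the segment span rather than a bare name):

```python
def populate_from_segment(sentry_items, segment_name):
    # Match populate_from_transaction: only set the key for a
    # non-empty name, so "" never reaches the baggage header.
    if segment_name:
        sentry_items["transaction"] = segment_name

items = {}
populate_from_segment(items, "")
assert "transaction" not in items

populate_from_segment(items, "GET /users")
assert items["transaction"] == "GET /users"
```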


Duration: 17m 54s · Tokens: 13.7M in / 155.3k out · Cost: $18.91 (+extraction: $0.02, +merge: $0.00)
