[do not merge] feat: Span streaming & new span API #5317 (Closed)

Merge branch 'release/2.52.0a7' into feat/span-first-2 (a3000cf)
GitHub Actions / warden: find-bugs completed Feb 25, 2026 in 18m 53s

find-bugs: Found 11 issues (2 high, 5 medium, 4 low)

High

API incompatibility: sentry_sdk.traces.start_span doesn't accept keyword arguments used by callers - `sentry_sdk/ai/utils.py:542`

When span streaming is enabled, `get_start_span_function()` returns `sentry_sdk.traces.start_span`, which has the signature `start_span(name: str, attributes: Optional[Attributes] = None, parent_span: Optional[StreamedSpan] = None)`. However, all existing callers (anthropic.py, litellm.py, google_genai, langchain.py, mcp.py, openai_agents, pydantic_ai) invoke the returned function with keyword arguments `op=...`, `name=...`, `origin=...`, which the new API doesn't accept. This will cause a `TypeError` at runtime when span streaming is enabled.
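
To make the failure concrete, here is a minimal self-contained sketch; the function bodies are stand-ins, and only the signatures and call style follow the report:

```python
def legacy_start_span(*, op=None, name=None, origin=None, **kwargs):
    """Models the legacy API the integration callers were written against."""
    return {"op": op, "name": name, "origin": origin}

def streamed_start_span(name, attributes=None, parent_span=None):
    """Models the new sentry_sdk.traces.start_span signature."""
    return {"name": name, "attributes": attributes, "parent_span": parent_span}

def get_start_span_function(span_streaming_enabled):
    # Mirrors the dispatch described in the finding.
    return streamed_start_span if span_streaming_enabled else legacy_start_span

start = get_start_span_function(span_streaming_enabled=True)
try:
    # How the AI integrations invoke the returned function today:
    start(op="gen_ai.chat", name="chat anthropic", origin="auto.ai.anthropic")
except TypeError as exc:
    print(f"TypeError at runtime, as described: {exc}")
```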

StreamedSpan.finish() fails when span not used as context manager - `sentry_sdk/integrations/strawberry.py:192-234`

In streaming mode, spans are created via `sentry_sdk.traces.start_span()` but are neither used as context managers nor explicitly started via `.start()`. When `.finish()` is called later, it invokes `__exit__()`, which tries to access `self._context_manager_state`, never set because `__enter__` was never called. The resulting `AttributeError` is silently swallowed by `capture_internal_exceptions()`, so the span is never properly ended or sent to Sentry. A minimal reproduction is sketched after the list below.

Also found at:

  • sentry_sdk/traces.py:698
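
A minimal reproduction of the described failure mode, using a stand-in class that mirrors the reported names (not the real `StreamedSpan`):

```python
class StreamedSpanModel:
    def __enter__(self):
        self._context_manager_state = object()  # set only on __enter__
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        _ = self._context_manager_state  # AttributeError if never entered
        self._end()

    def _end(self):
        print("span ended and captured")

    def finish(self):
        self.__exit__(None, None, None)

span = StreamedSpanModel()  # created via start_span(), never entered
try:
    span.finish()
except AttributeError as exc:
    # In the SDK this lands in capture_internal_exceptions(), so the
    # span is silently never ended or sent.
    print(f"AttributeError: {exc}")
```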

Medium

StreamedSpan not closed on error in Anthropic integration - `sentry_sdk/integrations/anthropic.py:610-612`

When using the new span streaming mode (`_experiments={"trace_lifecycle": "stream"}`), if an exception occurs during an Anthropic API call, the `StreamedSpan` created in `_sentry_patched_create_common` is not properly closed. The span is started via `span.__enter__()`, but the error cleanup in the `finally` block only handles legacy `Span` objects (via `isinstance(span, Span)`), leaving `StreamedSpan` objects open. This results in spans without end timestamps and potential data loss; see the sketch after the list below.

Also found at:

  • sentry_sdk/integrations/redis/_async_common.py:135-146
  • sentry_sdk/integrations/stdlib.py:182-186
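
A sketch of the cleanup gap with stand-in classes (not the integration's actual code):

```python
class Span:
    def __enter__(self): return self
    def __exit__(self, *exc): print("legacy span closed")

class StreamedSpan:
    def __enter__(self): return self
    def __exit__(self, *exc): print("streamed span closed")

def patched_create_common(span, api_call):
    span.__enter__()
    try:
        return api_call()
    except Exception:
        # Mirrors the cleanup described above: only legacy spans are
        # closed on error, so an entered StreamedSpan stays open.
        if isinstance(span, Span):
            span.__exit__(None, None, None)
        raise

def failing_api_call():
    raise RuntimeError("Anthropic API error")

try:
    patched_create_common(StreamedSpan(), failing_api_call)
except RuntimeError:
    pass  # StreamedSpan never closed: no end timestamp, possible data loss
```
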
Control flow exceptions (Retry, Ignore, Reject) incorrectly marked as ERROR - `sentry_sdk/integrations/celery/__init__.py:105`

When a Celery task raises control flow exceptions (Retry, Ignore, Reject), the `_set_status("aborted")` call sets `SpanStatus.ERROR` for `StreamedSpan`. However, these are not actual errors - they are expected control flow mechanisms in Celery. A Retry indicates the task will be retried, Ignore means the result is intentionally ignored, and Reject means the task was rejected. Marking these as ERROR may cause misleading error counts in monitoring dashboards.
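
One possible shape for a fix, sketched below; the Celery exception imports are real, but the helper and status strings are hypothetical stand-ins:

```python
from celery.exceptions import Ignore, Reject, Retry

CONTROL_FLOW_EXCEPTIONS = (Retry, Ignore, Reject)

def status_for_exception(exc):
    # Expected control flow (retried, intentionally ignored, rejected)
    # should not be reported as a failed span.
    if isinstance(exc, CONTROL_FLOW_EXCEPTIONS):
        return "ok"  # or a dedicated non-error status
    return "error"
```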

Missing custom_sampling_context in Celery span streaming mode breaks custom samplers - `sentry_sdk/integrations/celery/__init__.py:330-337`

In the `_wrap_tracer` function's span streaming path (lines 330-337), the `celery_job` custom sampling context containing task name, args, and kwargs is not set, unlike the legacy path, which passes it to `start_transaction`. Users with custom `traces_sampler` functions that make decisions based on Celery job information will not receive this context in span streaming mode, potentially causing unexpected sampling behavior.
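
A sketch of what a user-side sampler loses; the sampler logic and task name are hypothetical:

```python
def traces_sampler(sampling_context):
    job = sampling_context.get("celery_job")
    if job and job["task"] == "tasks.noisy_heartbeat":
        return 0.0  # drop a known-noisy task
    return 1.0

# Legacy path: the integration supplies the celery_job context.
legacy = {"celery_job": {"task": "tasks.noisy_heartbeat", "args": (), "kwargs": {}}}
assert traces_sampler(legacy) == 0.0

# Streaming path, as described: celery_job is absent, so the sampler
# silently falls through to its default rate.
assert traces_sampler({}) == 1.0
```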

UnboundLocalError when Redis command raises exception - `sentry_sdk/integrations/redis/_sync_common.py:148`

In the `finally` block (lines 143-150), when `old_execute_command` raises an exception, `value` is never assigned. The subsequent call to `_set_cache_data(cache_span, self, cache_properties, value)` on line 148 references the undefined variable `value`, causing an `UnboundLocalError`. While this is caught by `capture_internal_exceptions()`, it is a regression from the old code, which only called `_set_cache_data` after successful command execution. A minimal reproduction follows the list below.

Also found at:

  • sentry_sdk/_span_batcher.py:68-70
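
A minimal reproduction of the pattern with a stand-in wrapper (not the integration code): `value` is bound only on success, but the `finally` block reads it unconditionally.

```python
def wrap_execute_command(old_execute_command):
    def wrapper(*args, **kwargs):
        try:
            value = old_execute_command(*args, **kwargs)
            return value
        finally:
            # Raises UnboundLocalError when the wrapped call failed.
            print("recording cache data:", value)
    return wrapper

def failing_command():
    raise ConnectionError("redis unavailable")

try:
    wrap_execute_command(failing_command)()
except UnboundLocalError as exc:
    # The UnboundLocalError raised in finally supersedes the original
    # ConnectionError as it propagates.
    print(f"UnboundLocalError: {exc}")
```
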
NoOpStreamedSpan returns inconsistent trace_id/span_id values - `sentry_sdk/traces.py:777-783`

The `NoOpStreamedSpan.to_traceparent()` method (lines 770-775) returns the real trace_id and span_id from the propagation context, but the `span_id` (lines 778-779) and `trace_id` (lines 782-783) properties return hardcoded `"000000"` values. This inconsistency means code accessing `.trace_id` or `.span_id` directly gets different data than code parsing `to_traceparent()`. This could cause subtle bugs in trace correlation or header propagation where different parts of the system see different trace identifiers for the same span.
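
An illustrative model of the inconsistency; the IDs and class body are stand-ins, only the shape of the disagreement follows the report:

```python
class NoOpStreamedSpanModel:
    def __init__(self, trace_id, span_id):
        self._propagation_trace_id = trace_id  # real propagation context IDs
        self._propagation_span_id = span_id

    def to_traceparent(self):
        # Uses the real IDs from the propagation context.
        return f"{self._propagation_trace_id}-{self._propagation_span_id}-0"

    @property
    def trace_id(self):
        return "0" * 32  # hardcoded placeholder

    @property
    def span_id(self):
        return "0" * 16  # hardcoded placeholder

span = NoOpStreamedSpanModel("a" * 32, "b" * 16)
# The same span now reports two different trace identities:
assert span.to_traceparent().split("-")[0] != span.trace_id
```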

Low

isinstance check order may cause incorrect behavior for NoOpStreamedSpan in transaction setter - `sentry_sdk/scope.py:813-819`

In the transaction setter (line 813), the check `isinstance(self._span, StreamedSpan)` also matches `NoOpStreamedSpan`, since `NoOpStreamedSpan` is a subclass of `StreamedSpan`. When a `NoOpStreamedSpan` is the current span, the setter therefore emits a deprecation warning and returns early, instead of proceeding silently. This is inconsistent with `set_transaction_name` (lines 828-829), which explicitly checks for `NoOpStreamedSpan` first and returns silently without a warning.
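
A sketch of the ordering fix as described for `set_transaction_name`, with stand-in classes (not the scope implementation):

```python
import warnings

class StreamedSpan: ...
class NoOpStreamedSpan(StreamedSpan): ...

def transaction_setter(span, name):
    # Check the subclass first, mirroring set_transaction_name:
    if isinstance(span, NoOpStreamedSpan):
        return  # proceed silently, no deprecation warning
    if isinstance(span, StreamedSpan):
        warnings.warn("Setting .transaction is deprecated", DeprecationWarning)
        return
    # ... legacy Span handling ...

transaction_setter(NoOpStreamedSpan(), "checkout")  # silent, no warning
```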

Potential AttributeError when legacy Span is active in streaming mode - `sentry_sdk/scope.py:1247-1282`

In `start_streamed_span` at line 1247, `parent_span` is assigned from `self.span`, which has type `Union[Span, StreamedSpan]`. If a legacy `Span` is assigned to the scope while streaming mode is active, accessing `parent_span.segment` at line 1282 will raise `AttributeError`, since the legacy `Span` class doesn't have a `segment` attribute. This is mitigated by the API design, where integrations check `has_span_streaming_enabled()` before using the streaming API, but it could occur if user code directly assigns a legacy span to the scope.
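
A hypothetical defensive guard (not the PR's code), assuming a None fallback is acceptable here:

```python
def resolve_segment(parent_span):
    # getattr returns None instead of raising AttributeError when a
    # legacy Span without a .segment attribute is on the scope.
    return getattr(parent_span, "segment", None)
```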

Race condition allows duplicate span capture - `sentry_sdk/traces.py:428-444`

The `_end` method checks `_finished` at line 428 and sets `_finished = True` at line 444, after calling `scope._capture_span(self)` at line 442. This non-atomic sequence allows concurrent calls to `_end` (e.g., from different threads sharing a span) to both pass the `_finished` check and capture the same span twice. While unlikely in typical usage, where spans live within a single context manager block, this could result in duplicate telemetry data being sent to Sentry.
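
A sketch of one way to make the check-and-set atomic, as a hypothetical fix with a stand-in class:

```python
import threading

class SpanModel:
    def __init__(self):
        self._finished = False
        self._finish_lock = threading.Lock()

    def _end(self, capture):
        with self._finish_lock:
            if self._finished:
                return  # a concurrent second caller exits here
            self._finished = True  # flip the flag before capturing
        capture(self)  # runs at most once per span
```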

Unused import `should_send_default_pii` in create_streaming_span_decorator - `sentry_sdk/tracing_utils.py:1058`

The function `create_streaming_span_decorator` imports `should_send_default_pii` from `sentry_sdk.scope` (line 1058) but never uses it. This appears to be copy-paste from `create_span_decorator`, where it is actually used (lines 971 and 1016). The unused import adds unnecessary overhead and suggests the function may be missing the PII handling logic that exists in the non-streaming version.


Duration: 18m 48s · Tokens: 24.6M in / 186.3k out · Cost: $31.73 (+extraction: $0.02, +merge: $0.01)
