Apply suggestions from code review

72d9637
Closed

[do not merge] feat: Span streaming & new span API #5317

GitHub Actions / warden completed Feb 25, 2026 in 42m 18s

28 issues

High

UnboundLocalError when transaction methods fail in streaming mode - `sentry_sdk/integrations/celery/__init__.py:324-337`

In _wrap_tracer, span_ctx is declared (line 324) but only assigned inside the capture_internal_exceptions() block (line 337). If an exception occurs in transaction.set_origin(), transaction.set_source(), or transaction.set_op() after transaction is assigned, the check if transaction is None (line 362) will pass, but span_ctx will be unbound. This causes an UnboundLocalError at line 365 when with span_ctx: is executed. The user-visible consequence is a runtime crash in Celery task execution when these methods fail.
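The failure mode can be reproduced in isolation. Below is a minimal sketch, not the actual sentry_sdk code: `_Transaction` and the bare `try/except` stand in for the real Celery wrapper and `capture_internal_exceptions()`.

```python
# Minimal repro: span_ctx is only bound inside the guarded block, so a
# failure mid-block leaves it unbound when the code later uses it.
class _Transaction:
    def set_origin(self, origin):
        raise RuntimeError("boom")  # simulate a failing setter

def buggy():
    transaction = None
    try:  # stands in for capture_internal_exceptions()
        transaction = _Transaction()
        transaction.set_origin("auto.queue.celery")
        span_ctx = transaction  # never reached
    except Exception:
        pass
    if transaction is None:
        return "no-op"
    with span_ctx:  # UnboundLocalError: span_ctx was never assigned
        return "ran"

def fixed():
    transaction = None
    span_ctx = None  # initialize alongside transaction
    try:
        transaction = _Transaction()
        transaction.set_origin("auto.queue.celery")
        span_ctx = transaction
    except Exception:
        pass
    if transaction is None or span_ctx is None:  # guard both names
        return "no-op"
    with span_ctx:
        return "ran"

try:
    buggy()
    buggy_raised = False
except UnboundLocalError:
    buggy_raised = True
```

Initializing `span_ctx = None` next to `transaction = None` and guarding both names keeps the early-exit path consistent even when a setter raises inside the suppressed block.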

StreamedSpan is never started, causing finish() to silently fail - `sentry_sdk/integrations/graphene.py:151-166`

In streaming mode, sentry_sdk.traces.start_span() returns a span that must be used as a context manager (with a with statement) or explicitly started via .start() before calling .finish(). The current code creates the span but never enters it, so _context_manager_state is uninitialized. When finish() calls __exit__(), the attempt to access _context_manager_state raises an AttributeError that is silently caught by capture_internal_exceptions(). The span is never sent to Sentry in streaming mode.

Also found at:

  • sentry_sdk/integrations/rust_tracing.py:216-241
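The failing sequence can be sketched with a stand-in class. This is not the real StreamedSpan, just the shape the finding describes: __exit__ depends on state that only __enter__ initializes, so a never-entered span crashes on finish().

```python
# Illustrative stand-in for the described StreamedSpan behaviour.
class SketchSpan:
    def __enter__(self):
        self._context_manager_state = "entered"
        return self

    def __exit__(self, *exc):
        # crashes with AttributeError if __enter__ never ran
        del self._context_manager_state

    def finish(self):
        self.__exit__(None, None, None)

span = SketchSpan()
try:
    span.finish()            # created but never entered
    silently_dropped = False
except AttributeError:
    silently_dropped = True  # mirrors capture_internal_exceptions() eating it

with SketchSpan() as ok_span:            # entering first makes finish() safe
    entered = hasattr(ok_span, "_context_manager_state")
```

The fix implied by the finding is to enter the span (with-statement or .start()) before any code path can reach .finish().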
API incompatibility: sentry_sdk.traces.start_span rejects op and origin parameters causing TypeError - `sentry_sdk/ai/utils.py:539-542`

The get_start_span_function() returns sentry_sdk.traces.start_span when streaming mode is enabled or the current span is a StreamedSpan. However, sentry_sdk.traces.start_span(name, attributes, parent_span) has a fixed signature that doesn't accept op, origin, or other keyword arguments. Integrations (e.g., Anthropic at lines 408-412) call the returned function with op=..., name=..., origin=..., which will raise TypeError: start_span() got an unexpected keyword argument 'op' when streaming mode is enabled, breaking all AI integrations.

Also found at:

  • sentry_sdk/integrations/graphene.py:151-166
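A self-contained sketch of the mismatch and one possible shim. The signatures and the "sentry.op"/"sentry.origin" attribute keys are assumptions based on the finding, not the verified sentry_sdk API.

```python
# Stand-in for the streaming start_span with its fixed signature.
def streamed_start_span(name=None, attributes=None, parent_span=None):
    return {"name": name, "attributes": attributes}

try:
    # How integrations currently call the returned function:
    streamed_start_span(op="ai.run", name="anthropic", origin="auto.ai")
    compatible = True
except TypeError:  # got an unexpected keyword argument 'op'
    compatible = False

# One possible adapter: fold unsupported kwargs into attributes before
# delegating, so integration call sites can stay unchanged.
def adapted_start_span(name=None, attributes=None, **extra):
    attributes = dict(attributes or {})
    for key in ("op", "origin"):
        if key in extra:
            attributes["sentry." + key] = extra.pop(key)
    return streamed_start_span(name=name, attributes=attributes)

span = adapted_start_span(op="ai.run", name="anthropic", origin="auto.ai")
```

Whether the kwargs belong in attributes or the streaming signature should grow op/origin parameters is a design decision for the PR; the sketch only shows that some adaptation layer is needed.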
UnboundLocalError when span setup fails in _wrap_tracer - `sentry_sdk/integrations/celery/__init__.py:324-337`

In the _wrap_tracer function, if an exception occurs after transaction is assigned (line 332) but before span_ctx is assigned (line 337), the code proceeds past the if transaction is None check (line 362) and attempts to use span_ctx in with span_ctx: (line 365). Since span_ctx was declared but never assigned, this causes an UnboundLocalError. This can happen if set_origin, set_source, or set_op throw an exception, which gets silently caught by capture_internal_exceptions().

Medium

Duplicate span serialization causes unnecessary CPU overhead - `sentry_sdk/_span_batcher.py:77-81`

Every span is serialized twice: once in _estimate_size() (line 80) via _to_transport_format(), and again during _flush() (line 134). For spans with large attribute dictionaries, this doubles the serialization cost. The size estimation could use a simpler heuristic (e.g., counting attribute keys/values) instead of full serialization.

Also found at:

  • sentry_sdk/integrations/redis/_async_common.py:135-137
  • sentry_sdk/integrations/redis/_sync_common.py:135
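A hedged sketch of the suggested heuristic. The span dict shape and the per-entry byte constant are illustrative; only the idea of estimating size without a second full serialization comes from the finding.

```python
import json

def full_serialize_size(span_dict):
    # expensive: the same JSON rendering happens again at flush time
    return len(json.dumps(span_dict))

def estimated_size(span_dict, overhead_per_entry=32):
    # cheap heuristic: sum key/value string lengths plus fixed overhead,
    # avoiding a throwaway serialization per span
    payload = sum(len(str(k)) + len(str(v)) for k, v in span_dict.items())
    return payload + overhead_per_entry * len(span_dict)

span = {"op": "db.query", "description": "SELECT 1", "duration_ms": 3}
exact = full_serialize_size(span)
rough = estimated_size(span)
```

An estimate only needs to be good enough to decide when the batch should flush, so a constant-factor error is acceptable in exchange for halving serialization work.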
Streaming path does not set custom_sampling_context for traces_sampler - `sentry_sdk/integrations/asgi.py:225-227`

In the legacy path (line 272), custom_sampling_context={"asgi_scope": scope} is passed to start_transaction, making it available to traces_sampler callbacks. However, the streaming path (lines 225-227) does not call sentry_scope.set_custom_sampling_context({"asgi_scope": scope}) before start_span(). Users with custom traces_sampler functions that rely on asgi_scope for sampling decisions will not have access to it when using streaming mode.

Also found at:

  • sentry_sdk/integrations/strawberry.py:261-263
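A sketch of how the streaming path could surface asgi_scope to traces_sampler, mirroring the legacy custom_sampling_context behaviour described above. The SentryScope class and its methods here are simplified stand-ins, not the real SDK objects.

```python
sampled_contexts = []

def traces_sampler(sampling_context):
    # user callback that relies on asgi_scope for its decision
    sampled_contexts.append(sampling_context)
    return 1.0 if "asgi_scope" in sampling_context else 0.0

class SentryScope:
    def __init__(self):
        self._custom = {}

    def set_custom_sampling_context(self, ctx):
        self._custom.update(ctx)

    def start_span(self, name):
        # streaming path: sampler sees whatever was set on the scope
        rate = traces_sampler(dict(self._custom))
        return {"name": name, "sampled": rate > 0}

scope = {"type": "http", "path": "/health"}
sentry_scope = SentryScope()
sentry_scope.set_custom_sampling_context({"asgi_scope": scope})  # the missing call
span = sentry_scope.start_span("GET /health")
```

Without the set_custom_sampling_context call, the sampler would receive an empty context and the example sampler above would drop the span.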
StreamedSpan always sets ERROR status regardless of input argument - `sentry_sdk/integrations/celery/__init__.py:104-105`

The _set_status function ignores the status parameter for StreamedSpan instances, always setting SpanStatus.ERROR. This means both _set_status("aborted") (called for Celery control flow exceptions like Retry, Ignore, Reject) and _set_status("internal_error") result in the same ERROR status. For control flow exceptions, this may incorrectly characterize intentional retries/ignores as errors.
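The fix idea is to map the incoming status string instead of hard-coding ERROR. A sketch follows; the enum values and mapping are assumptions, since the real SpanStatus definition is not shown in the finding.

```python
import enum

class SpanStatus(enum.Enum):
    OK = "ok"
    ABORTED = "aborted"
    ERROR = "internal_error"

def _set_status_buggy(span, status):
    span["status"] = SpanStatus.ERROR  # ignores the status argument

_STATUS_MAP = {"aborted": SpanStatus.ABORTED, "internal_error": SpanStatus.ERROR}

def _set_status_fixed(span, status):
    # respect the caller's status; fall back to ERROR for unknown values
    span["status"] = _STATUS_MAP.get(status, SpanStatus.ERROR)

buggy_span, retry_span, error_span = {}, {}, {}
_set_status_buggy(buggy_span, "aborted")        # still ends up ERROR
_set_status_fixed(retry_span, "aborted")        # Celery Retry/Ignore/Reject
_set_status_fixed(error_span, "internal_error")
```

This keeps control-flow exceptions (Retry, Ignore, Reject) distinguishable from genuine failures in the resulting spans.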

redis.is_cluster attribute missing when name is empty - `sentry_sdk/integrations/redis/utils.py:152-160`

The refactored _set_client_data function now only sets redis.is_cluster when name is truthy, whereas the original code always set it unconditionally. This is a behavioral change that will cause the redis.is_cluster tag/attribute to be missing from spans when name is empty or None, potentially affecting monitoring and debugging capabilities.
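A sketch of the shape of the fix: set the flag unconditionally and keep only the name-dependent data behind the truthiness check. The attribute names come from the finding; the setter itself is illustrative, not the real _set_client_data.

```python
def _set_client_data(span, is_cluster, name=None, *args):
    span["redis.is_cluster"] = is_cluster  # always set, as before the refactor
    if name:
        span["redis.command"] = name
        if args:
            span["redis.key"] = args[0]

span_with_name, span_without_name = {}, {}
_set_client_data(span_with_name, True, "GET", "user:1")
_set_client_data(span_without_name, False, "")  # empty name: flag still present
```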

Async resolve() missing set_op() and set_origin() for StreamedSpan - `sentry_sdk/integrations/strawberry.py:314-320`

In the async SentryAsyncExtension.resolve() method, when creating a StreamedSpan, the code does not call span.set_op(OP.GRAPHQL_RESOLVE) or span.set_origin(StrawberryIntegration.origin). However, the sync SentrySyncExtension.resolve() method does include these calls (lines 356-357). This inconsistency means async GraphQL resolvers using span streaming will produce spans without the operation type and origin metadata, resulting in incomplete tracing data in Sentry.

Also found at:

  • sentry_sdk/traces.py:707-711
  • sentry_sdk/traces.py:691
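The fix is for the async resolver to set the same metadata the sync path already sets. A sketch, with the class and constant names as stand-ins for the real StreamedSpan, OP.GRAPHQL_RESOLVE, and StrawberryIntegration.origin:

```python
import asyncio

class StreamedSpanSketch:
    def __init__(self):
        self.op = None
        self.origin = None

    def set_op(self, op):
        self.op = op

    def set_origin(self, origin):
        self.origin = origin

OP_GRAPHQL_RESOLVE = "graphql.resolve"
STRAWBERRY_ORIGIN = "auto.graphql.strawberry"

async def resolve_async(span):
    # the two calls present in the sync resolve() but missing in async
    span.set_op(OP_GRAPHQL_RESOLVE)
    span.set_origin(STRAWBERRY_ORIGIN)
    return span

span = asyncio.run(resolve_async(StreamedSpanSketch()))
```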
Missing return after 'Discarding transaction' warning leads to inconsistent behavior - `sentry_sdk/traces.py:418-419`

On lines 418-419, when self.sampled is None, a warning "Discarding transaction without sampling decision" is logged, but execution continues. The span may still be captured on line 434 if self.segment.sampled is truthy (which should not happen when sampled is None, but the control flow is ambiguous). The log message claims the transaction is being discarded, but there is no return to enforce this, unlike the self.sampled is False check, which properly returns on line 416.
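The control-flow fix is a one-line early return. A sketch, where finish_span and its string results are stand-ins for the real finish logic and its capture side effect:

```python
captured = []

def finish_span(sampled):
    if sampled is False:  # existing branch: returns properly
        return "dropped"
    if sampled is None:
        # logger.warning("Discarding transaction without sampling decision")
        return "discarded"  # the missing return that matches the log message
    captured.append("span")  # capture only with an affirmative decision
    return "captured"
```

With the early return in place, the None branch behaves like the False branch, and the warning text matches what actually happens.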

...and 18 more

4 skills analyzed

| Skill | Findings | Duration | Cost |
| --- | ---: | ---: | ---: |
| code-review | 12 | 30m 47s | $17.00 |
| find-bugs | 16 | 20m 3s | $29.58 |
| skill-scanner | 0 | 34m 40s | $7.76 |
| security-review | 0 | 41m 25s | $11.03 |

Duration: 126m 55s · Tokens: 43.6M in / 442.1k out · Cost: $65.49 (+extraction: $0.08, +merge: $0.01, +dedup: $0.02)