merge master

eb1d6a7
Merged

feat(anthropic): Emit AI Client Spans for synchronous messages.stream() #5565

@sentry/warden / warden: code-review completed Mar 13, 2026 in 4m 14s

4 issues

code-review: Found 4 issues (1 high, 3 low)

High

Runtime ValueError: _collect_ai_data returns 4 values but only 3 are unpacked - `sentry_sdk/integrations/anthropic.py:423-432`

The `_collect_ai_data` function returns a 4-tuple `(model, usage, content_blocks, response_id)` but this code only unpacks 3 values. This will cause a `ValueError: too many values to unpack` at runtime when iterating over stream events. The unpacking should include `response_id` as the 4th value.

Also found at:

  • sentry_sdk/integrations/anthropic.py:484-493

Low

Test missing assertion for GEN_AI_SYSTEM span attribute - `tests/integrations/anthropic/test_anthropic.py:421-424`

The new `test_stream_messages` test is missing an assertion for `SPANDATA.GEN_AI_SYSTEM == "anthropic"`, which is present in the similar `test_streaming_create_message` test (line 306). This reduces test coverage for the new `messages.stream()` functionality, as the test doesn't verify that the `gen_ai.system` attribute is correctly set on the span.

Also found at:

  • tests/integrations/anthropic/test_anthropic.py:840-843
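The missing check would look roughly like the following. This is a hedged sketch, not the actual test code: the `span` dict is a simplified stand-in for the captured span, and `SPANDATA.GEN_AI_SYSTEM` is replaced with its likely string value `"gen_ai.system"`.

```python
# Simplified stand-in for the span captured in test_stream_messages;
# the real test would read this from sentry_sdk's captured events.
span = {
    "op": "gen_ai.messages",
    "data": {"gen_ai.system": "anthropic"},  # attribute set by the integration
}

# The assertion the review says is missing, mirroring test_streaming_create_message.
assert span["data"]["gen_ai.system"] == "anthropic"
```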

Test missing assertion for GEN_AI_RESPONSE_ID span attribute - `tests/integrations/anthropic/test_anthropic.py:440-443`

The new `test_stream_messages` test is missing an assertion for `SPANDATA.GEN_AI_RESPONSE_ID`, which is present in the similar `test_streaming_create_message` test (line 325). This means the test doesn't verify that the response ID is correctly captured from the streaming response, reducing coverage for this important traceability attribute.
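A corresponding sketch for the response-ID assertion, again against a stand-in span dict. The attribute key `"gen_ai.response.id"` and the ID value `"msg_123"` are assumptions for illustration, not values from the PR.

```python
# Simplified stand-in for the captured span; a real message ID from the
# Anthropic API would appear here instead of the made-up "msg_123".
span = {"data": {"gen_ai.response.id": "msg_123"}}

# The review's suggested assertion: the response ID from the streaming
# response should be recorded on the span.
assert span["data"]["gen_ai.response.id"] == "msg_123"
```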

Copy-paste error corrupts comment with function name - `tests/integrations/anthropic/test_anthropic.py:2919`

The comment on line 2919 incorrectly contains the function name `test_stream_messages_input_tokens_include_cache_read_streaming` instead of the expected numerical result 2865. This appears to be a copy-paste error introduced during editing. While it doesn't affect runtime behavior, it makes the comment confusing and misleading for developers reading the test.


Duration: 4m 7s · Tokens: 1.6M in / 19.3k out · Cost: $2.23 (+extraction: $0.01, +merge: $0.00, +fix_gate: $0.01)

Annotations

Check failure on line 432 in sentry_sdk/integrations/anthropic.py

@sentry-warden / warden: code-review

Runtime ValueError: _collect_ai_data returns 4 values but only 3 are unpacked

The `_collect_ai_data` function returns a 4-tuple `(model, usage, content_blocks, response_id)` but this code only unpacks 3 values. This will cause a `ValueError: too many values to unpack` at runtime when iterating over stream events. The unpacking should include `response_id` as the 4th value.

Check failure on line 493 in sentry_sdk/integrations/anthropic.py

@sentry-warden / warden: code-review

[EUX-7NQ] Runtime ValueError: _collect_ai_data returns 4 values but only 3 are unpacked (additional location)

The `_collect_ai_data` function returns a 4-tuple `(model, usage, content_blocks, response_id)` but this code only unpacks 3 values. This will cause a `ValueError: too many values to unpack` at runtime when iterating over stream events. The unpacking should include `response_id` as the 4th value.