
Borked a typing import

cee1173
Merged

refactor(openai): Split token counting by API for easier deprecation #5930

@sentry/warden / warden: code-review completed Apr 2, 2026 in 2m 26s

1 issue

code-review: Found 1 issue (1 medium)

Medium

Manual output token counting missing for non-streaming Responses API - `sentry_sdk/integrations/openai.py:291-295`

In `_calculate_responses_token_usage`, manual output token counting only handles `streaming_message_responses` but lacks a fallback for non-streaming responses. The completions version (lines 220-223) has `elif hasattr(response, 'choices')` to extract output from non-streaming responses, but the responses version has no equivalent `elif hasattr(response, 'output')` branch. When the Responses API doesn't provide usage data and `streaming_message_responses` is `None` (the non-streaming case at line 637), output tokens will incorrectly remain 0 instead of being manually counted.
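The gap can be illustrated with a minimal sketch. This is not the actual sentry-sdk code: `count_tokens` is a stand-in for the SDK's real token counter, and the response shapes are simplified stand-ins for Responses API objects. It shows how an `elif hasattr(response, "output")` branch would mirror the completions version's `elif hasattr(response, 'choices')` fallback.

```python
# Illustrative sketch of the proposed fallback (not the real SDK code).
from types import SimpleNamespace


def count_tokens(text):
    # Stand-in tokenizer: approximate tokens by whitespace word count.
    return len(text.split())


def calculate_responses_token_usage(response, streaming_message_responses=None):
    """Return (input_tokens, output_tokens), counting output manually
    when the API response carries no usage data."""
    usage = getattr(response, "usage", None)
    if usage is not None:
        return usage.input_tokens, usage.output_tokens

    output_tokens = 0
    if streaming_message_responses is not None:
        # Streaming case: count the accumulated streamed messages.
        output_tokens = sum(count_tokens(m) for m in streaming_message_responses)
    elif hasattr(response, "output"):
        # The missing branch: manually count output from a non-streaming
        # response object, mirroring the completions version's
        # `elif hasattr(response, 'choices')` fallback.
        output_tokens = sum(count_tokens(item.text) for item in response.output)
    return 0, output_tokens


# Non-streaming response with no usage data: without the elif branch,
# output_tokens would incorrectly stay 0.
resp = SimpleNamespace(usage=None, output=[SimpleNamespace(text="hello world again")])
print(calculate_responses_token_usage(resp))  # → (0, 3)
```

With only the `if streaming_message_responses is not None` branch, the non-streaming call above would return `(0, 0)`, which is the behavior the issue flags.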


Duration: 2m 26s · Tokens: 968.8k in / 10.9k out · Cost: $1.47 (+extraction: $0.01)

Annotations

Check warning on line 295 in sentry_sdk/integrations/openai.py


@sentry-warden / warden: code-review

Manual output token counting missing for non-streaming Responses API

In `_calculate_responses_token_usage`, manual output token counting only handles `streaming_message_responses` but lacks a fallback for non-streaming responses. The completions version (lines 220-223) has `elif hasattr(response, 'choices')` to extract output from non-streaming responses, but the responses version has no equivalent `elif hasattr(response, 'output')` branch. When the Responses API doesn't provide usage data and `streaming_message_responses` is `None` (the non-streaming case at line 637), output tokens will incorrectly remain 0 instead of being manually counted.