
Borked a typing import

cee1173
Merged

refactor(openai): Split token counting by API for easier deprecation #5930

@sentry/warden / warden: find-bugs completed Apr 2, 2026 in 4m 36s


find-bugs: Found 1 issue (1 low)

Low

Missing manual output token counting for non-streaming Responses API without usage data - `sentry_sdk/integrations/openai.py:633-638`

In `_calculate_responses_token_usage` (lines 291-295), when `output_tokens == 0` and `streaming_message_responses` is `None`, there is no fallback to manually count output tokens from `response.output`. The analogous function `_calculate_completions_token_usage` has this fallback (lines 220-223), which counts tokens from `response.choices`. As a result, for non-streaming Responses API calls where the API does not return usage data and tiktoken is configured, output tokens will not be recorded.


Duration: 4m 36s · Tokens: 2.5M in / 19.3k out · Cost: $3.28 (+extraction: $0.00)