feat: unique fake responses ids#3079
Closed
bantucaravan wants to merge 4 commits into openai:main from
Conversation
replace with uid with fake id prefix
Co-authored-by: Copilot <copilot@github.com>
Member
Thanks for the suggestion. We currently don't plan to make these changes, to avoid potential breaking changes to the SDK's behavior. That said, if we see a need for this in the future, we may revisit it as an opt-in feature.
Summary
I replaced the static fake id used to simulate a Responses API Response id or ResponseOutputItem id with a unique id. The unique id includes a prefix that identifies it as fake. This fake id pattern is used for all conversions of ChatCompletions API output into Responses API format.
Previously, when using the ChatCompletions API, every output message had the same id value, which made it impossible to tell items apart by id. Now all output items and responses have unique ids that are still identifiable as fake (i.e., not created by the real OpenAI Responses API).
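A minimal sketch of the id scheme described above. The `__fake_id__` prefix comes from this PR's example output; the helper names here are hypothetical, not the SDK's actual API:

```python
import uuid

FAKE_ID_PREFIX = "__fake_id__"  # prefix marking ids not issued by the real Responses API


def gen_fake_id() -> str:
    """Return a unique id that is still identifiable as fake."""
    return f"{FAKE_ID_PREFIX}{uuid.uuid4()}"


def is_fake_id(item_id: str) -> bool:
    """True for ids produced by the ChatCompletions -> Responses conversion."""
    return item_id.startswith(FAKE_ID_PREFIX)
```

Each call to `gen_fake_id()` yields a fresh value such as `'__fake_id__63456181-...'`, so distinct output items no longer collide on a single static id.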
The only major complication came up with streaming responses. Here I added an output_message_id to the streaming state, which allows all ResponseOutputMessage events in a streaming response to share the same fake id.
My understanding of the vision for the chat completions conversion is that every ChatCompletions streaming response is converted to produce a single Responses API ResponseOutputMessage item. Special attention from the reviewer on this point is appreciated.
Example:
Old ->
'__fake_id__'
New ->
'__fake_id__63456181-ca62-4520-9767-7d081aafac46'
Test Plan
Updated the tests to check the new fake ids.
For streaming, I added checks ensuring that all events associated with the same ResponseOutputMessage, function call, or reasoning item have the same id. The same is done for litellm. The any_llm tests mock the whole ChatCmplStreamHandler.handle_stream(), so no changes were needed there.
I added a new test to make sure that multiple function calls in a single streaming response get different but consistent ids.
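The "different but consistent" property can be captured by a small helper of the kind the new test exercises (a hypothetical sketch, assuming events are reduced to `(function_call_index, item_id)` pairs):

```python
from collections import defaultdict


def check_function_call_ids(events: list[tuple[int, str]]) -> None:
    """Assert ids are consistent within each function call and distinct across calls.

    `events` pairs each streamed event's function-call index with the fake
    item id it carried.
    """
    ids_by_call: dict[int, set[str]] = defaultdict(set)
    for call_index, item_id in events:
        ids_by_call[call_index].add(item_id)
    # Every event of a given call must have carried the same id.
    assert all(len(ids) == 1 for ids in ids_by_call.values())
    # Different calls must not share an id.
    all_ids = [next(iter(ids)) for ids in ids_by_call.values()]
    assert len(set(all_ids)) == len(all_ids)
```

This mirrors the invariant the PR establishes: one fake id per streamed item, never reused across items.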
For future work, it might make sense to split the streaming tests into tests for ChatCmplStreamHandler.handle_stream(), which does most of the work, and lighter tests for .stream_response() in chat_completions, litellm, and any_llm respectively.