fix: add call_id to function_call item if it doesn't have one (Gemini use case) #1959
lzjjeff wants to merge 3 commits into openai:main
Conversation
How do you use the model? We prefer putting this kind of workaround on the LiteLLM integration side: https://openai.github.io/openai-agents-python/models/litellm/
Thank you for your review. I haven't used LiteLLM; I'm using AsyncAzureOpenAI to build the model.
If you use either OpenAI or Azure OpenAI Service, you can use the OpenAI API client directly. For other models, we don't recommend it. Basic chat examples may work, but once you start using features like tool calling and structured outputs, the details can vary between providers. For this reason, we generally recommend using LiteLLM to fill the gap. Note that even with LiteLLM, there can still be differences due to each model's capabilities and requirements. Going back to the topic here: if our LiteLLM adapter can do something extra for your use case, we may be able to add such custom logic there.
Thanks for the clarification! That makes sense. In my case, the missing call_id issue mainly occurs when using non-OpenAI models (like Gemini) via AsyncAzureOpenAI. I agree that having this logic on the LiteLLM integration side would be cleaner. If LiteLLM plans to standardize call_id handling (especially for third-party model adapters), I'd be happy to switch.
This PR is stale because it has been open for 10 days with no activity. |
I found that when using models like Gemini, the `call_id` in the `function_call` item is empty. This makes it difficult to accurately match `function_call` and `function_call_output` items during parallel tool calling. To address this, I added a check in `src/agents/_run_impl.py` that fills in missing `call_id` values. The generated `call_id` follows the GPT series models' convention: a 22-character random string prefixed with `call_`.
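The backfill described above can be sketched as follows. This is a minimal illustration of the idea, not the actual patch; the `ensure_call_id` helper name and the plain-dict item shape are assumptions for this example:

```python
import secrets
import string

def ensure_call_id(item: dict) -> dict:
    """Fill in a missing call_id on a function_call item.

    Follows the GPT-style convention described in the PR:
    "call_" plus a 22-character random alphanumeric string.
    """
    if item.get("type") == "function_call" and not item.get("call_id"):
        alphabet = string.ascii_letters + string.digits
        suffix = "".join(secrets.choice(alphabet) for _ in range(22))
        item["call_id"] = f"call_{suffix}"
    return item

# Example: a Gemini-style item that arrived without a call_id
item = ensure_call_id({"type": "function_call", "name": "get_weather", "call_id": ""})
```

With a stable `call_id` in place, each `function_call_output` can be matched back to its originating `function_call` even when several tools run in parallel.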