Commit e385ebd
fix: remove global add_function_to_prompt — breaks native tool calling
Setting `litellm.add_function_to_prompt = True` globally forces ALL
models through text-based tool calling, even models that support
native function calling (Groq, OpenAI, Anthropic).
When this flag is set, LiteLLM injects tool definitions into the
system prompt as text. Models then output XML-style function tags
(`<function=name {...} </function>`) instead of proper `tool_calls`
JSON. Providers like Groq reject this with `tool_use_failed`.
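The two output shapes can be told apart by inspecting the response message. A minimal sketch (the `classify_tool_output` helper and the exact tag regex are assumptions based on the behavior described above, not part of LiteLLM's API):

```python
import json
import re

# Hypothetical helper: classify a chat completion message as a native
# structured tool call or the text-based fallback produced when tool
# definitions are injected into the prompt as text.
def classify_tool_output(message: dict) -> str:
    # Native path: the provider returns structured tool_calls JSON.
    if message.get("tool_calls"):
        return "native"
    content = message.get("content") or ""
    # Fallback path: the model emits XML-style tags in plain text.
    if re.search(r"<function=\w+", content):
        return "text-fallback"
    return "plain"

native = {"tool_calls": [{"function": {"name": "get_weather",
                                       "arguments": json.dumps({"city": "Paris"})}}]}
fallback = {"content": '<function=get_weather {"city": "Paris"} </function>'}

print(classify_tool_output(native))    # native
print(classify_tool_output(fallback))  # text-fallback
```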
Proof: Direct `litellm.completion()` without this flag returns proper
`tool_calls` JSON with `finish_reason: "tool_calls"`. With the flag,
the same model fails.
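That check might be reproduced roughly as follows (the `get_weather` tool schema is illustrative; the completion call itself needs a GROQ_API_KEY, so it is guarded and only runs when one is set):

```python
import os

# Illustrative tool schema in the OpenAI function-calling format,
# which litellm.completion() accepts via its `tools` parameter.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

if os.environ.get("GROQ_API_KEY"):
    import litellm
    # Without add_function_to_prompt, a native-tool-calling provider
    # should return structured tool_calls, not XML-style text.
    resp = litellm.completion(
        model="groq/llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": "Weather in Paris?"}],
        tools=tools,
    )
    choice = resp.choices[0]
    print(choice.finish_reason)  # expected: "tool_calls"
```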
The fix removes the global default. Models that need text-based tool
calling can opt in per-instance:

    LiteLlm(model="ollama/qwen2", add_function_to_prompt=True)

Models with native tool calling work without any flag:

    LiteLlm(model="groq/llama-3.3-70b-versatile")
Fixes: kagent-dev/kagent#1532
Related: huggingface/smolagents#1119, BerriAI/litellm#110011
1 file changed, +6 −1 lines changed