fix: preserve LiteLLM text with tool calls #1450
Signed-off-by: Asim Arshad <asim48@gmail.com>
Force-pushed from 1ee6425 to be573ef
Code Review
This pull request refactors the _transform_output method in the LiteLLM chat adapter to improve readability and to ensure that both text content and tool calls are captured in the output. It also introduces unit tests to verify these transformations. Review feedback suggests using an explicit "is not None" check for text content to avoid skipping empty strings, and recommends passing a generator expression to content.extend() for better memory efficiency.
```python
if update and update.model_dump(exclude_none=True):
    content: list[MessageTextContent | MessageToolCallContent] = []
    text = update.content or getattr(update, "reasoning_content", None)
    if text:
```
The current check `if text:` evaluates to False for an empty string `""`. This means that if update.content is an empty string, no MessageTextContent will be created, whereas the previous implementation effectively preserved empty strings. To maintain that behavior and skip only None values, it is safer to use `if text is not None:`. This ensures that an explicit empty string from the model is preserved as a text content part.
```diff
-    if text:
+    if text is not None:
```
```python
    content.extend(
        [
            MessageToolCallContent(
                id=call.id or "",
                tool_name=call.function.name or "",
                args=call.function.arguments,
            )
            for call in tool_calls
        ]
    )
```
For better memory efficiency and conciseness, you can use a generator expression directly within content.extend() instead of creating a temporary list with a list comprehension.
```python
content.extend(
    MessageToolCallContent(
        id=call.id or "",
        tool_name=call.function.name or "",
        args=call.function.arguments,
    )
    for call in tool_calls
)
```

Signed-off-by: Asim Arshad <asim48@gmail.com>
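As a standalone illustration of why the suggestion works (using plain tuples as hypothetical stand-ins for the real MessageToolCallContent objects): `list.extend()` accepts any iterable, so a generator expression feeds items lazily without building a temporary list first.

```python
# Hypothetical stand-ins for tool-call objects; the real types come from the framework.
calls = [("call_1", "search", '{"q": "weather"}'), ("call_2", "sum", "{}")]

content: list[tuple[str, str, str]] = []
# Generator expression: items are produced one at a time, no intermediate list.
content.extend((call_id or "", name or "", args) for call_id, name, args in calls)

assert content == [
    ("call_1", "search", '{"q": "weather"}'),
    ("call_2", "sum", "{}"),
]
```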
Force-pushed from 2ed7657 to 4150b78
Tomas2D left a comment
I would say that this is a breaking change. Can the bug be fixed without merging it together?
Signed-off-by: Asim Arshad <asim48@gmail.com>
Thanks, agreed. I pushed a follow-up that no longer merges text and tool calls into the same message. The updated shape preserves LiteLLM text as a separate output chunk when tool calls are also present, while keeping the tool-call message shape separate. It still fixes the missing text. Re-verified locally:
Tomas2D left a comment
What is the difference after your changes?
The updated version avoids the breaking shape I had in the first pass. Before this PR, when LiteLLM returned both text and tool calls, the text was dropped. After the latest changes:
So the tool-call message shape is kept separate; the only behavioral addition is preserving text that LiteLLM already returned instead of discarding it.
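A sketch of the before/after shapes described above (the dict fields here are simplified stand-ins for this thread's description, not the framework's exact output types):

```python
# Before the PR: text returned alongside tool calls was dropped entirely.
before = [{"tool_calls": [{"id": "call_1", "tool_name": "search"}]}]

# After the PR: the text survives as its own chunk; the tool-call chunk is unchanged.
after = [
    {"text": "Let me look that up."},
    {"tool_calls": [{"id": "call_1", "tool_name": "search"}]},
]

# The tool-call message shape is identical in both versions ...
assert before[0] == after[1]
# ... and the only behavioral addition is the preserved text chunk.
assert after[0]["text"] == "Let me look that up."
```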
Summary
- Fall back to `reasoning_content` on a delta/message

Fixes #1448.
Verification
- `uv run --no-sync ruff check beeai_framework/adapters/litellm/chat.py tests/backend/providers/test_litellm_chat.py`
- `uv run --no-sync ruff format beeai_framework/adapters/litellm/chat.py tests/backend/providers/test_litellm_chat.py --check`
- `uv run --no-sync pytest tests/backend/providers/test_litellm_chat.py -q`
- `git diff --check`