fix: preserve LiteLLM text with tool calls #1450

Open

asim48-ctrl wants to merge 5 commits into i-am-bee:main from asim48-ctrl:codex/litellm-preserve-text-tool-calls

Conversation

@asim48-ctrl

Summary

  • preserve LiteLLM assistant text content when tool calls are present
  • avoid AttributeError when LiteLLM omits reasoning_content on a delta/message
  • add focused regression coverage for text+tool-call and missing-reasoning cases

Fixes #1448.
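A minimal sketch of the second bullet, using `types.SimpleNamespace` as a simplified stand-in for a LiteLLM delta (real deltas are richer objects; `extract_text` is an illustrative helper, not the adapter's actual function):

```python
from types import SimpleNamespace

# Simplified stand-ins for LiteLLM deltas; some providers populate
# reasoning_content, others omit the attribute entirely.
delta_with_reasoning = SimpleNamespace(content=None, reasoning_content="step 1…")
delta_without = SimpleNamespace(content="final answer")  # no reasoning_content at all


def extract_text(delta):
    # getattr with a default avoids AttributeError when the field is absent
    return delta.content or getattr(delta, "reasoning_content", None)


assert extract_text(delta_with_reasoning) == "step 1…"
assert extract_text(delta_without) == "final answer"
```

Direct attribute access (`delta.reasoning_content`) would raise `AttributeError` on the second delta, which is the failure mode the fix guards against.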

Verification

  • uv run --no-sync ruff check beeai_framework/adapters/litellm/chat.py tests/backend/providers/test_litellm_chat.py
  • uv run --no-sync ruff format beeai_framework/adapters/litellm/chat.py tests/backend/providers/test_litellm_chat.py --check
  • uv run --no-sync pytest tests/backend/providers/test_litellm_chat.py -q
  • git diff --check

@asim48-ctrl requested a review from a team as a code owner May 13, 2026 10:17
@dosubot Bot added the size:M This PR changes 30-99 lines, ignoring generated files. label May 13, 2026
@github-actions Bot added the python Python related functionality label May 13, 2026
Signed-off-by: Asim Arshad <asim48@gmail.com>
@asim48-ctrl force-pushed the codex/litellm-preserve-text-tool-calls branch from 1ee6425 to be573ef on May 13, 2026 10:18
Contributor

@gemini-code-assist Bot left a comment


Code Review

This pull request refactors the _transform_output method in the LiteLLM chat adapter to improve readability and ensure that both text content and tool calls are correctly captured in the output. It also introduces unit tests to verify these transformations. Review feedback suggests using an explicit "is not None" check for text content to avoid skipping empty strings and recommends using a generator expression within content.extend() for improved memory efficiency.

if update and update.model_dump(exclude_none=True):
content: list[MessageTextContent | MessageToolCallContent] = []
text = update.content or getattr(update, "reasoning_content", None)
if text:
Contributor


medium

The current check if text: will evaluate to False for an empty string "". This means if update.content is an empty string, no MessageTextContent will be created. The previous implementation effectively preserved empty strings. To maintain this behavior and only skip None values, it's safer to use if text is not None:. This ensures that an explicit empty string from the model is preserved as a text content part.

Suggested change
if text:
if text is not None:
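The reviewer's point can be seen in a standalone snippet; this is plain-Python truthiness behavior, not the adapter's code:

```python
# Illustration of the truthiness pitfall described above.
texts = [None, "", "hello"]

# `if text:` treats the empty string as falsy, so "" is dropped
truthy_kept = [t for t in texts if t]                # -> ["hello"]

# `if text is not None:` drops only None, preserving the empty string
not_none_kept = [t for t in texts if t is not None]  # -> ["", "hello"]
```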

Comment on lines +319 to +328
content.extend(
[
MessageToolCallContent(
id=call.id or "",
tool_name=call.function.name or "",
args=call.function.arguments,
)
for call in tool_calls
]
)
Contributor


medium

For better memory efficiency and conciseness, you can use a generator expression directly within content.extend() instead of creating a temporary list with a list comprehension.

                content.extend(
                    MessageToolCallContent(
                        id=call.id or "",
                        tool_name=call.function.name or "",
                        args=call.function.arguments,
                    )
                    for call in tool_calls
                )
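`list.extend` accepts any iterable, so the generator form produces identical contents without allocating a temporary list first. A small standalone comparison (`calls` is illustrative data, not the adapter's real tool-call objects):

```python
# Compare extend() fed a list comprehension vs. a generator expression.
calls = [("id-1", "search"), ("id-2", "fetch")]

with_list = []
with_list.extend([{"id": i, "tool_name": n} for i, n in calls])  # temp list allocated

with_gen = []
with_gen.extend({"id": i, "tool_name": n} for i, n in calls)     # consumed lazily

assert with_list == with_gen
```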

Signed-off-by: Asim Arshad <asim48@gmail.com>
@asim48-ctrl force-pushed the codex/litellm-preserve-text-tool-calls branch from 2ed7657 to 4150b78 on May 13, 2026 10:33
Contributor

@Tomas2D left a comment


I would say that this is a breaking change. Can the bug be fixed without merging it together?

Signed-off-by: Asim Arshad <asim48@gmail.com>
Signed-off-by: Asim Arshad <asim48@gmail.com>
@dosubot Bot added the size:S This PR changes 10-29 lines, ignoring generated files. label and removed the size:M This PR changes 30-99 lines, ignoring generated files. label May 13, 2026
@asim48-ctrl
Author

Thanks, agreed. I pushed a follow-up that no longer merges text and tool calls into the same AssistantMessage.

The updated shape preserves LiteLLM text as a separate output chunk when tool calls are also present, while keeping the tool-call message shape separate. It still fixes the missing reasoning_content AttributeError path.

Re-verified locally:

  • uv run --no-sync ruff check beeai_framework/adapters/litellm/chat.py tests/backend/providers/test_litellm_chat.py
  • uv run --no-sync ruff format beeai_framework/adapters/litellm/chat.py tests/backend/providers/test_litellm_chat.py --check
  • uv run --no-sync pytest tests/backend/providers/test_litellm_chat.py -q
  • git diff --check

Contributor

@Tomas2D left a comment


What is the difference after your changes?

@asim48-ctrl
Author

The updated version avoids the breaking shape I had in the first pass.

Before this PR, when LiteLLM returned both content and tool_calls, BeeAI only emitted the tool-call message and dropped the text content.

After the latest changes:

  • text-only responses still produce one text AssistantMessage
  • tool-call-only responses still produce one tool-call AssistantMessage
  • text + tool-call responses produce two output items: first the text AssistantMessage, then the existing tool-call AssistantMessage shape
  • missing reasoning_content no longer raises AttributeError

So the tool-call message shape is kept separate; the only behavioral addition is preserving text that LiteLLM already returned instead of discarding it.
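The four bullets above can be sketched as a regression-style check; `split_output` is a simplified stand-in for the adapter's transform, not the framework's actual API:

```python
# Hypothetical sketch mirroring the output shapes described above.
def split_output(content, tool_calls):
    """Emit text and tool-call messages as separate output items."""
    items = []
    if content:
        items.append(("assistant_text", content))
    if tool_calls:
        items.append(("assistant_tool_calls", list(tool_calls)))
    return items


# text-only -> one text item
assert split_output("Hello", []) == [("assistant_text", "Hello")]

# tool-call-only -> one tool-call item
assert split_output(None, ["call_1"]) == [("assistant_tool_calls", ["call_1"])]

# text + tool calls -> two items, text first
assert split_output("Thinking…", ["call_1"]) == [
    ("assistant_text", "Thinking…"),
    ("assistant_tool_calls", ["call_1"]),
]
```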


Development

Successfully merging this pull request may close these issues.

The _transform_output method of ChatModel may discard text content or override reasoning content with possible side effect of AttributeError
