
fix: improve fault tolerance of core requests and Provider compatibility #5610

Closed

a61995987 wants to merge 48 commits into AstrBotDevs:dev from a61995987:master

Conversation

a61995987 (Contributor) commented Mar 1, 2026

fix: handle non-standard responses from the OpenAI adapter
fix: include empty upstream replies in the fallback retry path
fix: add local image pre-compression to avoid 413 errors caused by oversized originals
fix: harden _ensure_img_caption so a failed image parse cannot crash the whole framework

Modifications

fix: handle non-standard responses from the OpenAI adapter:
Modified file: astrbot/core/provider/sources/openai_source.py

  1. In the `async for chunk in stream` loop of `_query_stream`, added handling for non-standard responses: a missing `tool_call.type` field is now filled in.
  2. Extended the original "Saving chunk state error" log with more detailed output to ease debugging.
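The tool-call normalization can be sketched as follows. This is a minimal illustration on plain dicts, not the actual `_query_stream` code (real chunks are OpenAI SDK objects), and `normalize_tool_calls` is a hypothetical name:

```python
def normalize_tool_calls(tool_calls):
    """Fill in a missing tool_call type, which some non-standard
    OpenAI-compatible upstreams (e.g. Gemini behind newapi) omit.
    The official API always sends type="function"."""
    for call in tool_calls:
        if not call.get("type"):
            call["type"] = "function"
    return tool_calls
```

Normalizing before any downstream assertion on `tool_call.type` means the rest of the pipeline can keep assuming a well-formed payload.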

fix: include empty upstream replies in the fallback retry path:
Modified file: astrbot/core/agent/runners/tool_loop_agent_runner.py

  1. Added a helper, `_is_empty_llm_response`, for detecting empty replies.
  2. In the `async for resp in self._iter_llm_responses(include_model=idx == 0)` loop of `_iter_llm_responses_with_fallback`, `_is_empty_llm_response` now validates each reply and triggers the fallback mechanism when the reply is empty.
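A minimal sketch of what such an emptiness check might look like. The real `_is_empty_llm_response` operates on the framework's response objects; the free function and its parameters below are illustrative:

```python
def is_empty_llm_response(completion_text, tool_calls):
    """Return True when a reply carries neither visible text nor tool
    calls; such replies should trigger provider fallback instead of
    being delivered to the user as a blank message."""
    has_text = bool(completion_text and completion_text.strip())
    has_tool_calls = bool(tool_calls)
    return not (has_text or has_tool_calls)
```

Note that a reply with tool calls but no text is deliberately treated as non-empty: the agent loop still has work to do.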

Modified file: astrbot/core/exceptions.py

  1. Added an `LLMEmptyResponseError` exception type for empty-reply errors.
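A plausible shape for such an exception type; the `provider_id` attribute is an illustrative extra, not necessarily part of the actual class:

```python
class LLMEmptyResponseError(Exception):
    """Raised when an upstream provider returns an empty completion, so
    the fallback loop can catch it and advance to the next provider."""

    def __init__(self, provider_id="unknown"):
        # provider_id is a hypothetical field for diagnostics
        self.provider_id = provider_id
        super().__init__(f"LLM provider returned an empty response: {provider_id}")
```

A dedicated exception type lets the fallback loop distinguish "upstream answered with nothing" from genuine transport or API errors.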

fix: add local image pre-compression to avoid 413 errors caused by oversized originals:
Modified file: astrbot/core/astr_main_agent.py

  1. Added `_compress_image_internal`, which compresses local images and returns remote image URLs unchanged.
  2. `_ensure_img_caption` now calls the new compression function.
  3. `_process_quote_message` now calls it as well.

fix: harden `_ensure_img_caption` so that a failure to parse an image cannot crash the whole framework:

  1. Added a `finally` block to `_ensure_img_caption` that clears `image_urls`, so a processing error cannot bring down the framework.
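The hardened control flow can be sketched like this; the captioning call is a stand-in and the function signature is illustrative:

```python
import logging

logger = logging.getLogger(__name__)

def ensure_img_caption(image_urls):
    """Hardened flow: a captioning failure is logged and swallowed
    instead of propagating, and image_urls is always cleared in the
    finally block so a failed batch cannot poison later requests."""
    try:
        # stand-in for the real captioning call, which may raise
        return " ".join(f"[image:{u}]" for u in image_urls)
    except Exception as e:
        logger.warning("image caption failed: %r", e)
        return None
    finally:
        image_urls.clear()
```

The `finally` clause runs on both the success and failure paths, which is what guarantees the list is empty regardless of how captioning ends.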

  • This is NOT a breaking change.

Screenshots or Test Results

With streaming enabled, non-standard responses are now parsed correctly (upstream: Gemini via newapi):
(screenshot)

Error shown for an empty reply:
(screenshot)

Fallback log for an empty reply:
(screenshot)

Local large-image compression:
(screenshot)

Improved fault tolerance:
(screenshot)


Checklist

  • 😊 If there are new features added in the PR, I have discussed them with the authors through issues/emails, etc.
  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
  • 🤓 I have ensured that no new dependencies are introduced, OR, if new dependencies are introduced, that they have been added to the appropriate locations in requirements.txt and pyproject.toml.
  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Handle non-standard OpenAI streaming responses and add fallback behavior for empty upstream replies.

Bug Fixes:

  • Normalize missing tool_call.type fields in streamed OpenAI adapter responses to maintain compatibility.
  • Trigger fallback to the next provider when a non-streaming upstream chat model returns an empty response without tool calls, instead of failing silently.

Enhancements:

  • Improve logging for chat stream chunk state errors with more detailed exception information and chunk context.


Summary by Sourcery

Improve robustness of LLM response handling and OpenAI streaming integration, including better fallback behavior for empty upstream replies.

Bug Fixes:

  • Treat effectively empty non-streaming LLM assistant/tool responses as invalid and trigger fallback to the next provider when possible.
  • Normalize non-standard OpenAI streaming tool_call payloads by filling missing type fields to avoid chunk handling errors.

Enhancements:

  • Add an LLMEmptyResponseError to surface unrecoverable empty assistant responses with contextual metadata after all fallbacks are exhausted.
  • Strengthen logging and context around OpenAI chat stream chunk state handling errors for easier debugging.

Bug Fixes:

  • Treat effectively empty non-streaming LLM responses as invalid and fall back to the next provider, instead of silently passing them through as apparent successes.
  • Normalize non-standard OpenAI streaming tool_call payloads by filling in missing type fields, avoiding assertion errors while handling chunks.

Enhancements:

  • Introduce a dedicated LLMEmptyResponseError, raised when an unrecoverable empty reply is still received after all fallback strategies are exhausted, carrying contextual metadata for easier debugging.
  • Tighten the streaming state handling in the OpenAI source and enrich the related warning logs with model and provider information for better observability.

@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label Mar 1, 2026
@gemini-code-assist
Copy link
Copy Markdown
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses two key issues: first, it improves compatibility with non-standard OpenAI API responses, in particular handling the potentially missing tool_call.type field; second, it improves the user experience by introducing a fallback mechanism and explicit error messages, ensuring users receive timely feedback instead of waiting silently when the upstream channel returns an empty response.

Highlights

  • Compatibility with non-standard OpenAI returns: Tolerate non-standard responses from the OpenAI adapter, in particular by filling in the missing tool_call.type field.
  • Fallback and error reporting for empty upstream replies: Add a fallback mechanism and error display when the upstream channel returns an empty reply, so users no longer wait silently.
Changelog
  • astrbot/core/agent/runners/tool_loop_agent_runner.py
    • Added a condition that triggers fallback when the chat model returns an empty reply with no tool calls; this change currently applies only to non-streaming requests.
  • astrbot/core/provider/sources/openai_source.py
    • Modified the _query_stream method to tolerate non-standard returns (filling in the potentially missing tool_call.type field).
    • Improved the exception log emitted when saving chunk state fails, providing more detailed debugging information.
Activity
  • Author a61995987 created this pull request.
  • The pull request includes screenshots demonstrating the fixes for non-standard streaming parsing and empty upstream-reply handling.
  • The author confirmed that the change is non-breaking, has been well tested with verification steps and screenshots provided, and introduces no new dependencies or malicious code.

@a61995987 a61995987 changed the title fix: tolerate non-standard returns from the OpenAI adapter → fix: add fallback and user-facing error reporting for empty upstream replies, avoiding the useless waits caused by silent handling; fix: tolerate non-standard returns from the OpenAI adapter Mar 1, 2026
@dosubot dosubot bot added the area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. label Mar 1, 2026
Copy link
Copy Markdown
Contributor

@sourcery-ai sourcery-ai bot left a comment



Hey - I've found 1 issue, and left some high level feedback:

  • In the _query_stream error handler you now log the entire chunk object at warning level with exc_info=True; consider either truncating/structuring this data or lowering it to debug to avoid excessively large logs and potential leakage of sensitive content.
  • The new empty-response fallback condition in _iter_llm_responses_with_fallback only checks resp.completion_text and resp.tools_call_args; if other non-textual outputs (e.g., images, function call ids, or metadata) can represent a valid reply, you may want to include them in the emptiness check to avoid incorrectly triggering fallback.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In the `_query_stream` error handler you now log the entire `chunk` object at warning level with `exc_info=True`; consider either truncating/structuring this data or lowering it to debug to avoid excessively large logs and potential leakage of sensitive content.
- The new empty-response fallback condition in `_iter_llm_responses_with_fallback` only checks `resp.completion_text` and `resp.tools_call_args`; if other non-textual outputs (e.g., images, function call ids, or metadata) can represent a valid reply, you may want to include them in the emptiness check to avoid incorrectly triggering fallback.

## Individual Comments

### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="318-324" />
<code_context>
+                            if tool_call.type is None:
+                                tool_call.type = "function"
+
             try:
                 state.handle_chunk(chunk)
             except Exception as e:
-                logger.warning("Saving chunk state error: " + str(e))
+                logger.warning(
+                    f"Saving chunk state error: {type(e).__name__}: {e}. Chunk data: {chunk}",
+                    exc_info=True,
+                )
</code_context>
<issue_to_address>
**🚨 suggestion (security):** Logging full chunk objects on errors may be heavy and potentially sensitive; consider truncating or summarizing.

Logging the full `chunk` can inflate log volume and may include user content you don’t want stored verbatim. Consider logging a truncated/summary view (e.g., `repr(chunk)[:N]` or just key identifiers) or moving the full payload behind a debug-level log, while keeping the exception type/message at warning level.

```suggestion
            try:
                state.handle_chunk(chunk)
            except Exception as e:
                # Log a truncated summary of the chunk at warning level to avoid
                # excessive log volume and potential sensitive data exposure.
                max_chunk_log_len = 500
                chunk_repr = repr(chunk)
                if len(chunk_repr) > max_chunk_log_len:
                    chunk_repr = chunk_repr[:max_chunk_log_len] + "... [truncated]"
                logger.warning(
                    "Saving chunk state error: %s: %s. Chunk summary (repr, max %d chars): %s",
                    type(e).__name__,
                    e,
                    max_chunk_log_len,
                    chunk_repr,
                    exc_info=True,
                )
                # Keep full payload only at debug level for deeper troubleshooting.
                logger.debug("Full chunk data for error: %r", chunk, exc_info=True)
```
</issue_to_address>


Comment thread astrbot/core/provider/sources/openai_source.py Outdated
@dosubot
Copy link
Copy Markdown

dosubot bot commented Mar 1, 2026

Related Documentation

Checked 1 published document(s) in 1 knowledge base(s). No updates required.


Copy link
Copy Markdown
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This Pull Request introduces compatibility fixes for non-standard OpenAI adapter responses in openai_source.py, ensuring tool_call.type is not empty in streaming responses, and improves error logging. It also adds a fallback mechanism in tool_loop_agent_runner.py for non-streaming requests with empty upstream responses, enhancing system robustness and debuggability. A potential security vulnerability was identified in openai_source.py where logging raw LLM response chunks could lead to the leakage of sensitive information or PII. It is recommended to sanitize or remove this raw data from log messages. Additionally, there is an opportunity to improve code readability by reducing nesting, which has been detailed in a specific review comment.

Comment thread astrbot/core/provider/sources/openai_source.py Outdated
Comment thread astrbot/core/provider/sources/openai_source.py Outdated
Copy link
Copy Markdown
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • When normalizing tool_call.type in openai_source._query_stream, consider only forcing it to 'function' if the SDK object actually has a function payload (e.g. tool_call.function is not None) so that future non-function tool types or other extensions are not inadvertently masked.
  • The _is_empty_llm_response heuristic currently ignores resp.role; if the provider ever returns error-like responses with empty text but role != 'assistant', you may want to special-case those to avoid incorrectly treating them as empty assistant replies and triggering fallback.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- When normalizing `tool_call.type` in `openai_source._query_stream`, consider only forcing it to `'function'` if the SDK object actually has a `function` payload (e.g. `tool_call.function is not None`) so that future non-`function` tool types or other extensions are not inadvertently masked.
- The `_is_empty_llm_response` heuristic currently ignores `resp.role`; if the provider ever returns error-like responses with empty text but `role != 'assistant'`, you may want to special-case those to avoid incorrectly treating them as empty assistant replies and triggering fallback.

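The guarded normalization suggested above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual openai SDK type: ToolCallDelta and normalize_tool_call are hypothetical names, and the real streaming delta object carries more fields.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ToolCallDelta:
    """Hypothetical stand-in for the SDK's streaming tool-call delta object."""
    type: Optional[str] = None
    function: Optional[dict] = None


def normalize_tool_call(tc: ToolCallDelta) -> ToolCallDelta:
    # Fill the missing type only when a function payload is actually present,
    # so future non-"function" tool types are not inadvertently masked.
    if tc.type is None and tc.function is not None:
        tc.type = "function"
    return tc
```

With this guard, a delta that has neither a type nor a function payload passes through untouched instead of being forced to "function".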

Make _is_empty_llm_response apply the empty-reply check only when the role is assistant or tool
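The role guard described in this commit can be sketched as below. LLMResponseSketch and is_empty_llm_response are hypothetical stand-ins for AstrBot's LLMResponse and the runner's _is_empty_llm_response; the real heuristic also inspects reasoning content and the result chain.

```python
from dataclasses import dataclass, field


@dataclass
class LLMResponseSketch:
    """Hypothetical stand-in for LLMResponse; real fields may differ."""
    role: str = "assistant"
    completion_text: str = ""
    tools_call_args: list = field(default_factory=list)


def is_empty_llm_response(resp: LLMResponseSketch) -> bool:
    # Only assistant/tool replies are candidates for the empty-reply check;
    # other roles (e.g. error-like payloads) are never classified as empty.
    if resp.role not in ("assistant", "tool"):
        return False
    return not resp.completion_text.strip() and not resp.tools_call_args
```

This keeps non-assistant responses from incorrectly triggering the provider fallback path.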
@a61995987
Copy link
Copy Markdown
Contributor Author

@sourcery-ai review

Copy link
Copy Markdown
Contributor

@sourcery-ai sourcery-ai bot left a comment



Hey - I've left some high level feedback:

  • Consider reusing _is_empty_llm_response in the step() empty-assistant check instead of duplicating the empty-response heuristic so behavior stays consistent if the heuristic is updated later.
  • When logging the empty-response fallback in _iter_llm_responses_with_fallback, you might include model/provider identifiers (similar to the new logging in openai_source.py) to make it easier to trace which upstream returned the empty reply.
  • The debug log for each chunk with missing tool_call.type in _query_stream could become noisy for long streams; consider logging only once per stream or at a lower verbosity level after the first occurrence.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Consider reusing `_is_empty_llm_response` in the `step()` empty-assistant check instead of duplicating the empty-response heuristic so behavior stays consistent if the heuristic is updated later.
- When logging the empty-response fallback in `_iter_llm_responses_with_fallback`, you might include model/provider identifiers (similar to the new logging in `openai_source.py`) to make it easier to trace which upstream returned the empty reply.
- The debug log for each chunk with missing `tool_call.type` in `_query_stream` could become noisy for long streams; consider logging only once per stream or at a lower verbosity level after the first occurrence.

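The "log only once per stream" suggestion above can be sketched like this. The chunk shape and function name are assumptions for illustration; the real code works on openai SDK chunk objects rather than plain dicts.

```python
import logging

logger = logging.getLogger("openai_source_sketch")


def fill_missing_tool_call_types(chunks: list[dict]) -> int:
    """Fill missing tool_call 'type' fields, logging the anomaly once per stream."""
    warned = False  # reset at the start of every stream
    fixed = 0
    for chunk in chunks:
        for tool_call in chunk.get("tool_calls", []):
            if tool_call.get("type") is None:
                if not warned:
                    logger.debug("tool_call.type missing; defaulting to 'function'")
                    warned = True  # later occurrences in this stream stay silent
                tool_call["type"] = "function"
                fixed += 1
    return fixed
```

Keeping the flag local to the per-stream loop means each new stream still gets exactly one diagnostic line, without flooding logs on long responses.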

Copy link
Copy Markdown

@SourceryAI SourceryAI left a comment



Hey - I've found 2 issues, and left some high level feedback:

  • In _is_empty_llm_response, all non-Comp.Plain components are treated as meaningful content; if there are metadata-only component types in result_chain, consider whitelisting which component types count as meaningful to avoid suppressing fallback on structurally empty but non-plain chunks.
  • In OpenAISource._query_stream, the hard-coded string "function" for tool_call.type could be replaced with a shared constant or SDK enum (if available) to avoid drift if the underlying type values change and to make the intent clearer.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `_is_empty_llm_response`, all non-`Comp.Plain` components are treated as meaningful content; if there are metadata-only component types in `result_chain`, consider whitelisting which component types count as meaningful to avoid suppressing fallback on structurally empty but non-plain chunks.
- In `OpenAISource._query_stream`, the hard-coded string `"function"` for `tool_call.type` could be replaced with a shared constant or SDK enum (if available) to avoid drift if the underlying type values change and to make the intent clearer.

## Individual Comments

### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="332-338" />
<code_context>
             has_stream_output = False
             try:
                 async for resp in self._iter_llm_responses(include_model=idx == 0):
+                    # 对于流式 chunk,不立即检查是否为空,因为单个 chunk 可能只是元数据/心跳
</code_context>
<issue_to_address>
**suggestion:** Use logger.exception (or include exc_info) to preserve the traceback when chunk handling fails.

Only the exception message is currently logged, which makes diagnosing protocol/stream issues harder. Please log with traceback, e.g.:

```python
try:
    state.handle_chunk(chunk)
except Exception:
    logger.exception(
        "[%s] Saving chunk state error (provider: %s)",
        self.get_model(),
        self.provider_config.get("id", "unknown"),
    )
```

This keeps the model/provider context while exposing the full stack trace.

```suggestion
            try:
                state.handle_chunk(chunk)
            except Exception:
                logger.exception(
                    "[%s] Saving chunk state error (provider: %s)",
                    self.get_model(),
                    self.provider_config.get("id", "unknown"),
                )
```
</issue_to_address>

### Comment 2
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="224" />
<code_context>
         else:
             yield await self.provider.text_chat(**payload)

+    def _is_empty_llm_response(self, resp: LLMResponse) -> bool:
+        """Check if an LLM response is effectively empty.
+
</code_context>
<issue_to_address>
**issue (complexity):** Consider decomposing the new empty-response handling into smaller helpers and named booleans to keep the behavior while simplifying control flow and readability.

You can keep the new behavior but reduce control-flow complexity by splitting responsibilities and using more declarative helpers.

**1. Decompose `_is_empty_llm_response` and simplify the loop**

Instead of manual flag mutation and a mixed-responsibility helper, you can factor it into smaller helpers and use `any()`:

```python
def _has_text_content(self, resp: LLMResponse) -> bool:
    return bool((resp.completion_text or "").strip() or (resp.reasoning_content or "").strip())

def _has_result_chain_content(self, resp: LLMResponse) -> bool:
    if not resp.result_chain or not resp.result_chain.chain:
        return False

    return any(
        (
            isinstance(comp, Comp.Plain) and bool(comp.text and comp.text.strip())
        ) or (
            not isinstance(comp, Comp.Plain)  # non-Plain components are valid content
        )
        for comp in resp.result_chain.chain
    )

def _is_empty_llm_response(self, resp: LLMResponse) -> bool:
    return (
        not self._has_text_content(resp)
        and not resp.tools_call_args
        and not self._has_result_chain_content(resp)
    )
```

This keeps the semantics but flattens branching and makes each rule independently testable.

**2. Name the boolean parts of the fallback condition**

The compound `if` in `_iter_llm_responses_with_fallback` becomes easier to scan if intermediate booleans are named:

```python
is_assistant_or_tool = resp.role in ("assistant", "tool")
is_empty = self._is_empty_llm_response(resp)
should_try_fallback = is_assistant_or_tool and is_empty and not is_last_candidate

if should_try_fallback:
    logger.warning(
        "Chat Model %s returns empty response, trying fallback to next provider.",
        candidate_id,
    )
    break
```

This preserves the logic but improves readability.

**3. Compact error context construction**

The error construction in `step` can be tightened while keeping the same context in the message:

```python
base_msg = "LLM returned empty assistant message with no tool calls."
ctx = {
    "model_id": getattr(self.run_context, "model_id", None),
    "provider_id": getattr(self.run_context, "provider_id", None),
    "run_id": getattr(self.run_context, "run_id", None),
}
context_str = ", ".join(f"{k}={v}" for k, v in ctx.items() if v is not None)
if context_str:
    base_msg = f"{base_msg} Context: {context_str}."
raise LLMEmptyResponseError(base_msg)
```

If you expect to reuse this pattern elsewhere, you could move it into a small `_format_empty_response_error()` helper, but even this inline version makes the branch visually lighter.
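For completeness, here is a sketch of that hypothetical `_format_empty_response_error()` helper, written standalone. The `run_context` attribute names (`model_id`, `provider_id`, `run_id`) follow the snippet above and are assumptions for illustration, not confirmed AstrBot API:

```python
def format_empty_response_error(run_context) -> str:
    """Build the empty-response error message with whatever context is available."""
    base_msg = "LLM returned empty assistant message with no tool calls."
    ctx = {
        "model_id": getattr(run_context, "model_id", None),
        "provider_id": getattr(run_context, "provider_id", None),
        "run_id": getattr(run_context, "run_id", None),
    }
    # Only include fields that are actually set on the run context.
    context_str = ", ".join(f"{k}={v}" for k, v in ctx.items() if v is not None)
    return f"{base_msg} Context: {context_str}." if context_str else base_msg

class DummyCtx:
    model_id = "gpt-4o"
    provider_id = None
    run_id = "abc123"

assert format_empty_response_error(DummyCtx()) == (
    "LLM returned empty assistant message with no tool calls. "
    "Context: model_id=gpt-4o, run_id=abc123."
)
```

The call site then reduces to `raise LLMEmptyResponseError(format_empty_response_error(self.run_context))`, and a context object with no attributes degrades gracefully to the bare message.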
</issue_to_address>

Hi @a61995987! 👋

Thanks for trying out Sourcery by commenting with @sourcery-ai review! 🚀

Install the sourcery-ai bot to get automatic code reviews on every pull request ✨

Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment thread astrbot/core/provider/sources/openai_source.py Outdated
Comment thread astrbot/core/agent/runners/tool_loop_agent_runner.py
@a61995987
Contributor Author

@Soulter Please review — should I make changes based on the AI's suggestions?
The main one is: when checking in `step()` whether the assistant message is empty, consider reusing `_is_empty_llm_response` instead of re-implementing a separate empty-response heuristic, so that behavior stays consistent everywhere if the heuristic is updated later.

This could make the diff quite large, though.

@dosubot dosubot bot added size:XXL This PR changes 1000+ lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels Mar 10, 2026
Contributor

@sourcery-ai sourcery-ai bot left a comment


New security issues found

Comment thread skills/packetCaptureTool/scripts/start_packet_capture.py Outdated
@a61995987 a61995987 changed the title from "fix:兼容openai适配器的非标返回" to "fix: 增强核心请求的容错能力与 Provider 兼容性优化" Mar 10, 2026
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:XXL This PR changes 1000+ lines, ignoring generated files. labels Mar 10, 2026
Harden `_ensure_img_caption` so that an image-parsing failure does not crash the whole framework
@a61995987 a61995987 changed the base branch from master to dev March 15, 2026 16:33
@dosubot dosubot bot added size:XL This PR changes 500-999 lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels Mar 16, 2026
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:XL This PR changes 500-999 lines, ignoring generated files. labels Mar 16, 2026
@a61995987
Contributor Author

This PR has grown too large, so I will resubmit the changes as smaller, focused PRs from a clean branch later. Closing for now.


Labels

area:provider — The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner.
size:L — This PR changes 100-499 lines, ignoring generated files.


4 participants