
[Bug] Multi-step Function Calling tasks are frequently interrupted by EmptyModelOutputError #7365

@xmbhjQAQ

Description


What happened

When running tasks that require multiple rounds of complex tool calls (e.g. combined with tools such as jshookmcp or Scrapling-Skill), AstrBot very often aborts with an EmptyModelOutputError.
The problem appears to lie in the validation logic of _parse_openai_completion in astrbot/core/provider/sources/openai_source.py.
Under long-context pressure or during the retry mechanism, the local tools list can be unexpectedly lost, or a tool-name mismatch can occur. In that case the framework silently drops the model's tool calls, leaving llm_response.tools_call_args empty.
Meanwhile, in openai_source.py (lines 796-799):

if (
    not has_text_output
    and not has_reasoning_output
    and not llm_response.tools_call_args  # relies on the re-parsed result, not the raw response
):
    raise EmptyModelOutputError(...)

Because the raw choice.message.tool_calls is never checked here, the system sees only its own empty parsed list and terminates the task.
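The suspected failure mode can be illustrated with a minimal, self-contained sketch. The names below (parse_tool_calls, validate_output) are hypothetical stand-ins for the logic described above, not the actual AstrBot code: if the local tools set is lost, name filtering empties the parsed list, and a validation check that also consults the raw tool_calls avoids raising a spurious EmptyModelOutputError.

```python
class EmptyModelOutputError(Exception):
    pass


def parse_tool_calls(raw_tool_calls, local_tools):
    """Keep only calls whose function name matches a locally registered tool.

    This mirrors the suspected behavior: calls with unknown names are
    silently dropped rather than reported.
    """
    return [c for c in raw_tool_calls if c["function"]["name"] in local_tools]


def validate_output(text, reasoning, parsed_calls, raw_tool_calls):
    """Raise only when the RAW response is also empty.

    The buggy variant checks only `parsed_calls`; adding the check on
    `raw_tool_calls` keeps the workflow alive when the model did call a
    tool but the local tools list lost sync.
    """
    if not text and not reasoning and not parsed_calls and not raw_tool_calls:
        raise EmptyModelOutputError("completion has no usable output")


# Simulate the reported situation: the model returned a tool call,
# but the local tools registry was (unexpectedly) empty.
raw = [{"function": {"name": "astrbot_execute_ipython", "arguments": "{}"}}]
parsed = parse_tool_calls(raw, local_tools=set())  # -> [] (silently dropped)

# With the raw-response check this does NOT raise, so the caller can
# log the mismatch and re-query instead of aborting the whole task.
validate_output("", "", parsed, raw)
```

With the original check (parsed list only), the same inputs would raise and abort the agent loop, which matches the traceback in the logs below.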

Reproduce

Enable relatively complex tools (such as jshookmcp) and have the bot run a fairly complex task.

AstrBot version, deployment method (e.g., Windows Docker Desktop deployment), provider used, and messaging platform used

4.22.3 Docker

OS

Windows

Logs

[19:53:49.345] [Core] [DBUG] [sources.openai_source:492]: completion: ChatCompletion(id='[REDACTED_ID]', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content='', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=[ChatCompletionMessageFunctionToolCall(id='[REDACTED_CALL_ID]', function=Function(arguments='{"code": "import subprocess\\nimport tempfile\\nimport os\\n\\n# 直接尝试使用scrapling提取登录页面\\nurl = \\"https://[REDACTED_DOMAIN]/login?service=[REDACTED_SERVICE_URL]\\"\\nprint(f\\"正在使用scrapling提取页面: {url}\\")\\n\\n# 使用临时文件\\nwith tempfile.NamedTemporaryFile(mode=\'w\', suffix=\'.html\', delete=False) as tmp:\\n    temp_file = tmp.name\\n\\ntry:\\n    # 尝试使用fetch命令\\n    cmd = [\\"scrapling\\", \\"extract\\", \\"fetch\\", url, temp_file, \\"--network-idle\\", \\"--timeout\\", \\"60000\\", \\"--ai-targeted\\"]\\n    \\n    print(\\"执行命令:\\", \\" \\".join(cmd))\\n    result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)\\n    \\n    print(\\"stdout:\\", result.stdout[:1000])\\n    print(\\"stderr:\\", result.stderr[:500])\\n    print(\\"returncode:\\", result.returncode)\\n    \\n    # 读取结果\\n    if os.path.exists(temp_file):\\n        with open(temp_file, \'r\', encoding=\'utf-8\') as f:\\n            content = f.read(2000)\\n            print(\\"\\\\n页面内容前2000字符:\\", content)\\n    else:\\n        print(\\"临时文件不存在\\")\\n        \\nfinally:\\n    # 清理临时文件\\n    if os.path.exists(temp_file):\\n        os.unlink(temp_file)"}', name='astrbot_execute_ipython'), type='function')], reasoning_content=''))], created=1775332429, model='deepseek-v3.2', object='chat.completion', service_tier='default', system_fingerprint=None, usage=CompletionUsage(completion_tokens=365, prompt_tokens=29115, total_tokens=29480, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=365, rejected_prediction_tokens=None), 
prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=0)))

[19:53:49.346] [Core] [ERRO] [v4.22.3] [sources.openai_source:801]: OpenAI completion has no usable output: ChatCompletion(id='[REDACTED_ID]', ... [REDACTED_CONTENT] ... finish_reason=tool_calls).

[19:53:49.357] [Core] [ERRO] [v4.22.3] [core.astr_agent_run_util:244]: Traceback (most recent call last):
  File "/AstrBot/astrbot/core/astr_agent_run_util.py", line 124, in run_agent
    async for resp in agent_runner.step():
  File "/AstrBot/astrbot/core/agent/runners/tool_loop_agent_runner.py", line 552, in step
    llm_resp, _ = await self._resolve_tool_exec(llm_resp)
  File "/AstrBot/astrbot/core/agent/runners/tool_loop_agent_runner.py", line 996, in _resolve_tool_exec
    requery_resp = await self.provider.text_chat(
  File "/AstrBot/astrbot/core/provider/sources/openai_source.py", line 1064, in text_chat
    ) = await self._handle_api_error(
  File "/AstrBot/astrbot/core/provider/sources/openai_source.py", line 1012, in _handle_api_error
    raise e
  File "/AstrBot/astrbot/core/provider/sources/openai_source.py", line 1052, in text_chat
    llm_response = await self._query(payloads, func_tool)
  File "/AstrBot/astrbot/core/provider/sources/openai_source.py", line 494, in _query
    llm_response = await self._parse_openai_completion(completion, tools)
  File "/AstrBot/astrbot/core/provider/sources/openai_source.py", line 802, in _parse_openai_completion
    raise EmptyModelOutputError(
astrbot.core.exceptions.EmptyModelOutputError: OpenAI completion has no usable output. response_id=[REDACTED_ID], finish_reason=tool_calls

[19:53:49.359] [Core] [DBUG] [pipeline.context_utils:95]: hook(OnLLMResponseEvent) -> astrbot - record_llm_resp_to_ltm
[19:53:49.364] [Core] [INFO] [respond.stage:184]: Prepare to send - [REDACTED_USER_ID]: Error occurred during AI execution.
Error Type: EmptyModelOutputError
Error Message: OpenAI completion has no usable output. response_id=[REDACTED_ID], finish_reason=tool_calls

The exact root cause still needs further analysis (I have switched between many models, and the problem occurs with all of them).

Are you willing to submit a PR?

  • Yes!

Code of Conduct

Metadata


    Labels

    area:provider: The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner.
    bug: Something isn't working
