Commit 645c482

fix(provider): restore parameter transparency in core LLM provider adapters
The core chat adapters (OpenAI, Anthropic, Gemini) did not merge kwargs when preparing the request payload, so custom parameters passed in from the plugin layer (e.g. max_tokens, temperature, timeout) were silently dropped and requests fell back to the provider's conservative defaults. This fix ensures that the major model adapters pass request parameters through in full.
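The pattern restored by this commit can be sketched as a minimal payload builder. Here `build_payload` is a hypothetical stand-in for the inline code in the adapters' `text_chat` / `text_chat_stream` methods, not a function that exists in the repository:

```python
def build_payload(messages: list, model: str, **kwargs) -> dict:
    """Assemble a provider request payload, passing caller kwargs through."""
    payload = {"messages": messages, "model": model}
    # Before the fix, kwargs were never merged here, so plugin-supplied
    # settings (max_tokens, temperature, timeout, ...) were lost and the
    # provider applied its own defaults. Merging restores transparency.
    payload.update(kwargs)
    return payload

p = build_payload(
    [{"role": "user", "content": "hi"}],
    "some-model",
    max_tokens=1024,
    temperature=0.2,
)
```

Note that `dict.update` lets kwargs override any key, including `messages` and `model`; the adapters rely on callers not passing those names.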
1 parent 5e63635 commit 645c482

3 files changed: 5 additions & 1 deletion

astrbot/core/provider/sources/anthropic_source.py

Lines changed: 2 additions & 0 deletions
@@ -516,6 +516,7 @@ async def text_chat(
         model = model or self.get_model()

         payloads = {"messages": new_messages, "model": model}
+        payloads.update(kwargs)

         # Anthropic has a different way of handling system prompts
         if system_prompt:
@@ -572,6 +573,7 @@ async def text_chat_stream(
         model = model or self.get_model()

         payloads = {"messages": new_messages, "model": model}
+        payloads.update(kwargs)

         # Anthropic has a different way of handling system prompts
         if system_prompt:

astrbot/core/provider/sources/gemini_source.py

Lines changed: 2 additions & 0 deletions
@@ -758,6 +758,7 @@ async def text_chat(
         model = model or self.get_model()

         payloads = {"messages": context_query, "model": model}
+        payloads.update(kwargs)

         retry = 10
         keys = self.api_keys.copy()
@@ -813,6 +814,7 @@ async def text_chat_stream(
         model = model or self.get_model()

         payloads = {"messages": context_query, "model": model}
+        payloads.update(kwargs)

         retry = 10
         keys = self.api_keys.copy()

astrbot/core/provider/sources/openai_source.py

Lines changed: 1 addition & 1 deletion
@@ -619,8 +619,8 @@ async def _prepare_chat_payload(
         context_query.extend(tcr.to_openai_messages())

         model = model or self.get_model()
-
         payloads = {"messages": context_query, "model": model}
+        payloads.update(kwargs)

         self._finally_convert_payload(payloads)
