
Gemma 4 - OpenAIResponsesChatGenerator Tool Input error #11040

@2-ulfhednar-2

Description


Describe the bug
When running the OpenAIResponsesChatGenerator in an Agent WITHOUT the exit condition being a tool (so the tool output feeds back into the generator), you get the following error:
haystack.core.errors.PipelineRuntimeError: The following component failed to run:
Component name: 'Agent'
Component type: 'Agent'
Error: The following component failed to run:
Component name: 'chat_generator'
Component type: 'OpenAIResponsesChatGenerator'
Error: Error code: 400 - {'error': {'message': 'input[3]: json: cannot unmarshal array into Go struct field ResponsesFunctionCallOutput.output of type string', 'type': 'invalid_request_error', 'param': None, 'code': None}}

This happens because the Responses endpoint expects the tool output to be a string, while openai_responses.py (around line 877) passes it as a list.
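For illustration, the mismatch can be sketched in isolation; the item shape and call id below are hypothetical placeholders, not the exact Haystack internals:

```python
import json

# Hypothetical function_call_output item as the converter might emit it:
# "output" is a list of tool-result parts rather than a string.
item = {
    "type": "function_call_output",
    "call_id": "call_123",  # hypothetical id
    "output": [{"result": 42}],
}

# The Responses API expects "output" to be a string, so serialize non-strings.
if not isinstance(item["output"], str):
    item["output"] = json.dumps(item["output"])

print(item["output"])  # '[{"result": 42}]'
```

After the coercion the request body unmarshals cleanly into the server's string-typed `output` field.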


Expected behavior
The tool output should be processed correctly by the Agent.
This becomes important when retrieval or search components are configured as tools.
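This matters because retrieval and search tools naturally return lists of structured results rather than plain strings. A toy sketch of a search-style tool that serializes its result list to a JSON string up front (hypothetical tool and data, not from Haystack):

```python
import json

def search_docs(query: str) -> str:
    # A retrieval-style tool naturally produces a list of documents;
    # serializing it ourselves sidesteps the "cannot unmarshal array" error.
    hits = [
        {"title": "Doc A", "score": 0.91},
        {"title": "Doc B", "score": 0.77},
    ]
    return json.dumps({"query": query, "results": hits})

print(search_docs("haystack agents"))
```

Returning a string from the tool function is a workaround on the tool side; the patch below fixes it generically in the converter instead.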

Additional context
The model was gemma4:26b, run locally through Ollama.
A patch created by Opus fixes this issue:

# --- Patch ---
import json

import haystack.components.generators.chat.openai_responses as responses_mod

_original = responses_mod._convert_chat_message_to_responses_api_format

def _patched(message):
    # Serialize any non-string tool output so the Responses API accepts it.
    results = _original(message)
    for item in results:
        if item.get("type") == "function_call_output" and not isinstance(item.get("output"), str):
            item["output"] = json.dumps(item["output"])
    return results

responses_mod._convert_chat_message_to_responses_api_format = _patched
# --- End Patch ---

To Reproduce
Run Ollama locally, then run a script similar to the following:

import json

from haystack.components.agents import Agent
from haystack.components.generators.chat import OpenAIResponsesChatGenerator
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack.tools import Tool
from haystack.utils import Secret

client = OpenAIResponsesChatGenerator(
    model="gemma4:26b",
    api_base_url="http://localhost:11434/v1",
    api_key=Secret.from_token("None"),
    streaming_callback=print_streaming_chunk,
)

def calculate(expression: str) -> str:
    try:
        result = eval(expression, {"__builtins__": {}})
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

calculator_tool = Tool(
    name="calculator",
    description="Evaluate basic math expressions.",
    parameters={
        "type": "object",
        "properties": {
            "expression": {
                "type": "string",
                "description": "Math expression to evaluate",
            },
        },
        "required": ["expression"],
    },
    function=calculate,
)

agent = Agent(
    chat_generator=client,
    tools=[calculator_tool],
)

response = agent.run(messages=[ChatMessage.from_user("What is 7 * (4 + 2)?")])
print(response["messages"])

You will get the error shown earlier.
Note that the exit condition for the Agent has to be dropped;
with an exit condition you essentially just get the raw tool output back.
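As a side note, even with empty `__builtins__`, calling eval on model-provided input is risky. A sketch of a stricter arithmetic evaluator built on Python's ast module (my own substitute for the calculate function above, not part of the original reproduction):

```python
import ast
import json
import operator

# Map AST operator nodes to the arithmetic functions they represent.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> str:
    """Evaluate a basic math expression without eval; reject anything else."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")

    try:
        result = _eval(ast.parse(expression, mode="eval"))
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

print(safe_calculate("7 * (4 + 2)"))  # {"result": 42}
```

Function calls, attribute access, and names all fall through to the ValueError branch, so attempts like `__import__('os')` return an error payload instead of executing.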


System:

  • OS: Windows 11
  • GPU/CPU: RX7900XTX / Ryzen 7700X
  • Haystack version (commit or version number): 2.27.0
  • DocumentStore: -
  • Reader: -
  • Retriever: -

Note
Ultimately, this could also be caused by a bug in the Ollama endpoint.
However, applying the fix above solved the issue.
I currently don't have any vLLM models running with the Responses endpoint, so I can't check this properly.

OpenAI reference
https://developers.openai.com/api/reference/resources/responses/methods/create#(resource)%20responses%20%3E%20(method)%20create
