Commit c085c09

fix streaming, remove exit_conditions example
1 parent 55e9d3a commit c085c09

1 file changed

Lines changed: 6 additions & 5 deletions

File tree

tutorials/43_Building_a_Tool_Calling_Agent.ipynb

@@ -122,7 +122,7 @@
 "\n",
 "# Run the agent with a query\n",
 "result = agent.run(\n",
-" messages=[ChatMessage.from_user(\"Find information about Haystack\")]\n",
+" messages=[ChatMessage.from_user(\"Find information about Haystack AI framework\")]\n",
 ")\n",
 "\n",
 "# Print the final response\n",
@@ -137,13 +137,13 @@
 "The Agent has a couple of optional parameters that let you customize it's behavior:\n",
 "- `system_prompt` for defining a system prompt with instructions for the Agent's LLM\n",
 "- `exit_conditions` that will cause the agent to return. It's a list of strings and the items can be `\"text\"`, which means that the Agent will exit as soon as the LLM replies only with a text response,\n",
-"or specific tool names.\n",
+"or specific tool names, which make the Agent return right after a tool with that name was called.\n",
 "- `state_schema` for the State that is shared across one agent invocation run. It defines extra information – such as documents or context – that tools can read from or write to during execution. You can use this schema to pass parameters that tools can both produce and consume.\n",
 "- `streaming_callback` to stream the tokens from the LLM directly to output.\n",
 "- `raise_on_tool_invocation_failure` to decide if the agent should raise an exception when a tool invocation fails. If set to False, the exception will be turned into a chat message and passed to the LLM. It can then try to improve with the next tool invocation.\n",
 "- `max_agent_steps` to limit how many times the Agent can call tools and prevent endless loops.\n",
 "\n",
-"In the previous example, the Agent was allowed to call the tool multiple times until the default exit condition was met: a text response. Now, let's return right after calling `web_tool` or after the LLM generated text, whatever happens first. Let's also use streaming so that we see the tokens of the response while they are being generated."
+"When `exit_conditions` is set to the default [\"text\"], you can enable streaming so that we see the tokens of the response while they are being generated."
 ]
 },
 {
@@ -152,11 +152,12 @@
 "metadata": {},
 "outputs": [],
 "source": [
+"from haystack.components.generators.utils import print_streaming_chunk\n",
+"\n",
 "agent = Agent(\n",
 " chat_generator=OpenAIChatGenerator(),\n",
 " tools=[web_tool],\n",
-" exit_conditions=[\"text\", \"web_tool\"],\n",
-" streaming_callback=lambda chunk: print(chunk.content, end=\"\", flush=True),\n",
+" streaming_callback=print_streaming_chunk,\n",
 ")\n",
 "\n",
 "result = agent.run(\n",
