|
122 | 122 | "\n", |
123 | 123 | "# Run the agent with a query\n", |
124 | 124 | "result = agent.run(\n", |
125 | | - " messages=[ChatMessage.from_user(\"Find information about Haystack\")]\n", |
| 125 | + " messages=[ChatMessage.from_user(\"Find information about Haystack AI framework\")]\n", |
126 | 126 | ")\n", |
127 | 127 | "\n", |
128 | 128 | "# Print the final response\n", |
|
137 | 137 | "The Agent has a couple of optional parameters that let you customize it's behavior:\n", |
138 | 138 | "- `system_prompt` for defining a system prompt with instructions for the Agent's LLM\n", |
139 | 139 | "- `exit_conditions` that will cause the agent to return. It's a list of strings and the items can be `\"text\"`, which means that the Agent will exit as soon as the LLM replies only with a text response,\n", |
140 | | - "or specific tool names.\n", |
| 140 | + "or specific tool names, which make the Agent return right after a tool with that name was called.\n", |
141 | 141 | "- `state_schema` for the State that is shared across one agent invocation run. It defines extra information – such as documents or context – that tools can read from or write to during execution. You can use this schema to pass parameters that tools can both produce and consume.\n", |
142 | 142 | "- `streaming_callback` to stream the tokens from the LLM directly to output.\n", |
143 | 143 | "- `raise_on_tool_invocation_failure` to decide if the agent should raise an exception when a tool invocation fails. If set to False, the exception will be turned into a chat message and passed to the LLM. It can then try to improve with the next tool invocation.\n", |
144 | 144 | "- `max_agent_steps` to limit how many times the Agent can call tools and prevent endless loops.\n", |
145 | 145 | "\n", |
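| | + "For example, here is a minimal configuration sketch that uses all of these parameters (the values are illustrative, and it assumes the `web_tool` defined earlier):\n", |
| | + "\n", |
| | + "```python\n", |
| | + "from typing import List\n", |
| | + "\n", |
| | + "from haystack.components.agents import Agent\n", |
| | + "from haystack.components.generators.chat import OpenAIChatGenerator\n", |
| | + "from haystack.dataclasses import Document\n", |
| | + "\n", |
| | + "# Illustrative sketch; `web_tool` is the tool created earlier in this tutorial\n", |
| | + "agent = Agent(\n", |
| | + "    chat_generator=OpenAIChatGenerator(),\n", |
| | + "    tools=[web_tool],\n", |
| | + "    system_prompt=\"You are a helpful research assistant.\",  # example instructions\n", |
| | + "    exit_conditions=[\"web_tool\"],  # return right after web_tool is called\n", |
| | + "    state_schema={\"documents\": {\"type\": List[Document]}},  # state tools can read or write\n", |
| | + "    raise_on_tool_invocation_failure=False,  # turn tool errors into chat messages for the LLM\n", |
| | + "    max_agent_steps=5,  # cap the tool-calling loop to prevent endless runs\n", |
| | + ")\n", |
| | + "```\n", |
| | + "\n", |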
146 | | - "In the previous example, the Agent was allowed to call the tool multiple times until the default exit condition was met: a text response. Now, let's return right after calling `web_tool` or after the LLM generated text, whatever happens first. Let's also use streaming so that we see the tokens of the response while they are being generated." |
| 146 | + "When `exit_conditions` is set to the default [\"text\"], you can enable streaming so that we see the tokens of the response while they are being generated." |
147 | 147 | ] |
148 | 148 | }, |
149 | 149 | { |
|
152 | 152 | "metadata": {}, |
153 | 153 | "outputs": [], |
154 | 154 | "source": [ |
| 155 | + "from haystack.components.generators.utils import print_streaming_chunk\n", |
| 156 | + "\n", |
155 | 157 | "agent = Agent(\n", |
156 | 158 | " chat_generator=OpenAIChatGenerator(),\n", |
157 | 159 | " tools=[web_tool],\n", |
158 | | - " exit_conditions=[\"text\", \"web_tool\"],\n", |
159 | | - " streaming_callback=lambda chunk: print(chunk.content, end=\"\", flush=True),\n", |
| 160 | + " streaming_callback=print_streaming_chunk,\n", |
160 | 161 | ")\n", |
161 | 162 | "\n", |
162 | 163 | "result = agent.run(\n", |
|