Commit a0d7877

Address feedback

1 parent 6b4ad62 commit a0d7877

15 files changed: 96 additions & 43 deletions

modules/ai-agents/examples/agents/account-agent-prompt.txt
Lines changed: 1 addition & 1 deletion

@@ -24,7 +24,7 @@ You are the account agent for ACME Bank's dispute resolution system. You special
 ## PII Protection Rules
 
 Always return masked data:
-- Email: First letter + **** + @domain (e.g., "s****@example.com")
+- Email: First letter + **** + @domain (for example, "s****@example.com")
 - Phone: ***-***-XXXX (last 4 digits only)
 - Card: Last 4 digits only
 - Never return: Full card numbers, SSNs, full account numbers
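The masking rules in this prompt can be expressed as a few small helpers. This is an illustrative sketch, not part of the example prompt or any Redpanda API; the function names and input formats are assumptions.

```python
# Hypothetical helpers implementing the masking rules from the prompt above.
# Input formats (plain email string, hyphenated phone, digit-only card) are
# assumptions for illustration.

def mask_email(email: str) -> str:
    """First letter + **** + @domain, for example "s****@example.com"."""
    local, _, domain = email.partition("@")
    return f"{local[0]}****@{domain}"

def mask_phone(phone: str) -> str:
    """Keep only the last 4 digits: ***-***-XXXX."""
    return f"***-***-{phone[-4:]}"

def mask_card(card_number: str) -> str:
    """Return last 4 digits only."""
    return f"**** **** **** {card_number[-4:]}"
```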

modules/ai-agents/examples/mcp-tools/processors/get_weather_complete.yaml
Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ meta:
   properties:
     - name: city
       type: string
-      description: "City name (e.g., 'London', 'New York', 'Tokyo')"
+      description: "City name (for example, 'London', 'New York', 'Tokyo')"
       required: true
     - name: units
      type: string

modules/ai-agents/examples/mcp-tools/processors/search_jira.yaml
Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ meta:
   properties:
     - name: jql
       type: string
-      description: "JQL query (e.g., 'project = DOC AND status = Open')"
+      description: "JQL query (for example, 'project = DOC AND status = Open')"
       required: true
     - name: max_results
       type: number
(2 binary image files changed: 3.71 MB and 3.92 MB)

modules/ai-agents/pages/agents/architecture-patterns.adoc
Lines changed: 46 additions & 14 deletions

@@ -20,25 +20,33 @@ Agent architecture determines how you manage complexity as your system grows. Th
 
 Starting with a simple architecture is tempting, but can lead to unmaintainable systems as complexity increases. Planning for growth with clear boundaries prevents technical debt and costly refactoring later.
 
-Warning signs include system prompts exceeding 2000 words, too many tools for the LLM to select correctly, multiple teams modifying the same agent, and changes in one domain breaking others. These symptoms indicate you need architectural boundaries, not just better prompts.
+Warning signs that you need architectural boundaries, not just better prompts:
+
+* System prompts exceeding 2000 words
+* Too many tools for the LLM to select correctly
+* Multiple teams modifying the same agent
+* Changes in one domain breaking others
 
 Match agent architecture to domain structure:
 
-[cols="2,3,3"]
+[cols="2,2,3,3"]
 |===
-| Domain Characteristics | Architecture Fit | Reasoning
+| Domain Characteristics | Architecture | Pros | Cons
 
 | Single business area, stable requirements
 | Single agent
-| Simplicity outweighs flexibility needs
+| Simple to build and maintain, one deployment, lower latency
+| Limited flexibility, difficult to scale to multi-domain problems
 
 | Multiple business areas, shared infrastructure
 | Root agent with internal subagents
-| Domain separation without deployment complexity
+| Separation of concerns, easier debugging, shared resources reduce cost
+| Single point of failure, all subagents constrained to same model and budget
 
 | Cross-organization workflows, independent evolution
 | External agent-to-agent
-| Organizational boundaries require system boundaries
+| Independent deployment and scaling, security isolation, flexible infrastructure
+| Network latency, authentication complexity, harder to debug across boundaries
 |===
 
 
@@ -70,6 +78,8 @@ Single agents are simpler to build and maintain. You have one system prompt, one
 
 However, all capabilities must coexist in one agent. Adding features increases complexity rapidly, making single agents difficult to scale to multi-domain problems.
 
+TIP: You can migrate from a single agent to a root agent with subagents without starting over. Add subagents to an existing agent using the Redpanda Cloud Console, then gradually move tools and responsibilities to the new subagents.
+
 == Root agent with subagents pattern
 
 A multi-agent architecture uses a root agent that delegates to specialized internal subagents.

@@ -102,7 +112,7 @@ NOTE: Cross-agent calling between separate Redpanda Cloud agents is not supporte
 
 === When to use external A2A
 
-Use external A2A for multi-organization workflows that coordinate agents across company boundaries, for platform integration connecting Redpanda Cloud agents with agents hosted elsewhere, and when agents require different deployment environments such as GPU clusters, air-gapped networks, or regional constraints.
+Use external glossterm:Agent2Agent (A2A) protocol[] for multi-organization workflows that coordinate agents across company boundaries, for platform integration connecting Redpanda Cloud agents with agents hosted elsewhere, and when agents require different deployment environments such as GPU clusters, air-gapped networks, or regional constraints.
 
 === How it works
 
@@ -136,33 +146,55 @@ Avoid these architecture mistakes that lead to unmaintainable agent systems.
 
 A monolithic prompt is a single 3000+ word system prompt covering multiple domains.
 
-This pattern fails because LLM confusion increases with prompt length, multiple teams modify the same prompt creating conflicts and unclear ownership, and changes to one domain risk breaking others.
+This pattern fails because:
+
+* LLM confusion increases with prompt length
+* Multiple teams modify the same prompt creating conflicts and unclear ownership
+* Changes to one domain risk breaking others
 
 Split into domain-specific subagents instead. Each subagent gets a focused prompt under 500 words.
 
 === The tool explosion
 
-A tool explosion occurs when a single agent has 30+ tools from every MCP server in the cluster.
+A tool explosion occurs when a single agent has too many tools from every MCP server in the cluster.
+
+This pattern fails because:
 
-This pattern fails because the LLM struggles to choose correctly from large tool sets, tool descriptions compete for limited prompt space, and the agent invokes wrong tools with similar names, wasting iteration budget on selection mistakes.
+* The LLM struggles to choose correctly from large tool sets
+* Tool descriptions compete for limited prompt space
+* The agent invokes wrong tools with similar names, wasting iteration budget on selection mistakes
 
-Limit tools per agent. Use subagents to partition tools by domain. For tool design patterns, see xref:ai-agents:mcp/remote/tool-patterns.adoc[].
+Limit tools per agent to 10-15 for optimal performance. Agents with more than 20-25 tools often show degraded tool selection accuracy. Use subagents to partition tools by domain. For tool design patterns, see xref:ai-agents:mcp/remote/tool-patterns.adoc[].
 
 === Premature A2A splitting
 
 Premature splitting creates three separate A2A agents when all logic could fit in one agent with internal subagents.
 
-This pattern fails because network latency affects every cross-agent call, authentication complexity multiplies with three sets of credentials, debugging requires correlating logs across systems, and you manage three deployments instead of one.
+This pattern fails because:
+
+* Network latency affects every cross-agent call
+* Authentication complexity multiplies with three sets of credentials
+* Debugging requires correlating logs across systems
+* You manage three deployments instead of one
 
 Start with internal subagents for domain separation. Split to external A2A only when you need organizational boundaries or different infrastructure.
 
 === Unbounded tool chaining
 
 Unbounded chaining sets max iterations to 100, returns hundreds of items from tools, and places no constraints on tool call frequency.
 
-This pattern fails because the context window fills with tool results, requests time out before completion, costs spiral with many iterations multiplied by large context, and the agent loses track of the original goal.
+This pattern fails because:
+
+* The context window fills with tool results
+* Requests time out before completion
+* Costs spiral with many iterations multiplied by large context
+* The agent loses track of the original goal
+
+For best results:
 
-Design workflows to complete in 20-30 iterations. Return paginated results from tools. Add prompt constraints like "Never call the same tool more than 3 times per request."
+* Design workflows to complete in 20-30 iterations
+* Return paginated results from tools
+* Add prompt constraints like "Never call the same tool more than 3 times per request"
 
 == Model selection guide
 
modules/ai-agents/pages/agents/concepts.adoc
Lines changed: 12 additions & 2 deletions

@@ -20,6 +20,11 @@ Every agent request follows a reasoning loop. The agent doesn't execute all tool
 
 === The reasoning loop
 
+The following diagram shows how agents process requests through iterative reasoning:
+
+.Agent reasoning loop with tool integration
+image::ai-agents:agent-reasoning-loop.png[Diagram showing the agent reasoning loop: User Request flows to LLM Receives Context, then to LLM Decision which branches to Tool Executes, Request Clarification, or Return Response to User]
+
 When an agent receives a request:
 
 . The LLM receives the context, including system prompt, conversation history, user request, and previous tool results.

@@ -30,10 +35,15 @@ When an agent receives a request:
 
 The loop continues until one of these conditions is met:
 
+.Reasoning loop exit conditions
+image::ai-agents:agent-exit-conditions.png[Diagram showing exit conditions: Task Complete returns response, Max Iterations returns partial result, Unrecoverable Error returns error, otherwise continue loop]
+
 * Agent completes the task and responds to the user
 * Agent reaches max iterations limit
 * Agent encounters an unrecoverable error
 
+NOTE: If the agent encounters an unrecoverable error on the first iteration, it returns an error immediately. Unrecoverable errors include authentication failures, invalid tool configurations, or LLM API failures.
+
 === Why iterations matter
 
 Each iteration includes three phases:

@@ -42,9 +52,9 @@ Each iteration includes three phases:
 . **Tool invocation**: If the agent decides to call a tool, execution happens and waits for results.
 . **Context expansion**: Tool results are added to the conversation history for the next iteration.
 
-With higher iteration limits, agents can complete complex tasks but costs more and takes longer.
+With higher iteration limits, agents can complete complex tasks but can cost more and take longer.
 
-With lower iteration limits, agents can respond faster and are cheaper but may fail on complex requests.
+With lower iteration limits, agents can respond faster and are cheaper but may fail on complex requests.
 
 ==== Cost calculation
 
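The cost dynamic this file describes, where each iteration resends a context that keeps growing with tool results, can be approximated with a short estimator. This is a back-of-envelope sketch; the function name, token counts, and prices are illustrative assumptions, not Redpanda pricing.

```python
# Hypothetical cost estimate for one agent request. Each iteration resends
# the full (growing) context as input tokens, so cost scales roughly with
# iterations * context size. All parameters are illustrative assumptions.

def estimate_request_cost(iterations, base_context_tokens,
                          tokens_added_per_iteration,
                          output_tokens_per_iteration,
                          input_price_per_1k, output_price_per_1k):
    cost = 0.0
    context = base_context_tokens
    for _ in range(iterations):
        cost += (context / 1000) * input_price_per_1k
        cost += (output_tokens_per_iteration / 1000) * output_price_per_1k
        context += tokens_added_per_iteration  # tool results expand the context
    return cost
```

Doubling the iteration limit more than doubles the estimate, because later iterations carry a larger context, which is why lower limits are both faster and cheaper.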

modules/ai-agents/pages/agents/create-agent.adoc
Lines changed: 7 additions & 5 deletions

@@ -8,6 +8,8 @@
 
 Create a new AI agent through the Redpanda Cloud Console. This guide walks you through configuring the agent's model, system prompt, tools, and execution settings.
 
+include::ai-agents:partial$byoc-aws-requirement.adoc[]
+
 After reading this page, you will be able to:
 
 * [ ] {learning-objective-1}

@@ -16,7 +18,7 @@ After reading this page, you will be able to:
 
 == Prerequisites
 
-* A xref:get-started:cluster-types/byoc/index.adoc[BYOC cluster] with Remote MCP enabled.
+* A xref:get-started:cluster-types/byoc/index.adoc[BYOC cluster].
 * xref:ai-agents:ai-gateway/gateway-quickstart.adoc[AI Gateway configured] with at least one LLM provider enabled.
 * At least one xref:ai-agents:mcp/remote/overview.adoc[Remote MCP server] deployed with tools.
 * System prompt prepared (see xref:ai-agents:agents/prompt-best-practices.adoc[System Prompt Best Practices]).

@@ -169,7 +171,7 @@ Choose based on task complexity:
 
 Start with 30 for most use cases.
 
-=== Configure A2A discovery metadata
+=== Configure A2A discovery metadata (optional)
 
 After creating your agent, configure discovery metadata for external integrations. For detailed agent card design guidance, see link:https://agent2agent.info/docs/guides/create-agent-card/[Create an Agent Card^].
 
@@ -192,8 +194,8 @@ Skills describe what your agent can do for capability-based discovery. External
 .. Click *+ Add Skill* to define what this agent can do.
 .. For each skill, configure:
 +
-* *Skill ID* (required): Unique identifier using lowercase letters, numbers, and hyphens (e.g., `fraud-analysis`, `order-lookup`)
-* *Skill Name* (required): Human-readable name displayed in agent directories (e.g., "Fraud Analysis", "Order Lookup")
+* *Skill ID* (required): Unique identifier using lowercase letters, numbers, and hyphens (for example, `fraud-analysis`, `order-lookup`)
+* *Skill Name* (required): Human-readable name displayed in agent directories (for example, "Fraud Analysis", "Order Lookup")
 * *Description* (required): Explain what this skill does and when to use it. Be specific about inputs, outputs, and use cases.
 * *Tags* (optional): Add tags for categorization and search. Use common terms like `fraud`, `security`, `finance`, `orders`.
 * *Examples* (optional): Click *+ Add Example* to provide sample queries demonstrating how to invoke this skill. Examples help users understand how to interact with your agent.

@@ -202,7 +204,7 @@ Skills describe what your agent can do for capability-based discovery. External
 
 . Click *Save Changes*.
 
-The updated metadata appears immediately at `\https://your-agent-url/.well-known/agent-card.json`. For more about what these fields mean and how they're used, see xref:ai-agents:agents/a2a-concepts.adoc#agent-card-metadata[Agent card metadata].
+The updated metadata appears immediately at `\https://your-agent-url/.well-known/agent-card.json`. For more about what these fields mean and how they're used, see xref:ai-agents:agents/a2a-concepts.adoc#agent-cards[Agent cards].
 
 === Review and create
 
modules/ai-agents/pages/agents/overview.adoc
Lines changed: 5 additions & 3 deletions

@@ -8,6 +8,8 @@
 
 AI agents are systems that combine large language models (LLMs) with the ability to execute actions and process data. Redpanda Cloud provides real-time streaming infrastructure and standardized tool access to support agent development.
 
+include::ai-agents:partial$byoc-aws-requirement.adoc[]
+
 After reading this page, you will be able to:
 
 * [ ] {learning-objective-1}

@@ -16,15 +18,15 @@ After reading this page, you will be able to:
 
 == What is an AI agent?
 
-An AI agent is a system built around a large language model that can interpret user intent, decide which actions are required, invoke external tools, process live and historical data, and chain multiple steps into a workflow. AI agents differ from text-only LLMs by executing actions and invoking external tools.
+An AI agent is a system built around a glossterm:large language model (LLM)[] that can interpret user intent, decide which actions are required, invoke external tools, process live and historical data, and chain multiple steps into a workflow. AI agents differ from text-only LLMs by executing actions and invoking external tools.
 
 == How agents work
 
 Every AI agent consists of four essential components:
 
 * *System prompt*: Defines the agent's role, responsibilities, and constraints
 * *LLM*: Interprets user intent and decides which tools to invoke
-* *Tools*: External capabilities exposed through the Model Context Protocol (MCP)
+* *Tools*: External capabilities exposed through the xref:ai-agents:mcp/remote/overview.adoc[Model Context Protocol (MCP)]
 * *Context*: Conversation history, tool results, and real-time events from Redpanda topics
 
 Agents can invoke Redpanda Connect components as tools on-demand. Redpanda Connect pipelines can also invoke agents for event-driven processing. This bidirectional integration supports both interactive workflows and automated streaming.

@@ -35,7 +37,7 @@ For a deeper understanding of how agents execute, manage context, and maintain s
 
 == Key benefits
 
-Redpanda Cloud provides real-time streaming data so agents access live events instead of batch snapshots. Remote MCP support enables standardized tool access. Managed infrastructure handles deployment, scaling, and security for you. Low-latency execution means tools run close to your data. Integrated secrets management securely stores API keys and credentials.
+Redpanda Cloud provides real-time streaming data so agents access live events instead of batch snapshots. xref:ai-agents:mcp/remote/overview.adoc[Remote MCP] support enables standardized tool access. Managed infrastructure handles deployment, scaling, and security for you. Low-latency execution means tools run close to your data. Integrated secrets management securely stores API keys and credentials.
 
 == Use cases
 
modules/ai-agents/pages/agents/prompt-best-practices.adoc
Lines changed: 2 additions & 2 deletions

@@ -292,9 +292,9 @@ Guide agents to:
 
 For cost management strategies including iteration limits and monitoring, see xref:ai-agents:agents/concepts.adoc[].
 
-== Example: Complete system prompt
+== Example: System prompt with all best practices
 
-This example demonstrates all best practices:
+This complete example demonstrates all the patterns described in this guide:
 
 [,text]
 ----
