Fix broken xrefs across the site (zero build errors)
The local build was emitting 16 xref-resolution errors across 6 pages plus
one partial. None were caused by recent commits; they were stale
references left over from the `cloud-docs:ai-agents` -> `redpanda-adp`
component migration plus a few cross-component xrefs that lost their
component prefix.
Local-component fixes:
- ai-agents:observability/concepts.adoc -> observability:concepts.adoc
(transcripts.adoc, ingest-custom-traces.adoc, partial)
- ai-agents:observability/transcripts.adoc -> observability:transcripts.adoc
- ai-agents:agents/monitor-agents.adoc -> agents:monitor.adoc
- governance:guardrails.adoc -> governance:guardrails/index.adoc (the
page moved into a subdirectory)
- integrations/index.adoc -> integrations:index.adoc (was bare; needed
the same-component module prefix)
- agent-trace-hierarchy / mcp-server-trace-hierarchy anchors renamed
to ...-transcript-hierarchy to match the actual section headings on
observability/concepts.adoc; link labels updated accordingly.
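
Several of the fixes above come down to how much of the Antora resource ID an xref spells out. As a quick reference (the targets here are illustrative, not all real pages), an xref can be fully component-qualified, module-qualified, or bare, and a bare path only resolves inside the current module, which is why the `integrations/index.adoc` form broke:

```asciidoc
// Fully qualified (component:module:page): resolves from any component.
xref:redpanda-cloud:manage:rpk/rpk-install.adoc[Install rpk]

// Module-qualified (module:page): resolves within the current component.
xref:observability:concepts.adoc[Observability concepts]

// Bare path: resolves only within the current module, so it breaks when
// either the target or the referencing page moves across modules.
xref:index.adoc[Overview]
```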
Cross-component fixes (verified the targets exist):
- manage:rpk/rpk-install.adoc -> redpanda-cloud:manage:rpk/rpk-install.adoc
- develop:connect/components/inputs/otlp_*.adoc ->
redpanda-connect:components:inputs/otlp_*.adoc
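
A sweep like the one behind these fixes can be sketched with `grep`; the prefix list and the sample file below are illustrative, derived from the mappings in this message rather than from a tool actually used here:

```shell
# Write a two-line sample page; against a real checkout you would scan
# the modules/ tree recursively instead.
printf '%s\n' \
  'See xref:ai-agents:observability/concepts.adoc[Concepts].' \
  'See xref:observability:transcripts.adoc[View Transcripts].' > sample.adoc

# List xref targets that still use the retired prefixes fixed in this change.
grep -Eon 'xref:(ai-agents:|develop:connect/|manage:rpk/)[^][]*' sample.adoc
```

For a real repository, the same pattern works recursively: `grep -rEon '...' modules/`.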
Build verification: `npm run build` now reports 0 xref errors. The
remaining warnings are pre-existing template-attribute placeholders
(unrelated).
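
The zero-error check can be repeated mechanically by grepping a captured build log. The error wording below assumes Antora's default message for a dangling xref ("target of xref not found"); adjust the pattern if your generator phrases it differently:

```shell
# Sketch: count unresolved-xref errors in a captured build log.
# The sample log lines are illustrative, not real build output.
printf '%s\n' \
  'ERROR (asciidoctor): target of xref not found: ai-agents:observability/concepts.adoc' \
  'INFO: site generated' > build.log

# A clean build prints 0 (note grep -c exits non-zero when the count is 0).
grep -c 'target of xref not found' build.log
```

In CI this would follow something like `npm run build 2>&1 | tee build.log`.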
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
modules/governance/pages/budgets.adoc (+2 −2)

@@ -61,7 +61,7 @@ Some guardrail evaluators call an LLM to do their work. A toxicity classifier, f
 Guardrail evaluator cost surfaces in the same spending pipeline as user-facing LLM calls. The evaluator's cost is attributed to the *evaluator's configured upstream provider* — usually a small classifier model, separate from the user-facing LLM — so per-provider breakdowns separate the two automatically.

-For the per-evaluator cost model and how it interacts with the dashboard's spend view, see xref:governance:guardrails.adoc[Configure guardrails].
+For the per-evaluator cost model and how it interacts with the dashboard's spend view, see xref:governance:guardrails/index.adoc[Configure guardrails].

 // TODO: confirm with eng that guardrail evaluator cost flows into the same SpendingService as user-facing LLM cost (vs. a separate stream). Open Q A3 in the companion plan, also flagged on the Guardrails plan.

@@ -87,7 +87,7 @@ Cap-management arrives after GA per the Governance V0 PRD. The planned feature s
 * *Alert hooks* — webhook, email, or chat notifications when a cap is approached or exceeded.
 * *Multi-tenant cap-setting* — per-tenant caps with override semantics.

-Until those features ship, treat the dashboard and breakdown queries as your visibility layer and use platform-level guardrails (xref:governance:guardrails.adoc[Configure guardrails]) for selective request blocking.
+Until those features ship, treat the dashboard and breakdown queries as your visibility layer and use platform-level guardrails (xref:governance:guardrails/index.adoc[Configure guardrails]) for selective request blocking.

 // TODO: once the cap-management surface lands, replace this section with a forward link to the configuration how-to. If cap-management content grows beyond a single section, split this page into a sub-folder. Open Q C1 in the companion plan.
modules/observability/pages/ingest-custom-traces.adoc (+10 −10)

@@ -19,7 +19,7 @@ After reading this page, you will be able to:
 * A Redpanda Connect pipeline host (today: a Redpanda BYOC cluster with Connect enabled). Ability to manage secrets on that host.
 // TODO: Replace with the standalone-ADP ingestion target once defined (may no longer require a Redpanda Cloud cluster).
-* The latest version of xref:manage:rpk/rpk-install.adoc[`rpk`] installed
+* The latest version of xref:redpanda-cloud:manage:rpk/rpk-install.adoc[`rpk`] installed
 * Custom agent or application instrumented with OpenTelemetry SDK
 * Basic understanding of the https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/[OpenTelemetry span format^] and https://opentelemetry.io/docs/specs/otlp/[OpenTelemetry Protocol (OTLP)^]

@@ -60,11 +60,11 @@ For non-LangChain applications or custom instrumentation, continue with the sect
 Custom agents are applications with OpenTelemetry instrumentation that operate independently of Redpanda's Remote MCP servers or declarative agents (such as LangChain, CrewAI, or manually instrumented applications).

-When these agents send traces to `redpanda.otel_traces`, you gain unified observability alongside Remote MCP server and declarative agent traces. See xref:ai-agents:observability/concepts.adoc#cross-service-transcripts[Cross-service transcripts] for details on how traces correlate across services.
+When these agents send traces to `redpanda.otel_traces`, you gain unified observability alongside Remote MCP server and declarative agent traces. See xref:observability:concepts.adoc#cross-service-transcripts[Cross-service transcripts] for details on how traces correlate across services.

 === Trace format requirements

-Custom agents must emit traces in OTLP format. The xref:develop:connect/components/inputs/otlp_http.adoc[`otlp_http`] input accepts both OTLP Protobuf (`application/x-protobuf`) and JSON (`application/json`) payloads. For <<use-grpc,gRPC transport>>, use the xref:develop:connect/components/inputs/otlp_grpc.adoc[`otlp_grpc`] input.
+Custom agents must emit traces in OTLP format. The xref:redpanda-connect:components:inputs/otlp_http.adoc[`otlp_http`] input accepts both OTLP Protobuf (`application/x-protobuf`) and JSON (`application/json`) payloads. For <<use-grpc,gRPC transport>>, use the xref:redpanda-connect:components:inputs/otlp_grpc.adoc[`otlp_grpc`] input.

 Each trace must follow the OTLP specification with these required fields:

@@ -96,7 +96,7 @@ Optional but recommended fields:
 - `parentSpanId` for hierarchical traces
 - `attributes` for contextual information

-For complete trace structure details, see xref:ai-agents:observability/concepts.adoc#understand-the-transcript-structure[Understand the transcript structure].
+For complete trace structure details, see xref:observability:concepts.adoc#understand-the-transcript-structure[Understand the transcript structure].

 == Configure the ingestion pipeline

@@ -573,15 +573,15 @@ After your custom agent sends traces through the pipeline, they appear in the *T
 ==== Identify custom agent transcripts

-Custom agent transcripts are identified by the `service.name` resource attribute, which differs from Redpanda's built-in services (`ai-agent` for declarative agents, `mcp-{server-id}` for MCP servers). See xref:ai-agents:observability/concepts.adoc#cross-service-transcripts[Cross-service transcripts] to understand how the `service.name` attribute identifies transcript sources.
+Custom agent transcripts are identified by the `service.name` resource attribute, which differs from Redpanda's built-in services (`ai-agent` for declarative agents, `mcp-{server-id}` for MCP servers). See xref:observability:concepts.adoc#cross-service-transcripts[Cross-service transcripts] to understand how the `service.name` attribute identifies transcript sources.

 Your custom agent transcripts display with:

 * **Service name** in the service filter dropdown (from your `service.name` resource attribute)
 * **Agent name** in span details (from the `gen_ai.agent.name` attribute)
 * **Operation names** like `"invoke_agent my-assistant"` indicating agent executions

-For detailed instructions on filtering, searching, and navigating transcripts in the UI, see xref:ai-agents:observability/transcripts.adoc[View Transcripts].
+For detailed instructions on filtering, searching, and navigating transcripts in the UI, see xref:observability:transcripts.adoc[View Transcripts].

 ==== Token usage tracking

@@ -619,7 +619,7 @@ If requests succeed but traces do not appear in `redpanda.otel_traces`:
 == Next steps

-* xref:ai-agents:observability/transcripts.adoc[]
-* xref:ai-agents:agents/monitor-agents.adoc[Observability for declarative agents]
modules/observability/pages/transcripts.adoc (+5 −5)

@@ -8,7 +8,7 @@
 Use the Transcripts view to read a complete record of an agent or MCP server execution, turn by turn. Each transcript captures the conversation between the user, the agent, any LLM calls, and any tools it invoked, along with token usage, USD cost, latency, and any errors.

-For conceptual background on the underlying OpenTelemetry data model, see xref:ai-agents:observability/concepts.adoc[].
+For conceptual background on the underlying OpenTelemetry data model, see xref:observability:concepts.adoc[].

 After reading this page, you will be able to:

@@ -41,7 +41,7 @@ Each row in the list represents one execution (one trace). Columns include:
 * *USD cost* — total cost for the execution, derived from per-model pricing. See <<troubleshooting>> if this column shows `0`.
 * *Duration* — wall-clock time between the first and last span.

-A transcript marked _reconstructed_ is one in which some turns were rebuilt from LLM message context after the original spans were evicted from `redpanda.otel_traces`. See xref:ai-agents:observability/concepts.adoc#history-reconstruction[Reconstructed transcript history] for what that means.
+A transcript marked _reconstructed_ is one in which some turns were rebuilt from LLM message context after the original spans were evicted from `redpanda.otel_traces`. See xref:observability:concepts.adoc#history-reconstruction[Reconstructed transcript history] for what that means.

 // TODO: Confirm final column list on the GA Console UI. Today's labels likely shift. Verify against adp-production before merge.

@@ -102,7 +102,7 @@ Turns are listed in order by role:
 * *ASSISTANT* — a response from the LLM. Shows the model, input/output token counts, USD cost for that turn, and latency. If the assistant turn called a tool, its tool calls are nested underneath.
 * *TOOL* — a tool invocation. Shows the tool name, the arguments passed, the result, and the latency of the call.

-Any turn may carry the `is_reconstructed` marker. Reconstructed turns preserve role order and the high-level content of the conversation but do not carry per-turn token counts, latency, or tool-call arguments. See xref:ai-agents:observability/concepts.adoc#history-reconstruction[Reconstructed transcript history] for the mechanics.
+Any turn may carry the `is_reconstructed` marker. Reconstructed turns preserve role order and the high-level content of the conversation but do not carry per-turn token counts, latency, or tool-call arguments. See xref:observability:concepts.adoc#history-reconstruction[Reconstructed transcript history] for the mechanics.

 === Errors

@@ -148,7 +148,7 @@ If the failure happened during a tool call, the error is attached to the TOOL tu
 == Limitations

 * Large time windows sample the list to keep the UI responsive. The exact transcript you need may not be in the current page; narrow the time range or add filters.
-* Reconstructed turns do not carry token counts, latency, or tool-call arguments for the reconstructed range. For byte-level fidelity, lower the ingestion lag or extend `redpanda.otel_traces` retention (see xref:ai-agents:observability/concepts.adoc#opentelemetry-traces-topic[How Redpanda stores trace data]).
+* Reconstructed turns do not carry token counts, latency, or tool-call arguments for the reconstructed range. For byte-level fidelity, lower the ingestion lag or extend `redpanda.otel_traces` retention (see xref:observability:concepts.adoc#opentelemetry-traces-topic[How Redpanda stores trace data]).
 * USD cost is only populated for models covered by the pricing table.
 // TODO: List which providers/models are priced at GA and what users see for un-priced ones (`0`, `null`, or an explicit "unknown" marker).
 // TODO: If the GA Console UI ships transcript export, document the entry point and output format here; otherwise omit.

@@ -165,7 +165,7 @@ A transcript stays in `RUNNING` until the root span closes. Common causes:
 === USD cost shows 0

-`TranscriptUsage.usd_cost` is populated by the cost-reporting pipeline from the `gen_ai.usage.*` attributes on each LLM-call span combined with a per-model pricing table. For the full list of cost-bearing attributes (including the explicit USD-cost fields), see xref:ai-agents:observability/concepts.adoc#key-attributes-by-layer[Key attributes by layer].
+`TranscriptUsage.usd_cost` is populated by the cost-reporting pipeline from the `gen_ai.usage.*` attributes on each LLM-call span combined with a per-model pricing table. For the full list of cost-bearing attributes (including the explicit USD-cost fields), see xref:observability:concepts.adoc#key-attributes-by-layer[Key attributes by layer].
 // TODO: Document which providers/models are priced at GA.

 If cost is `0` for a transcript that clearly used tokens, check:
// UI navigation and interface explanation (procedural context for how-to pages)

@@ -66,10 +66,10 @@ The trace list shows nested operations with visual duration bars indicating how
 // Link to appropriate concepts section based on context
 ifeval::["{context}" == "agent"]
-For details on span types, see xref:ai-agents:observability/concepts.adoc#agent-trace-hierarchy[Agent trace hierarchy].
+For details on span types, see xref:observability:concepts.adoc#agent-transcript-hierarchy[Agent transcript hierarchy].
 endif::[]
 ifeval::["{context}" == "mcp"]
-For details on span types, see xref:ai-agents:observability/concepts.adoc#mcp-server-trace-hierarchy[MCP server trace hierarchy].
+For details on span types, see xref:observability:concepts.adoc#mcp-server-transcript-hierarchy[MCP server transcript hierarchy].
 endif::[]

 ==== Summary panel

@@ -91,6 +91,6 @@ ifeval::["{context}" == "mcp"]
 * Service: The MCP server identifier
 endif::[]

-If any turns were rebuilt from LLM message context after their original spans were evicted, the panel shows a _reconstructed_ marker on those turns. For the mechanics, see xref:ai-agents:observability/concepts.adoc#history-reconstruction[Reconstructed transcript history].
+If any turns were rebuilt from LLM message context after their original spans were evicted, the panel shows a _reconstructed_ marker on those turns. For the mechanics, see xref:observability:concepts.adoc#history-reconstruction[Reconstructed transcript history].

 // TODO: Re-verify this field list against the GA Console UI on adp-production. Beta labels may shift; update wording before GA.