
Commit 01d0a70

feat(core): Migrate Vercel AI event processor to span streaming (#20608)
Migrates the Vercel AI event processor so it also works in the span streaming path via the `processSpan` hook. The event processor currently serves three purposes:

**Attribute renaming.** The Vercel AI SDK emits attributes under `ai.*` names that need to be renamed to OpenTelemetry semantic conventions (`gen_ai.*`) and the `vercel.ai.*` namespace. This logic is straightforward to port: it is extracted into a shared `processVercelAiSpanAttributes` helper that is now called from both the legacy event processor path (for transactions) and the new `processSpan` hook (for streamed spans).

**Token accumulation on parent spans.** The event processor aggregates token usage from child spans onto their parent `invoke_agent` spans. This is a cross-span operation that fundamentally doesn't work in the streaming model, where spans are processed individually. The [span streaming implementation guide](https://develop.sentry.dev/sdk/telemetry/spans/implementation/) explicitly lists this as a case that cannot be replaced. Since the plan is for parent-level token accumulation to go away regardless, we simply drop it for the streaming path.

**Tool descriptions on `execute_tool` spans.** The event processor iterates over all spans in a transaction to find `gen_ai.request.available_tools` on `doGenerate` spans and applies the matching description to sibling `execute_tool` spans. This cross-span iteration doesn't work when spans are processed individually. Instead, we parse and store tool descriptions in a map at `spanStart` time of the `doGenerate` span. Since `execute_tool` spans are siblings of `doGenerate` (both children of `invoke_agent`), we key the map by the parent span ID so `execute_tool` spans can look up descriptions by their own `parent_span_id`. This assumes a flat sibling hierarchy, which holds for our test scenarios; if more complex cases come up down the road, they can be addressed in a follow-up.

Closes #20377
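The two span-local pieces described above can be sketched roughly as follows. This is an illustrative sketch only: the exact attribute mapping, the shape of the tools JSON, and the helper signatures are assumptions for demonstration, not the SDK's actual implementation.

```typescript
type Attributes = Record<string, unknown>;

// A few example renames from Vercel AI's `ai.*` attributes to the
// OpenTelemetry GenAI semantic conventions (assumed mapping, for illustration).
const GEN_AI_RENAMES: Record<string, string> = {
  'ai.model.id': 'gen_ai.request.model',
  'ai.usage.promptTokens': 'gen_ai.usage.input_tokens',
  'ai.usage.completionTokens': 'gen_ai.usage.output_tokens',
};

// Shared helper shape: attributes with a known `gen_ai.*` equivalent are
// renamed; remaining `ai.*` attributes move to the `vercel.ai.*` namespace;
// everything else passes through unchanged. Because this only reads one
// span's attributes, it works in both the transaction and streaming paths.
function processVercelAiSpanAttributes(attributes: Attributes): Attributes {
  const processed: Attributes = {};
  for (const [key, value] of Object.entries(attributes)) {
    if (key in GEN_AI_RENAMES) {
      processed[GEN_AI_RENAMES[key]] = value;
    } else if (key.startsWith('ai.')) {
      processed[`vercel.${key}`] = value; // `ai.foo` -> `vercel.ai.foo`
    } else {
      processed[key] = value;
    }
  }
  return processed;
}

// Tool-description lookup across sibling spans: at spanStart of a doGenerate
// span, parse its available tools and key them by that span's parent span ID.
// An execute_tool span shares the same parent (`invoke_agent`), so it can
// resolve its tool's description via its own parentSpanId.
const toolDescriptionsByParent = new Map<string, Map<string, string>>();

function recordAvailableTools(parentSpanId: string, availableToolsJson: string[]): void {
  const byName = new Map<string, string>();
  for (const toolJson of availableToolsJson) {
    const tool = JSON.parse(toolJson) as { name?: string; description?: string };
    if (tool.name && tool.description) {
      byName.set(tool.name, tool.description);
    }
  }
  toolDescriptionsByParent.set(parentSpanId, byName);
}

function lookupToolDescription(parentSpanId: string, toolName: string): string | undefined {
  return toolDescriptionsByParent.get(parentSpanId)?.get(toolName);
}
```

As the commit message notes, this lookup only works while `doGenerate` and `execute_tool` spans are siblings; a deeper hierarchy would need the map keyed differently.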
1 parent 12cd3e5 commit 01d0a70

17 files changed

Lines changed: 1095 additions & 64 deletions

File tree

dev-packages/node-integration-tests/suites/tracing/vercelai/scenario-error-in-tool.mjs

Lines changed: 2 additions & 0 deletions
@@ -35,6 +35,8 @@ async function run() {
       prompt: 'What is the weather in San Francisco?',
     });
   });
+
+  await Sentry.flush(2000);
 }

 run();

dev-packages/node-integration-tests/suites/tracing/vercelai/scenario.mjs

Lines changed: 2 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -75,6 +75,8 @@ async function run() {
7575
prompt: 'Where is the third span?',
7676
});
7777
});
78+
79+
await Sentry.flush(2000);
7880
}
7981

8082
run();

dev-packages/node-integration-tests/suites/tracing/vercelai/instrument-streaming.mjs renamed to dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/instrument-with-pii.mjs

Lines changed: 1 addition & 0 deletions
@@ -8,4 +8,5 @@ Sentry.init({
   sendDefaultPii: true,
   transport: loggingTransport,
   traceLifecycle: 'stream',
+  integrations: [Sentry.vercelAIIntegration()],
 });

dev-packages/node-integration-tests/suites/tracing/vercelai/instrument-streaming-with-truncation.mjs renamed to dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/instrument-with-truncation.mjs

File renamed without changes.
Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+import * as Sentry from '@sentry/node';
+import { loggingTransport } from '@sentry-internal/node-integration-tests';
+
+Sentry.init({
+  dsn: 'https://public@dsn.ingest.sentry.io/1337',
+  release: '1.0',
+  tracesSampleRate: 1.0,
+  transport: loggingTransport,
+  traceLifecycle: 'stream',
+  integrations: [Sentry.vercelAIIntegration()],
+});
Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
+import * as Sentry from '@sentry/node';
+import { generateText } from 'ai';
+import { MockLanguageModelV1 } from 'ai/test';
+import { z } from 'zod';
+
+async function run() {
+  Sentry.setTag('test-tag', 'test-value');
+
+  await Sentry.startSpan({ op: 'function', name: 'main' }, async () => {
+    try {
+      await generateText({
+        model: new MockLanguageModelV1({
+          doGenerate: async () => ({
+            rawCall: { rawPrompt: null, rawSettings: {} },
+            finishReason: 'tool-calls',
+            usage: { promptTokens: 15, completionTokens: 25 },
+            text: 'Tool call completed!',
+            toolCalls: [
+              {
+                toolCallType: 'function',
+                toolCallId: 'call-1',
+                toolName: 'getWeather',
+                args: '{ "location": "San Francisco" }',
+              },
+            ],
+          }),
+        }),
+        tools: {
+          getWeather: {
+            parameters: z.object({ location: z.string() }),
+            execute: async () => {
+              throw new Error('Error in tool');
+            },
+          },
+        },
+        prompt: 'What is the weather in San Francisco?',
+      });
+    } catch {
+      // Expected error - we want the spans to still be flushed
+    }
+  });
+
+  await Sentry.flush(2000);
+}
+
+run();

dev-packages/node-integration-tests/suites/tracing/vercelai/scenario-span-streaming.mjs renamed to dev-packages/node-integration-tests/suites/tracing/vercelai/span-streaming-v4/scenario-truncation.mjs

File renamed without changes.
Lines changed: 82 additions & 0 deletions
@@ -0,0 +1,82 @@
+import * as Sentry from '@sentry/node';
+import { generateText } from 'ai';
+import { MockLanguageModelV1 } from 'ai/test';
+import { z } from 'zod';
+
+async function run() {
+  await Sentry.startSpan({ op: 'function', name: 'main' }, async () => {
+    await generateText({
+      model: new MockLanguageModelV1({
+        doGenerate: async () => ({
+          rawCall: { rawPrompt: null, rawSettings: {} },
+          finishReason: 'stop',
+          usage: { promptTokens: 10, completionTokens: 20 },
+          text: 'First span here!',
+        }),
+      }),
+      prompt: 'Where is the first span?',
+    });
+
+    // This span should have input and output prompts attached because telemetry is explicitly enabled.
+    await generateText({
+      experimental_telemetry: { isEnabled: true },
+      model: new MockLanguageModelV1({
+        doGenerate: async () => ({
+          rawCall: { rawPrompt: null, rawSettings: {} },
+          finishReason: 'stop',
+          usage: { promptTokens: 10, completionTokens: 20 },
+          text: 'Second span here!',
+        }),
+      }),
+      prompt: 'Where is the second span?',
+    });
+
+    // This span should include tool calls and tool results
+    await generateText({
+      model: new MockLanguageModelV1({
+        doGenerate: async () => ({
+          rawCall: { rawPrompt: null, rawSettings: {} },
+          finishReason: 'tool-calls',
+          usage: { promptTokens: 15, completionTokens: 25 },
+          text: 'Tool call completed!',
+          toolCalls: [
+            {
+              toolCallType: 'function',
+              toolCallId: 'call-1',
+              toolName: 'getWeather',
+              args: '{ "location": "San Francisco" }',
+            },
+          ],
+        }),
+      }),
+      tools: {
+        getWeather: {
+          description: 'Get the current weather for a location',
+          parameters: z.object({ location: z.string() }),
+          execute: async args => {
+            return `Weather in ${args.location}: Sunny, 72°F`;
+          },
+        },
+      },
+      prompt: 'What is the weather in San Francisco?',
+    });
+
+    // This span should not be captured because we've disabled telemetry
+    await generateText({
+      experimental_telemetry: { isEnabled: false },
+      model: new MockLanguageModelV1({
+        doGenerate: async () => ({
+          rawCall: { rawPrompt: null, rawSettings: {} },
+          finishReason: 'stop',
+          usage: { promptTokens: 10, completionTokens: 20 },
+          text: 'Third span here!',
+        }),
+      }),
+      prompt: 'Where is the third span?',
+    });
+  });
+
+  await Sentry.flush(2000);
+}
+
+run();
