Commit c5e3249

feat(core): Add scope-level conversation ID API to support linking AI conversations (#18909)
Introduces a new `Sentry.setConversationId()` API to track multi-turn AI conversations across API calls.

> We want to leverage each AI framework's built-in functionality wherever possible (OpenAI's `conversation_id`, sessions, etc.) to create the `gen_ai.conversation.id` attribute. However, if the framework does not provide such a mechanism (as with Google GenAI or Anthropic), we shall provide a common function, `Sentry.setConversationId(...)` or `sentry_sdk.set_conversation_id()`, which adds the conversation ID to the Scope in a similar way as `Sentry.setUser()` and `sentry_sdk.set_user()` do. When provided in such a way, it will override any automatically detected value.

_Why not just add this as an attribute?_ It should only appear as a span attribute, not propagate to logs and metrics. This keeps AI conversation context isolated to spans where it is semantically relevant.

- Related to: https://linear.app/getsentry/issue/TET-1736/python-sdk-add-gen-aiconversationid-to-the-integrations-where-it-is#comment-77bf901d
- Closes: https://linear.app/getsentry/issue/JS-1515/implement-a-new-sentrysetconversationid-api
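The scope mechanics described above can be sketched in plain JavaScript. This is a minimal model, not the SDK's actual implementation: the `Scope` class, `withScope`, and `startSpan` here are simplified stand-ins that only illustrate how a scope-stored conversation ID ends up on spans as `gen_ai.conversation.id`.

```javascript
// Minimal stand-in for the SDK's Scope: it carries a conversation ID
// and forked child scopes inherit the parent's value.
class Scope {
  constructor(parent) {
    this.conversationId = parent ? parent.conversationId : undefined;
  }
  setConversationId(id) {
    this.conversationId = id;
  }
}

let currentScope = new Scope();

// Run a callback with a forked scope, mirroring Sentry.withScope():
// changes made inside the callback do not leak to the outer scope.
function withScope(callback) {
  const previous = currentScope;
  currentScope = new Scope(previous);
  try {
    return callback(currentScope);
  } finally {
    currentScope = previous;
  }
}

// On span start, an integration copies the active scope's conversation ID
// onto the span as the gen_ai.conversation.id attribute.
function startSpan(name) {
  const span = { name, attributes: {} };
  if (currentScope.conversationId !== undefined) {
    span.attributes['gen_ai.conversation.id'] = currentScope.conversationId;
  }
  return span;
}

currentScope.setConversationId('conv_abc123');
const outer = startSpan('chat');
const inner = withScope(scope => {
  scope.setConversationId('conv_xyz789'); // overrides within this scope only
  return startSpan('chat-forked');
});
console.log(outer.attributes['gen_ai.conversation.id']); // 'conv_abc123'
console.log(inner.attributes['gen_ai.conversation.id']); // 'conv_xyz789'
```

Because the ID lives on the scope rather than being a free-floating attribute, it is stamped only onto spans and never propagates to logs or metrics, which is the isolation property the commit message calls out.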
1 parent: e0ee7b4

File tree

24 files changed: +646 −7 lines

.size-limit.js

Lines changed: 7 additions & 7 deletions
```diff
@@ -96,21 +96,21 @@ module.exports = [
   path: 'packages/browser/build/npm/esm/prod/index.js',
   import: createImport('init', 'feedbackIntegration'),
   gzip: true,
-  limit: '42 KB',
+  limit: '43 KB',
 },
 {
   name: '@sentry/browser (incl. sendFeedback)',
   path: 'packages/browser/build/npm/esm/prod/index.js',
   import: createImport('init', 'sendFeedback'),
   gzip: true,
-  limit: '30 KB',
+  limit: '31 KB',
 },
 {
   name: '@sentry/browser (incl. FeedbackAsync)',
   path: 'packages/browser/build/npm/esm/prod/index.js',
   import: createImport('init', 'feedbackAsyncIntegration'),
   gzip: true,
-  limit: '35 KB',
+  limit: '36 KB',
 },
 {
   name: '@sentry/browser (incl. Metrics)',
@@ -140,7 +140,7 @@ module.exports = [
   import: createImport('init', 'ErrorBoundary'),
   ignore: ['react/jsx-runtime'],
   gzip: true,
-  limit: '27 KB',
+  limit: '28 KB',
 },
 {
   name: '@sentry/react (incl. Tracing)',
@@ -208,7 +208,7 @@ module.exports = [
   name: 'CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics)',
   path: createCDNPath('bundle.tracing.replay.feedback.logs.metrics.min.js'),
   gzip: true,
-  limit: '86 KB',
+  limit: '87 KB',
 },
 // browser CDN bundles (non-gzipped)
 {
@@ -223,7 +223,7 @@ module.exports = [
   path: createCDNPath('bundle.tracing.min.js'),
   gzip: false,
   brotli: false,
-  limit: '127 KB',
+  limit: '128 KB',
 },
 {
   name: 'CDN Bundle (incl. Tracing, Logs, Metrics) - uncompressed',
@@ -278,7 +278,7 @@ module.exports = [
   import: createImport('init'),
   ignore: [...builtinModules, ...nodePrefixedBuiltinModules],
   gzip: true,
-  limit: '52 KB',
+  limit: '53 KB',
 },
 // Node SDK (ESM)
 {
```

CHANGELOG.md

Lines changed: 17 additions & 0 deletions
````diff
@@ -6,6 +6,23 @@

 Work in this release was contributed by @sebws and @harshit078. Thank you for your contributions!

+- **feat(core): Introduces a new `Sentry.setConversationId()` API to track multi-turn AI conversations across API calls. ([#18909](https://github.com/getsentry/sentry-javascript/pull/18909))**
+
+  You can now set a conversation ID that will be automatically applied to spans within that scope. This allows you to link traces from the same conversation together.
+
+  ```javascript
+  import * as Sentry from '@sentry/node';
+
+  // Set conversation ID for all subsequent spans
+  Sentry.setConversationId('conv_abc123');
+
+  // All AI spans will now include the gen_ai.conversation.id attribute
+  await openai.chat.completions.create({...});
+  ```
+
+  This is particularly useful for tracking multiple AI API calls that are part of the same conversation, allowing you to analyze entire conversation flows in Sentry. The conversation ID is stored on the isolation scope and automatically applied to spans via the new `conversationIdIntegration`.
+
 - **feat(tanstackstart-react): Auto-instrument global middleware in `sentryTanstackStart` Vite plugin ([#18884](https://github.com/getsentry/sentry-javascript/pull/18844))**

   The `sentryTanstackStart` Vite plugin now automatically instruments `requestMiddleware` and `functionMiddleware` arrays in `createStart()`. This captures performance data without requiring manual wrapping.
````
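The commit message states that a manually set conversation ID overrides any value auto-detected from the AI framework (such as OpenAI's `conversation_id`). That precedence rule can be sketched as a small helper; `resolveConversationId` is a hypothetical illustration, not an SDK export:

```javascript
// Precedence rule: a conversation ID set manually on the scope wins over
// one auto-detected from the framework; fall back to the detected value
// only when nothing was set manually.
function resolveConversationId(scopeConversationId, autoDetectedId) {
  return scopeConversationId !== undefined ? scopeConversationId : autoDetectedId;
}

console.log(resolveConversationId(undefined, 'conv_from_openai')); // 'conv_from_openai'
console.log(resolveConversationId('conv_manual', 'conv_from_openai')); // 'conv_manual'
```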

dev-packages/browser-integration-tests/suites/public-api/debug/test.ts

Lines changed: 1 addition & 0 deletions
```diff
@@ -24,6 +24,7 @@ sentryTest('logs debug messages correctly', async ({ getLocalTestUrl, page }) =>
   ? [
       'Sentry Logger [log]: Integration installed: InboundFilters',
       'Sentry Logger [log]: Integration installed: FunctionToString',
+      'Sentry Logger [log]: Integration installed: ConversationId',
       'Sentry Logger [log]: Integration installed: BrowserApiErrors',
       'Sentry Logger [log]: Integration installed: Breadcrumbs',
       'Sentry Logger [log]: Global Handler attached: onerror',
```
Lines changed: 79 additions & 0 deletions
```javascript
import * as Sentry from '@sentry/node';
import express from 'express';
import OpenAI from 'openai';

function startMockServer() {
  const app = express();
  app.use(express.json());

  // Chat completions endpoint
  app.post('/openai/chat/completions', (req, res) => {
    const { model } = req.body;

    res.send({
      id: 'chatcmpl-mock123',
      object: 'chat.completion',
      created: 1677652288,
      model: model,
      choices: [
        {
          index: 0,
          message: {
            role: 'assistant',
            content: 'Mock response from OpenAI',
          },
          finish_reason: 'stop',
        },
      ],
      usage: {
        prompt_tokens: 10,
        completion_tokens: 15,
        total_tokens: 25,
      },
    });
  });

  return new Promise(resolve => {
    const server = app.listen(0, () => {
      resolve(server);
    });
  });
}

async function run() {
  const server = await startMockServer();

  // Test: Multiple chat completions in the same conversation with manual conversation ID
  await Sentry.startSpan({ op: 'function', name: 'chat-with-manual-conversation-id' }, async () => {
    const client = new OpenAI({
      baseURL: `http://localhost:${server.address().port}/openai`,
      apiKey: 'mock-api-key',
    });

    // Set conversation ID manually using Sentry API
    Sentry.setConversationId('user_chat_session_abc123');

    // First message in the conversation
    await client.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: 'What is the capital of France?' }],
    });

    // Second message in the same conversation
    await client.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: 'Tell me more about it' }],
    });

    // Third message in the same conversation
    await client.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: 'What is its population?' }],
    });
  });

  server.close();
  await Sentry.flush(2000);
}

run();
```
Lines changed: 74 additions & 0 deletions
```javascript
import * as Sentry from '@sentry/node';
import express from 'express';
import OpenAI from 'openai';

function startMockServer() {
  const app = express();
  app.use(express.json());

  // Chat completions endpoint
  app.post('/openai/chat/completions', (req, res) => {
    const { model } = req.body;

    res.send({
      id: 'chatcmpl-mock123',
      object: 'chat.completion',
      created: 1677652288,
      model: model,
      choices: [
        {
          index: 0,
          message: {
            role: 'assistant',
            content: 'Mock response from OpenAI',
          },
          finish_reason: 'stop',
        },
      ],
      usage: {
        prompt_tokens: 10,
        completion_tokens: 15,
        total_tokens: 25,
      },
    });
  });

  return new Promise(resolve => {
    const server = app.listen(0, () => {
      resolve(server);
    });
  });
}

async function run() {
  const server = await startMockServer();
  const client = new OpenAI({
    baseURL: `http://localhost:${server.address().port}/openai`,
    apiKey: 'mock-api-key',
  });

  // First request/conversation scope
  await Sentry.withScope(async scope => {
    // Set conversation ID for this request scope BEFORE starting the span
    scope.setConversationId('conv_user1_session_abc');

    await Sentry.startSpan({ op: 'http.server', name: 'GET /chat/conversation-1' }, async () => {
      // First message in conversation 1
      await client.chat.completions.create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: 'Hello from conversation 1' }],
      });

      // Second message in conversation 1
      await client.chat.completions.create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: 'Follow-up in conversation 1' }],
      });
    });
  });

  server.close();
  await Sentry.flush(2000);
}

run();
```
Lines changed: 74 additions & 0 deletions
```javascript
import * as Sentry from '@sentry/node';
import express from 'express';
import OpenAI from 'openai';

function startMockServer() {
  const app = express();
  app.use(express.json());

  // Chat completions endpoint
  app.post('/openai/chat/completions', (req, res) => {
    const { model } = req.body;

    res.send({
      id: 'chatcmpl-mock123',
      object: 'chat.completion',
      created: 1677652288,
      model: model,
      choices: [
        {
          index: 0,
          message: {
            role: 'assistant',
            content: 'Mock response from OpenAI',
          },
          finish_reason: 'stop',
        },
      ],
      usage: {
        prompt_tokens: 10,
        completion_tokens: 15,
        total_tokens: 25,
      },
    });
  });

  return new Promise(resolve => {
    const server = app.listen(0, () => {
      resolve(server);
    });
  });
}

async function run() {
  const server = await startMockServer();
  const client = new OpenAI({
    baseURL: `http://localhost:${server.address().port}/openai`,
    apiKey: 'mock-api-key',
  });

  // Second request/conversation scope (completely separate)
  await Sentry.withScope(async scope => {
    // Set different conversation ID for this request scope BEFORE starting the span
    scope.setConversationId('conv_user2_session_xyz');

    await Sentry.startSpan({ op: 'http.server', name: 'GET /chat/conversation-2' }, async () => {
      // First message in conversation 2
      await client.chat.completions.create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: 'Hello from conversation 2' }],
      });

      // Second message in conversation 2
      await client.chat.completions.create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: 'Follow-up in conversation 2' }],
      });
    });
  });

  server.close();
  await Sentry.flush(2000);
}

run();
```
