
feat(core): simplify truncation logic to only keep the newest message #42233

Triggered via pull request, January 20, 2026 15:46
Status: Failure
Total duration: 18m 21s
Artifacts: 1

build.yml

on: pull_request
Get Metadata (8s)
Check file formatting (1m 17s)
Check PR branches (4s)
Prepare E2E tests (2m 19s)
Matrix: job_browser_loader_tests
Matrix: job_browser_playwright_tests
Matrix: job_node_integration_tests
Matrix: job_node_unit_tests
Matrix: job_remix_integration_tests
Size Check (2m 1s)
Lint (7m 10s)
Circular Dependency Check (1m 26s)
Upload Artifacts (0s)
Browser Unit Tests (3m 23s)
Bun Unit Tests (40s)
Deno Unit Tests (57s)
Cloudflare Integration Tests (53s)
Node Overhead Check (3m 26s)
Check for faulty .d.ts files (36s)
Matrix: job_e2e_tests
Matrix: job_optional_e2e_tests
All required jobs passed or were skipped (3s)

Annotations

48 errors and 167 warnings
E2E node-express-send-to-sentry Test (optional)
Process completed with exit code 1.
Node (22) Integration Tests
Process completed with exit code 1.
suites/tracing/openai/v6/test.ts > OpenAI integration (V6) > esm/cjs > cjs > creates openai related spans with sendDefaultPii: true (v6): dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "93645306b3b87111", + "span_id": "8160a9c7d78ad46b", + "start_timestamp": 1768924494.607, "status": "ok", + "timestamp": 1768924494.7275178, + "trace_id": "1f95ee7e43f9afdfd95f810a51127797", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 42163, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 42163, + "url": "http://localhost:42163/openai/chat/completions", + "url.full": "http://localhost:42163/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 6.0.0", + }, + "description": "POST http://localhost:42163/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"8160a9c7d78ad46b", + "span_id": "06b93b4ab52222eb", + "start_timestamp": 1768924494.644, + "status": "ok", + "timestamp": 1768924494.7226727, + "trace_id": "1f95ee7e43f9afdfd95f810a51127797", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "93645306b3b87111", + "span_id": "4a185c400e9c0290", + "start_timestamp": 1768924494.728, "status": "ok", + "timestamp": 1768924494.740445, + "trace_id": "1f95ee7e43f9afdfd95f810a51127797", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 42163, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 42163, + "url": "http://localhost:42163/openai/responses", + "url.full": "http://localhost:42163/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "O
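The diff above shows the behavior change driving these failures: `gen_ai.request.messages` now carries only the final message of the conversation, with the original count reported separately as `gen_ai.request.messages.original_length`. A minimal sketch of that truncation rule, assuming "newest" means the last array element (the names `truncateMessages` and `Message` are illustrative, not the SDK's actual API):

```typescript
// Illustrative sketch of the truncation implied by the assertion diffs:
// keep only the newest (last) message and record the original count.
interface Message {
  role: string;
  content: string;
}

function truncateMessages(messages: Message[]): { kept: Message[]; originalLength: number } {
  const originalLength = messages.length;
  // Keep only the last element; an empty input stays empty.
  const kept = originalLength > 0 ? [messages[originalLength - 1]] : [];
  return { kept, originalLength };
}

const { kept, originalLength } = truncateMessages([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is the capital of France?' },
]);
console.log(JSON.stringify(kept)); // only the user message survives
console.log(originalLength);       // 2
```

Under this rule the system prompt is dropped whenever more than one message is present, which is exactly the delta every assertion below reports.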
suites/tracing/openai/v6/test.ts > OpenAI integration (V6) > esm/cjs > esm > creates openai related spans with sendDefaultPii: true (v6): dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "9258326ffd050acc", + "span_id": "7474b9a208bb77a4", + "start_timestamp": 1768924491.259, "status": "ok", + "timestamp": 1768924491.3302522, + "trace_id": "895e5fcc55b81243e2a3367c3278530a", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 34067, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 34067, + "url": "http://localhost:34067/openai/chat/completions", + "url.full": "http://localhost:34067/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 6.0.0", + }, + "description": "POST http://localhost:34067/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"7474b9a208bb77a4", + "span_id": "e32841645dc32287", + "start_timestamp": 1768924491.274, + "status": "ok", + "timestamp": 1768924491.3261037, + "trace_id": "895e5fcc55b81243e2a3367c3278530a", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "9258326ffd050acc", + "span_id": "41460ae70c81f6ef", + "start_timestamp": 1768924491.33, "status": "ok", + "timestamp": 1768924491.3478138, + "trace_id": "895e5fcc55b81243e2a3367c3278530a", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 34067, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 34067, + "url": "http://localhost:34067/openai/responses", + "url.full": "http://localhost:34067/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "O
suites/tracing/openai/test.ts > OpenAI integration > esm/cjs > cjs > creates openai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "8b12bcc054732335", + "span_id": "20881cd85c1bce21", + "start_timestamp": 1768924457.064, "status": "ok", + "timestamp": 1768924457.1597543, + "trace_id": "eccb5e67d60444dd6918fea72d825d25", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 32777, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 32777, + "url": "http://localhost:32777/openai/chat/completions", + "url.full": "http://localhost:32777/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 5.18.1", + }, + "description": "POST http://localhost:32777/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"20881cd85c1bce21", + "span_id": "58de222fb07ea3da", + "start_timestamp": 1768924457.103, + "status": "ok", + "timestamp": 1768924457.1559904, + "trace_id": "eccb5e67d60444dd6918fea72d825d25", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "8b12bcc054732335", + "span_id": "e9b46df320db5db5", + "start_timestamp": 1768924457.16, "status": "ok", + "timestamp": 1768924457.178601, + "trace_id": "eccb5e67d60444dd6918fea72d825d25", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 32777, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 32777, + "url": "http://localhost:32777/openai/responses", + "url.full": "http://localhost:32777/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "O
suites/tracing/openai/test.ts > OpenAI integration > esm/cjs > esm > creates openai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "adb3d35b9c1cc7e9", + "span_id": "e2b65039518640ec", + "start_timestamp": 1768924453.523, "status": "ok", + "timestamp": 1768924453.5767422, + "trace_id": "9bf8bf5439b35e7c334e0d224f6a448b", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 34589, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 34589, + "url": "http://localhost:34589/openai/chat/completions", + "url.full": "http://localhost:34589/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 5.18.1", + }, + "description": "POST http://localhost:34589/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"e2b65039518640ec", + "span_id": "f33da19a6fbc4d85", + "start_timestamp": 1768924453.533, + "status": "ok", + "timestamp": 1768924453.5735226, + "trace_id": "9bf8bf5439b35e7c334e0d224f6a448b", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "adb3d35b9c1cc7e9", + "span_id": "17481c3aaaea8330", + "start_timestamp": 1768924453.577, "status": "ok", + "timestamp": 1768924453.593889, + "trace_id": "9bf8bf5439b35e7c334e0d224f6a448b", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 34589, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 34589, + "url": "http://localhost:34589/openai/responses", + "url.full": "http://localhost:34589/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "
suites/tracing/google-genai/test.ts > Google GenAI integration > esm/cjs > cjs > creates google genai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,77 +1,191 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "chat", "gen_ai.request.max_tokens": 150, - "gen_ai.request.messages": StringMatching /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/, + "gen_ai.request.messages": "[{\"role\":\"user\",\"parts\":[{\"text\":\"Hello, how are you?\"}]}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gemini-1.5-pro", "gen_ai.request.temperature": 0.8, "gen_ai.request.top_p": 0.9, "gen_ai.system": "google_genai", "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro create", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "3a91d733c8a7afcb", + "span_id": "38fbe8c3a230228d", + "start_timestamp": 1768924491.249, "status": "ok", + "timestamp": 1768924491.2502856, + "trace_id": "7a127910e1039ca7580cf1863c50490a", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": Any<String>, + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"Tell me a joke\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "gemini-1.5-pro", - "gen_ai.response.text": Any<String>, + "gen_ai.response.text": "Mock response from Google GenAI!", "gen_ai.system": "google_genai", "gen_ai.usage.input_tokens": 8, "gen_ai.usage.output_tokens": 12, "gen_ai.usage.total_tokens": 20, "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "3a91d733c8a7afcb", + "span_id": "5fd9f0dc7055eb78", + 
"start_timestamp": 1768924491.251, "status": "ok", + "timestamp": 1768924491.35589, + "trace_id": "7a127910e1039ca7580cf1863c50490a", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 217, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 34011, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 34011, + "url": "http://localhost:34011/v1beta/models/gemini-1.5-pro:generateContent", + "url.full": "http://localhost:34011/v1beta/models/gemini-1.5-pro:generateContent", + "url.path": "/v1beta/models/gemini-1.5-pro:generateContent", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "google-genai-sdk/1.20.0 gl-node/v22.21.1", + }, + "description": "POST http://localhost:34011/v1beta/models/gemini-1.5-pro:generateContent", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": "5fd9f0dc7055eb78", + "span_id": "09a40339bb9ab4ae", + "start_timestamp": 1768924491.285, + "status": "ok", + "timestamp": 1768924491.3516421, + "trace_id": "7a127910e1039ca7580cf1863c50490a", + }, + { + "data": { "gen_ai.operation.name": "models", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": Any
suites/tracing/google-genai/test.ts > Google GenAI integration > esm/cjs > esm > creates google genai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,77 +1,191 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "chat", "gen_ai.request.max_tokens": 150, - "gen_ai.request.messages": StringMatching /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/, + "gen_ai.request.messages": "[{\"role\":\"user\",\"parts\":[{\"text\":\"Hello, how are you?\"}]}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gemini-1.5-pro", "gen_ai.request.temperature": 0.8, "gen_ai.request.top_p": 0.9, "gen_ai.system": "google_genai", "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro create", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "29806a1aa606eb45", + "span_id": "51bd9ac2a6605004", + "start_timestamp": 1768924490.361, "status": "ok", + "timestamp": 1768924490.3624253, + "trace_id": "9ade122a9edcb54480bd6709669376f1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": Any<String>, + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"Tell me a joke\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "gemini-1.5-pro", - "gen_ai.response.text": Any<String>, + "gen_ai.response.text": "Mock response from Google GenAI!", "gen_ai.system": "google_genai", "gen_ai.usage.input_tokens": 8, "gen_ai.usage.output_tokens": 12, "gen_ai.usage.total_tokens": 20, "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "29806a1aa606eb45", + "span_id": "3fca898883603dc2", + 
"start_timestamp": 1768924490.363, "status": "ok", + "timestamp": 1768924490.4368696, + "trace_id": "9ade122a9edcb54480bd6709669376f1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 217, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 40089, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 40089, + "url": "http://localhost:40089/v1beta/models/gemini-1.5-pro:generateContent", + "url.full": "http://localhost:40089/v1beta/models/gemini-1.5-pro:generateContent", + "url.path": "/v1beta/models/gemini-1.5-pro:generateContent", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "google-genai-sdk/1.20.0 gl-node/v22.21.1", + }, + "description": "POST http://localhost:40089/v1beta/models/gemini-1.5-pro:generateContent", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": "3fca898883603dc2", + "span_id": "3df68563271d022b", + "start_timestamp": 1768924490.377, + "status": "ok", + "timestamp": 1768924490.4315622, + "trace_id": "9ade122a9edcb54480bd6709669376f1", + }, + { + "data": { "gen_ai.operation.name": "models", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": A
suites/tracing/anthropic/test.ts > Anthropic integration > esm/cjs > cjs > creates anthropic related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,13 +1,14 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "claude-3-haiku-20240307", "gen_ai.request.temperature": 0.7, "gen_ai.response.id": "msg_mock123", "gen_ai.response.model": "claude-3-haiku-20240307", "gen_ai.response.text": "Hello from Anthropic mock!", @@ -19,94 +20,150 @@ "sentry.origin": "auto.ai.anthropic", }, "description": "messages claude-3-haiku-20240307", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "caab8ec389a28ef2", + "span_id": "2a284ae684f5fb38", + "start_timestamp": 1768924457.511, "status": "ok", + "timestamp": 1768924457.577613, + "trace_id": "0202b082ad030e4d1923356c64fd63a1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 247, "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 44293, "otel.kind": "CLIENT", "sentry.op": "http.client", "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 44293, + "url": "http://localhost:44293/anthropic/v1/messages", + "url.full": "http://localhost:44293/anthropic/v1/messages", "url.path": "/anthropic/v1/messages", "url.query": "", "url.scheme": "http", + "user_agent.original": "Anthropic/JS 0.63.0", 
}, + "description": "POST http://localhost:44293/anthropic/v1/messages", "op": "http.client", "origin": "auto.http.otel.node_fetch", + "parent_span_id": "2a284ae684f5fb38", + "span_id": "a1fb7fc7f7b59b0c", + "start_timestamp": 1768924457.534, "status": "ok", + "timestamp": 1768924457.5751324, + "trace_id": "0202b082ad030e4d1923356c64fd63a1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"This will fail\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "error-model", "gen_ai.system": "anthropic", "sentry.op": "gen_ai.messages", "sentry.origin": "auto.ai.anthropic", }, "description": "messages error-model", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "caab8ec389a28ef2", + "span_id": "97067132c5f9885d", + "start_timestamp": 1768924457.578, "status": "internal_error", + "timestamp": 1768924457.596083, + "trace_id": "0202b082ad030e4d1923356c64fd63a1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 15, "http.response.status_code": 404, + "network.peer.address": "::1", + "network.peer.port": 44293, "otel.kind": "CLIENT", "sentry.op": "http.client",
suites/tracing/anthropic/test.ts > Anthropic integration > esm/cjs > esm > creates anthropic related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,13 +1,14 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "claude-3-haiku-20240307", "gen_ai.request.temperature": 0.7, "gen_ai.response.id": "msg_mock123", "gen_ai.response.model": "claude-3-haiku-20240307", "gen_ai.response.text": "Hello from Anthropic mock!", @@ -19,94 +20,150 @@ "sentry.origin": "auto.ai.anthropic", }, "description": "messages claude-3-haiku-20240307", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "2bef4846e02f8ee2", + "span_id": "af98bf1baca9ee3e", + "start_timestamp": 1768924456.775, "status": "ok", + "timestamp": 1768924456.8547006, + "trace_id": "4625a11bfb3ce0ba5bd859cd9cc37c78", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 247, "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 39417, "otel.kind": "CLIENT", "sentry.op": "http.client", "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 39417, + "url": "http://localhost:39417/anthropic/v1/messages", + "url.full": "http://localhost:39417/anthropic/v1/messages", "url.path": "/anthropic/v1/messages", "url.query": "", "url.scheme": "http", + "user_agent.original": "Anthropic/JS 0.63.0", 
}, + "description": "POST http://localhost:39417/anthropic/v1/messages", "op": "http.client", "origin": "auto.http.otel.node_fetch", + "parent_span_id": "af98bf1baca9ee3e", + "span_id": "9ed7bda4e52953a1", + "start_timestamp": 1768924456.791, "status": "ok", + "timestamp": 1768924456.849801, + "trace_id": "4625a11bfb3ce0ba5bd859cd9cc37c78", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"This will fail\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "error-model", "gen_ai.system": "anthropic", "sentry.op": "gen_ai.messages", "sentry.origin": "auto.ai.anthropic", }, "description": "messages error-model", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "2bef4846e02f8ee2", + "span_id": "08b129bd4633c045", + "start_timestamp": 1768924456.855, "status": "internal_error", + "timestamp": 1768924456.8800292, + "trace_id": "4625a11bfb3ce0ba5bd859cd9cc37c78", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 15, "http.response.status_code": 404, + "network.peer.address": "::1", + "network.peer.port": 39417, "otel.kind": "CLIENT", "sentry.op": "http.client"
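All of the Node (22) failures above share one shape: the expected `gen_ai.request.messages` string still serializes the full conversation, while the instrumented span now emits only the newest message plus `gen_ai.request.messages.original_length`. A sketch of the expectation update, using the concrete values from the diffs (illustrative only; the real expectations live in the test suites under dev-packages/node-integration-tests):

```typescript
// Derive the old and new expected values for gen_ai.request.messages,
// using the message payload shown in the assertion diffs above.
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is the capital of France?' },
];

// Before the PR: the whole conversation was serialized.
const oldExpected = JSON.stringify(messages);

// After the PR: only the newest message is serialized, and the
// original count is asserted separately via original_length.
const newExpected = JSON.stringify([messages[messages.length - 1]]);
const originalLength = messages.length;

console.log(newExpected);    // [{"role":"user","content":"What is the capital of France?"}]
console.log(originalLength); // 2
```

Updating each suite's expected string to `newExpected` (and asserting `original_length` alongside it) would bring the expectations in line with the new truncation behavior.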
Node (24) Integration Tests
Process completed with exit code 1.
suites/tracing/openai/v6/test.ts > OpenAI integration (V6) > esm/cjs > cjs > creates openai related spans with sendDefaultPii: true (v6): dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "fba3b6cb738c1f52", + "span_id": "195598ffe4db0e6b", + "start_timestamp": 1768924490.687, "status": "ok", + "timestamp": 1768924490.7757094, + "trace_id": "f25ab4f4ebe448d43602325fefb9ae64", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 38953, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 38953, + "url": "http://localhost:38953/openai/chat/completions", + "url.full": "http://localhost:38953/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 6.0.0", + }, + "description": "POST http://localhost:38953/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"195598ffe4db0e6b", + "span_id": "dbb59939852dcd89", + "start_timestamp": 1768924490.714, + "status": "ok", + "timestamp": 1768924490.7713027, + "trace_id": "f25ab4f4ebe448d43602325fefb9ae64", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "fba3b6cb738c1f52", + "span_id": "03b0c93cd09ef18f", + "start_timestamp": 1768924490.777, "status": "ok", + "timestamp": 1768924490.79545, + "trace_id": "f25ab4f4ebe448d43602325fefb9ae64", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 38953, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 38953, + "url": "http://localhost:38953/openai/responses", + "url.full": "http://localhost:38953/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "Op
suites/tracing/openai/v6/test.ts > OpenAI integration (V6) > esm/cjs > esm > creates openai related spans with sendDefaultPii: true (v6): dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "48381be48e8dd0de", + "span_id": "b4d73c598c21e413", + "start_timestamp": 1768924487.218, "status": "ok", + "timestamp": 1768924487.3067925, + "trace_id": "1f14d9b31fddfd1c87a8e50e51543792", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 38489, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 38489, + "url": "http://localhost:38489/openai/chat/completions", + "url.full": "http://localhost:38489/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 6.0.0", + }, + "description": "POST http://localhost:38489/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"b4d73c598c21e413", + "span_id": "c964ff26b7724805", + "start_timestamp": 1768924487.236, + "status": "ok", + "timestamp": 1768924487.3023708, + "trace_id": "1f14d9b31fddfd1c87a8e50e51543792", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "48381be48e8dd0de", + "span_id": "3d84ae3973d6aff2", + "start_timestamp": 1768924487.307, "status": "ok", + "timestamp": 1768924487.329372, + "trace_id": "1f14d9b31fddfd1c87a8e50e51543792", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 38489, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 38489, + "url": "http://localhost:38489/openai/responses", + "url.full": "http://localhost:38489/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "O
suites/tracing/openai/test.ts > OpenAI integration > esm/cjs > cjs > creates openai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "40a48ec85f243da3", + "span_id": "61d98698228b4042", + "start_timestamp": 1768924451.834, "status": "ok", + "timestamp": 1768924451.9296842, + "trace_id": "87577531f198d053ff6bf7135f8982a2", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 42603, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 42603, + "url": "http://localhost:42603/openai/chat/completions", + "url.full": "http://localhost:42603/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 5.18.1", + }, + "description": "POST http://localhost:42603/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"61d98698228b4042", + "span_id": "80ec16dfb0b7140a", + "start_timestamp": 1768924451.868, + "status": "ok", + "timestamp": 1768924451.9265242, + "trace_id": "87577531f198d053ff6bf7135f8982a2", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "40a48ec85f243da3", + "span_id": "4b241b440684aa03", + "start_timestamp": 1768924451.93, "status": "ok", + "timestamp": 1768924451.939485, + "trace_id": "87577531f198d053ff6bf7135f8982a2", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 42603, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 42603, + "url": "http://localhost:42603/openai/responses", + "url.full": "http://localhost:42603/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "O
suites/tracing/openai/test.ts > OpenAI integration > esm/cjs > esm > creates openai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "6156435b7bc2e3a9", + "span_id": "68ed6cc0f39c6506", + "start_timestamp": 1768924448.197, "status": "ok", + "timestamp": 1768924448.2807724, + "trace_id": "d43fe59da3e9e21848402a7f4adf7542", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 35967, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 35967, + "url": "http://localhost:35967/openai/chat/completions", + "url.full": "http://localhost:35967/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 5.18.1", + }, + "description": "POST http://localhost:35967/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"68ed6cc0f39c6506", + "span_id": "ce1e54240b49d671", + "start_timestamp": 1768924448.211, + "status": "ok", + "timestamp": 1768924448.2765615, + "trace_id": "d43fe59da3e9e21848402a7f4adf7542", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "6156435b7bc2e3a9", + "span_id": "a07601592774ab6a", + "start_timestamp": 1768924448.281, "status": "ok", + "timestamp": 1768924448.298135, + "trace_id": "d43fe59da3e9e21848402a7f4adf7542", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 35967, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 35967, + "url": "http://localhost:35967/openai/responses", + "url.full": "http://localhost:35967/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "
suites/tracing/google-genai/test.ts > Google GenAI integration > esm/cjs > cjs > creates google genai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,77 +1,191 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "chat", "gen_ai.request.max_tokens": 150, - "gen_ai.request.messages": StringMatching /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/, + "gen_ai.request.messages": "[{\"role\":\"user\",\"parts\":[{\"text\":\"Hello, how are you?\"}]}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gemini-1.5-pro", "gen_ai.request.temperature": 0.8, "gen_ai.request.top_p": 0.9, "gen_ai.system": "google_genai", "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro create", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "b91213a0d17eb2c3", + "span_id": "fa337ccd0bb9d058", + "start_timestamp": 1768924488.302, "status": "ok", + "timestamp": 1768924488.3033812, + "trace_id": "411f685240b095a142d5d8867138d4e6", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": Any<String>, + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"Tell me a joke\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "gemini-1.5-pro", - "gen_ai.response.text": Any<String>, + "gen_ai.response.text": "Mock response from Google GenAI!", "gen_ai.system": "google_genai", "gen_ai.usage.input_tokens": 8, "gen_ai.usage.output_tokens": 12, "gen_ai.usage.total_tokens": 20, "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "b91213a0d17eb2c3", + "span_id": "0990182ed124cf9b", + 
"start_timestamp": 1768924488.304, "status": "ok", + "timestamp": 1768924488.382698, + "trace_id": "411f685240b095a142d5d8867138d4e6", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 217, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 37877, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 37877, + "url": "http://localhost:37877/v1beta/models/gemini-1.5-pro:generateContent", + "url.full": "http://localhost:37877/v1beta/models/gemini-1.5-pro:generateContent", + "url.path": "/v1beta/models/gemini-1.5-pro:generateContent", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "google-genai-sdk/1.20.0 gl-node/v24.12.0", + }, + "description": "POST http://localhost:37877/v1beta/models/gemini-1.5-pro:generateContent", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": "0990182ed124cf9b", + "span_id": "bb26e35684eccbfd", + "start_timestamp": 1768924488.331, + "status": "ok", + "timestamp": 1768924488.3789914, + "trace_id": "411f685240b095a142d5d8867138d4e6", + }, + { + "data": { "gen_ai.operation.name": "models", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": An
suites/tracing/google-genai/test.ts > Google GenAI integration > esm/cjs > esm > creates google genai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,77 +1,191 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "chat", "gen_ai.request.max_tokens": 150, - "gen_ai.request.messages": StringMatching /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/, + "gen_ai.request.messages": "[{\"role\":\"user\",\"parts\":[{\"text\":\"Hello, how are you?\"}]}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gemini-1.5-pro", "gen_ai.request.temperature": 0.8, "gen_ai.request.top_p": 0.9, "gen_ai.system": "google_genai", "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro create", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "12168be01a3445ed", + "span_id": "f6ef01f1e766305e", + "start_timestamp": 1768924487.547, "status": "ok", + "timestamp": 1768924487.5478873, + "trace_id": "d6cea1bdaaceccccc19a397b3b7568a0", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": Any<String>, + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"Tell me a joke\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "gemini-1.5-pro", - "gen_ai.response.text": Any<String>, + "gen_ai.response.text": "Mock response from Google GenAI!", "gen_ai.system": "google_genai", "gen_ai.usage.input_tokens": 8, "gen_ai.usage.output_tokens": 12, "gen_ai.usage.total_tokens": 20, "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "12168be01a3445ed", + "span_id": "982779e9b67e0f4b", + 
"start_timestamp": 1768924487.549, "status": "ok", + "timestamp": 1768924487.6262462, + "trace_id": "d6cea1bdaaceccccc19a397b3b7568a0", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 217, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 41469, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 41469, + "url": "http://localhost:41469/v1beta/models/gemini-1.5-pro:generateContent", + "url.full": "http://localhost:41469/v1beta/models/gemini-1.5-pro:generateContent", + "url.path": "/v1beta/models/gemini-1.5-pro:generateContent", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "google-genai-sdk/1.20.0 gl-node/v24.12.0", + }, + "description": "POST http://localhost:41469/v1beta/models/gemini-1.5-pro:generateContent", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": "982779e9b67e0f4b", + "span_id": "4b23339accb23d6f", + "start_timestamp": 1768924487.559, + "status": "ok", + "timestamp": 1768924487.621729, + "trace_id": "d6cea1bdaaceccccc19a397b3b7568a0", + }, + { + "data": { "gen_ai.operation.name": "models", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": An
suites/tracing/anthropic/test.ts > Anthropic integration > esm/cjs > cjs > creates anthropic related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,13 +1,14 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "claude-3-haiku-20240307", "gen_ai.request.temperature": 0.7, "gen_ai.response.id": "msg_mock123", "gen_ai.response.model": "claude-3-haiku-20240307", "gen_ai.response.text": "Hello from Anthropic mock!", @@ -19,94 +20,150 @@ "sentry.origin": "auto.ai.anthropic", }, "description": "messages claude-3-haiku-20240307", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "4a9471a82a99754e", + "span_id": "bf2548528f79f271", + "start_timestamp": 1768924452.434, "status": "ok", + "timestamp": 1768924452.5148547, + "trace_id": "ec819d199cf912b5646fc828cc694a69", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 247, "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 45601, "otel.kind": "CLIENT", "sentry.op": "http.client", "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 45601, + "url": "http://localhost:45601/anthropic/v1/messages", + "url.full": "http://localhost:45601/anthropic/v1/messages", "url.path": "/anthropic/v1/messages", "url.query": "", "url.scheme": "http", + "user_agent.original": "Anthropic/JS 0.63.0", 
}, + "description": "POST http://localhost:45601/anthropic/v1/messages", "op": "http.client", "origin": "auto.http.otel.node_fetch", + "parent_span_id": "bf2548528f79f271", + "span_id": "a7b588b4b430c6a0", + "start_timestamp": 1768924452.46, "status": "ok", + "timestamp": 1768924452.5121925, + "trace_id": "ec819d199cf912b5646fc828cc694a69", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"This will fail\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "error-model", "gen_ai.system": "anthropic", "sentry.op": "gen_ai.messages", "sentry.origin": "auto.ai.anthropic", }, "description": "messages error-model", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "4a9471a82a99754e", + "span_id": "aa475e09260e4227", + "start_timestamp": 1768924452.515, "status": "internal_error", + "timestamp": 1768924452.534163, + "trace_id": "ec819d199cf912b5646fc828cc694a69", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 15, "http.response.status_code": 404, + "network.peer.address": "::1", + "network.peer.port": 45601, "otel.kind": "CLIENT", "sentry.op": "http.client",
suites/tracing/anthropic/test.ts > Anthropic integration > esm/cjs > esm > creates anthropic related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,13 +1,14 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "claude-3-haiku-20240307", "gen_ai.request.temperature": 0.7, "gen_ai.response.id": "msg_mock123", "gen_ai.response.model": "claude-3-haiku-20240307", "gen_ai.response.text": "Hello from Anthropic mock!", @@ -19,94 +20,150 @@ "sentry.origin": "auto.ai.anthropic", }, "description": "messages claude-3-haiku-20240307", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "60ca5015b5bbf866", + "span_id": "313c983349f72ed7", + "start_timestamp": 1768924451.75, "status": "ok", + "timestamp": 1768924451.8117945, + "trace_id": "9e8e4897bad17923c1e277cb85d9a5fd", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 247, "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 43939, "otel.kind": "CLIENT", "sentry.op": "http.client", "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 43939, + "url": "http://localhost:43939/anthropic/v1/messages", + "url.full": "http://localhost:43939/anthropic/v1/messages", "url.path": "/anthropic/v1/messages", "url.query": "", "url.scheme": "http", + "user_agent.original": "Anthropic/JS 0.63.0", 
}, + "description": "POST http://localhost:43939/anthropic/v1/messages", "op": "http.client", "origin": "auto.http.otel.node_fetch", + "parent_span_id": "313c983349f72ed7", + "span_id": "ad05933b4c6b23b0", + "start_timestamp": 1768924451.764, "status": "ok", + "timestamp": 1768924451.8091338, + "trace_id": "9e8e4897bad17923c1e277cb85d9a5fd", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"This will fail\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "error-model", "gen_ai.system": "anthropic", "sentry.op": "gen_ai.messages", "sentry.origin": "auto.ai.anthropic", }, "description": "messages error-model", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "60ca5015b5bbf866", + "span_id": "3432bed6184d5ae5", + "start_timestamp": 1768924451.812, "status": "internal_error", + "timestamp": 1768924451.8336806, + "trace_id": "9e8e4897bad17923c1e277cb85d9a5fd", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 15, "http.response.status_code": 404, + "network.peer.address": "::1", + "network.peer.port": 43939, "otel.kind": "CLIENT", "sentry.op": "http.client"
Node (18) Integration Tests
Process completed with exit code 1.
suites/tracing/openai/v6/test.ts > OpenAI integration (V6) > esm/cjs > cjs > creates openai related spans with sendDefaultPii: true (v6): dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "b70f1acdeb2b3eef", + "span_id": "41b6f3cb5fb8d370", + "start_timestamp": 1768924489.466, "status": "ok", + "timestamp": 1768924489.5490022, + "trace_id": "9a6ef78cc5d46f699aaddb8a70b3cc5f", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 36265, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 36265, + "url": "http://localhost:36265/openai/chat/completions", + "url.full": "http://localhost:36265/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 6.0.0", + }, + "description": "POST http://localhost:36265/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"41b6f3cb5fb8d370", + "span_id": "a8dd889ef5fed20a", + "start_timestamp": 1768924489.497, + "status": "ok", + "timestamp": 1768924489.5413764, + "trace_id": "9a6ef78cc5d46f699aaddb8a70b3cc5f", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "b70f1acdeb2b3eef", + "span_id": "8b76909567675639", + "start_timestamp": 1768924489.549, "status": "ok", + "timestamp": 1768924489.569625, + "trace_id": "9a6ef78cc5d46f699aaddb8a70b3cc5f", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 36265, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 36265, + "url": "http://localhost:36265/openai/responses", + "url.full": "http://localhost:36265/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "O
suites/tracing/openai/v6/test.ts > OpenAI integration (V6) > esm/cjs > esm > creates openai related spans with sendDefaultPii: true (v6): dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "a9d95e5721a79f63", + "span_id": "cee41d0126481322", + "start_timestamp": 1768924485.919, "status": "ok", + "timestamp": 1768924486.0376015, + "trace_id": "7755dffe5a6afe1939d3fc2943132a41", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 33107, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 33107, + "url": "http://localhost:33107/openai/chat/completions", + "url.full": "http://localhost:33107/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 6.0.0", + }, + "description": "POST http://localhost:33107/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"cee41d0126481322", + "span_id": "59ce06858518f617", + "start_timestamp": 1768924485.964, + "status": "ok", + "timestamp": 1768924486.020491, + "trace_id": "7755dffe5a6afe1939d3fc2943132a41", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "a9d95e5721a79f63", + "span_id": "aeb80eb4a07b0cfa", + "start_timestamp": 1768924486.038, "status": "ok", + "timestamp": 1768924486.0886004, + "trace_id": "7755dffe5a6afe1939d3fc2943132a41", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 33107, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 33107, + "url": "http://localhost:33107/openai/responses", + "url.full": "http://localhost:33107/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "O
suites/tracing/openai/test.ts > OpenAI integration > esm/cjs > cjs > creates openai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "5c390ec0d565a235", + "span_id": "209f0d797081a914", + "start_timestamp": 1768924448.073, "status": "ok", + "timestamp": 1768924448.1748202, + "trace_id": "656995d867dd9faf109f782493393bc8", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 41327, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 41327, + "url": "http://localhost:41327/openai/chat/completions", + "url.full": "http://localhost:41327/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 5.18.1", + }, + "description": "POST http://localhost:41327/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"209f0d797081a914", + "span_id": "b5e39b6e678ae658", + "start_timestamp": 1768924448.111, + "status": "ok", + "timestamp": 1768924448.1636472, + "trace_id": "656995d867dd9faf109f782493393bc8", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "5c390ec0d565a235", + "span_id": "48895afbcb6600a3", + "start_timestamp": 1768924448.175, "status": "ok", + "timestamp": 1768924448.194677, + "trace_id": "656995d867dd9faf109f782493393bc8", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 41327, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 41327, + "url": "http://localhost:41327/openai/responses", + "url.full": "http://localhost:41327/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "
suites/tracing/openai/test.ts > OpenAI integration > esm/cjs > esm > creates openai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "bf11816540552580", + "span_id": "3563da78dfb426e0", + "start_timestamp": 1768924444.557, "status": "ok", + "timestamp": 1768924444.6480525, + "trace_id": "7dd9d10563fb9abcb5c0e157574eaf02", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 40373, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 40373, + "url": "http://localhost:40373/openai/chat/completions", + "url.full": "http://localhost:40373/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 5.18.1", + }, + "description": "POST http://localhost:40373/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"3563da78dfb426e0", + "span_id": "9c99509c54526c61", + "start_timestamp": 1768924444.589, + "status": "ok", + "timestamp": 1768924444.6368258, + "trace_id": "7dd9d10563fb9abcb5c0e157574eaf02", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "bf11816540552580", + "span_id": "952e1c26c56719e7", + "start_timestamp": 1768924444.648, "status": "ok", + "timestamp": 1768924444.6654418, + "trace_id": "7dd9d10563fb9abcb5c0e157574eaf02", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 40373, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 40373, + "url": "http://localhost:40373/openai/responses", + "url.full": "http://localhost:40373/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original":
suites/tracing/google-genai/test.ts > Google GenAI integration > esm/cjs > cjs > creates google genai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,77 +1,191 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "chat", "gen_ai.request.max_tokens": 150, - "gen_ai.request.messages": StringMatching /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/, + "gen_ai.request.messages": "[{\"role\":\"user\",\"parts\":[{\"text\":\"Hello, how are you?\"}]}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gemini-1.5-pro", "gen_ai.request.temperature": 0.8, "gen_ai.request.top_p": 0.9, "gen_ai.system": "google_genai", "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro create", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "55f533e5ce19cc76", + "span_id": "c1492e479df97705", + "start_timestamp": 1768924489.187, "status": "ok", + "timestamp": 1768924489.1893969, + "trace_id": "37587959c38b5e75635b372fd841a8f1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": Any<String>, + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"Tell me a joke\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "gemini-1.5-pro", - "gen_ai.response.text": Any<String>, + "gen_ai.response.text": "Mock response from Google GenAI!", "gen_ai.system": "google_genai", "gen_ai.usage.input_tokens": 8, "gen_ai.usage.output_tokens": 12, "gen_ai.usage.total_tokens": 20, "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "55f533e5ce19cc76", + "span_id": "688c556806df87f6", + 
"start_timestamp": 1768924489.19, "status": "ok", + "timestamp": 1768924489.2911246, + "trace_id": "37587959c38b5e75635b372fd841a8f1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 217, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 40173, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 40173, + "url": "http://localhost:40173/v1beta/models/gemini-1.5-pro:generateContent", + "url.full": "http://localhost:40173/v1beta/models/gemini-1.5-pro:generateContent", + "url.path": "/v1beta/models/gemini-1.5-pro:generateContent", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "google-genai-sdk/1.20.0 gl-node/v18.20.8", + }, + "description": "POST http://localhost:40173/v1beta/models/gemini-1.5-pro:generateContent", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": "688c556806df87f6", + "span_id": "f9daa2fe4b4f4475", + "start_timestamp": 1768924489.226, + "status": "ok", + "timestamp": 1768924489.2812755, + "trace_id": "37587959c38b5e75635b372fd841a8f1", + }, + { + "data": { "gen_ai.operation.name": "models", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": An
suites/tracing/google-genai/test.ts > Google GenAI integration > esm/cjs > esm > creates google genai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,77 +1,191 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "chat", "gen_ai.request.max_tokens": 150, - "gen_ai.request.messages": StringMatching /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/, + "gen_ai.request.messages": "[{\"role\":\"user\",\"parts\":[{\"text\":\"Hello, how are you?\"}]}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gemini-1.5-pro", "gen_ai.request.temperature": 0.8, "gen_ai.request.top_p": 0.9, "gen_ai.system": "google_genai", "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro create", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "441927330d7521c4", + "span_id": "190cd82ebc8c91b9", + "start_timestamp": 1768924488.294, "status": "ok", + "timestamp": 1768924488.2950275, + "trace_id": "11f49ab11571a0133da61872d4c9dbf8", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": Any<String>, + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"Tell me a joke\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "gemini-1.5-pro", - "gen_ai.response.text": Any<String>, + "gen_ai.response.text": "Mock response from Google GenAI!", "gen_ai.system": "google_genai", "gen_ai.usage.input_tokens": 8, "gen_ai.usage.output_tokens": 12, "gen_ai.usage.total_tokens": 20, "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "441927330d7521c4", + "span_id": "71ec7fa0a48ea709", + 
"start_timestamp": 1768924488.296, "status": "ok", + "timestamp": 1768924488.381608, + "trace_id": "11f49ab11571a0133da61872d4c9dbf8", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 217, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 39865, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 39865, + "url": "http://localhost:39865/v1beta/models/gemini-1.5-pro:generateContent", + "url.full": "http://localhost:39865/v1beta/models/gemini-1.5-pro:generateContent", + "url.path": "/v1beta/models/gemini-1.5-pro:generateContent", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "google-genai-sdk/1.20.0 gl-node/v18.20.8", + }, + "description": "POST http://localhost:39865/v1beta/models/gemini-1.5-pro:generateContent", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": "71ec7fa0a48ea709", + "span_id": "bd0e35eb6f66f8cb", + "start_timestamp": 1768924488.326, + "status": "ok", + "timestamp": 1768924488.3740592, + "trace_id": "11f49ab11571a0133da61872d4c9dbf8", + }, + { + "data": { "gen_ai.operation.name": "models", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": An
suites/tracing/anthropic/test.ts > Anthropic integration > esm/cjs > cjs > creates anthropic related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,13 +1,14 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "claude-3-haiku-20240307", "gen_ai.request.temperature": 0.7, "gen_ai.response.id": "msg_mock123", "gen_ai.response.model": "claude-3-haiku-20240307", "gen_ai.response.text": "Hello from Anthropic mock!", @@ -19,94 +20,150 @@ "sentry.origin": "auto.ai.anthropic", }, "description": "messages claude-3-haiku-20240307", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "8c388cef0888f8d3", + "span_id": "6c5364077ac60a9c", + "start_timestamp": 1768924449.626, "status": "ok", + "timestamp": 1768924449.7089381, + "trace_id": "12871d2c65a706ffd9cdd35deca044a1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 247, "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 38727, "otel.kind": "CLIENT", "sentry.op": "http.client", "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 38727, + "url": "http://localhost:38727/anthropic/v1/messages", + "url.full": "http://localhost:38727/anthropic/v1/messages", "url.path": "/anthropic/v1/messages", "url.query": "", "url.scheme": "http", + "user_agent.original": "Anthropic/JS 0.63.0", 
}, + "description": "POST http://localhost:38727/anthropic/v1/messages", "op": "http.client", "origin": "auto.http.otel.node_fetch", + "parent_span_id": "6c5364077ac60a9c", + "span_id": "3381a8462fda71fc", + "start_timestamp": 1768924449.659, "status": "ok", + "timestamp": 1768924449.699003, + "trace_id": "12871d2c65a706ffd9cdd35deca044a1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"This will fail\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "error-model", "gen_ai.system": "anthropic", "sentry.op": "gen_ai.messages", "sentry.origin": "auto.ai.anthropic", }, "description": "messages error-model", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "8c388cef0888f8d3", + "span_id": "0c27c32166f8ce44", + "start_timestamp": 1768924449.709, "status": "internal_error", + "timestamp": 1768924449.7348382, + "trace_id": "12871d2c65a706ffd9cdd35deca044a1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 15, "http.response.status_code": 404, + "network.peer.address": "::1", + "network.peer.port": 38727, "otel.kind": "CLIENT", "sentry.op": "http.client"
suites/tracing/anthropic/test.ts > Anthropic integration > esm/cjs > esm > creates anthropic related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,13 +1,14 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "claude-3-haiku-20240307", "gen_ai.request.temperature": 0.7, "gen_ai.response.id": "msg_mock123", "gen_ai.response.model": "claude-3-haiku-20240307", "gen_ai.response.text": "Hello from Anthropic mock!", @@ -19,94 +20,150 @@ "sentry.origin": "auto.ai.anthropic", }, "description": "messages claude-3-haiku-20240307", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "798e525f9cd4fe1d", + "span_id": "343c4f1e22ae4dee", + "start_timestamp": 1768924448.809, "status": "ok", + "timestamp": 1768924448.880382, + "trace_id": "0fc8a1d5f8a8aed7104045cec65cc45d", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 247, "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 39277, "otel.kind": "CLIENT", "sentry.op": "http.client", "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 39277, + "url": "http://localhost:39277/anthropic/v1/messages", + "url.full": "http://localhost:39277/anthropic/v1/messages", "url.path": "/anthropic/v1/messages", "url.query": "", "url.scheme": "http", + "user_agent.original": "Anthropic/JS 0.63.0", 
}, + "description": "POST http://localhost:39277/anthropic/v1/messages", "op": "http.client", "origin": "auto.http.otel.node_fetch", + "parent_span_id": "343c4f1e22ae4dee", + "span_id": "e21980fc191222c0", + "start_timestamp": 1768924448.833, "status": "ok", + "timestamp": 1768924448.8736365, + "trace_id": "0fc8a1d5f8a8aed7104045cec65cc45d", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"This will fail\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "error-model", "gen_ai.system": "anthropic", "sentry.op": "gen_ai.messages", "sentry.origin": "auto.ai.anthropic", }, "description": "messages error-model", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "798e525f9cd4fe1d", + "span_id": "6d0dc6c0bb026946", + "start_timestamp": 1768924448.88, "status": "internal_error", + "timestamp": 1768924448.9005892, + "trace_id": "0fc8a1d5f8a8aed7104045cec65cc45d", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 15, "http.response.status_code": 404, + "network.peer.address": "::1", + "network.peer.port": 39277, "otel.kind": "CLIENT", "sentry.op": "http.client",
Node (20) Integration Tests
Process completed with exit code 1.
suites/tracing/openai/v6/test.ts > OpenAI integration (V6) > esm/cjs > cjs > creates openai related spans with sendDefaultPii: true (v6): dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "c001f498b18d4058", + "span_id": "cabb2d7e461d29c4", + "start_timestamp": 1768924491.242, "status": "ok", + "timestamp": 1768924491.33638, + "trace_id": "3000a283ed214cd961de55797714dff7", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 44537, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 44537, + "url": "http://localhost:44537/openai/chat/completions", + "url.full": "http://localhost:44537/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 6.0.0", + }, + "description": "POST http://localhost:44537/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"cabb2d7e461d29c4", + "span_id": "4ade45fd92f7270d", + "start_timestamp": 1768924491.27, + "status": "ok", + "timestamp": 1768924491.3307898, + "trace_id": "3000a283ed214cd961de55797714dff7", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "c001f498b18d4058", + "span_id": "21e9a64d3845bbf1", + "start_timestamp": 1768924491.337, "status": "ok", + "timestamp": 1768924491.361046, + "trace_id": "3000a283ed214cd961de55797714dff7", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 44537, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 44537, + "url": "http://localhost:44537/openai/responses", + "url.full": "http://localhost:44537/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "Open
suites/tracing/openai/v6/test.ts > OpenAI integration (V6) > esm/cjs > esm > creates openai related spans with sendDefaultPii: true (v6): dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "c74472938afdc6ce", + "span_id": "2fcb7c353f74c30b", + "start_timestamp": 1768924487.523, "status": "ok", + "timestamp": 1768924487.6410394, + "trace_id": "86faf796ae89f545afbdff0f19d4e031", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 39689, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 39689, + "url": "http://localhost:39689/openai/chat/completions", + "url.full": "http://localhost:39689/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 6.0.0", + }, + "description": "POST http://localhost:39689/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"2fcb7c353f74c30b", + "span_id": "98437ed81a66ca95", + "start_timestamp": 1768924487.552, + "status": "ok", + "timestamp": 1768924487.6366208, + "trace_id": "86faf796ae89f545afbdff0f19d4e031", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "c74472938afdc6ce", + "span_id": "c9010cbea14c8643", + "start_timestamp": 1768924487.642, "status": "ok", + "timestamp": 1768924487.6620486, + "trace_id": "86faf796ae89f545afbdff0f19d4e031", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 39689, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 39689, + "url": "http://localhost:39689/openai/responses", + "url.full": "http://localhost:39689/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "
suites/tracing/openai/test.ts > OpenAI integration > esm/cjs > cjs > creates openai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "4a7a1bb1984dfec5", + "span_id": "cfec7edabf50dc89", + "start_timestamp": 1768924449.876, "status": "ok", + "timestamp": 1768924449.977106, + "trace_id": "d0b6b405736030778fad5458495a756b", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 41049, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 41049, + "url": "http://localhost:41049/openai/chat/completions", + "url.full": "http://localhost:41049/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 5.18.1", + }, + "description": "POST http://localhost:41049/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"cfec7edabf50dc89", + "span_id": "3a62f90b9f7af548", + "start_timestamp": 1768924449.914, + "status": "ok", + "timestamp": 1768924449.9729948, + "trace_id": "d0b6b405736030778fad5458495a756b", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "4a7a1bb1984dfec5", + "span_id": "39fea9484ec73845", + "start_timestamp": 1768924449.977, "status": "ok", + "timestamp": 1768924449.988979, + "trace_id": "d0b6b405736030778fad5458495a756b", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 41049, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 41049, + "url": "http://localhost:41049/openai/responses", + "url.full": "http://localhost:41049/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "O
suites/tracing/openai/test.ts > OpenAI integration > esm/cjs > esm > creates openai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "910b3631701bd2c2", + "span_id": "5c9bf0285dc71dfe", + "start_timestamp": 1768924446.256, "status": "ok", + "timestamp": 1768924446.366591, + "trace_id": "0ed7a7df3dbbc6868e15877a608d19a6", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 36073, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 36073, + "url": "http://localhost:36073/openai/chat/completions", + "url.full": "http://localhost:36073/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 5.18.1", + }, + "description": "POST http://localhost:36073/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"5c9bf0285dc71dfe", + "span_id": "204c08ae95c17ade", + "start_timestamp": 1768924446.293, + "status": "ok", + "timestamp": 1768924446.3579447, + "trace_id": "0ed7a7df3dbbc6868e15877a608d19a6", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "910b3631701bd2c2", + "span_id": "fc9e8defcc6919d6", + "start_timestamp": 1768924446.367, "status": "ok", + "timestamp": 1768924446.3910162, + "trace_id": "0ed7a7df3dbbc6868e15877a608d19a6", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 36073, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 36073, + "url": "http://localhost:36073/openai/responses", + "url.full": "http://localhost:36073/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "
suites/tracing/google-genai/test.ts > Google GenAI integration > esm/cjs > cjs > creates google genai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,77 +1,191 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "chat", "gen_ai.request.max_tokens": 150, - "gen_ai.request.messages": StringMatching /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/, + "gen_ai.request.messages": "[{\"role\":\"user\",\"parts\":[{\"text\":\"Hello, how are you?\"}]}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gemini-1.5-pro", "gen_ai.request.temperature": 0.8, "gen_ai.request.top_p": 0.9, "gen_ai.system": "google_genai", "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro create", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "94c102fa21eb9fab", + "span_id": "34f3d2a63e915add", + "start_timestamp": 1768924489.701, "status": "ok", + "timestamp": 1768924489.7022388, + "trace_id": "8307dc6c7326bda3f3804a07d388f0a6", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": Any<String>, + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"Tell me a joke\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "gemini-1.5-pro", - "gen_ai.response.text": Any<String>, + "gen_ai.response.text": "Mock response from Google GenAI!", "gen_ai.system": "google_genai", "gen_ai.usage.input_tokens": 8, "gen_ai.usage.output_tokens": 12, "gen_ai.usage.total_tokens": 20, "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "94c102fa21eb9fab", + "span_id": "b6157b1dfee5ca9d", + 
"start_timestamp": 1768924489.703, "status": "ok", + "timestamp": 1768924489.787761, + "trace_id": "8307dc6c7326bda3f3804a07d388f0a6", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 217, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 34275, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 34275, + "url": "http://localhost:34275/v1beta/models/gemini-1.5-pro:generateContent", + "url.full": "http://localhost:34275/v1beta/models/gemini-1.5-pro:generateContent", + "url.path": "/v1beta/models/gemini-1.5-pro:generateContent", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "google-genai-sdk/1.20.0 gl-node/v20.19.6", + }, + "description": "POST http://localhost:34275/v1beta/models/gemini-1.5-pro:generateContent", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": "b6157b1dfee5ca9d", + "span_id": "93fa5f373c79dab2", + "start_timestamp": 1768924489.734, + "status": "ok", + "timestamp": 1768924489.7796512, + "trace_id": "8307dc6c7326bda3f3804a07d388f0a6", + }, + { + "data": { "gen_ai.operation.name": "models", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": An
suites/tracing/google-genai/test.ts > Google GenAI integration > esm/cjs > esm > creates google genai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,77 +1,191 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "chat", "gen_ai.request.max_tokens": 150, - "gen_ai.request.messages": StringMatching /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/, + "gen_ai.request.messages": "[{\"role\":\"user\",\"parts\":[{\"text\":\"Hello, how are you?\"}]}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gemini-1.5-pro", "gen_ai.request.temperature": 0.8, "gen_ai.request.top_p": 0.9, "gen_ai.system": "google_genai", "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro create", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "c849363e6804e4a6", + "span_id": "a6b239623d391bf8", + "start_timestamp": 1768924488.874, "status": "ok", + "timestamp": 1768924488.8748899, + "trace_id": "0b8d59575a20a67389468bb137f0f7c9", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": Any<String>, + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"Tell me a joke\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "gemini-1.5-pro", - "gen_ai.response.text": Any<String>, + "gen_ai.response.text": "Mock response from Google GenAI!", "gen_ai.system": "google_genai", "gen_ai.usage.input_tokens": 8, "gen_ai.usage.output_tokens": 12, "gen_ai.usage.total_tokens": 20, "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "c849363e6804e4a6", + "span_id": "06447040ee4561e6", + 
"start_timestamp": 1768924488.875, "status": "ok", + "timestamp": 1768924488.9543052, + "trace_id": "0b8d59575a20a67389468bb137f0f7c9", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 217, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 34651, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 34651, + "url": "http://localhost:34651/v1beta/models/gemini-1.5-pro:generateContent", + "url.full": "http://localhost:34651/v1beta/models/gemini-1.5-pro:generateContent", + "url.path": "/v1beta/models/gemini-1.5-pro:generateContent", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "google-genai-sdk/1.20.0 gl-node/v20.19.6", + }, + "description": "POST http://localhost:34651/v1beta/models/gemini-1.5-pro:generateContent", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": "06447040ee4561e6", + "span_id": "0b50736a28a51ae7", + "start_timestamp": 1768924488.899, + "status": "ok", + "timestamp": 1768924488.9503243, + "trace_id": "0b8d59575a20a67389468bb137f0f7c9", + }, + { + "data": { "gen_ai.operation.name": "models", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": A
suites/tracing/anthropic/test.ts > Anthropic integration > esm/cjs > cjs > creates anthropic related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,13 +1,14 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "claude-3-haiku-20240307", "gen_ai.request.temperature": 0.7, "gen_ai.response.id": "msg_mock123", "gen_ai.response.model": "claude-3-haiku-20240307", "gen_ai.response.text": "Hello from Anthropic mock!", @@ -19,94 +20,150 @@ "sentry.origin": "auto.ai.anthropic", }, "description": "messages claude-3-haiku-20240307", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "1ca48dbb78977fd4", + "span_id": "356306e76bed1092", + "start_timestamp": 1768924451.099, "status": "ok", + "timestamp": 1768924451.17538, + "trace_id": "4c501650a0ec8d2810e017f8e525f0aa", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 247, "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 40663, "otel.kind": "CLIENT", "sentry.op": "http.client", "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 40663, + "url": "http://localhost:40663/anthropic/v1/messages", + "url.full": "http://localhost:40663/anthropic/v1/messages", "url.path": "/anthropic/v1/messages", "url.query": "", "url.scheme": "http", + "user_agent.original": "Anthropic/JS 0.63.0", }, 
+ "description": "POST http://localhost:40663/anthropic/v1/messages", "op": "http.client", "origin": "auto.http.otel.node_fetch", + "parent_span_id": "356306e76bed1092", + "span_id": "ca21bc83ef2b4537", + "start_timestamp": 1768924451.128, "status": "ok", + "timestamp": 1768924451.1706896, + "trace_id": "4c501650a0ec8d2810e017f8e525f0aa", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"This will fail\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "error-model", "gen_ai.system": "anthropic", "sentry.op": "gen_ai.messages", "sentry.origin": "auto.ai.anthropic", }, "description": "messages error-model", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "1ca48dbb78977fd4", + "span_id": "cb0605b33d2a3d2b", + "start_timestamp": 1768924451.176, "status": "internal_error", + "timestamp": 1768924451.202343, + "trace_id": "4c501650a0ec8d2810e017f8e525f0aa", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 15, "http.response.status_code": 404, + "network.peer.address": "::1", + "network.peer.port": 40663, "otel.kind": "CLIENT", "sentry.op": "http.client",
suites/tracing/anthropic/test.ts > Anthropic integration > esm/cjs > esm > creates anthropic related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,13 +1,14 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "claude-3-haiku-20240307", "gen_ai.request.temperature": 0.7, "gen_ai.response.id": "msg_mock123", "gen_ai.response.model": "claude-3-haiku-20240307", "gen_ai.response.text": "Hello from Anthropic mock!", @@ -19,94 +20,150 @@ "sentry.origin": "auto.ai.anthropic", }, "description": "messages claude-3-haiku-20240307", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "c4e7ebdf9f786a1b", + "span_id": "df6802e872166169", + "start_timestamp": 1768924450.307, "status": "ok", + "timestamp": 1768924450.383149, + "trace_id": "59b3e8046f6a61533923921f86e2b3a1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 247, "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 44051, "otel.kind": "CLIENT", "sentry.op": "http.client", "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 44051, + "url": "http://localhost:44051/anthropic/v1/messages", + "url.full": "http://localhost:44051/anthropic/v1/messages", "url.path": "/anthropic/v1/messages", "url.query": "", "url.scheme": "http", + "user_agent.original": "Anthropic/JS 0.63.0", 
}, + "description": "POST http://localhost:44051/anthropic/v1/messages", "op": "http.client", "origin": "auto.http.otel.node_fetch", + "parent_span_id": "df6802e872166169", + "span_id": "57b474d8565ddf3b", + "start_timestamp": 1768924450.331, "status": "ok", + "timestamp": 1768924450.3777297, + "trace_id": "59b3e8046f6a61533923921f86e2b3a1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"This will fail\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "error-model", "gen_ai.system": "anthropic", "sentry.op": "gen_ai.messages", "sentry.origin": "auto.ai.anthropic", }, "description": "messages error-model", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "c4e7ebdf9f786a1b", + "span_id": "8b32d840f94079fe", + "start_timestamp": 1768924450.383, "status": "internal_error", + "timestamp": 1768924450.4098294, + "trace_id": "59b3e8046f6a61533923921f86e2b3a1", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 15, "http.response.status_code": 404, + "network.peer.address": "::1", + "network.peer.port": 44051, "otel.kind": "CLIENT", "sentry.op": "http.client"
suites/tracing/openai/v6/test.ts > OpenAI integration (V6) > esm/cjs > cjs > creates openai related spans with sendDefaultPii: true (v6): dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "6dac63c42b894612", + "span_id": "a5415a46c6a33366", + "start_timestamp": 1768924649.736, "status": "ok", + "timestamp": 1768924649.823901, + "trace_id": "e9f0475fd594a42fd28431dd84984b07", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 36111, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 36111, + "url": "http://localhost:36111/openai/chat/completions", + "url.full": "http://localhost:36111/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 6.0.0", + }, + "description": "POST http://localhost:36111/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"a5415a46c6a33366", + "span_id": "57e335e123fc0347", + "start_timestamp": 1768924649.769, + "status": "ok", + "timestamp": 1768924649.8198805, + "trace_id": "e9f0475fd594a42fd28431dd84984b07", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "6dac63c42b894612", + "span_id": "8a284deb532acb5c", + "start_timestamp": 1768924649.824, "status": "ok", + "timestamp": 1768924649.8378608, + "trace_id": "e9f0475fd594a42fd28431dd84984b07", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 36111, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 36111, + "url": "http://localhost:36111/openai/responses", + "url.full": "http://localhost:36111/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "O
suites/tracing/openai/v6/test.ts > OpenAI integration (V6) > esm/cjs > esm > creates openai related spans with sendDefaultPii: true (v6): dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "608504f05258a542", + "span_id": "eb7222b94f89d800", + "start_timestamp": 1768924646.049, "status": "ok", + "timestamp": 1768924646.1238925, + "trace_id": "ec6423bd52154bd83b33c50ad632f17e", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 45747, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 45747, + "url": "http://localhost:45747/openai/chat/completions", + "url.full": "http://localhost:45747/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 6.0.0", + }, + "description": "POST http://localhost:45747/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"eb7222b94f89d800", + "span_id": "2a01d5003b7327f7", + "start_timestamp": 1768924646.063, + "status": "ok", + "timestamp": 1768924646.1209095, + "trace_id": "ec6423bd52154bd83b33c50ad632f17e", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "608504f05258a542", + "span_id": "78cf5005d9ef09c9", + "start_timestamp": 1768924646.124, "status": "ok", + "timestamp": 1768924646.1424608, + "trace_id": "ec6423bd52154bd83b33c50ad632f17e", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 45747, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 45747, + "url": "http://localhost:45747/openai/responses", + "url.full": "http://localhost:45747/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "
suites/tracing/openai/test.ts > OpenAI integration > esm/cjs > cjs > creates openai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "9eb4da6f1d57cd79", + "span_id": "db1ba38cc0e40c1a", + "start_timestamp": 1768924610.249, "status": "ok", + "timestamp": 1768924610.3320415, + "trace_id": "bfea0278d13bc4e7b3e3b7bad6ae2f68", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 43029, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 43029, + "url": "http://localhost:43029/openai/chat/completions", + "url.full": "http://localhost:43029/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 5.18.1", + }, + "description": "POST http://localhost:43029/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"db1ba38cc0e40c1a", + "span_id": "8da7507c507619c0", + "start_timestamp": 1768924610.281, + "status": "ok", + "timestamp": 1768924610.3283978, + "trace_id": "bfea0278d13bc4e7b3e3b7bad6ae2f68", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "9eb4da6f1d57cd79", + "span_id": "1b4c6b924427c6a3", + "start_timestamp": 1768924610.332, "status": "ok", + "timestamp": 1768924610.345217, + "trace_id": "bfea0278d13bc4e7b3e3b7bad6ae2f68", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 43029, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 43029, + "url": "http://localhost:43029/openai/responses", + "url.full": "http://localhost:43029/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "
suites/tracing/openai/test.ts > OpenAI integration > esm/cjs > esm > creates openai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,12 +1,12 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { + "spans": [ + { "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.request.temperature": 0.7, "gen_ai.response.finish_reasons": "[\"stop\"]", "gen_ai.response.id": "chatcmpl-mock123", @@ -25,13 +25,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "chat gpt-3.5-turbo", "op": "gen_ai.chat", "origin": "auto.ai.openai", + "parent_span_id": "7a4858c9aea2742c", + "span_id": "829da0e61d87bcb6", + "start_timestamp": 1768924606.595, "status": "ok", + "timestamp": 1768924606.6732032, + "trace_id": "0fb2e3b612ae1f6718839a0b13885c33", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 281, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 40339, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 40339, + "url": "http://localhost:40339/openai/chat/completions", + "url.full": "http://localhost:40339/openai/chat/completions", + "url.path": "/openai/chat/completions", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "OpenAI/JS 5.18.1", + }, + "description": "POST http://localhost:40339/openai/chat/completions", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": 
"829da0e61d87bcb6", + "span_id": "1b70dbd4e8cc4cb2", + "start_timestamp": 1768924606.611, + "status": "ok", + "timestamp": 1768924606.6697302, + "trace_id": "0fb2e3b612ae1f6718839a0b13885c33", + }, + { "data": { "gen_ai.operation.name": "responses", "gen_ai.request.messages": "Translate this to French: Hello", "gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.response.finish_reasons": "[\"completed\"]", @@ -51,13 +86,48 @@ "sentry.origin": "auto.ai.openai", }, "description": "responses gpt-3.5-turbo", "op": "gen_ai.responses", "origin": "auto.ai.openai", + "parent_span_id": "7a4858c9aea2742c", + "span_id": "0c64c33ff327bea7", + "start_timestamp": 1768924606.674, "status": "ok", + "timestamp": 1768924606.690888, + "trace_id": "0fb2e3b612ae1f6718839a0b13885c33", }, - ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 435, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 40339, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 40339, + "url": "http://localhost:40339/openai/responses", + "url.full": "http://localhost:40339/openai/responses", + "url.path": "/openai/responses", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "
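The failing assertions across the OpenAI, Anthropic, and Google GenAI suites all show the same new shape: `gen_ai.request.messages` now carries only the newest message, and the original message count moves to `gen_ai.request.messages.original_length`. A minimal sketch of that behavior, using hypothetical names (this is an illustration of what the diffs imply, not the SDK's actual implementation):

```typescript
// Hypothetical sketch of the truncation behavior the failing diffs imply:
// keep only the newest message and record the original count separately.
// Names and shapes here are illustrative, not the SDK's real internals.

interface GenAiMessage {
  role: string;
  content: string;
}

interface TruncatedMessageAttributes {
  'gen_ai.request.messages': string;
  'gen_ai.request.messages.original_length': number;
}

function truncateRequestMessages(messages: GenAiMessage[]): TruncatedMessageAttributes {
  // slice(-1) keeps only the last (newest) message, matching the diffs where
  // the system prompt is dropped and only the user message survives.
  return {
    'gen_ai.request.messages': JSON.stringify(messages.slice(-1)),
    'gen_ai.request.messages.original_length': messages.length,
  };
}
```

Under this reading, the expected values baked into the integration tests still assume the full message array, which is why every `sendDefaultPii: true` suite fails in the same way.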
suites/tracing/google-genai/test.ts > Google GenAI integration > esm/cjs > cjs > creates google genai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,77 +1,191 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "chat", "gen_ai.request.max_tokens": 150, - "gen_ai.request.messages": StringMatching /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/, + "gen_ai.request.messages": "[{\"role\":\"user\",\"parts\":[{\"text\":\"Hello, how are you?\"}]}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gemini-1.5-pro", "gen_ai.request.temperature": 0.8, "gen_ai.request.top_p": 0.9, "gen_ai.system": "google_genai", "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro create", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "7032f8c04691367a", + "span_id": "33159a916cbbf4ff", + "start_timestamp": 1768924647.982, "status": "ok", + "timestamp": 1768924647.9832373, + "trace_id": "cef32ff09b85f6271e20ce96eddbecd0", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": Any<String>, + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"Tell me a joke\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "gemini-1.5-pro", - "gen_ai.response.text": Any<String>, + "gen_ai.response.text": "Mock response from Google GenAI!", "gen_ai.system": "google_genai", "gen_ai.usage.input_tokens": 8, "gen_ai.usage.output_tokens": 12, "gen_ai.usage.total_tokens": 20, "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "7032f8c04691367a", + "span_id": "1d66a01e878683c8", + 
"start_timestamp": 1768924647.983, "status": "ok", + "timestamp": 1768924648.0830657, + "trace_id": "cef32ff09b85f6271e20ce96eddbecd0", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 217, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 41395, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 41395, + "url": "http://localhost:41395/v1beta/models/gemini-1.5-pro:generateContent", + "url.full": "http://localhost:41395/v1beta/models/gemini-1.5-pro:generateContent", + "url.path": "/v1beta/models/gemini-1.5-pro:generateContent", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "google-genai-sdk/1.20.0 gl-node/v24.12.0", + }, + "description": "POST http://localhost:41395/v1beta/models/gemini-1.5-pro:generateContent", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": "1d66a01e878683c8", + "span_id": "76088e02166bfd21", + "start_timestamp": 1768924648.01, + "status": "ok", + "timestamp": 1768924648.0786748, + "trace_id": "cef32ff09b85f6271e20ce96eddbecd0", + }, + { + "data": { "gen_ai.operation.name": "models", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": An
suites/tracing/google-genai/test.ts > Google GenAI integration > esm/cjs > esm > creates google genai related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,77 +1,191 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "chat", "gen_ai.request.max_tokens": 150, - "gen_ai.request.messages": StringMatching /\[\{"role":"system","content":"You are a friendly robot who likes to be funny."\},/, + "gen_ai.request.messages": "[{\"role\":\"user\",\"parts\":[{\"text\":\"Hello, how are you?\"}]}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "gemini-1.5-pro", "gen_ai.request.temperature": 0.8, "gen_ai.request.top_p": 0.9, "gen_ai.system": "google_genai", "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro create", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "4fdbecf63e0eb191", + "span_id": "21b680259c80b8c6", + "start_timestamp": 1768924647.178, "status": "ok", + "timestamp": 1768924647.1789904, + "trace_id": "7e1b521f5908b432e4948f876ea129a6", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "chat", - "gen_ai.request.messages": Any<String>, + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"Tell me a joke\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "gemini-1.5-pro", - "gen_ai.response.text": Any<String>, + "gen_ai.response.text": "Mock response from Google GenAI!", "gen_ai.system": "google_genai", "gen_ai.usage.input_tokens": 8, "gen_ai.usage.output_tokens": 12, "gen_ai.usage.total_tokens": 20, "sentry.op": "gen_ai.chat", "sentry.origin": "auto.ai.google_genai", }, "description": "chat gemini-1.5-pro", "op": "gen_ai.chat", "origin": "auto.ai.google_genai", + "parent_span_id": "4fdbecf63e0eb191", + "span_id": "9013c18c25e924fe", + 
"start_timestamp": 1768924647.18, "status": "ok", + "timestamp": 1768924647.246415, + "trace_id": "7e1b521f5908b432e4948f876ea129a6", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { + "http.request.method": "POST", + "http.request.method_original": "POST", + "http.response.header.content-length": 217, + "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 41949, + "otel.kind": "CLIENT", + "sentry.op": "http.client", + "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 41949, + "url": "http://localhost:41949/v1beta/models/gemini-1.5-pro:generateContent", + "url.full": "http://localhost:41949/v1beta/models/gemini-1.5-pro:generateContent", + "url.path": "/v1beta/models/gemini-1.5-pro:generateContent", + "url.query": "", + "url.scheme": "http", + "user_agent.original": "google-genai-sdk/1.20.0 gl-node/v24.12.0", + }, + "description": "POST http://localhost:41949/v1beta/models/gemini-1.5-pro:generateContent", + "op": "http.client", + "origin": "auto.http.otel.node_fetch", + "parent_span_id": "9013c18c25e924fe", + "span_id": "63180f25c35b82e8", + "start_timestamp": 1768924647.19, + "status": "ok", + "timestamp": 1768924647.2428598, + "trace_id": "7e1b521f5908b432e4948f876ea129a6", + }, + { + "data": { "gen_ai.operation.name": "models", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": Any<
suites/tracing/anthropic/test.ts > Anthropic integration > esm/cjs > cjs > creates anthropic related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,13 +1,14 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "claude-3-haiku-20240307", "gen_ai.request.temperature": 0.7, "gen_ai.response.id": "msg_mock123", "gen_ai.response.model": "claude-3-haiku-20240307", "gen_ai.response.text": "Hello from Anthropic mock!", @@ -19,94 +20,150 @@ "sentry.origin": "auto.ai.anthropic", }, "description": "messages claude-3-haiku-20240307", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "410de718c2ccc1b2", + "span_id": "f1d53725f3243d2c", + "start_timestamp": 1768924610.317, "status": "ok", + "timestamp": 1768924610.3950875, + "trace_id": "9581d02c9d597594b26b0b7529ebb80d", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 247, "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 34191, "otel.kind": "CLIENT", "sentry.op": "http.client", "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 34191, + "url": "http://localhost:34191/anthropic/v1/messages", + "url.full": "http://localhost:34191/anthropic/v1/messages", "url.path": "/anthropic/v1/messages", "url.query": "", "url.scheme": "http", + "user_agent.original": "Anthropic/JS 0.63.0", 
}, + "description": "POST http://localhost:34191/anthropic/v1/messages", "op": "http.client", "origin": "auto.http.otel.node_fetch", + "parent_span_id": "f1d53725f3243d2c", + "span_id": "1a958c89005735e2", + "start_timestamp": 1768924610.343, "status": "ok", + "timestamp": 1768924610.3920867, + "trace_id": "9581d02c9d597594b26b0b7529ebb80d", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"This will fail\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "error-model", "gen_ai.system": "anthropic", "sentry.op": "gen_ai.messages", "sentry.origin": "auto.ai.anthropic", }, "description": "messages error-model", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "410de718c2ccc1b2", + "span_id": "339224b1f08afcd7", + "start_timestamp": 1768924610.396, "status": "internal_error", + "timestamp": 1768924610.4160912, + "trace_id": "9581d02c9d597594b26b0b7529ebb80d", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 15, "http.response.status_code": 404, + "network.peer.address": "::1", + "network.peer.port": 34191, "otel.kind": "CLIENT", "sentry.op": "http.client
suites/tracing/anthropic/test.ts > Anthropic integration > esm/cjs > esm > creates anthropic related spans with sendDefaultPii: true: dev-packages/node-integration-tests/utils/assertions.ts#L35
AssertionError: expected { contexts: { …(8) }, …(14) } to match object { event_id: Any<String>, …(5) } (89 matching properties omitted from actual) - Expected + Received @@ -1,13 +1,14 @@ { "event_id": Any<String>, - "spans": ArrayContaining [ - ObjectContaining { - "data": ObjectContaining { + "spans": [ + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.max_tokens": 100, - "gen_ai.request.messages": "[{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"What is the capital of France?\"}]", + "gen_ai.request.messages.original_length": 2, "gen_ai.request.model": "claude-3-haiku-20240307", "gen_ai.request.temperature": 0.7, "gen_ai.response.id": "msg_mock123", "gen_ai.response.model": "claude-3-haiku-20240307", "gen_ai.response.text": "Hello from Anthropic mock!", @@ -19,94 +20,150 @@ "sentry.origin": "auto.ai.anthropic", }, "description": "messages claude-3-haiku-20240307", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "23ae0b059fddcb60", + "span_id": "9de56f0fabcdc9e0", + "start_timestamp": 1768924609.538, "status": "ok", + "timestamp": 1768924609.595514, + "trace_id": "46702ff41c73638bd73fc91a4b9d3501", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 247, "http.response.status_code": 200, + "network.peer.address": "::1", + "network.peer.port": 43835, "otel.kind": "CLIENT", "sentry.op": "http.client", "sentry.origin": "auto.http.otel.node_fetch", + "server.address": "localhost", + "server.port": 43835, + "url": "http://localhost:43835/anthropic/v1/messages", + "url.full": "http://localhost:43835/anthropic/v1/messages", "url.path": "/anthropic/v1/messages", "url.query": "", "url.scheme": "http", + "user_agent.original": "Anthropic/JS 0.63.0", 
}, + "description": "POST http://localhost:43835/anthropic/v1/messages", "op": "http.client", "origin": "auto.http.otel.node_fetch", + "parent_span_id": "9de56f0fabcdc9e0", + "span_id": "1d4467709c09b901", + "start_timestamp": 1768924609.548, "status": "ok", + "timestamp": 1768924609.59263, + "trace_id": "46702ff41c73638bd73fc91a4b9d3501", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "gen_ai.operation.name": "messages", "gen_ai.request.messages": "[{\"role\":\"user\",\"content\":\"This will fail\"}]", + "gen_ai.request.messages.original_length": 1, "gen_ai.request.model": "error-model", "gen_ai.system": "anthropic", "sentry.op": "gen_ai.messages", "sentry.origin": "auto.ai.anthropic", }, "description": "messages error-model", "op": "gen_ai.messages", "origin": "auto.ai.anthropic", + "parent_span_id": "23ae0b059fddcb60", + "span_id": "9fe0cf9fea4e2f61", + "start_timestamp": 1768924609.596, "status": "internal_error", + "timestamp": 1768924609.6153986, + "trace_id": "46702ff41c73638bd73fc91a4b9d3501", }, - ObjectContaining { - "data": ObjectContaining { + { + "data": { "http.request.method": "POST", "http.request.method_original": "POST", "http.response.header.content-length": 15, "http.response.status_code": 404, + "network.peer.address": "::1", + "network.peer.port": 43835, "otel.kind": "CLIENT", "sentry.op": "http.client",
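The `ArrayContaining`/`ObjectContaining` markers on the expected side of these diffs come from the test runner's asymmetric matchers: extra keys on the actual object (span IDs, timestamps, trace IDs) are ignored, but every key the matcher lists must match exactly. A rough standalone re-implementation of that subset-matching rule, for illustration only (not the test runner's actual code):

```typescript
// Minimal re-implementation of the "objectContaining" subset rule used by
// the failing assertions: every expected key must be present and strictly
// equal on the actual object; extra actual keys are ignored.
function objectContains(
  actual: Record<string, unknown>,
  expected: Record<string, unknown>,
): boolean {
  return Object.entries(expected).every(([key, value]) => actual[key] === value);
}

// The spans above fail this rule because the expected
// 'gen_ai.request.messages' string no longer equals the truncated payload,
// not because of the extra span_id/timestamp keys.
```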
suites/public-api/onUnhandledRejectionIntegration/test.ts > onUnhandledRejectionIntegration > should not close process on unhandled rejection in strict mode: dev-packages/node-integration-tests/suites/public-api/onUnhandledRejectionIntegration/test.ts#L45
Error: Test timed out in 15000ms. If this is a long-running test, pass a timeout value as the last argument or configure it globally with "testTimeout". ❯ suites/public-api/onUnhandledRejectionIntegration/test.ts:45:3
Unhandled error: dev-packages/node-integration-tests/suites/public-api/onUnhandledRejectionIntegration/test.ts#L52
AssertionError: expected null not to be null ❯ suites/public-api/onUnhandledRejectionIntegration/test.ts:52:25 ❯ ChildProcess.exithandler node:child_process:410:7 ❯ ChildProcess.emit node:events:508:28 ❯ maybeClose node:internal/child_process:1101:16 ❯ Process.ChildProcess._handle.onexit node:internal/child_process:305:5 This error originated in "suites/public-api/onUnhandledRejectionIntegration/test.ts" test file. It doesn't mean the error was thrown inside the file itself, but while it was running. The latest test that might've caused the error is "should not close process on unhandled rejection in strict mode". It might mean one of the following: - The error was thrown, while Vitest was running this test. - If the error occurred after the test had been completed, this was the last documented test before it was thrown.
All required jobs passed or were skipped
Process completed with exit code 1.
PW loader_eager Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
The same Codecov deprecation warning (quoted above under "PW loader_eager Tests") was emitted by each of the following jobs:
PW loader_base Tests
PW loader_replay_buffer Tests
PW loader_debug Tests
PW loader_tracing Tests
PW loader_replay Tests
PW loader_tracing_replay Tests
Remix (Node 18) Tests
Remix (Node 20) Tests
Remix (Node 22) Tests
Remix (Node 24) Tests
Node (18) Unit Tests
Node (20) Unit Tests
Node (22) Unit Tests
Node (24) Unit Tests
Browser Unit Tests
Playwright bundle Tests
Playwright bundle_min Tests
Playwright bundle_replay Tests
Playwright bundle_tracing_replay_feedback_min firefox Tests
Playwright esm (1/4) Tests
Playwright esm (2/4) Tests
Playwright esm (3/4) Tests
E2E astro-4 Test
E2E browser-webworker-vite Test
E2E cloudflare-mcp Test
E2E generic-ts5.0 Test
E2E nestjs-8 Test
E2E nestjs-11 Test
E2E nestjs-distributed-tracing Test
E2E nestjs-with-submodules Test
E2E nestjs-with-submodules-decorator Test
E2E node-connect Test
E2E node-core-express-otel-v1 Test
E2E node-core-express-otel-v1-custom-sampler Test
E2E node-core-express-otel-v1-sdk-node Test
E2E node-core-express-otel-v2 Test
E2E node-core-express-otel-v2-custom-sampler Test
E2E node-core-express-otel-v2-sdk-node Test
E2E node-express Test
E2E node-express-cjs-preload Test
E2E node-express-esm-loader Test
E2E node-express-esm-preload Test
E2E node-express-esm-without-loader Test
E2E node-express-incorrect-instrumentation Test
E2E node-express-v5 Test
E2E node-firebase Test
E2E node-hapi Test
E2E node-koa Test
E2E node-otel Test
E2E node-otel-custom-sampler Test
E2E node-otel-sdk-node Test
E2E node-profiling-cjs Test
E2E node-profiling-esm Test
E2E react-19 Test
E2E react-router-7-framework-spa Test
E2E react-router-7-spa Test
E2E solid Test
E2E svelte-5 Test
E2E tsx-express Test
E2E webpack-4 Test
E2E webpack-5 Test
E2E react-router-6-use-routes Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nestjs-basic-with-graphql Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E default-browser Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E astro-5 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E node-fastify-4 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E cloudflare-workers Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E solid-tanstack-router Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E node-otel-without-tracing Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-7-framework-node-20-18 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E generic-ts3.8 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-7-framework Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E create-remix-app-express-vite-dev Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nestjs-graphql Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nestjs-basic Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-5 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nestjs-fastify Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E create-remix-app-v2-non-vite Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-7-framework-custom Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E vue-3 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-17 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E vue-3 (latest) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E sveltekit-cloudflare-pages Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E node-fastify-5 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E create-remix-app-express Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E create-react-app (TS 3.8) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E node-fastify-3 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E vue-tanstack-router Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-create-hash-router Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-7-framework-spa-node-20-18 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-7-spa (TS 3.8) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E sveltekit-2.5.0-twp Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E solidstart Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E angular-17 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E angular-20 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E create-remix-app-v2 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E node-exports-test-app Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-16-userfeedback Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Playwright bundle_tracing_replay_feedback_min webkit Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E sveltekit-2-kit-tracing Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E tanstack-router Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-create-browser-router Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E angular-19 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E solidstart-dynamic-import Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E sveltekit-2 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-7-framework (latest) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nuxt-3-top-level-import Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E angular-21 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-16-cacheComponents Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E solidstart-top-level-import Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-16-tunnel (turbopack) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E create-react-app Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E aws-serverless (Node 20) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-7-cross-usage Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E solidstart-spa Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E remix-hydrogen Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E create-next-app Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Playwright bundle_tracing_replay_feedback_logs_metrics Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E tanstackstart-react Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-6 (TS 3.8) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-create-memory-router Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E angular-18 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-16-tunnel Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E sveltekit-2-svelte-5 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nuxt-3-dynamic-import Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E aws-serverless Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-6-descendant-routes Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E create-next-app (next@13) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nuxt-3 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-15-intl Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E hydrogen-react-router-7 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E ember-embroider Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-6 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-pages-dir (next@13) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E aws-serverless (Node 18) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E ember-classic Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Playwright esm (4/4) Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Playwright bundle_tracing Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Playwright bundle_tracing_logs_metrics Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-pages-dir Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-13 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-t3 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-15-basepath Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nuxt-3-min Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-14 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-pages-dir (next@15) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-16-tunnel (webpack) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Playwright bundle_tracing_replay Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nuxt-4 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-16 (latest, turbopack) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Playwright bundle_tracing_replay_feedback Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Playwright bundle_tracing_replay_feedback_min Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-16 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E react-router-7-lazy-routes Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-15 Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Node (22) Integration Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Node (24) Integration Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Node (18) Integration Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-16 (webpack) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-16 (latest, webpack) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-app-dir (next@13) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Node (20) Integration Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-app-dir Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E supabase-nextjs Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
E2E nextjs-app-dir (next@15) Test
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
Node (24) (TS 3.8) Integration Tests
This action is being deprecated in favor of 'codecov-action'. Please update CI accordingly to use 'codecov-action@v5' with 'report_type: test_results'. The 'codecov-action' should and can be run at least once for coverage and once for test results
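
The warning points at the fix directly: run 'codecov/codecov-action@v5' once for coverage and once more with 'report_type: test_results'. A minimal sketch of what the replacement steps might look like in a workflow job — step names, the report file path, and the secret name are assumptions, not taken from this repository's build.yml:

```yaml
# Hypothetical replacement for the deprecated test-results action.
# Paths and secret names below are placeholders — adjust to the repo's setup.
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v5
  with:
    token: ${{ secrets.CODECOV_TOKEN }}

- name: Upload test results to Codecov
  if: ${{ !cancelled() }}          # upload results even when tests fail
  uses: codecov/codecov-action@v5
  with:
    report_type: test_results      # value named in the deprecation warning
    files: ./test-results.xml      # assumed JUnit report path
    token: ${{ secrets.CODECOV_TOKEN }}
```

Running the action twice, as the warning suggests, keeps coverage and test-result uploads as separate steps so a failure in one does not block the other.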

Artifacts

Produced during runtime

Name          Status    Size     Digest
build-output  Expired   30.3 MB  sha256:079bf18cbb69865849df616c1315570386489d4782cb4de761575dc6eff366ff