feat(ai-sdk): migrate ai-sdk to opentelemetry 1.40 semantic conventions #924
dvirski wants to merge 4 commits into traceloop:main from
Conversation
📝 Walkthrough
Migrates tracing to OpenTelemetry 1.40 GenAI semantics and updates the AI SDK.
Changes
Sequence Diagram
sequenceDiagram
participant Dev as Developer Script
participant Traceloop as Traceloop SDK
participant VercelAI as Vercel AI SDK
participant Provider as LLM Provider (OpenAI/Anthropic)
participant OTel as OpenTelemetry
Dev->>Traceloop: run sample_vercel_ai_otel140 (withWorkflow/withAgent)
Traceloop->>OTel: open instrumentation context (OTel 1.40)
Traceloop->>VercelAI: call generateText/streamText/generateObject / tool loop
VercelAI->>Provider: send request (OpenAI / Anthropic)
Provider-->>VercelAI: streaming/response (model output, tokens, tool calls)
VercelAI->>Traceloop: emit ai-sdk spans/attributes
Traceloop->>OTel: transform spans -> gen_ai semantic conventions (chat/model, execute_tool, gen_ai.*)
OTel-->>OTel: record finalized spans with gen_ai attributes
Dev->>Dev: validate recorded spans/attributes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/traceloop-sdk/test/ai-sdk/ai-sdk-v5-compatibility.test.ts (1)
505-527: ⚠️ Potential issue | 🟡 Minor

Assert that the tool span exists before validating its attributes.

The `if (toolSpan)` guard makes this test pass even when tool execution tracing regresses and no `execute_tool` span is emitted.

Suggested tightening:

  const toolSpan = spans.find((span) =>
    span.name.startsWith("execute_tool "),
  );
- if (toolSpan) {
-   // Verify tool call input/output are captured
-   assert.ok(
-     toolSpan.attributes[SpanAttributes.TRACELOOP_ENTITY_INPUT],
-     "Tool span should have entity input",
-   );
-   assert.ok(
-     toolSpan.attributes[SpanAttributes.TRACELOOP_ENTITY_OUTPUT],
-     "Tool span should have entity output",
-   );
-   assert.strictEqual(
-     toolSpan.attributes[SpanAttributes.TRACELOOP_SPAN_KIND],
-     "tool",
-   );
-   assert.strictEqual(
-     toolSpan.attributes[SpanAttributes.TRACELOOP_ENTITY_NAME],
-     "getWeather",
-   );
- }
+ assert.ok(toolSpan, "Tool execution span should exist");
+ assert.ok(
+   toolSpan.attributes[SpanAttributes.TRACELOOP_ENTITY_INPUT],
+   "Tool span should have entity input",
+ );
+ assert.ok(
+   toolSpan.attributes[SpanAttributes.TRACELOOP_ENTITY_OUTPUT],
+   "Tool span should have entity output",
+ );
+ assert.strictEqual(
+   toolSpan.attributes[SpanAttributes.TRACELOOP_SPAN_KIND],
+   "tool",
+ );
+ assert.strictEqual(
+   toolSpan.attributes[SpanAttributes.TRACELOOP_ENTITY_NAME],
+   "getWeather",
+ );

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/traceloop-sdk/test/ai-sdk/ai-sdk-v5-compatibility.test.ts` around lines 505 - 527, The test currently skips validation when no tool span exists because of the `if (toolSpan)` guard; replace it by asserting the span exists first (e.g., assert.ok(toolSpan, "expected execute_tool span") ) and then validate attributes on `toolSpan` (SpanAttributes.TRACELOOP_ENTITY_INPUT, TRACELOOP_ENTITY_OUTPUT, TRACELOOP_SPAN_KIND, TRACELOOP_ENTITY_NAME) to ensure the test fails when an `execute_tool` span is missing and still checks the attribute expectations.
🧹 Nitpick comments (2)
packages/traceloop-sdk/package.json (1)
116-116: Exact version pinning differs from other dependencies.

The `ai` package uses exact pinning (6.0.132) while other dependencies use caret ranges (^). Consider using `^6.0.132` for consistency, unless there's a specific reason to lock this exact version (e.g., known issues with newer minor versions).

Proposed change for consistency:

- "ai": "6.0.132",
+ "ai": "^6.0.132",

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/traceloop-sdk/package.json` at line 116, The dependency entry for "ai" is pinned to an exact version "6.0.132" while the rest of the dependencies use caret ranges; update the "ai" dependency in package.json to use a caret range (e.g., "^6.0.132") for consistency unless you intentionally need to lock the exact version, by editing the "ai" key in the dependencies section to the caret-prefixed version.

packages/traceloop-sdk/test/ai-sdk/ai-sdk-v5-compatibility.test.ts (1)
114-117: Use imported semconv constants instead of raw `gen_ai.*` keys.

This file already depends on semconv constants elsewhere, so hardcoding these attribute names makes the tests more likely to drift during future renames.

As per coding guidelines, "Import AI/LLM semantic attribute constants from `@traceloop/ai-semantic-conventions` rather than hardcoding strings".

Also applies to: 137-140, 163-166, 193-197, 217-221, 238-242, 260-263, 493-499
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/traceloop-sdk/test/ai-sdk/ai-sdk-v5-compatibility.test.ts` around lines 114 - 117, The test is using hardcoded attribute keys like attributes[`gen_ai.tool.definitions`] which should be replaced with the exported semconv constants from `@traceloop/ai-semantic-conventions`; update all occurrences (e.g., the usages around the toolDefs assignment and the other lines called out) to import the appropriate constant(s) (for example the TOOL_DEFINITIONS/GEN_AI_* constants) and reference them (e.g., attributes[SEMCONV_CONSTANT]) so the tests use the canonical semantic attribute names instead of raw strings.
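The pattern the nitpick asks for can be sketched as below. Note this is a minimal illustration, not the package's real API: `GEN_AI_TOOL_DEFINITIONS` is a hypothetical constant name standing in for whatever `@traceloop/ai-semantic-conventions` actually exports for `gen_ai.tool.definitions`.

```typescript
// Hypothetical constant table standing in for the real exports of
// @traceloop/ai-semantic-conventions; verify the actual constant name
// before adopting this pattern.
const SpanAttributes = {
  GEN_AI_TOOL_DEFINITIONS: "gen_ai.tool.definitions",
} as const;

// Span attributes as the test would observe them.
const attributes: Record<string, string> = {
  "gen_ai.tool.definitions": JSON.stringify([{ name: "getWeather" }]),
};

// Before: attributes["gen_ai.tool.definitions"] — a raw string that silently
// breaks if the semconv key is renamed.
// After: resolved through the shared constant, so a rename is a one-line edit.
const toolDefs = attributes[SpanAttributes.GEN_AI_TOOL_DEFINITIONS];
```

A future semconv rename then only touches the constant definition, and every test referencing it follows along.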
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/sample-app/src/sample_vercel_ai_otel140.ts`:
- Around line 99-102: The logs are referencing non-existent properties
promptTokens and completionTokens on result.usage; update all places that read
result.usage.promptTokens -> result.usage.inputTokens and
result.usage.completionTokens -> result.usage.outputTokens (e.g., the
console.log calls that print Tokens and any other occurrences later in the file
where result.usage is accessed) so the logs show actual token counts from
ai@6.0.132.
In
`@packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-create-a-dataset-with-file-column-type_3148476545/recording.har`:
- Around line 38-43: The HAR recording used by the test "should create a dataset
with file column type" is returning a 401 Unauthorized which makes the assertion
on dataset.slug fail; update the recording(s) (e.g.,
Attachment-API-Integration-Tests.../recording.har) or the test fixture to
include a successful 200 response body matching the expected create-dataset
payload, and ensure the test's authentication setup (token/mock auth middleware)
is applied when generating the recording so future runs replay an authenticated
response; specifically regenerate or edit the recordings for the dataset-related
tests referenced (including the recording with Unauthorized text) so they
contain a valid dataset JSON with a slug and status 200.
In
`@packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-create-a-new-dataset_1486295619/recording.har`:
- Around line 38-43: The recording for the test "should create a new dataset"
contains a 401 Unauthorized response body {"error":"Unauthorized"} which will
fail in replay; open the test named should create a new dataset in
datasets-comprehensive.test.ts and replace the failing HAR recording
(Dataset-API-Comprehensive-Tests.../should-create-a-new-dataset_1486295619/recording.har)
by re-recording the interaction against the correct environment with valid
credentials, or restore the previous successful recording that contains the
successful dataset creation response (so assertions on result.datasets pass);
ensure the test harness that strips Authorization headers still runs with valid
auth during recording to avoid future 401s.
In
`@packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-list-all-datasets_3088053986/recording.har`:
- Around line 29-34: The recording used by the test "should list all datasets"
contains a 401 Unauthorized response, causing the test to fail; re-run and
re-record the Dataset listing integration test with valid credentials so the HAR
for that test (and the related failing dataset HARs referenced as 102–103)
captures a successful 200 response containing { datasets: [...], total: number
}; replace the recorded response payload used by the test harness and commit the
new recordings so Array.isArray(result.datasets) and the total assertion pass.
In
`@packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-column-data_784587577/recording.har`:
- Around line 38-43: The recording for the POST /v2/datasets call in
recording.har returns 401 which breaks the test "should handle invalid column
data" in datasets-comprehensive.test.ts; re-capture the HTTP fixture so the
initial POST /v2/datasets succeeds (200/created) with valid authentication and
the created dataset payload, ensuring the auth header/token used during capture
matches the test runner credentials, then replace the failing recording entry in
recording.har (the POST /v2/datasets interaction) with the new successful
capture and re-run the test to verify the invalid-column branch executes.
In
`@packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-dataset-slug_1145052198/recording.har`:
- Around line 29-34: The recorded HTTP response for the test "should handle
invalid dataset slug" currently shows a 401 Unauthorized (response.content.text
"{\"error\":\"Unauthorized\"}") which masks the intended 404 Dataset-not-found
behavior; update the HAR recording for that test so the response.status (or
equivalent response object) is 404 and response.content.text is
"{\"error\":\"Dataset not found\"}", and if authentication is causing the 401
regenerate the recording while disabling auth or use the test's mock/stub path
to ensure the server returns the 404 for the non-existent slug (look for the
recording entry named recording.har and the
response.content.text/response.status fields to change).
In
`@packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-row-data_1860613701/recording.har`:
- Around line 36-44: The test "should-handle-invalid-row-data" is hitting a
mocked 401 on the initial POST /v2/datasets so it never reaches the invalid-row
branch when calling tempDataset.addRow; update the test to either (a) ensure the
POST /v2/datasets mock returns a successful creation response (so tempDataset
exists and addRow triggers the invalid-row path), or (b) remove the explicit
dataset creation step and call tempDataset.addRow against the fixture that
returns the invalid-row response, making sure to set up a fake dataset
id/cleanup path if needed; locate the test by the name
"should-handle-invalid-row-data" and the tempDataset.addRow call to apply the
change (same fix for the other affected test case).
In `@packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts`:
- Around line 793-799: transformAiSdkSpanNames currently only renames spans
whose name equals TOOL_SPAN_NAME ("ai.toolCall"), causing other spans ending
with ".tool" to keep legacy names; update transformAiSdkSpanNames to detect
span.name.endsWith(".tool") (same logic as transformOperationName) and call
span.updateName using GEN_AI_OPERATION_NAME_VALUE_EXECUTE_TOOL plus the tool
name when available (use the same attribute key "ai.toolCall.name" or fallback),
and apply the same change for the other similar block handling tool spans so all
".tool" spans are renamed consistently.
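The rename logic described in this prompt can be sketched as follows. The span shape is simplified, and the value of `GEN_AI_OPERATION_NAME_VALUE_EXECUTE_TOOL` ("execute_tool") is assumed from the OTel GenAI conventions; the attribute key "ai.toolCall.name" is taken from the review comment itself.

```typescript
// Assumed value per OTel GenAI semantic conventions.
const GEN_AI_OPERATION_NAME_VALUE_EXECUTE_TOOL = "execute_tool";

// Simplified stand-in for an OTel ReadableSpan/Span.
interface SpanLike {
  name: string;
  attributes: Record<string, unknown>;
  updateName(name: string): void;
}

function transformToolSpanName(span: SpanLike): void {
  // Match any ai-sdk tool span (e.g. "ai.toolCall" or "*.tool"),
  // not only the exact TOOL_SPAN_NAME literal.
  if (span.name === "ai.toolCall" || span.name.endsWith(".tool")) {
    const toolName = span.attributes["ai.toolCall.name"];
    span.updateName(
      toolName
        ? `${GEN_AI_OPERATION_NAME_VALUE_EXECUTE_TOOL} ${String(toolName)}`
        : GEN_AI_OPERATION_NAME_VALUE_EXECUTE_TOOL,
    );
  }
}
```

With this shape, a span named "myAgent.tool" carrying "ai.toolCall.name": "getWeather" is renamed to "execute_tool getWeather", matching the convention used by the other renamed spans.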
- Around line 339-345: The current loops (messages.forEach and the similar loop
at 370-375) always wrap message content as a single part with TYPE_TEXT after
calling processMessageContent, which causes structured parts (tools, images,
etc.) to be lost; update the logic in the message-handling blocks to detect when
processMessageContent returns structured parts (e.g., an array of {type,
content} or an object that already represents parts) and push those parts
directly into inputMessages (or the equivalent output) instead of
JSON.stringify-ing and forcing TYPE_TEXT, and only fall back to creating a
single { type: TYPE_TEXT, content: ... } part when the content is truly plain
text or a primitive; apply the same preservation change to the other loop
mentioned (lines ~370-375).
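The part-preserving behavior this prompt asks for can be sketched as below. The `{ type, content }` part shape and the `TYPE_TEXT` constant mirror the review's wording and may differ from the real types in ai-sdk-transformations.ts.

```typescript
const TYPE_TEXT = "text";

type MessagePart = { type: string; content: unknown };

// Detect content that processMessageContent already returned as
// structured parts (tool calls, images, ...).
function isPartArray(value: unknown): value is MessagePart[] {
  return (
    Array.isArray(value) &&
    value.every((p) => typeof p === "object" && p !== null && "type" in p)
  );
}

function toParts(processed: unknown): MessagePart[] {
  // Structured parts pass through unchanged instead of being
  // JSON.stringify-ed into a single text part.
  if (isPartArray(processed)) {
    return processed;
  }
  // Plain text or primitives fall back to the single-text-part shape.
  return [
    {
      type: TYPE_TEXT,
      content:
        typeof processed === "string" ? processed : JSON.stringify(processed),
    },
  ];
}
```

Both message loops (~339-345 and ~370-375) would then push `toParts(...)` results directly, so tool-call and image parts keep their original `type` instead of collapsing to text.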
---
Outside diff comments:
In `@packages/traceloop-sdk/test/ai-sdk/ai-sdk-v5-compatibility.test.ts`:
- Around line 505-527: The test currently skips validation when no tool span
exists because of the `if (toolSpan)` guard; replace it by asserting the span
exists first (e.g., assert.ok(toolSpan, "expected execute_tool span") ) and then
validate attributes on `toolSpan` (SpanAttributes.TRACELOOP_ENTITY_INPUT,
TRACELOOP_ENTITY_OUTPUT, TRACELOOP_SPAN_KIND, TRACELOOP_ENTITY_NAME) to ensure
the test fails when an `execute_tool` span is missing and still checks the
attribute expectations.
---
Nitpick comments:
In `@packages/traceloop-sdk/package.json`:
- Line 116: The dependency entry for "ai" is pinned to an exact version
"6.0.132" while the rest of the dependencies use caret ranges; update the "ai"
dependency in package.json to use a caret range (e.g., "^6.0.132") for
consistency unless you intentionally need to lock the exact version, by editing
the "ai" key in the dependencies section to the caret-prefixed version.
In `@packages/traceloop-sdk/test/ai-sdk/ai-sdk-v5-compatibility.test.ts`:
- Around line 114-117: The test is using hardcoded attribute keys like
attributes[`gen_ai.tool.definitions`] which should be replaced with the exported
semconv constants from `@traceloop/ai-semantic-conventions`; update all
occurrences (e.g., the usages around the toolDefs assignment and the other lines
called out) to import the appropriate constant(s) (for example the
TOOL_DEFINITIONS/GEN_AI_* constants) and reference them (e.g.,
attributes[SEMCONV_CONSTANT]) so the tests use the canonical semantic attribute
names instead of raw strings.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c595bc0e-90dc-44d3-b0f2-e27c136c3a0b
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (39)
packages/sample-app/package.json
packages/sample-app/src/conversations/sample_chatbot_interactive.ts
packages/sample-app/src/conversations/sample_conversation_id_config.ts
packages/sample-app/src/conversations/sample_with_conversation.ts
packages/sample-app/src/sample_vercel_ai_agent.ts
packages/sample-app/src/sample_vercel_ai_otel140.ts
packages/traceloop-sdk/package.json
packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-create-a-dataset-with-file-column-type_3148476545/recording.har
packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-create-a-new-dataset_1486295619/recording.har
packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-list-all-datasets_3088053986/recording.har
packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-column-data_784587577/recording.har
packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-dataset-slug_1145052198/recording.har
packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-row-data_1860613701/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-preserve-original-AI-SDK-span-name-when-no-agent-metadata-is-provided_1735519430/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-propagate-agent-name-to-tool-call-spans_3577231859/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-properly-scope-agent-names-in-nested-agent-scenarios_1670012146/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-agent-name-for-generateObject-with-agent-metadata_1744675110/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Agent-Integration-with-Recording_2039949225/should-use-agent-name-for-streamText-with-agent-metadata_4019571713/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-capture-OpenAI-provider-spans-correctly-with-recording_3593617962/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-capture-and-transform-OpenAI-cache-tokens-from-providerMetadata_2332139343/recording.har
packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
packages/traceloop-sdk/recordings/Test-Agent-Decorator_2969879889/should-create-spans-for-agents-using-decoration-syntax_1932039671/recording.har
packages/traceloop-sdk/recordings/Test-Agent-Decorator_2969879889/should-create-spans-for-agents-using-withAgent-syntax_3895564654/recording.har
packages/traceloop-sdk/recordings/Test-Agent-Decorator_2969879889/should-propagate-agent-name-to-manual-LLM-instrumentation_2332462647/recording.har
packages/traceloop-sdk/recordings/Test-SDK-Decorators_847855269/should-create-spans-for-manual-LLM-instrumentation_981493419/recording.har
packages/traceloop-sdk/recordings/Test-SDK-Decorators_847855269/should-create-spans-for-workflows-using-decoration-syntax-method-variant_2462514347/recording.har
packages/traceloop-sdk/recordings/Test-SDK-Decorators_847855269/should-create-spans-for-workflows-using-decoration-syntax_3330947443/recording.har
packages/traceloop-sdk/recordings/Test-SDK-Decorators_847855269/should-create-spans-for-workflows-using-withWorkflow-syntax_3788948678/recording.har
packages/traceloop-sdk/recordings/Test-SDK-Decorators_847855269/should-create-workflow-and-tasks-spans-with-chained-entity-names_971051426/recording.har
packages/traceloop-sdk/recordings/Test-SDK-Decorators_847855269/should-fix-Vercel-AI-spans-to-match-OpenLLMetry-format_2061519753/recording.har
packages/traceloop-sdk/recordings/Test-SDK-Decorators_847855269/should-not-create-spans-if-suppressed_3154458667/recording.har
packages/traceloop-sdk/recordings/Test-SDK-Decorators_847855269/should-not-log-prompts-if-traceContent-is-disabled_2300077433/recording.har
packages/traceloop-sdk/recordings/Test-SDK-Decorators_847855269/should-not-mix-association-properties-for-traces-that-run-in-parallel_4012223284/recording.har
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk/ai-sdk-agent-integration.test.ts
packages/traceloop-sdk/test/ai-sdk/ai-sdk-integration.test.ts
packages/traceloop-sdk/test/ai-sdk/ai-sdk-transformations.test.ts
packages/traceloop-sdk/test/ai-sdk/ai-sdk-v5-compatibility.test.ts
packages/traceloop-sdk/test/decorators.test.ts
    console.log(` Response: ${result.text}`);
    console.log(
      ` Tokens: ${result.usage.promptTokens} in / ${result.usage.completionTokens} out`,
    );
🧩 Analysis chain
🌐 Web query:
For the `ai` package version 6.0.132, what properties are exposed on `generateText(...).usage`? Are they `inputTokens`/`outputTokens` or `promptTokens`/`completionTokens`?
💡 Result:
In ai@6.0.132, generateText(...).usage uses inputTokens / outputTokens (not promptTokens / completionTokens). (ai-sdk.dev)
The usage object (type LanguageModelUsage) exposes: (ai-sdk.dev)
- inputTokens
- inputTokenDetails: { noCacheTokens, cacheReadTokens, cacheWriteTokens }
- outputTokens
- outputTokenDetails: { textTokens, reasoningTokens }
- totalTokens
- raw (provider-native usage payload)
- plus deprecated compatibility fields: reasoningTokens?, cachedInputTokens? (app.unpkg.com)
Citations:
- 1: https://ai-sdk.dev/docs/reference/ai-sdk-core/generate-text
- 2: https://ai-sdk.dev/docs/reference/ai-sdk-core/generate-text
- 3: https://app.unpkg.com/ai%406.0.41/files/dist/index.d.ts?utm_source=openai
Use inputTokens and outputTokens instead of non-existent property names.
In ai@6.0.132, the usage object exposes inputTokens and outputTokens, not promptTokens and completionTokens. Update lines 99–102 and 125–128 to access the correct properties; otherwise the logs will print undefined.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/sample-app/src/sample_vercel_ai_otel140.ts` around lines 99 - 102,
The logs are referencing non-existent properties promptTokens and
completionTokens on result.usage; update all places that read
result.usage.promptTokens -> result.usage.inputTokens and
result.usage.completionTokens -> result.usage.outputTokens (e.g., the
console.log calls that print Tokens and any other occurrences later in the file
where result.usage is accessed) so the logs show actual token counts from
ai@6.0.132.
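The corrected logging can be sketched as a small helper. This is an illustration only: `formatTokenLog` is a hypothetical name, and the exact log formatting in sample_vercel_ai_otel140.ts may differ; the field names follow the `LanguageModelUsage` shape confirmed above for ai@6.0.132.

```typescript
// Usage subset relevant here; ai@6.0.132 exposes inputTokens/outputTokens,
// not promptTokens/completionTokens.
interface UsageLike {
  inputTokens?: number;
  outputTokens?: number;
}

// Hypothetical helper mirroring the sample app's console output.
function formatTokenLog(usage: UsageLike): string {
  return `Tokens: ${usage.inputTokens ?? 0} in / ${usage.outputTokens ?? 0} out`;
}
```

Reading the old `promptTokens`/`completionTokens` properties would yield `undefined` here, which is exactly what the review flags in the sample's logs.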
Outdated comments (resolved) on:
...ile-Column_1881713521/should-create-a-dataset-with-file-column-type_3148476545/recording.har
..._1618738334/Dataset-Management_51424618/should-create-a-new-dataset_1486295619/recording.har
...sts_1618738334/Dataset-Management_51424618/should-list-all-datasets_3088053986/recording.har
...618738334/Error-Handling_835236894/should-handle-invalid-column-data_784587577/recording.har
...8738334/Error-Handling_835236894/should-handle-invalid-dataset-slug_1145052198/recording.har
..._1618738334/Error-Handling_835236894/should-handle-invalid-row-data_1860613701/recording.har
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-add-row-with-buffer-attachment_1229567556/recording.har (1)
211-263: ⚠️ Potential issue | 🟠 Major

Re-record this fixture against a successful attachment upload.

Line 262 now records `404 Not Found` for the `.../cells/document/upload-url` request, so this HAR no longer represents the "should add row with buffer attachment" happy path. Replaying it will keep validating a failed upload flow, with the attachment never being populated. Please re-record against a working backend, or rename/update the test if failure is now intentional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-add-row-with-buffer-attachment_1229567556/recording.har` around lines 211 - 263, The HAR fixture records a 404 for the upload URL (the request to ".../cells/document/upload-url") so the "should add row with buffer attachment" happy-path is invalid; re-run the integration test against a working backend to re-record this recording.har so the upload request returns a successful upload URL and the attachment is populated, or if the failure is intentional update/rename the test ("should-add-row-with-buffer-attachment") and its expectations to reflect the error path instead of the happy path.

packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-set-attachment-on-existing-row_2400858357/recording.har (1)
211-263: ⚠️ Potential issue | 🟠 Major

This attachment-update fixture also replays a failure.

"should set attachment on existing row" now records `404 Not Found` for the external attachment call. That means replay no longer exercises the intended success path for updating an attachment on an existing row. Please either re-record a successful interaction or rename/reassert the test as an error-path case.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-set-attachment-on-existing-row_2400858357/recording.har` around lines 211 - 263, The recorded fixture for the test "should set attachment on existing row" now contains a 404 response for the external attachment call (the request to the /cells/document/external-url endpoint), so the replay exercises a failure path instead of the intended success path; fix by either re-recording the interaction so the recording.har reflects a successful external attachment upload/response and replace the current fixture, or change the test to explicitly assert the error-path (rename the test to indicate a 404 case and update assertions to expect status 404 and the "404 page not found" body) so the fixture and test intent match (adjust any references in the test harness to "should set attachment on existing row" accordingly).

packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-add-row-with-external-attachment_2636152027/recording.har (1)
211-263: ⚠️ Potential issue | 🟠 Major

Happy-path fixture now records a 404.

This scenario is named "should add row with external attachment", but the recorded attachment request now replays `404 Not Found`. In replay mode that turns the success-path test into a failure-path fixture, so this flow is no longer validating external attachments. If the API behavior intentionally changed, the test name/assertions should change as well; otherwise this HAR should be re-recorded against a working attachment endpoint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-add-row-with-external-attachment_2636152027/recording.har` around lines 211 - 263, The recorded HAR for the test scenario "should add row with external attachment" contains a 404 response for the external attachment request (status 404, URL ending in /cells/document/external-url), which turns the happy-path fixture into a failure-path; either re-record the HAR against a working attachment endpoint so the request returns success, or update the test name and assertions in the "should add row with external attachment" scenario to expect and assert the 404 behavior; locate the failing fixture in the recording.har for this scenario and replace the recorded response by re-running the test against the correct API or adjust the test assertions to match the new API behavior.
🧹 Nitpick comments (1)
packages/traceloop-sdk/test/datasets-comprehensive.test.ts (1)
43-46: Extract the replay base URL into a shared test constant/helper.

This file now replays against `api.traceloop.dev`, while `packages/traceloop-sdk/test/experiment-export.test.ts` still hardcodes `api-staging.traceloop.com`. Keeping these literals scattered will make the Polly fixtures drift again the next time the replay host changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/traceloop-sdk/test/datasets-comprehensive.test.ts` around lines 43 - 46, Extract the replay base URL logic (the const baseUrl variable) into a shared test constant/helper (e.g., export TEST_REPLAY_BASE_URL from a test utilities module) and replace the hardcoded literals in both tests with that import; implement the helper to return process.env.TRACELOOP_BASE_URL when RECORD_MODE === "NEW" (or fall back to a single canonical default like "https://api.traceloop.dev"/chosen replay host) so both datasets-comprehensive.test.ts's baseUrl and experiment-export.test.ts no longer hardcode different hosts and instead import TEST_REPLAY_BASE_URL.
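The helper this prompt describes can be sketched as below. Treat the names as assumptions: `replayBaseUrl`, `TEST_REPLAY_BASE_URL`, the env-var names, and the default host are taken from the review comment's wording, not from existing repository code.

```typescript
// Hypothetical shared test helper; env-var names and the default replay
// host below come from the review comment, not the real codebase.
export function replayBaseUrl(
  env: Record<string, string | undefined> = process.env,
): string {
  // In record mode ("NEW"), hit the real backend from TRACELOOP_BASE_URL;
  // otherwise replay against the single canonical recorded host.
  if (env.RECORD_MODE === "NEW" && env.TRACELOOP_BASE_URL) {
    return env.TRACELOOP_BASE_URL;
  }
  return "https://api.traceloop.dev";
}

export const TEST_REPLAY_BASE_URL = replayBaseUrl();
```

Both datasets-comprehensive.test.ts and experiment-export.test.ts would then import `TEST_REPLAY_BASE_URL` instead of hardcoding different hosts.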
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Advanced-Dataset-Operations_4052612895/should-get-specific-dataset-version_4292461450/recording.har`:
- Line 34: The recorded fixture returns last_version "v1" while the test calls
getVersion("1.0.0"), causing silent misses in replay; either re-record the
fixture so the response includes the expected normalized version "1.0.0" for
last_version and versions[*].version, or change the test in
datasets-comprehensive.test.ts to normalize/assert the version explicitly (e.g.,
call/compare using the same normalized string or assert getVersion("1.0.0") ===
getVersion("v1") / check for existence rather than relying on implicit matching)
and update calls to getVersion to use the canonical version used in the fixture.
---
Outside diff comments:
In
`@packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-add-row-with-buffer-attachment_1229567556/recording.har`:
- Around line 211-263: The HAR fixture records a 404 for the upload URL (the
request to ".../cells/document/upload-url") so the "should add row with buffer
attachment" happy-path is invalid; re-run the integration test against a working
backend to re-record this recording.har so the upload request returns a
successful upload URL and the attachment is populated, or if the failure is
intentional update/rename the test ("should-add-row-with-buffer-attachment") and
its expectations to reflect the error path instead of the happy path.
In
`@packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-add-row-with-external-attachment_2636152027/recording.har`:
- Around line 211-263: The recorded HAR for the test scenario "should add row
with external attachment" contains a 404 response for the external attachment
request (status 404, URL ending in /cells/document/external-url), which turns
the happy-path fixture into a failure-path; either re-record the HAR against a
working attachment endpoint so the request returns success, or update the test
name and assertions in the "should add row with external attachment" scenario to
expect and assert the 404 behavior; locate the failing fixture in the
recording.har for this scenario and replace the recorded response by re-running
the test against the correct API or adjust the test assertions to match the new
API behavior.
In
`@packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-set-attachment-on-existing-row_2400858357/recording.har`:
- Around line 211-263: The recorded fixture for the test "should set attachment
on existing row" now contains a 404 response for the external attachment call
(the request to the /cells/document/external-url endpoint), so the replay
exercises a failure path instead of the intended success path; fix by either
re-recording the interaction so the recording.har reflects a successful external
attachment upload/response and replace the current fixture, or change the test
to explicitly assert the error-path (rename the test to indicate a 404 case and
update assertions to expect status 404 and the "404 page not found" body) so the
fixture and test intent match (adjust any references in the test harness to
"should set attachment on existing row" accordingly).
---
Nitpick comments:
In `@packages/traceloop-sdk/test/datasets-comprehensive.test.ts`:
- Around line 43-46: Extract the replay base URL logic (the const baseUrl
variable) into a shared test constant/helper (e.g., export TEST_REPLAY_BASE_URL
from a test utilities module) and replace the hardcoded literals in both tests
with that import; implement the helper to return process.env.TRACELOOP_BASE_URL
when RECORD_MODE === "NEW" (or fall back to a single canonical default like
"https://api.traceloop.dev"/chosen replay host) so both
datasets-comprehensive.test.ts's baseUrl and experiment-export.test.ts no longer
hardcode different hosts and instead import TEST_REPLAY_BASE_URL.
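The suggested helper could look like the sketch below. Only `TEST_REPLAY_BASE_URL`, `RECORD_MODE`, and `TRACELOOP_BASE_URL` come from the nitpick itself; the function name and the fallback host are illustrative assumptions:

```typescript
// Hypothetical shared test helper (e.g. in a test utilities module).
// RECORD_MODE and TRACELOOP_BASE_URL are the env vars named in the nitpick;
// the fallback host is an assumed canonical default replay host.
function getReplayBaseUrl(env: Record<string, string | undefined>): string {
  if (env.RECORD_MODE === "NEW" && env.TRACELOOP_BASE_URL) {
    return env.TRACELOOP_BASE_URL;
  }
  return "https://api.traceloop.dev";
}

// Both test files would then import the single constant:
// export const TEST_REPLAY_BASE_URL = getReplayBaseUrl(process.env);
```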
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4d2f3c13-8398-410c-92dd-1e419d9daf53
📒 Files selected for processing (36)
- packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-add-columns-including-file-type_3412481455/recording.har
- packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-add-row-with-buffer-attachment_1229567556/recording.har
- packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-add-row-with-external-attachment_2636152027/recording.har
- packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-clean-up-test-dataset_1190036032/recording.har
- packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-create-a-dataset-with-file-column-type_3148476545/recording.har
- packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-set-attachment-on-existing-row_2400858357/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Advanced-Dataset-Operations_4052612895/should-get-dataset-versions_4064477879/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Advanced-Dataset-Operations_4052612895/should-get-specific-dataset-version_4292461450/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Advanced-Dataset-Operations_4052612895/should-import-CSV-data_539062713/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Advanced-Dataset-Operations_4052612895/should-publish-dataset_2980768709/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Cleanup-Operations_1460444363/should-delete-column_4007047377/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Cleanup-Operations_1460444363/should-delete-dataset_3700252497/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Cleanup-Operations_1460444363/should-delete-row_2042462769/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Column-Management_4224324438/should-add-columns-to-dataset_1128156327/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Column-Management_4224324438/should-convert-column-values_4048403397/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Column-Management_4224324438/should-get-all-columns-from-dataset_2308882270/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Column-Management_4224324438/should-update-column_3018839303/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Column-Management_4224324438/should-validate-column-values_2577281684/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-create-a-new-dataset_1486295619/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-get-dataset-by-slug_1748151842/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-list-all-datasets_3088053986/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-refresh-dataset-data_3799145775/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-update-dataset_4001908675/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-column-data_784587577/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-dataset-slug_1145052198/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-row-data_1860613701/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-add-multiple-rows-to-dataset_1312032261/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-add-single-row-to-dataset_1448807104/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-clone-row_952393791/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-convert-row-to-CSV-format_323021641/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-get-rows-from-dataset_2825323369/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-partial-update-row-data_3739884320/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-refresh-row-data_3010292703/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-update-row-data_1291762897/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-validate-row-data_344186500/recording.har
- packages/traceloop-sdk/test/datasets-comprehensive.test.ts
✅ Files skipped from review due to trivial changes (19)
- packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-clean-up-test-dataset_1190036032/recording.har
- packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-add-columns-including-file-type_3412481455/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-update-dataset_4001908675/recording.har
- packages/traceloop-sdk/recordings/Attachment-API-Integration-Tests_3751859535/Dataset-with-File-Column_1881713521/should-create-a-dataset-with-file-column-type_3148476545/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-add-single-row-to-dataset_1448807104/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Column-Management_4224324438/should-update-column_3018839303/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Advanced-Dataset-Operations_4052612895/should-publish-dataset_2980768709/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Cleanup-Operations_1460444363/should-delete-dataset_3700252497/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-dataset-slug_1145052198/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Column-Management_4224324438/should-get-all-columns-from-dataset_2308882270/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-refresh-row-data_3010292703/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-list-all-datasets_3088053986/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-convert-row-to-CSV-format_323021641/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-column-data_784587577/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-validate-row-data_344186500/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-refresh-dataset-data_3799145775/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Column-Management_4224324438/should-add-columns-to-dataset_1128156327/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Column-Management_4224324438/should-validate-column-values_2577281684/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Row-Management_2874258994/should-clone-row_952393791/recording.har
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Dataset-Management_51424618/should-create-a-new-dataset_1486295619/recording.har
- packages/traceloop-sdk/recordings/Dataset-API-Comprehensive-Tests_1618738334/Error-Handling_835236894/should-handle-invalid-row-data_1860613701/recording.har
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/instrumentation-utils/src/content-block-mappers.ts`:
- Around line 459-464: When handling string fileSrc in the data-URI branch of
the mapper, guard against malformed data URIs that lack a comma so data can be
undefined; in the block that checks fileSrc.startsWith("data:"), validate that
the split on "," yields a second element (or check fileSrc.indexOf(",") !== -1)
before using the data part, and if missing return a safe fallback (e.g., treat
as uri or return an error/empty blob) so the mapper (the code handling fileSrc
-> { type: "blob", modality, mime_type, content } / { type: "uri", modality,
uri, mime_type }) never sets content to undefined.
- Around line 406-414: The data-URI handling branch can produce content:
undefined when imgSrc.startsWith("data:") but has no comma; update the logic
around imgSrc.split(",") so you safely parse the header and data (e.g., derive
header = parts[0], data = parts[1] ?? "" or use substring after first comma) and
preserve detectedMime (from header.match or fallback to mimeType); then return
the same object shape (type: "blob", modality: "image", mime_type if present,
content) but with a guaranteed string content (empty string fallback) instead of
undefined. Ensure this change touches the code that computes detectedMime and
the returned object in the data-URI branch (symbols: imgSrc, detectedMime,
mimeType, content).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d08c8382-58ed-4ae5-83fe-9837fbc305c9
⛔ Files ignored due to path filters (1)
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (6)
- packages/instrumentation-utils/src/content-block-mappers.ts
- packages/instrumentation-utils/src/index.ts
- packages/traceloop-sdk/package.json
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/traceloop-sdk/test/ai-sdk/ai-sdk-transformations.test.ts
- packages/traceloop-sdk/test/datasets-comprehensive.test.ts
✅ Files skipped from review due to trivial changes (1)
- packages/traceloop-sdk/package.json
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/traceloop-sdk/test/datasets-comprehensive.test.ts
- packages/traceloop-sdk/test/ai-sdk/ai-sdk-transformations.test.ts
if (imgSrc.startsWith("data:")) {
  const [header, data] = imgSrc.split(",");
  const detectedMime = header.match(/data:([^;]+)/)?.[1] ?? mimeType;
  return {
    type: "blob",
    modality: "image",
    ...(detectedMime ? { mime_type: detectedMime } : {}),
    content: data,
  };
Edge case: malformed data URI could yield undefined content.
If imgSrc is a string starting with "data:" but contains no comma (e.g., "data:image/png;base64"), split(",") returns a single-element array, making data undefined. The resulting content: undefined would be serialized as-is.
Consider adding a fallback:
🛡️ Proposed defensive fix
if (imgSrc.startsWith("data:")) {
const [header, data] = imgSrc.split(",");
const detectedMime = header.match(/data:([^;]+)/)?.[1] ?? mimeType;
return {
type: "blob",
modality: "image",
...(detectedMime ? { mime_type: detectedMime } : {}),
- content: data,
+ content: data ?? "",
};
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if (imgSrc.startsWith("data:")) { | |
| const [header, data] = imgSrc.split(","); | |
| const detectedMime = header.match(/data:([^;]+)/)?.[1] ?? mimeType; | |
| return { | |
| type: "blob", | |
| modality: "image", | |
| ...(detectedMime ? { mime_type: detectedMime } : {}), | |
| content: data, | |
| }; | |
| if (imgSrc.startsWith("data:")) { | |
| const [header, data] = imgSrc.split(","); | |
| const detectedMime = header.match(/data:([^;]+)/)?.[1] ?? mimeType; | |
| return { | |
| type: "blob", | |
| modality: "image", | |
| ...(detectedMime ? { mime_type: detectedMime } : {}), | |
| content: data ?? "", | |
| }; |
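The failure mode is easy to reproduce outside the mapper; `parseDataUri` below is a hypothetical standalone helper, not code from the PR:

```typescript
// Standalone repro of the edge case: destructuring split(",") on a data URI
// that contains no comma leaves the payload element undefined.
function parseDataUri(src: string): { mimeType?: string; content: string } {
  const [header, data] = src.split(",");
  return {
    mimeType: header.match(/data:([^;]+)/)?.[1],
    content: data ?? "", // the ?? "" fallback guarantees a string payload
  };
}

// Well-formed URI: payload is the base64 segment after the comma.
const ok = parseDataUri("data:image/png;base64,iVBORw0KGgo=");
// Malformed URI (no comma): without the fallback, content would be undefined.
const malformed = parseDataUri("data:image/png;base64");
```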
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/instrumentation-utils/src/content-block-mappers.ts` around lines 406
- 414, The data-URI handling branch can produce content: undefined when
imgSrc.startsWith("data:") but has no comma; update the logic around
imgSrc.split(",") so you safely parse the header and data (e.g., derive header =
parts[0], data = parts[1] ?? "" or use substring after first comma) and preserve
detectedMime (from header.match or fallback to mimeType); then return the same
object shape (type: "blob", modality: "image", mime_type if present, content)
but with a guaranteed string content (empty string fallback) instead of
undefined. Ensure this change touches the code that computes detectedMime and
the returned object in the data-URI branch (symbols: imgSrc, detectedMime,
mimeType, content).
if (typeof fileSrc === "string") {
  if (fileSrc.startsWith("data:")) {
    const [, data] = fileSrc.split(",");
    return { type: "blob", modality, mime_type: mimeType, content: data };
  }
  return { type: "uri", modality, uri: fileSrc, mime_type: mimeType };
Same edge case applies to file data URI parsing.
Consistent with the image handling above, a malformed data URI without a comma would leave data undefined.
🛡️ Proposed defensive fix
if (fileSrc.startsWith("data:")) {
const [, data] = fileSrc.split(",");
- return { type: "blob", modality, mime_type: mimeType, content: data };
+ return { type: "blob", modality, mime_type: mimeType, content: data ?? "" };
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (typeof fileSrc === "string") {
  if (fileSrc.startsWith("data:")) {
    const [, data] = fileSrc.split(",");
    return { type: "blob", modality, mime_type: mimeType, content: data ?? "" };
  }
  return { type: "uri", modality, uri: fileSrc, mime_type: mimeType };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/instrumentation-utils/src/content-block-mappers.ts` around lines 459
- 464, When handling string fileSrc in the data-URI branch of the mapper, guard
against malformed data URIs that lack a comma so data can be undefined; in the
block that checks fileSrc.startsWith("data:"), validate that the split on ","
yields a second element (or check fileSrc.indexOf(",") !== -1) before using the
data part, and if missing return a safe fallback (e.g., treat as uri or return
an error/empty blob) so the mapper (the code handling fileSrc -> { type: "blob",
modality, mime_type, content } / { type: "uri", modality, uri, mime_type })
never sets content to undefined.
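Since the image and file branches share the same parsing logic, one option is a single guarded helper. This sketch is illustrative and not part of the PR; the `ContentBlock` shape and `mapStringSource` name are assumptions that mirror the objects returned in the reviewed code:

```typescript
type ContentBlock =
  | { type: "blob"; modality: string; mime_type?: string; content: string }
  | { type: "uri"; modality: string; uri: string; mime_type?: string };

// Hypothetical shared mapper for string sources; indexOf guards against
// data URIs with no comma so content is always a string.
function mapStringSource(
  src: string,
  modality: string,
  mimeType?: string,
): ContentBlock {
  if (src.startsWith("data:")) {
    const commaIndex = src.indexOf(",");
    const header = commaIndex === -1 ? src : src.slice(0, commaIndex);
    const data = commaIndex === -1 ? "" : src.slice(commaIndex + 1);
    const detectedMime = header.match(/data:([^;]+)/)?.[1] ?? mimeType;
    return {
      type: "blob",
      modality,
      ...(detectedMime ? { mime_type: detectedMime } : {}),
      content: data,
    };
  }
  return { type: "uri", modality, uri: src, mime_type: mimeType };
}
```

The image branch would then call `mapStringSource(imgSrc, "image", mimeType)` and the file branch `mapStringSource(fileSrc, modality, mimeType)`, keeping the comma guard in one place.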
Summary by CodeRabbit
New Features
Improvements
Dependencies
Tests / Recordings