v5.101.0 proposal (#8282)
Merged
…ss 2 directories with 3 updates (#8259)

Bumps the runtime-minor-and-patch-dependencies group with 2 updates in the / directory: [@datadog/openfeature-node-server](https://github.com/DataDog/openfeature-js-client/tree/HEAD/packages/node-server) and [oxc-parser](https://github.com/oxc-project/oxc/tree/HEAD/napi/parser). Bumps the runtime-minor-and-patch-dependencies group with 1 update in the /.github/actions/push_to_test_optimization directory: [@datadog/datadog-ci](https://github.com/DataDog/datadog-ci/tree/HEAD/packages/datadog-ci).

Updates `@datadog/openfeature-node-server` from 1.1.1 to 1.1.2
- [Release notes](https://github.com/DataDog/openfeature-js-client/releases)
- [Changelog](https://github.com/DataDog/openfeature-js-client/blob/main/CHANGELOG.md)
- [Commits](https://github.com/DataDog/openfeature-js-client/commits/v1.1.2/packages/node-server)

Updates `oxc-parser` from 0.127.0 to 0.128.0
- [Release notes](https://github.com/oxc-project/oxc/releases)
- [Changelog](https://github.com/oxc-project/oxc/blob/main/napi/parser/CHANGELOG.md)
- [Commits](https://github.com/oxc-project/oxc/commits/crates_v0.128.0/napi/parser)

Updates `@datadog/datadog-ci` from 5.15.0 to 5.16.0
- [Release notes](https://github.com/DataDog/datadog-ci/releases)
- [Commits](https://github.com/DataDog/datadog-ci/commits/v5.16.0/packages/datadog-ci)

---
updated-dependencies:
- dependency-name: "@datadog/openfeature-node-server"
  dependency-version: 1.1.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: runtime-minor-and-patch-dependencies
- dependency-name: oxc-parser
  dependency-version: 0.128.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: runtime-minor-and-patch-dependencies
- dependency-name: "@datadog/datadog-ci"
  dependency-version: 5.16.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: runtime-minor-and-patch-dependencies
...
Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* feat(llmobs): add tool_definitions support to Tagger
Introduces `_ml_obs.meta.tool_definitions` for LLM spans, mirroring the
shape already produced by dd-trace-py. Each definition is
`{ name, description, schema }`.
- `constants/tags.js`: new `TOOL_DEFINITIONS` tag key.
- `tagger.js`: new `tagToolDefinitions(span, toolDefinitions)` that
validates entries and stores them in the registry. Matches the
`#filterToolCalls` validation style.
- `span_processor.js`: forwards the registry entry to
`meta.tool_definitions` on LLM spans.
- `test/llmobs/util.js`: assertion support for a new
`toolDefinitions` expected field.
No integration plugin emits tool definitions yet; this unblocks
Bedrock Converse (see follow-up PR) and opens the door for
parity follow-ups on openai/anthropic/langchain/ai-sdk.
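A minimal sketch of the validation pattern described above. The helper name, failure message, and the choice of `name` as the only hard-required field are assumptions for illustration, not the actual tagger code:

```javascript
// Hypothetical sketch -- mirrors the described `{ name, description, schema }`
// shape; not the dd-trace-js implementation.
function filterToolDefinitions (toolDefinitions, handleFailure) {
  if (!Array.isArray(toolDefinitions) || toolDefinitions.length === 0) {
    // malformed input routes through the failure handler instead of throwing
    handleFailure('Tool definitions must be a non-empty array')
    return
  }
  const filtered = []
  for (const def of toolDefinitions) {
    const { name, description, schema } = def ?? {}
    if (typeof name !== 'string') continue // assumed: name is required
    const entry = { name }
    if (typeof description === 'string') entry.description = description
    if (schema && typeof schema === 'object') entry.schema = schema
    filtered.push(entry)
  }
  return filtered
}
```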
* fix(llmobs): sanitize tool_definitions through serialization guard
Route `tool_definitions` through `#addObject` before assigning to
`meta`, matching the existing `metadata` path. Direct assignment
bypassed the circular-ref / BigInt substitution done via
`UNSERIALIZABLE_VALUE_TEXT`, so a user-provided schema with a cycle
or unserializable value would make `JSON.stringify(event)` throw in
the span writer and the whole LLMObs span would get dropped.
This is a generic-infra safety fix: the current Bedrock caller is
safe by construction (the AWS SDK JSON.stringifies the request
before send), but `tagToolDefinitions` is meant to be reused by
other plugins and eventually the public `llmobs.annotate()` surface
where the upstream-already-serialized guarantee doesn't hold.
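The failure mode is easy to reproduce. The `addObject` below is a simplified stand-in for the real serialization guard (the actual helper and `UNSERIALIZABLE_VALUE_TEXT` constant live in dd-trace-js and may differ in detail):

```javascript
// Simplified stand-in for the serialization guard described above.
const UNSERIALIZABLE_VALUE_TEXT = 'Unserializable value'

function addObject (obj) {
  const seen = new WeakSet()
  function visit (value) {
    if (typeof value === 'bigint') return UNSERIALIZABLE_VALUE_TEXT
    if (value === null || typeof value !== 'object') return value
    if (seen.has(value)) return UNSERIALIZABLE_VALUE_TEXT // break the cycle
    seen.add(value)
    const out = Array.isArray(value) ? [] : {}
    for (const key of Object.keys(value)) out[key] = visit(value[key])
    return out
  }
  return visit(obj)
}

const schema = { type: 'object' }
schema.self = schema // user-provided cycle

// Direct assignment would make JSON.stringify throw in the span writer:
let threw = false
try { JSON.stringify(schema) } catch { threw = true }

// Routed through the guard, the cycle becomes a placeholder string instead:
const safe = addObject(schema)
JSON.stringify(safe) // no longer throws
```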
* fix(llmobs): handleFailure in tagToolDefinitions for malformed input
Add an `else` branch that calls `#handleFailure` for non-array /
empty input. Matches the validation pattern used by other tagger
methods. Flagged on review.
* test(llmobs): cover tagToolDefinitions happy path and malformed input
Adds a `tagToolDefinitions` describe block in the tagger spec mirroring
the `tagMetrics` style: one happy-path test asserting the tag is set,
and one negative test asserting the malformed-input branch (non-array,
empty array, undefined) routes through `#handleFailure`.
* test(llmobs): cover tool_definitions sanitization in span processor
* test(llmobs): clarify intent of nested cycle in tool_definitions test
* test(llmobs): simplify tool_definitions test to forwarding only
#8252) `gh api graphql -f variables="$variables"` forwards the value as a string instead of parsing it as JSON; GitHub returns the misleading `Variable $input ... was provided invalid value`. Latest failure: run 25309510001 on PR #8246. The same pattern lives in `update-3rdparty-licenses`; that job's push step rarely fires so nobody noticed. Drive-by fix: `update-3rdparty-licenses` also had `jq -c` instead of `jq -nc`, silently producing empty output on a runner with no stdin.
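An illustrative repro of both pitfalls (the query and variable names are hypothetical, not the repo's actual workflow):

```shell
# jq -n uses null input; a bare `jq -c 'expr'` would instead wait on stdin
# and silently produce empty output on a runner with no stdin.
variables=$(jq -nc --arg title "Hello" '{input: {title: $title}}')

# Broken: -f/--raw-field forwards $variables as one literal string, so the
# server never sees parsed JSON variables and rejects $input as invalid.
# gh api graphql -f query="$query" -f variables="$variables"

# Working shape: pass each GraphQL variable as its own typed -F field.
# gh api graphql -f query="$query" -F title="Hello"

echo "$variables"
```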
…8251) A single registry load captures only the Node-conditional `versions:` branch active under the script's Node (e.g. amqplib's `MIN_VERSION`), so CI on Node 24 and a contributor on Node 18 produce different `minimum_tracer_supported` values. Load the registry at each `engines.node` bound and union the captured ranges.
…ebpack in the npm_and_yarn group across 1 directory (#8267) Bumps the npm_and_yarn group with 1 update in the /integration-tests/webpack directory: [axios](https://github.com/axios/axios). Updates `axios` from 1.15.0 to 1.15.2 - [Release notes](https://github.com/axios/axios/releases) - [Changelog](https://github.com/axios/axios/blob/v1.x/CHANGELOG.md) - [Commits](axios/axios@v1.15.0...v1.15.2) --- updated-dependencies: - dependency-name: axios dependency-version: 1.15.2 dependency-type: direct:production dependency-group: npm_and_yarn ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
OpenAI streams `function_call.arguments` and `tool_calls[].function.arguments` accumulated across SSE chunks; Anthropic does the same with `content_block_delta` `partial_json` deltas and the non-streaming `tool_use` block. A malformed accumulation would otherwise throw straight into the diagnostic-channel chunk subscriber. Both plugins now route parses through a shared `safeJsonParse(value, fallback)` helper in `llmobs/util.js`.
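A minimal sketch of what such a helper looks like (the real `safeJsonParse` in `llmobs/util.js` may differ in detail):

```javascript
// Sketch of a safeJsonParse(value, fallback) helper as described above.
function safeJsonParse (value, fallback) {
  if (typeof value !== 'string') return fallback
  try {
    return JSON.parse(value)
  } catch {
    // malformed streamed accumulation: return the fallback instead of
    // throwing inside the diagnostic-channel subscriber
    return fallback
  }
}

safeJsonParse('{"location": "Paris"}', {}) // parsed object
safeJsonParse('{"location": "Par', {})    // truncated accumulation -> {}
```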
…ns (#8230)

LLMObs runs once per LLM call, so any per-payload work is on a hot path. Three wins:

1. `llmobs/util.js#encodeUnicode` walked every char of every string and built the result via `result += ...` per char. The function is the per-string replacer the writer hands to `JSON.stringify` for force-escaping non-ASCII to `\uXXXX`. LLM I/O is overwhelmingly ASCII, so add a fast path: scan once with `charCodeAt`, return the input unchanged if every byte is <= 127, otherwise fall into the existing slow path starting from the first byte that needed escaping. Bench (1s/30 samples on each input):
   - short ascii (13 chars): OLD 16 Mops/s, NEW 75 Mops/s (+4.6x)
   - prompt ascii (130 chars): OLD 2.7 Mops/s, NEW 13 Mops/s (+4.7x)
   - mixed-script (40 chars): OLD 1.3 Mops/s, NEW 6 Mops/s (+4.7x)
   - long ascii (5KB): OLD 44 Kops/s, NEW 103 Kops/s (+2.4x)
   - long mixed (5KB): OLD 26 Kops/s, NEW 67 Kops/s (+2.6x)

   The `\\u` -> `\u` post-`stringify` `replaceAll` is unchanged; that's the contract with the writer.

2. `llmobs/plugins/openai/index.js` had two `Object.entries(parameters).reduce(...)` filter-into-fresh-object patterns -- one for chat metadata, one for responses-API `inputMetadata`. Each call allocates a `[k, v]` tuple for every parameter plus a closure. Replace with `for (const key of Object.keys(parameters))` + indexed read; same shape, no tuples, no closure.

3. `llmobs/plugins/genai/util.js` walked `parts` two or three times in `formatContentObject` (filter functionCall + filter functionResponse) and `formatNonStreamingCandidate` (filter functionCall + find executableCode + find codeExecutionResult). Collapse each set into one `for (const part of parts)` walk that short-circuits with `??=` so neither bucket is allocated unless a matching part is actually present, and so the priority order (functionCall > executableCode > codeExecutionResult) stays identical to the chained `filter`/`find` version.
The `Buffer.byteLength(JSON.stringify(event))` triple-stringify in `writers/spans.js` and the recursive `#addObject` clone in `span_processor.js` are deferred: both are real wins but require contract changes (cache the serialised form on the buffer entry, or "capture metadata by reference at finish time"). Out of scope for a clean perf-only commit; flagged for a separate PR.
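The ASCII fast path in item 1 can be sketched as follows (assumed shape, not the literal dd-trace-js implementation):

```javascript
// Sketch of the described encodeUnicode fast path.
function encodeUnicode (str) {
  let i = 0
  // Fast path: one charCodeAt scan; pure-ASCII input returns unchanged.
  for (; i < str.length; i++) {
    if (str.charCodeAt(i) > 127) break
  }
  if (i === str.length) return str
  // Slow path resumes from the first char that needed escaping.
  let result = str.slice(0, i)
  for (; i < str.length; i++) {
    const code = str.charCodeAt(i)
    result += code > 127
      ? '\\u' + code.toString(16).padStart(4, '0')
      : str[i]
  }
  return result
}
```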
The OTel-bridge `Span` class drifted from the JS SDK in three places that showed up as observable differences for OTel consumers. This commit fixes all three behind a single set of helpers, with focused unit coverage.

1. Mutations after `end()` were not consistently a no-op. `setAttribute`/`addEvent`/`addLink`/`recordException`/`setStatus` could still poke the underlying DD span after `end()` returned, contradicting the OTel spec, which requires a finished span to be immutable. The writable-span gate (`_duration === undefined`) is now centralised in `span-helpers.js` so every helper short-circuits at the same point.

2. `setStatus` did not implement the OTel precedence rules. The spec says `OK` is final, `ERROR > UNSET`, and a later `UNSET` must never overwrite. The bridge tracked a boolean `_hasStatus`, so a second `OK` (after `ERROR`) silently kept the error and a second `ERROR` could not refresh the message. Replaced with a `#statusCode` private field and an `applyOtelStatus(currentCode, status) -> newCode` helper that encodes the precedence table once.

3. `addEvent`'s overloads were not normalised. `addEvent(name, hrTime)` was treated as attributes, so the timestamp ended up tagged on the event instead of as its time. `addOtelEvent` now detects `TimeInput` shapes and routes them to the right slot.

The helpers also gained `addOtelLinks` so the bridge does not loop in the class layer, and `setOtelAttributes` no longer mutates its `attributes` input as a drive-by while the gate was being added.

Test-only: new `packages/dd-trace/test/opentelemetry/span-helpers.spec.js` covers each helper in isolation (gate, precedence, normalisation, error tagging, `normalizeLinkContext` shape coverage). Existing `span.spec.js` gained the precedence describe block, post-end no-op coverage across all mutation methods, and the `addEvent` time-input regression test.
No production behaviour change for code paths that were already spec- compliant; the gate and precedence fixes are observable only on paths that misused the bridge.
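The status-precedence rules described above (OK is final, ERROR beats UNSET, UNSET never overwrites) reduce to a small pure helper. This is a sketch of the idea, not the bridge's actual code; the numeric values mirror `@opentelemetry/api`'s `SpanStatusCode`:

```javascript
// SpanStatusCode values as in @opentelemetry/api: 0=UNSET, 1=OK, 2=ERROR.
const UNSET = 0
const OK = 1
const ERROR = 2

function applyOtelStatus (currentCode, status) {
  if (currentCode === OK) return OK            // OK is final
  if (status.code === UNSET) return currentCode // a later UNSET never overwrites
  return status.code // OK or ERROR; a repeated ERROR may refresh the message
}
```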
…lse (#8219)

* fix(otel): honor DD_TRACE_OTEL_ENABLED=false and OTEL_SDK_DISABLED=false

  `otel-sdk-trace.js` and the `otel_enabled` telemetry tag in `opentracing/span.js` checked these env vars for raw truthiness, which made the string `'false'` itself truthy and mishandled the OTel spec's positive opt-in form (`OTEL_SDK_DISABLED=false`). Replace the truthiness check with explicit precedence: the Datadog opt-out (`DD_TRACE_OTEL_ENABLED=false`) wins over every OTel signal, then the OTel opt-out (`OTEL_SDK_DISABLED=true`) wins over the opt-in side, then either explicit opt-in enables it; otherwise stay disabled by default.

  Fixes: #4873

* refactor(span): read config off the parent tracer

  DatadogSpan was reading `DD_TRACE_OTEL_ENABLED`, the experimental flags, and `DD_TRACE_SPAN_LEAK_DEBUG` from `getValueFromEnvSources` at module load. Anything that updated those values through the Config singleton (remote config, programmatic options) never reached the span gates because the gates had captured the env-time value. Read them off `tracer._config` instead — the opentracing tracer already holds the singleton.

  Three coordinated pieces:

  1. Convert `_parentTracer` to a `#parentTracer` private field. There are no external readers; `tracer()` is the public accessor. While in the file, fix three sites that were reading from a non-existent `this._tracer`.

  2. Drop the comment block above the span-leak gate. It justified the `tracer?._config?.X` nullish chain that this refactor replaces.

  3. The four spec files constructing `new DatadogSpan(...)` with `tracer = {}` (or `null`, or `{}` inline) now pass `tracer = { _config: getConfig() }`. The two `tracePropagationBehaviorExtract` overrides spread the real config so the constructor's other reads see real defaults.

Drive-by fix:

* Drop a long-dead commented-out `registry.register` line in spanleak.js, and the unused `operationName` argument span.js was passing to `addSpan(span)`.
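The precedence chain described in the first commit can be sketched as a pure function over the two env vars (a simplified illustration; the real gate lives in `otel-sdk-trace.js`):

```javascript
// Sketch of the explicit precedence described above; env reads simplified
// to string comparison for illustration.
function isOtelEnabled (env) {
  const dd = env.DD_TRACE_OTEL_ENABLED
  const sdkDisabled = env.OTEL_SDK_DISABLED
  if (dd === 'false') return false         // Datadog opt-out wins over every OTel signal
  if (dd === 'true') return true           // Datadog explicit opt-in
  if (sdkDisabled === 'true') return false // OTel opt-out beats the opt-in side
  if (sdkDisabled === 'false') return true // OTel positive opt-in form
  return false                             // disabled by default
}
```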
…8234)

* perf(propagation): tighten baggage and tag inject paths

  W3C baggage extract and `_dd.p.*` tag inject both run on every traced HTTP request. Several sub-allocations are dropped:

  - `text_map.js` swaps three single-char-class `replaceAll(/[\xNN]/g, ...)` regex literals for `replaceAll('=', '~')` / `replaceAll('~', '=')`, which skip the regex match path.
  - `_injectTags` and `_injectTraceparent` walk `trace.tags` via `Object.keys(...)` instead of banned `for-in`; the trace-tags object's prototype chain isn't ours to trust, and `for-in` enumerates inherited keys.
  - `_injectBaggageItems` swaps `Object.entries` for `Object.keys` + indexed read, dropping the per-baggage-item `[k, v]` tuple.
  - `_extractBaggageItems` caches the `baggageTagKeys` `Set` on the propagator (rebuilt only when the config array reference changes, e.g. remote-config rotation) and gates `decodeURIComponent` behind `value.includes('%')` — a microbenchmark pins the gated path at 13.4x faster than running `decodeURIComponent` on plain ASCII baggage and only 2% slower than the raw call on percent-encoded values.
  - `tracestate.js#forVendor` reuses the `state.toString()` value computed one line above instead of recomputing it.

  The original draft also rewrote `tracestate#fromString` to drop an `Array#unshift` quadratic; #8256 landed a linear parser first that supersedes those hunks, so they're dropped.

* perf(dsm): trim per-checkpoint and per-message allocations

  DSM observes every Kafka, SQS, SNS, Kinesis, Pub/Sub, and AMQP message when enabled, so the per-checkpoint hot path compounds. Several allocations are removed without changing the wire format:

  1. `getSizeOrZero` stopped allocating a fresh Buffer copy of every string just to read its UTF-8 byte length. `Buffer.byteLength` returns the same value with no allocation. `getHeadersSize`'s `Object.entries(...).reduce(...)` becomes a `for (const key of Object.keys(headers))` loop, dropping the per-header `[k, v]` tuple and the reducer closure.

  2. `pathway.js#shaHash` extracted the first 8 bytes of SHA-256 by round-tripping through a 64-char hex string + a 16-char slice + a hex-decoded Buffer. `digest().subarray(0, 8)` produces the same bytes directly. `computeHash` now also caches `hashableEdgeTags.join('')` and `propagationHashBigInt.toString(16)` once per call (each was computed twice), gates the `manual_checkpoint:true` filter on `includes(...)` so the common path skips the alloc, and reuses a module-scope 20-byte scratch buffer to assemble `encodePathwayContext` with a single `Buffer.from(subarray)` copy-out instead of seven nested allocs.

  3. `setCheckpoint` precomputes `PATHWAY_HEADER_BYTES` from the static header overhead instead of allocating a temp object, encoding it, and JSON-stringifying just to read its length. It now reads the direction from `edgeTags[0]` directly: every in-tree caller places it there, the `DataStreamsCheckpointer` shape is updated to match, and the test fixture pinning that arg order is updated in the same commit.

Drive-by fix:

* `recordCheckpoint` reuses the `BigInt` already computed by the `StatsPoint` returned from `forCheckpoint(...)` instead of running `readBigUInt64LE` a second time. `setCheckpoint` returns `undefined` (rather than `null`) on the disabled fast path so the function shape matches the rest of the file.
* `processor.js` drops the `DsmPathwayCodec` import that the inlined byte-count made unreachable; `pathway.js` exports `CONTEXT_PROPAGATION_KEY_BASE64` so the constant calculation is anchored to the actual header key.
* `encoding.js` adds an `encodeVarintInto(target, offset, value)` helper so the pathway encoder can write directly into the scratch buffer instead of allocating a per-varint `Uint8Array` and copying.
process_tags was incorrectly nested inside debugger.snapshot and serialized as an object. Per the RFC and other tracer implementations (Java, Python, .NET), it belongs at the root of the intake payload as a comma-separated string.
…an (#8030)

OTel consumers that retrieve a span via `trace.getActiveSpan()` and call mutators on it expect those writes to land on whatever span is currently active. When the active span was created by the Datadog tracer (i.e. all DD-native instrumentations), the bridge previously returned a no-op span proxy, so `getActiveSpan().setAttribute(...)` quietly went nowhere. The pattern is documented in the OTel API and is used heavily by user code that mixes manual OTel calls with auto-instrumentation, so this gap was a frequent source of "my attribute didn't show up" reports.

Cache an `ActiveSpanProxy` per active context so `getActiveSpan()` hands back the same proxy on every call, and forward all OTel mutators through the same `span-helpers.js` surface that the bridge `Span` class uses. Both bridge classes now extend a small `BridgeSpanBase` that owns the shared method bodies (`setAttribute`, `setAttributes`, `addEvent`, `addLink`, `addLinks`, `recordException`, `setStatus`, `isRecording`), so the proxy and the OTel-created span cannot drift on the spec going forward.

`updateName` is intentionally split between the two bridges:
- For OTel-created spans, `updateName` writes the DD operation name. The span exists because OTel asked for it, so the OTel name *is* the operation name.
- For the proxy wrapping a DD-native span, `updateName` writes `resource.name` instead. Touching the operation name on a span that an integration created would scramble metric aggregation in the backend (operation names drive the metric grouping); `resource.name` is the display field and is the correct target for "rename what I'm looking at" semantics.

The two paths share `span-helpers.js`'s `setOtelOperationName` and `setOtelResource` so the gate logic still lives in one place.
Test coverage: the new proxy describe block in `context_manager.spec.js` exercises the proxy lifecycle (caching across calls, no-op once the underlying span has finished, the resource-name path, attribute/event/link forwarding). `span-helpers.spec.js` gains coverage for the two new name helpers. `span_context.js` records the active OTel span on the DD span context (`_otelActiveSpan`) so the proxy cache survives context propagation.
* update all mcr references to use our mirror instead
* add validation to avoid regression
* fix dockerfiles not included in validation
* fix: add code owner for check-no-mcr-images script

  Assign ownership of the MCR image validation script to the lang-platform team, since it's related to the mirror-image workflow they manage.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
#8232)

* perf(plugins): trim per-message allocations in bullmq and dogstatsd

  Two messaging / metrics plugins still ran a JSON or string allocation per message that the surrounding code didn't need.

  `bullmq/producer.js` parsed and stringified `opts.telemetry.metadata` twice per job in the publish hot path: once in `_injectIntoOpts` to add the trace carrier, again in `setProducerCheckpoint` to add the DSM context. For `Queue.addBulk(N)` that is 4N JSON ops on the same string. `_injectIntoOpts` now returns the parsed object; `bindStart` stashes it on `ctx._ddMetadata` so `setProducerCheckpoint` (base + bulk override) reuses it instead of re-parsing the string we just wrote. The `QueueAddBulkPlugin#getDsmData` `reduce` that walked every job's `data` via `getMessageSize` was dead — the bulk override of `setProducerCheckpoint` never read its result — and is removed with the override.

  `bullmq/consumer.js` swaps `delete metadata._datadog` for `metadata._datadog = undefined` since `JSON.stringify` already omits undefined values and the assignment avoids the V8 hidden-class transition that `delete` triggers.

  `dogstatsd.js#DogStatsDClient#_add` runs on every metric emission and rebuilt the full tag list each time — `[...this._tags, ...callTags]` followed by `tags.join(',')` — even though the client-level tags never change between emits. The `|#<static>` prefix is now rendered once in the constructor; per-call tags either append `,<call>` to the prefix or fall back to `|#<call>` when no static tags exist. Wire output is byte-for-byte identical.

* perf(appsec, plugins): add isEmpty helper, drop kafka commit chain allocations

  Two scoped wins for the appsec/plugins surface, both avoiding per-call allocations:

  1. New `isEmpty` helpers in `dd-trace/src/util.js`. AppSec's reporter uses `isEmpty(storedResponseHeaders)` instead of `Object.keys(storedResponseHeaders).length === 0` on the getCollectedHeaders hot path -- a 1.3-1.4x speedup on the empty-side common case during early request lifecycle.

  2. Kafka producer + consumer `commit()`. Both walked `commitList.map(transform)` to materialise a fully transformed Array, then ran `keys.some(key => !commit.hasOwnProperty(key))` per item -- one closure allocated per commit. Bench (100 commits per run, monomorphic shape):
     - current map+some: 258 Kops/s
     - counted loop with Object.hasOwn: 311 Kops/s (+21%)
     - counted loop with `in`: 328 Kops/s (+27%)
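Both wins can be sketched as follows. The helper bodies and the Kafka key set are illustrative assumptions, not the exact dd-trace-js shapes:

```javascript
// Sketch: early-exit emptiness check. Unlike Object.keys(obj).length === 0,
// no key array is allocated; the loop body never runs for an empty object.
// The Object.hasOwn guard skips inherited keys that for-in would enumerate.
function isEmpty (obj) {
  for (const key in obj) {
    if (Object.hasOwn(obj, key)) return false
  }
  return true
}

// Sketch: counted-loop commit validation. The hypothetical REQUIRED_KEYS
// set stands in for the real Kafka commit fields.
const REQUIRED_KEYS = ['topic', 'partition', 'offset']

function validateCommits (commitList) {
  // no transformed intermediate array, no per-commit closure
  for (let i = 0; i < commitList.length; i++) {
    for (let j = 0; j < REQUIRED_KEYS.length; j++) {
      if (!Object.hasOwn(commitList[i], REQUIRED_KEYS[j])) return false
    }
  }
  return true
}
```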
* fix(exporter): retry through agent startup

  This fixes the agent-startup race that drops payloads when a pod boots before the Datadog agent is listening. The shared HTTP request used by every exporter retried exactly once on the next tick, so a ~2-4 s window of `ECONNREFUSED` / `ENOENT` / `socket hang up` against the agent was enough to drop the first payloads -- the symptom users hit on k8s rolling deploys and sidecar startup.

  For the first 30 s of the process (or until the first response arrives), retriable transport errors back off through 1 s, 2 s, 4 s, 8 s with small jitter, up to five attempts. Beyond the window the policy collapses to a single 5-7.5 s jittered retry. UDS `ENOENT` and transient `EAI_AGAIN` join the retriable set; uncoded errors no longer retry.

  Drive-by fix: CI-Visibility's request now consumes `isRetriableNetworkError` and the jitter constants from the shared retry helper instead of keeping a duplicate copy.

  Fixes: #5669

* fix(exporter): unref retry timers so the host process can exit promptly

  The new agent-startup retry policy spaces attempts 1-8 s apart inside the grace window and 5-7.5 s afterwards, so a still-pending retry kept the event loop alive long enough that a host process running a brief script never reached `beforeExit`. That is what made the Platform integration job lose `library_entrypoint.abort.integration` telemetry and the matching `Found incompatible integration version` log line: the test harness sends `SIGTERM` after one second on those matrix entries, killing the child before its ref'd retry timer fired and before any beforeExit handler ran.

  `.unref()`'ing the retry timer keeps the policy intact for long-running apps -- their own work holds the loop open, so retries fire as designed -- while letting short-lived scripts exit normally instead of waiting on a retry that nobody is listening for. CI-Visibility request retries stay ref'd: test-runner plugins (mocha, jest, cucumber, vitest, playwright, cypress) block the suite via `delay: true` channels until the bootstrap fetch (settings, known tests, test management, skippable suites) calls back, and unref'ing the retry there would let the child exit before mocha actually starts, so no test events would be emitted at all.

  Refs: #8223

* test(ci-visibility): drive git_metadata retry test through a coded error

  The shared retry helper now ignores network errors that arrive without a transient `code` (`ECONNRESET`, `ECONNREFUSED`, `ETIMEDOUT`, ...) so that uncoded misconfigurations propagate immediately. The pre-existing `replyWithError('Server unavailable')` produced an uncoded error, which the new policy correctly does not retry, and the spec's "second attempt should succeed" assertion turned into a regression report. Switching to `ECONNRESET` keeps the test exercising a real transient-failure path under the tighter policy.

  The beforeEach also gains a proxyquire stub that collapses the retry delay to 0 ms. Without it the spec depends on the order in which earlier tests run: as soon as one of them marks the endpoint as reached, the post-startup 5-7.5 s delay applies and the retry test blows mocha's default timeout.
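The backoff-plus-unref shape described above can be sketched in a few lines (delays, jitter bound, and helper names are illustrative, not the exporter's actual constants):

```javascript
// Sketch of a 1s/2s/4s/8s capped exponential backoff with small jitter.
function retryDelay (attempt) {
  const base = Math.min(1000 * 2 ** attempt, 8000)
  return base + Math.random() * 250 // illustrative jitter bound
}

function scheduleRetry (attempt, fn) {
  const timer = setTimeout(fn, retryDelay(attempt))
  // Long-running apps hold the loop open with their own work, so retries
  // still fire; short-lived scripts can exit instead of waiting on this.
  timer.unref()
  return timer
}
```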
…Anthropic (#8146)

* add support for thinking blocks
* add tests
* address a couple of review comments
* make branch less restrictive
* really fix merge conflict please
* add duration and shorten time output for all green
* change format
* order by workflow name
…erver in afterEach (#8192)
* fix(propagation): guard link reset on null context in extract
With `tracePropagationBehaviorExtract='ignore'` or `'restart'` and zero
matching extractors, `_extractSpanContext` wrote `context._links = []`
on a `null` context and threw on every extract. Guard the assignment
so the function falls through to the SQSD fallback or returns `null`
without taking the request down.
* fix(http-client): stop mutating the caller's URL/options in combineOptions
`Object.assign(inputURL || {}, inputOptions)` mutated the caller's
`URL` / options object on every outbound request; `normalizeHeaders`
then planted `headers = {}` on top. Switch to a fresh shell via spread
so the merged options are owned by the tracer.
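The mutation and its fix are easy to show with plain option objects (a simplified repro; `combineOptions`' real signature in the http client may differ):

```javascript
// Before: Object.assign writes into the caller's object on every request.
function combineOptionsBroken (inputURL, inputOptions) {
  return Object.assign(inputURL || {}, inputOptions)
}

// After: fresh shell via spread -- the merged options are owned by the tracer.
function combineOptionsFixed (inputURL, inputOptions) {
  return { ...inputURL, ...inputOptions }
}

const callerOpts = { hostname: 'example.com' }
combineOptionsBroken(callerOpts, { headers: {} })
// callerOpts.headers is now {} -- the caller's object was mutated

const cleanOpts = { hostname: 'example.com' }
combineOptionsFixed(cleanOpts, { headers: {} })
// cleanOpts is untouched
```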
* fix(dogstatsd): include the transport tag in the flush log and skip empty flushes
`flush()` formatted `'Flushing %s metrics via'` with two trailing args,
silently dropping the `'HTTP'`/`'UDP'` tag. Fix the format string and
move the empty-queue early return above the log call so empty flushes
skip the format work entirely.
…ueue (#8277) Add paths-ignore to the push and pull_request triggers mirroring the exclusions already defined in codeql_config.yml, so the workflow does not start (and avoids GitHub API calls) when only tests, docs, scripts, or benchmark files change. Also remove mq-working-branch-master-* from the push trigger: PRs are already scanned before entering the merge queue, so rescanning the merge-queue batch is redundant. Co-authored-by: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
…8235)

* perf(core): trim per-span allocations across id, span, and sampler

  The tracer's per-span hot path runs inside customer code, so allocations in `Span#finish`, `Identifier#toString`, span-link sanitation, and the priority sampler compound under load. Several co-located cleanups land together.

  `id.js#toString` for radix 16 ran `Array.prototype.map.call(buffer, pad).join('')`, building one short string per byte plus a final `join`. `Buffer.from(this.#buffer).toString('hex')` is a single Buffer alloc and a single hex pass; the dead `map` and `pad` constants and the `toHexString` helper are removed. `equals` starts its byte loop at `length - 1` instead of `length`, which previously compared `undefined === undefined` for one wasted iteration. As a drive-by fix the file's `_buffer` field becomes `#private` (no external reader).

  `opentracing/span.js#_sanitizeAttributes` and `_sanitizeEventAttributes` recursed into arrays via banned `for-in` and used `Object.values(value)` on values already known to be arrays, allocating a parallel array per attribute. Counted `for` loops and `for-of value` keep the same tag keys with neither allocation. The same methods swap `Object.entries(attributes)` for `Object.keys(attributes)` + indexed read; a microbenchmark on small (2-key), medium (10-key), and large (50-key) objects pins `Object.keys` + indexed read at 3.9-5.5x faster than `Object.entries`.

  `Span#finish` skips `Number.parseFloat` on the dominant `span.finish()` no-arg call. `span_processor.js#_erase` no longer reassigns `span.context()._tags = {}` per finished span; the trace reference is dropped one line later anyway, and the per-span `{}` allocation paid off only for user code that long-retains a finished span.

  `priority_sampler.js#update` swaps a banned `for-in` over `rates` for `Object.keys(rates)`, and `addDecisionMaker` guards `delete trace.tags[DECISION_MAKER_KEY]` behind an `in` check so the write-barrier only fires when a prior keep decision actually set the tag.

* perf(core): trim per-span allocations in span, tracestate, and format

  Four wins across the per-span hot path. Each is benched with isolated microbenchmarks at the call site (numbers below from a Node 24 / M-series laptop; absolute speedups are stable, ratios are the relevant signal):

  1. `span.js#_sanitizeAttributes`. The recursive helper was allocated as a fresh closure on every `addLink` / `addEvent` (and on every constructor with `fields.links`). Hoist `addArrayOrScalarAttribute` to module scope and pass the output map in; the closure allocation per call is gone, the helper sees a stable hidden class, and the inline `String(value)` cast is unchanged.

  2. `span.js` constructor links. `fields.links?.map(link => ({ ... }))` built a closure capturing `this` plus an iterator allocation per constructor. A counted `for` loop into a pre-allocated array drops both; the link contract is unchanged.

  3. `tracestate.js`. `class TraceStateData extends Map` / `class TraceState extends Map` walked V8's slow Map-subclass constructor reflection path:
     - construct + iterate (4 entries): subclass 14.7 Mops/s vs composed 15.1 Mops/s (+1.03x)
     - construct from array of pairs: subclass 8.3 Mops/s vs composed 15.2 Mops/s (+1.83x)

     The "from array of pairs" case is the `fromString` hot path hit on every `traceparent`/`tracestate` extract. Compose a private `#map` and forward only what callers use. The instance-check in the spec was leaking a Map-internal detail; switch the assertion to `state.size` / `state.get('s') === '2'` instead. A small reproduction is being prepared for an upstream V8 report; reference pending.

  4. `span_format.js#extractSpanLinks` / `extractSpanEvents`. Both built closures over `Array#map` per-flush. Counted `for` loops into a pre-sized result array drop the closure and the iterator allocations.

  Plus a meta/metric truncation fold that's larger but closely related:

  5. `tags-processors.js#truncateSpan` walked `Object.keys(span.meta)` and `Object.keys(span.metrics)` again after `extractTags` had already written them. Fold the agent-side key/value length guards into `addTag` (so the limits run as part of the only walk that already visits each tag) and into the one site that bypasses `addTag` (`extractSpanLinks`). The post-walk in `truncateSpan` becomes redundant for format-produced spans; drop it. `truncateSpan` keeps the resource-name guard, which still has no upstream substitute. Bench (10 meta + 5 metric tags, the typical formatted-span shape):
     - format + truncateAfter (current): 3.7 Mops/s
     - format with inline truncate (folded): 6.2 Mops/s (+1.68x)

  The agentless-CI test that asserted "encoder truncates raw spans" was testing an artificial path -- `format()` always runs before the encoder in the live pipeline -- so the equivalent assertion moves into `span_format.spec.js` where the inline truncation lives.

Drive-by fix:

* Replace the misleading "shorter is treated as zero-padded on the left" comment in `id.js#equals` with one that matches what the loop actually does (big-endian suffix compare bounded by the shorter buffer).

* chore(encode): drop dead truncateSpan(span, false) paths

  The `truncateSpan(span, false)` calls in `encode/0.4`, `encode/0.5`, and `encode/agentless-json` were no-ops: `false` skipped the remaining resource cap, and the meta/metric walk now lives in `addTag`. Drop the calls, the `shouldTruncateResourceName` parameter, and the dedicated false-path test. The agentless-CI encoder keeps `truncateSpan(span)` for the 5000-char resource cap.
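The radix-16 rewrite from the first perf commit above, isolated as a before/after (field access simplified to a local buffer):

```javascript
const bytes = Buffer.from([0x12, 0x34, 0xab, 0xcd])

// Before: one short string per byte plus a final join.
function toHexOld (buffer) {
  return Array.prototype.map
    .call(buffer, b => b.toString(16).padStart(2, '0'))
    .join('')
}

// After: a single Buffer copy and a single hex pass.
function toHexNew (buffer) {
  return Buffer.from(buffer).toString('hex')
}
```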
* add cost_tags to annotate and annotationContext
* test: expect cost_tags metadata in TS sandbox
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix redundant copy in #getDdMetadata, and add a test
* add a comment explaining that tags need to be placed before costTags
* add JSDoc for #getDdMetadata
* apply Sam's suggestion
* move tag lookup into tagCostTags, and add a comment to consider using a Set to represent costTags
* Revert behavior to add empty _dd:{} on metadata
* fix(llmobs): repair merge conflict in span_processor.spec.js
The master merge mangled the cost_tags and tool_definitions tests:
the cost_tags `it()` was missing its closing brace, the tool_definitions
test inherited the cost_tags assertion, and the preserves-_dd test
inherited the tool_definitions assertion. Split into three clean tests
with their own setup/process/assertion blocks.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(llmobs): restore generateTraceId stub in tagger.spec.js
Accidentally removed when stripping the validateCostTags stub. The
generateTraceId stub is unused but exists on master; restoring keeps
the diff focused on cost_tags additions.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closed
🎉 All green! ❄️ No new flaky tests detected 🎯 Code Coverage 🔗 Commit SHA: e3bda56
Benchmarks
Benchmark execution time: 2026-05-06 14:59:57. Comparing candidate commit e3bda56 in PR branch. Found 167 performance improvements and 1 performance regression! Performance is the same for 1577 metrics, 99 unstable metrics.
scenario:datastreams-consume-18
scenario:datastreams-consume-20
scenario:datastreams-consume-22
scenario:datastreams-consume-24
scenario:datastreams-produce-18
scenario:datastreams-produce-20
scenario:datastreams-produce-22
scenario:datastreams-produce-24
scenario:datastreams-produce-high-cardinality-18
scenario:datastreams-produce-high-cardinality-20
scenario:datastreams-produce-high-cardinality-22
scenario:datastreams-produce-high-cardinality-24
scenario:datastreams-produce-manual-checkpoint-18
scenario:datastreams-produce-manual-checkpoint-20
scenario:datastreams-produce-manual-checkpoint-22
scenario:datastreams-produce-manual-checkpoint-24
scenario:datastreams-produce-with-message-size-18
scenario:datastreams-produce-with-message-size-20
scenario:datastreams-produce-with-message-size-22
scenario:datastreams-produce-with-message-size-24
scenario:encoders-0.5-18
scenario:encoders-0.5-20
scenario:encoders-0.5-22
scenario:encoders-0.5-24
scenario:encoders-0.5-events-legacy-22
scenario:llmobs-encode-unicode-ascii-18
scenario:llmobs-encode-unicode-ascii-20
scenario:llmobs-encode-unicode-ascii-22
scenario:llmobs-encode-unicode-ascii-24
scenario:llmobs-encode-unicode-mixed-22
scenario:propagation-extract-18
scenario:propagation-extract-20
scenario:propagation-extract-22
scenario:propagation-extract-24
scenario:propagation-extract-baggage-ascii-18
scenario:propagation-extract-baggage-ascii-20
scenario:propagation-extract-baggage-ascii-22
scenario:propagation-extract-baggage-ascii-24
scenario:propagation-extract-baggage-percent-18
scenario:propagation-extract-baggage-percent-20
scenario:propagation-extract-baggage-percent-22
scenario:propagation-extract-baggage-percent-24
scenario:propagation-extract-inject-18
scenario:propagation-extract-inject-20
scenario:propagation-extract-inject-22
scenario:propagation-extract-inject-24
scenario:propagation-inject-18
scenario:propagation-inject-20
scenario:propagation-inject-22
scenario:propagation-inject-24
scenario:runtime-metrics-with-runtime-metrics-18
scenario:runtime-metrics-with-runtime-metrics-20
scenario:runtime-metrics-with-runtime-metrics-22
scenario:runtime-metrics-with-runtime-metrics-24
scenario:spans-finish-immediately-with-tags-and-otel-18
scenario:spans-finish-immediately-with-tags-and-otel-20
scenario:spans-finish-immediately-with-tags-and-otel-22
scenario:spans-finish-immediately-with-tags-and-otel-24
#8269) * fix(propagation): tighten W3C trace-context inject and extract correctness
Five corner cases in the W3C trace-context propagator silently dropped or corrupted data on inject and extract.
1. `_injectTraceparent` destructured `_trace: { origin, tags }`, shadowing the module-level `tags` constant; `tags.DD_PARENT_ID` then resolved against the trace-tags object (`undefined`) instead of `_dd.parent_id`, so the remote-span `state.set('p', ...)` branch never fired. The bug was masked by the tracestate round-trip and surfaced when `_resolveTraceContextConflicts` set `_dd.parent_id` from Datadog headers without a corresponding `p:` entry in tracestate. Rename the destructured binding to `traceTags`.
2. W3C §3.2.2.5: `trace-flags` is hex. `Number.parseInt(flags, 10) & 1` miscomputes the sampled bit whenever the lower nibble is `b`/`d`/`f` (parseInt returns 0) or whenever any nibble is in `a..f` (parseInt returns `NaN`). Concrete misreads include `0b`/`0d`/`0f` (sampled=1, returned 0), `1a`/`1c`/`1e` (sampled=0, returned 1), and `ab`/`ff`. Parse with radix 16.
3. Inject encodes the origin's `=` as `~` because W3C §3.3.1.3.2 excludes `=` from the value grammar; the `t.<tag>` extract path reverses this, but the `case 'o':` arm did not, so `=` characters in custom origins were silently lost across hops. Mirror the tag-value decode.
4. W3C §3.3.1.1 / RFC 7230 §3.2.2: receivers MUST combine multiple `tracestate` header fields. Array-preserving carriers can surface `carrier.tracestate` as a string array; `TraceState.fromString` rejected non-strings and returned an empty state, dropping every vendor entry past the first header. Join the array on `,`.
5. `_extractTraceparentContext` did `if (!headerValue) return null` and then `headerValue.trim().match(...)`, throwing a `TypeError` whenever the carrier surfaced `traceparent` as a non-string. Switch the guard to `typeof !== 'string'`.
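The radix fix in item 2 can be shown in isolation; the function name here is illustrative, not the propagator's actual helper:

```javascript
'use strict'

// trace-flags is two lowercase hex digits (W3C trace context §3.2.2.5), so
// the sampled flag (bit 0) must be read with radix 16, not radix 10.
function sampledBit (flags) {
  return Number.parseInt(flags, 16) & 1
}

sampledBit('01') // 1
sampledBit('0b') // 1 -- radix 10 would return 0, since parseInt('0b', 10) === 0
sampledBit('1a') // 0 -- radix 10 would return 1, since parseInt('1a', 10) === 1
sampledBit('ff') // 1 -- radix 10 would return 0, since parseInt('ff', 10) is NaN
```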
Drive-by fix:
* `String#match` returns `null` or a non-empty array, so `matches?.length` is a no-op guard and `matches.slice(1)` allocates a 5-element array purely to skip the full-match capture. Destructure with a leading hole and check `matches !== null`.
* fix(propagation): ignore non-string tracestate array members on extract
Custom carriers can surface non-string `tracestate` members (`Symbol`, objects with a throwing `toString`). `TraceState.fromString` guards against those, but the array-join step runs first and crashes.
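A sketch of the combine-then-guard order from fix 4 and the follow-up: non-string members must be filtered out before the join, because `Array#join` itself throws on a `Symbol` member (and on an object whose `toString` throws). The function name is hypothetical:

```javascript
'use strict'

// Combine multiple tracestate header fields (W3C §3.3.1.1 / RFC 7230 §3.2.2)
// into one comma-joined string, dropping members that are not strings so the
// join cannot throw on exotic carrier values.
function normalizeTracestate (value) {
  if (Array.isArray(value)) {
    return value.filter((member) => typeof member === 'string').join(',')
  }
  return typeof value === 'string' ? value : ''
}
```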
The loader bypassed `setAndTrack`, change tracking, and telemetry, and the two fields it produced (`commitSHA` / `repositoryUrl`) lived on `Config` only because four consumers read them. Every Config build was silently chaining `fs.readFileSync` calls against the host's `.git/` and `git.properties`, regardless of which subsystem ultimately needed them.
Consumers (`git_metadata_tagger`, `remote_config`, `debugger/config`, `profiling/config`) now call `getGitMetadata(config)` when they need the values; the loader has its own focused spec at `git_metadata.spec.js` instead of being covered transitively through Config.
`debugger/index.spec.js` previously seeded `config.commitSHA` / `config.repositoryUrl` and asserted the worker received them verbatim. Those values now flow through `getGitMetadata(config)`, so the spec proxyquires the loader the same way `debugger/config.spec.js` does; otherwise the assertion would compare against `undefined` and the worker-shape contract would degrade silently.
…oad-github-action (#8278) - Remove DataDog/coverage-upload-github-action external dependency from the coverage action and reimplement using datadog-ci from npm directly - Extract the common cache/setup-node/install pattern shared by coverage and push_to_test_optimization into a new reusable datadog-ci action, pinning @datadog/datadog-ci in a single place Co-authored-by: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
The two `mutation methods are all no-ops` tests pinned the OTel `Span`
mutator contract via `assert.deepStrictEqual(_tags, {})` after `finish()`.
That assertion held on PR #8030's branch because `SpanProcessor#_erase`
still ran `span.context()._tags = {}` per finished span; PR #8235 dropped
the reset because the trace reference is released on the next line and
the per-span allocation was never observable. Both branches were green
individually; together on master the test asserted a side effect that no
longer runs.
The OTel proxy and OTel-bridge `Span` short-circuit on `isWritable(ddSpan)`
inside `span-helpers.js`, so the contract under test is "post-finish writes
by these mutators do not land on `_tags`/`_links`/`_events`" — not "every
tag is wiped". Pin each post-finish mutator's target keys instead.
Refs: #8030
Refs: #8235
…with 2 updates (#8281) Bumps the gh-actions-packages group with 2 updates in the / directory: [github/codeql-action](https://github.com/github/codeql-action) and [slackapi/slack-github-action](https://github.com/slackapi/slack-github-action). Bumps the gh-actions-packages group with 2 updates in the /.github/workflows directory: [github/codeql-action](https://github.com/github/codeql-action) and [slackapi/slack-github-action](https://github.com/slackapi/slack-github-action). Updates `github/codeql-action` from 4.35.2 to 4.35.3 - [Release notes](https://github.com/github/codeql-action/releases) - [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md) - [Commits](github/codeql-action@95e58e9...e46ed2c) Updates `slackapi/slack-github-action` from 3.0.2 to 3.0.3 - [Release notes](https://github.com/slackapi/slack-github-action/releases) - [Changelog](https://github.com/slackapi/slack-github-action/blob/main/CHANGELOG.md) - [Commits](slackapi/slack-github-action@03ea543...45a88b9) --- updated-dependencies: - dependency-name: github/codeql-action dependency-version: 4.35.3 dependency-type: direct:production update-type: version-update:semver-patch dependency-group: gh-actions-packages - dependency-name: slackapi/slack-github-action dependency-version: 3.0.3 dependency-type: direct:production update-type: version-update:semver-patch dependency-group: gh-actions-packages ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Ruben Bridgewater <ruben@bridgewater.de>
…pdates (#8279) Bumps the test-versions group with 3 updates in the /integration-tests/esbuild directory: [@koa/router](https://github.com/koajs/router), [axios](https://github.com/axios/axios) and [knex](https://github.com/knex/knex). Updates `@koa/router` from 15.4.0 to 15.5.0 - [Release notes](https://github.com/koajs/router/releases) - [Commits](koajs/router@v15.4.0...v15.5.0) Updates `axios` from 1.15.2 to 1.16.0 - [Release notes](https://github.com/axios/axios/releases) - [Changelog](https://github.com/axios/axios/blob/v1.x/CHANGELOG.md) - [Commits](axios/axios@v1.15.2...v1.16.0) Updates `knex` from 3.2.9 to 3.2.10 - [Release notes](https://github.com/knex/knex/releases) - [Changelog](https://github.com/knex/knex/blob/master/CHANGELOG.md) - [Commits](knex/knex@3.2.9...3.2.10) --- updated-dependencies: - dependency-name: "@koa/router" dependency-version: 15.5.0 dependency-type: direct:production update-type: version-update:semver-minor dependency-group: test-versions - dependency-name: axios dependency-version: 1.16.0 dependency-type: direct:production update-type: version-update:semver-minor dependency-group: test-versions - dependency-name: knex dependency-version: 3.2.10 dependency-type: direct:production update-type: version-update:semver-patch dependency-group: test-versions ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Ruben Bridgewater <ruben@bridgewater.de>
#8243) * fix(scripts): harden mocha-parallel-files against crashes and silent failures
Several paths in the parallel test runner crashed the parent or hid failures silently, mostly on Windows. Spawn errors (ENOENT, EAGAIN, EMFILE, AV blocking the binary) emit `error`, not `exit`, so the missing listener turned every failed spawn into an uncaught parent-side exception. SIGINT/SIGTERM didn't reach children on Windows, where there's no process group to inherit. EPIPE downgraded real test failures to exit 0. Track live children, handle all three events, forward signals (escalating to SIGKILL after 1 s), and reuse `process.exitCode` from the EPIPE path.
JUnit aggregation also lost data: with `jobs === 1` each child wrote its xunit straight to `junitOutFile` and later children overwrote earlier ones, so CI got only the last file's results. Sharding is now unconditional.
Smaller correctness fixes folded in: `--timeout 0` was treated as falsy, the stream-end leftover was flushed without a trailing newline, `isFailureStartLine` matched any test name containing "N failing", JUnit attribute parsing returned `NaN` for malformed input, JUnit cleanup didn't survive a Windows file lock, and `globSync` results were sorted per pattern instead of globally.
* refactor(scripts): align mocha-parallel-files with project style
The bigger items: extracted `Entry` / `EntryStats` / `ParseArgsResult` typedefs so the `/** @type {...} */ (null)` casts inside the entries factory and `parseArgs` body go away. Renamed single-letter callback args (`m`, `l`, `f`, `p`) to expressive names; the for-loop counter `i` in `parseArgs` stays. Switched the one-shot child events (`message`, `exit`, `error`, `stdout` / `stderr` 'end') from `.on` to `.once`. Added `@param` JSDoc on every callable the diff defines or changes — closures included — plus three boy-scout `@param` blocks on adjacent helpers (`isWarningLine`, `handleStdoutChunk`, `handleStderrChunk`). No prose lines; method names carry the contracts.
The smaller items: `node:` prefix on the core imports, `error` instead of `err` in catch handlers, `??` over `||` for the stat-coalesce defaults, brace the three `if/else` sites, drop the dead `?? ''` after `parts.at(-1)`, drop the no-op `unknown` cast on the IPC `msg`, inline `ensureDir` (its body was a single `fs.mkdirSync` call), and remove the `stableUnique` one-line helper in favour of an inline `[...new Set(...)]`.
0b9ff04 to e3bda56 (Compare)
sabrenner approved these changes on May 6, 2026
[f121c4281f] - (SEMVER-PATCH) fix(scripts): harden mocha-parallel-files against crashes and silent … (Ruben Bridgewater) #8243
[fc60a84f47] - (SEMVER-PATCH) chore(deps): bump the test-versions group across 1 directory with 3 updates (dependabot[bot]) #8279
[0509edc11b] - (SEMVER-PATCH) chore(deps): bump the gh-actions-packages group across 2 directories with 2 updates (dependabot[bot]) #8281
[6fc588e858] - (SEMVER-PATCH) Revert "ci(codeql): skip workflow for non-production file changes and merge queue (#8277)" (Ruben Bridgewater) #8291
[215ea598d4] - (SEMVER-PATCH) test(otel): scope post-finish no-op assertions to mutator keys (Ruben Bridgewater) #8285
[3616c45502] - (SEMVER-PATCH) ci: extract shared datadog-ci install action and replace coverage-upload-github-action (Roch Devost) #8278
[18c37b6d40] - (SEMVER-PATCH) refactor(config): extract git metadata loader out of Config (Ruben Bridgewater) #8238
[cbd06436a8] - (SEMVER-PATCH) fix(propagation): tighten W3C trace-context inject and extract correc… (Ruben Bridgewater) #8269
[388ae8171b] - (SEMVER-MINOR) fix(llmobs): add cost_tags to annotate and annotationContext (Xinyuan Guo) #8175
[e3a092a7d5] - (SEMVER-PATCH) perf(core): trim per-span allocations across id, span, and sampler (Ruben Bridgewater) #8235
[6ce2635162] - (SEMVER-PATCH) ci(codeql): skip workflow for non-production file changes and merge queue (Roch Devost) #8277
[b5259d74b6] - (SEMVER-PATCH) fix: guard propagation, stop mutating input, and fix log (Ruben Bridgewater) #8226
[de5649fd31] - (SEMVER-PATCH) fix(test): prevent cascading ws test failures by reliably closing wsServer in afterEach (William Conti) #8192
[57b73cfbbf] - (SEMVER-PATCH) ci: add duration and shorten time output for all green (Roch Devost) #8225
[c3f962f6f6] - (SEMVER-MINOR) feat(llmobs, anthropic): add reasoning/extended thinking support for Anthropic (Sam Brenner) #8146
[df6b6e0c71] - (SEMVER-MINOR) feat(exporter): retry through agent startup (Ruben Bridgewater) #8223
[210e9f7cc4] - (SEMVER-PATCH) update windows ci jobs to upload a node report artifact on crash (Roch Devost) #8239
[d28de77da5] - (SEMVER-PATCH) perf(plugins): trim per-message allocations in bullmq, sharedb, and d… (Ruben Bridgewater) #8232
[7afadcff5a] - (SEMVER-PATCH) ci: update all mcr references to use our mirror instead (Roch Devost) #8215
[7c559f48de] - (SEMVER-MINOR) feat(otel): forward getActiveSpan() writes onto the active Datadog span (Ruben Bridgewater) #8030
[a5733805a2] - (SEMVER-PATCH) fix(debugger): move process_tags to payload root (Thomas Watson) #8173
[6500302a1f] - (SEMVER-PATCH) perf(propagation): tighten tracestate, baggage, and tag inject paths (Ruben Bridgewater) #8234
[e8690041e0] - (SEMVER-PATCH) fix(otel): honor DD_TRACE_OTEL_ENABLED=false and OTEL_SDK_DISABLED=false (Ruben Bridgewater) #8219
[f67ac4dad3] - (SEMVER-PATCH) fix(otel): tighten OTel-bridge Span spec compliance (Ruben Bridgewater) #8242
[18eb17da71] - (SEMVER-PATCH) perf(llmobs): fast-path encodeUnicode and collapse plugin filter chains (Ruben Bridgewater) #8230
[6e32e73e40] - (SEMVER-PATCH) fix(llmobs): guard JSON.parse on streamed tool-call arguments (Ruben Bridgewater) #8227
[1234065491] - (SEMVER-PATCH) [test-optimization] Default Nx and Lage names in v6 (Juan Antonio Fernández de Alba) #8268
[aae141ef5f] - (SEMVER-PATCH) [test optimization] Use head SHA for test optimization dispatch (Juan Antonio Fernández de Alba) #8270
[4db54b181f] - (SEMVER-PATCH) chore(deps): bump axios from 1.15.0 to 1.15.2 in /integration-tests/webpack in the npm_and_yarn group across 1 directory (dependabot[bot]) #8267
[0b6d586397] - (SEMVER-PATCH) fix(scripts): resolve instrumentation ranges at both engine bounds (Ruben Bridgewater) #8251
[f6c69c6bf6] - (SEMVER-PATCH) ci: pipe GraphQL variables via stdin so commit-on-branch pushes don't… (Ruben Bridgewater) #8252
[afae167f30] - (SEMVER-PATCH) [test-optimization] Raise v6 Mocha minimum version (Juan Antonio Fernández de Alba) #8245
[d509f232a8] - (SEMVER-PATCH) [test-optimization] Raise v6 Cypress minimum version (Juan Antonio Fernández de Alba) #8247
[ea902ccf5b] - (SEMVER-PATCH) [test-optimization] Raise v6 Jest minimum version (Juan Antonio Fernández de Alba) #8246
[a329f1f575] - (SEMVER-MINOR) feat(llmobs): add tool_definitions support to Tagger (Alexandre Choura) #8082
[d34fada6ce] - (SEMVER-PATCH) chore(deps): bump the runtime-minor-and-patch-dependencies group across 2 directories with 3 updates (dependabot[bot]) #8259
[c267fbfd54] - (SEMVER-PATCH) pin node version to 24.24.1 for windows in ci (Roch Devost) #8262