Inject Kinesis DSM context into all batch records#8640

Open

jeastham1993 wants to merge 2 commits into master from james.eastham/kinesis-dsm-all-records

Conversation

@jeastham1993
Collaborator

Summary of changes

Updates the AWS Kinesis PutRecords instrumentation so trace propagation and DSM injection are attempted for every record in the batch, instead of only Records[0].

Reason for change

Batch propagation for Kinesis was inconsistent with the other AWS messaging integrations. SQS and SNS already inject Datadog context into every message in a batch, while Kinesis only injected into the first record.

Bringing Kinesis in line with the other integrations makes batch behavior more predictable and avoids partial propagation when a single PutRecords request contains multiple independently consumed records.

Implementation details

ContextPropagation.InjectTraceIntoRecords() now iterates through request.Records and calls InjectTraceIntoData() for each duck-typed record entry.
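A simplified sketch of the updated loop (the duck-typed interface and helper names follow the PR description; the surrounding method signature is approximate):

```csharp
// Sketch: attempt injection for every record in the PutRecords batch
// rather than only Records[0]. The per-record guards (JSON-parseable
// payload, DSM header-size checks, 1 MB limit) remain inside
// InjectTraceIntoData and are unchanged by this PR.
foreach (var requestRecord in request.Records)
{
    if (requestRecord.DuckCast<IContainsData>() is { } record)
    {
        InjectTraceIntoData(tracer, record, scope, streamName);
    }
}
```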

The underlying per-record behavior is unchanged:

Injection still only happens when the payload can be parsed as JSON.
DSM injection still depends on the existing payload/header-size checks.
Injection is still skipped if the final payload would exceed the Kinesis 1 MB limit.
PutRecord behavior is unchanged.
One important tradeoff is that this introduces additional per-batch work for Kinesis. Previously we only parsed and attempted injection for the first record; now we do that work for every record in the batch. That means CPU/allocation overhead will scale with batch size, especially for larger JSON payloads. This is a potential performance cost, but it also aligns Kinesis with the existing SQS/SNS model, where Datadog context is added to every message in a batch.

If there are concerns about performance, we could hide this behind a feature flag, as dd-trace-js does.
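One possible shape for such a flag (a hypothetical sketch, not part of this PR; DD_TRACE_KINESIS_BATCH_INJECTION_ENABLED is an invented name, not an existing dd-trace-dotnet setting):

```csharp
// Hypothetical: fall back to first-record-only injection unless the
// (invented) flag is enabled. Requires System and System.Linq.
var injectAllRecords = Environment.GetEnvironmentVariable(
    "DD_TRACE_KINESIS_BATCH_INJECTION_ENABLED") is not "false";

var records = injectAllRecords ? request.Records : request.Records.Take(1);
foreach (var requestRecord in records)
{
    if (requestRecord.DuckCast<IContainsData>() is { } record)
    {
        InjectTraceIntoData(tracer, record, scope, streamName);
    }
}
```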

Test coverage

Updated the managed Kinesis propagation tests to verify that all records in a PutRecords batch receive _datadog context, not just the first record.

Focused test run:

Datadog.Trace.ClrProfiler.Managed.Tests.AutoInstrumentation.AWS.Kinesis.ContextPropagationTests
11 passed, 0 failed on net10.0

Other details


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 30ad5edac9

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +40 to +44
foreach (var requestRecord in request.Records)
{
    if (requestRecord.DuckCast<IContainsData>() is { } record)
    {
        InjectTraceIntoData(tracer, record, scope, streamName);


P2 Badge Guard batch injection against the PutRecords size limit

For PutRecords batches whose original payload is close to the service request-size limit, this loop now adds a _datadog object to every JSON record without tracking the aggregate request size, so a call that previously only grew the first record can now exceed Kinesis' whole-request limit and be rejected. The per-record MaxKinesisDataSize check inside InjectTraceIntoData doesn't protect the batch total; please account for the total batch size, including partition keys/added context, before mutating every record.


