
Fix OTLP HTTP/protobuf export failures and improve OTLP integration test reliability#8449

Merged
bouwkast merged 10 commits into master from steven/attempt-to-fix-otel-tests-failing-constantly
Apr 21, 2026
Conversation

@bouwkast
Collaborator

@bouwkast bouwkast commented Apr 13, 2026

Summary of changes

This fixes some export failures in OTLP Logs/Metrics exports that were caused by bugs within the exporter code (forced HTTP/2 and an unclosed _otlpExporter), along with some test changes.

Reason for change

On many of my CI runs I kept hitting the following error, which requires a retry:

[Error] Exception when sending logs OTLP HTTP request. System.Threading.Tasks.TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 10 seconds elapsing.

Looking further, I found a few issues of note:

  • The HttpClients for Logs/Metrics in our OTLP implementation forced HTTP/2. However, the test agent that we use only supports an HTTP/1.1 aiohttp server. Note: this is technically a production change for a test issue; I'm fine reverting it and just leaving these tests as [Flaky]
  • OtlpSubmissionLogSink.DisposeAsync() never called _otlpExporter.Shutdown() which appears to have left the HttpClient open.
  • Tests used OTEL_LOG_EXPORT_INTERVAL instead of OTEL_BLRP_SCHEDULE_DELAY (the former doesn't appear to exist, or no longer does)
  • We did not honor the base OTEL_EXPORTER_OTLP_PROTOCOL as the fallback for the metrics/logs protocol(s) when they are not set -> https://opentelemetry.io/docs/specs/otel/protocol/exporter/#configuration-options
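
Per the linked spec, the signal-specific protocol variables fall back to the base OTEL_EXPORTER_OTLP_PROTOCOL. A minimal sketch of that resolution (a hypothetical helper for illustration, not the actual exporter code, which is C#):

```python
import os

def resolve_protocol(signal: str, default: str = "http/protobuf") -> str:
    """Resolve the OTLP protocol for a signal (e.g. 'LOGS' or 'METRICS').

    The signal-specific variable wins; otherwise fall back to the base
    OTEL_EXPORTER_OTLP_PROTOCOL, then to the spec default.
    """
    specific = os.environ.get(f"OTEL_EXPORTER_OTLP_{signal}_PROTOCOL")
    if specific:
        return specific
    return os.environ.get("OTEL_EXPORTER_OTLP_PROTOCOL", default)
```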

Implementation details

The test agent is a bit finicky to work with, and since we don't use it much anywhere else, I opted for polling strategies instead of single-shot requests when fetching request data.

  • Change OTEL_LOG_EXPORT_INTERVAL to OTEL_BLRP_SCHEDULE_DELAY
  • Replace fire-and-forget /test/session/clear with ClearTestAgentSession retry helper
  • Replace single-GET data fetch with WaitForTestAgentData polling helper (30s timeout, 500ms interval)
  • Set OTEL_METRIC_EXPORT_INTERVAL=60000 so only the shutdown flush fires (one clean batch)
  • Set OTEL_BLRP_SCHEDULE_DELAY=500; the OpenTelemetry SDK has a hard 5s shutdown timeout that we hit, so this should be enough to get our data out and flush it before it is timed out. I'll add this to the Other Details as well. This seems to be the flakiest bit left
  • Remove [Flaky] from SubmitsOtlpMetrics and SubmitsOtlpLogs
  • Remove the newly added Skip as well
  • Force the OTEL_EXPORTER_OTLP_PROTOCOL value to set the metric/log protocol when not set
  • Removed the parameter from Shutdown as it wasn't used (and I attempted to use it)
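
The polling approach above can be sketched like this (a hypothetical Python analogue of the C# WaitForTestAgentData helper; the function name and the fetch callback are assumptions):

```python
import time

def wait_for_test_agent_data(fetch, minimum=1, timeout_s=30.0, interval_s=0.5):
    """Poll fetch() until it returns at least `minimum` items or we time out.

    Mirrors the 30s timeout / 500ms interval used in the tests: instead of a
    single GET that races the exporter's flush, keep re-fetching until data
    shows up.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        data = fetch()
        if len(data) >= minimum:
            return data
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"test agent returned {len(data)} item(s) after "
                f"{timeout_s}s; expected >= {minimum}")
        time.sleep(interval_s)
```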

Test coverage

I'm going to run the CI a few times against this to see whether it helps the tests run quicker and pass.

Other details

Claude response on test agent HTTP/2 support

The test-agent's HTTP server on port 4318 is created in agent.py:

  otlp_http_app = make_otlp_http_app(agent)       # line 2436
  otlp_http_runner = web.AppRunner(otlp_http_app)  # line 2470
  otlp_http_site = web.TCPSite(otlp_http_runner, port=parsed_args.otlp_http_port)  # line 2490

  This is aiohttp.web -- its TCPSite serves HTTP/1.1 only. aiohttp does not support HTTP/2. You can see it declared as a dependency in setup.py:

  install_requires=[
      "aiohttp",
      ...
  ]

  There's no h2, hyper, hypercorn, or any other HTTP/2 library in the dependencies.

  For contrast, port 4317 (gRPC) uses grpc_aio.server() (line 2481), which speaks HTTP/2 natively -- that's why gRPC tests were never affected.

  Source: https://github.com/DataDog/dd-apm-test-agent -- setup.py for dependencies, ddapm_test_agent/agent.py lines 2436-2490 for server setup.

Claude response on OpenTelemetry SDK Log Shutdown Chain

LoggerProviderSdk.Dispose() calls Processor.Shutdown(5000) -- hardcoded, not configurable. BatchExportProcessor.OnShutdown(5000) calls worker.Shutdown(5000), which does Thread.Join(5000). The worker thread runs PerformExport(), which calls Exporter.Export() with no timeout parameter and no cancellation token. The OTLP exporter then calls HttpClient.SendAsync() with HttpClient.Timeout set from OTEL_EXPORTER_OTLP_TIMEOUT (default 10s). When the 5s Thread.Join expires before the 10s HTTP timeout, the dispose path proceeds to cancel the in-flight request and dispose the worker. The export silently fails, data is lost, and the process exits with code 0.

OTEL_BLRP_EXPORT_TIMEOUT (default 30s) is parsed and stored but never read during export or shutdown -- it's a dead field.
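
The race described above (5s Thread.Join expiring before the 10s HTTP timeout) can be reproduced in miniature. A hedged Python sketch with scaled-down timings, not the SDK's actual code:

```python
import threading
import time

exported = []

def slow_export():
    # Stands in for Exporter.Export() -> HttpClient.SendAsync() with a 10s
    # HttpClient.Timeout, scaled down to 0.4s here.
    time.sleep(0.4)
    exported.append("batch")

# Stands in for BatchExportProcessor's worker; Shutdown does
# Thread.Join(5000), scaled down to 0.1s here.
worker = threading.Thread(target=slow_export, daemon=True)
worker.start()
worker.join(timeout=0.1)  # the join expires before the export completes

# The join timed out but dispose proceeds anyway: at this point the export
# is still in flight, the batch is lost, and the process can exit with 0.
alive_after_join = worker.is_alive()
batch_lost = (len(exported) == 0)
```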

@bouwkast bouwkast added the AI Generated Largely based on code generated by an AI or LLM. This label is the same across all dd-trace-* repos label Apr 13, 2026
@bouwkast bouwkast changed the title Steven/attempt to fix otel tests failing constantly Fix OTLP export CI timeout caused by forced HTTP/2 Apr 13, 2026
@bouwkast
Collaborator Author

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. 👍


@dd-trace-dotnet-ci-bot

dd-trace-dotnet-ci-bot Bot commented Apr 13, 2026

Execution-Time Benchmarks Report ⏱️

Execution-time results for samples comparing This PR (8449) and master.

✅ No regressions detected - check the details below

Full Metrics Comparison

FakeDbCommand

| Metric | Master (Mean ± 95% CI) | Current (Mean ± 95% CI) | Change | Status |
| --- | --- | --- | --- | --- |
| **.NET Framework 4.8 - Baseline** | | | | |
| duration | 73.49 ± (73.63 - 74.16) ms | 72.20 ± (72.27 - 72.64) ms | -1.7% | |
| **.NET Framework 4.8 - Bailout** | | | | |
| duration | 79.47 ± (79.31 - 79.78) ms | 79.40 ± (79.19 - 79.74) ms | -0.1% | |
| **.NET Framework 4.8 - CallTarget+Inlining+NGEN** | | | | |
| duration | 1083.96 ± (1083.84 - 1091.90) ms | 1077.65 ± (1078.72 - 1086.25) ms | -0.6% | |
| **.NET Core 3.1 - Baseline** | | | | |
| process.internal_duration_ms | 22.76 ± (22.70 - 22.81) ms | 22.83 ± (22.77 - 22.89) ms | +0.3% | ✅⬆️ |
| process.time_to_main_ms | 86.52 ± (86.20 - 86.84) ms | 86.47 ± (86.13 - 86.81) ms | -0.1% | |
| runtime.dotnet.exceptions.count | 0 ± (0 - 0) | 0 ± (0 - 0) | +0.0% | |
| runtime.dotnet.mem.committed | 10.91 ± (10.90 - 10.91) MB | 10.91 ± (10.90 - 10.91) MB | -0.0% | |
| runtime.dotnet.threads.count | 12 ± (12 - 12) | 12 ± (12 - 12) | +0.0% | |
| **.NET Core 3.1 - Bailout** | | | | |
| process.internal_duration_ms | 22.53 ± (22.48 - 22.57) ms | 22.80 ± (22.74 - 22.85) ms | +1.2% | ✅⬆️ |
| process.time_to_main_ms | 86.41 ± (86.15 - 86.68) ms | 88.41 ± (88.07 - 88.75) ms | +2.3% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 0 ± (0 - 0) | 0 ± (0 - 0) | +0.0% | |
| runtime.dotnet.mem.committed | 10.94 ± (10.94 - 10.94) MB | 10.94 ± (10.93 - 10.94) MB | -0.0% | |
| runtime.dotnet.threads.count | 13 ± (13 - 13) | 13 ± (13 - 13) | +0.0% | |
| **.NET Core 3.1 - CallTarget+Inlining+NGEN** | | | | |
| process.internal_duration_ms | 230.67 ± (229.51 - 231.83) ms | 230.21 ± (229.11 - 231.31) ms | -0.2% | |
| process.time_to_main_ms | 528.27 ± (526.92 - 529.61) ms | 532.46 ± (531.11 - 533.82) ms | +0.8% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 0 ± (0 - 0) | 0 ± (0 - 0) | +0.0% | |
| runtime.dotnet.mem.committed | 48.49 ± (48.47 - 48.52) MB | 48.51 ± (48.48 - 48.54) MB | +0.0% | ✅⬆️ |
| runtime.dotnet.threads.count | 28 ± (28 - 28) | 28 ± (28 - 28) | -0.0% | |
| **.NET 6 - Baseline** | | | | |
| process.internal_duration_ms | 21.28 ± (21.23 - 21.33) ms | 21.58 ± (21.53 - 21.64) ms | +1.4% | ✅⬆️ |
| process.time_to_main_ms | 74.12 ± (73.90 - 74.33) ms | 74.50 ± (74.27 - 74.74) ms | +0.5% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 0 ± (0 - 0) | 0 ± (0 - 0) | +0.0% | |
| runtime.dotnet.mem.committed | 10.62 ± (10.62 - 10.62) MB | 10.62 ± (10.62 - 10.63) MB | +0.1% | ✅⬆️ |
| runtime.dotnet.threads.count | 10 ± (10 - 10) | 10 ± (10 - 10) | +0.0% | |
| **.NET 6 - Bailout** | | | | |
| process.internal_duration_ms | 21.50 ± (21.44 - 21.55) ms | 21.49 ± (21.43 - 21.54) ms | -0.0% | |
| process.time_to_main_ms | 76.76 ± (76.47 - 77.04) ms | 76.65 ± (76.37 - 76.93) ms | -0.1% | |
| runtime.dotnet.exceptions.count | 0 ± (0 - 0) | 0 ± (0 - 0) | +0.0% | |
| runtime.dotnet.mem.committed | 10.74 ± (10.74 - 10.74) MB | 10.74 ± (10.74 - 10.74) MB | +0.0% | ✅⬆️ |
| runtime.dotnet.threads.count | 11 ± (11 - 11) | 11 ± (11 - 11) | +0.0% | |
| **.NET 6 - CallTarget+Inlining+NGEN** | | | | |
| process.internal_duration_ms | 386.94 ± (385.06 - 388.81) ms | 386.12 ± (383.80 - 388.45) ms | -0.2% | |
| process.time_to_main_ms | 528.32 ± (526.99 - 529.65) ms | 530.60 ± (529.40 - 531.80) ms | +0.4% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 0 ± (0 - 0) | 0 ± (0 - 0) | +0.0% | |
| runtime.dotnet.mem.committed | 49.85 ± (49.82 - 49.88) MB | 50.06 ± (50.02 - 50.09) MB | +0.4% | ✅⬆️ |
| runtime.dotnet.threads.count | 28 ± (28 - 28) | 28 ± (28 - 28) | -0.0% | |
| **.NET 8 - Baseline** | | | | |
| process.internal_duration_ms | 19.52 ± (19.48 - 19.56) ms | 19.42 ± (19.39 - 19.45) ms | -0.5% | |
| process.time_to_main_ms | 73.41 ± (73.23 - 73.59) ms | 72.73 ± (72.57 - 72.90) ms | -0.9% | |
| runtime.dotnet.exceptions.count | 0 ± (0 - 0) | 0 ± (0 - 0) | +0.0% | |
| runtime.dotnet.mem.committed | 7.66 ± (7.66 - 7.67) MB | 7.66 ± (7.66 - 7.67) MB | +0.0% | ✅⬆️ |
| runtime.dotnet.threads.count | 10 ± (10 - 10) | 10 ± (10 - 10) | +0.0% | |
| **.NET 8 - Bailout** | | | | |
| process.internal_duration_ms | 19.56 ± (19.51 - 19.61) ms | 19.86 ± (19.80 - 19.91) ms | +1.5% | ✅⬆️ |
| process.time_to_main_ms | 74.97 ± (74.72 - 75.22) ms | 76.37 ± (76.08 - 76.66) ms | +1.9% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 0 ± (0 - 0) | 0 ± (0 - 0) | +0.0% | |
| runtime.dotnet.mem.committed | 7.71 ± (7.70 - 7.71) MB | 7.73 ± (7.72 - 7.73) MB | +0.3% | ✅⬆️ |
| runtime.dotnet.threads.count | 11 ± (11 - 11) | 11 ± (11 - 11) | +0.0% | |
| **.NET 8 - CallTarget+Inlining+NGEN** | | | | |
| process.internal_duration_ms | 306.76 ± (304.29 - 309.24) ms | 305.40 ± (302.91 - 307.88) ms | -0.4% | |
| process.time_to_main_ms | 489.74 ± (488.63 - 490.85) ms | 492.96 ± (491.71 - 494.21) ms | +0.7% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 0 ± (0 - 0) | 0 ± (0 - 0) | +0.0% | |
| runtime.dotnet.mem.committed | 37.07 ± (37.04 - 37.09) MB | 37.12 ± (37.10 - 37.14) MB | +0.1% | ✅⬆️ |
| runtime.dotnet.threads.count | 27 ± (27 - 27) | 27 ± (27 - 27) | -0.4% | |

HttpMessageHandler

| Metric | Master (Mean ± 95% CI) | Current (Mean ± 95% CI) | Change | Status |
| --- | --- | --- | --- | --- |
| **.NET Framework 4.8 - Baseline** | | | | |
| duration | 192.79 ± (192.71 - 193.48) ms | 193.65 ± (193.77 - 194.61) ms | +0.4% | ✅⬆️ |
| **.NET Framework 4.8 - Bailout** | | | | |
| duration | 195.30 ± (195.23 - 195.65) ms | 196.52 ± (196.32 - 196.83) ms | +0.6% | ✅⬆️ |
| **.NET Framework 4.8 - CallTarget+Inlining+NGEN** | | | | |
| duration | 1150.66 ± (1150.21 - 1155.86) ms | 1153.64 ± (1154.44 - 1160.18) ms | +0.3% | ✅⬆️ |
| **.NET Core 3.1 - Baseline** | | | | |
| process.internal_duration_ms | 185.90 ± (185.56 - 186.25) ms | 187.24 ± (186.88 - 187.60) ms | +0.7% | ✅⬆️ |
| process.time_to_main_ms | 81.00 ± (80.79 - 81.21) ms | 81.84 ± (81.63 - 82.04) ms | +1.0% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 3 ± (3 - 3) | 3 ± (3 - 3) | +0.0% | |
| runtime.dotnet.mem.committed | 16.15 ± (16.12 - 16.18) MB | 16.07 ± (16.04 - 16.10) MB | -0.5% | |
| runtime.dotnet.threads.count | 20 ± (20 - 20) | 20 ± (20 - 20) | -0.3% | |
| **.NET Core 3.1 - Bailout** | | | | |
| process.internal_duration_ms | 185.28 ± (185.00 - 185.55) ms | 187.90 ± (187.47 - 188.34) ms | +1.4% | ✅⬆️ |
| process.time_to_main_ms | 82.36 ± (82.22 - 82.51) ms | 83.73 ± (83.52 - 83.95) ms | +1.7% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 3 ± (3 - 3) | 3 ± (3 - 3) | +0.0% | |
| runtime.dotnet.mem.committed | 16.21 ± (16.18 - 16.24) MB | 16.12 ± (16.10 - 16.14) MB | -0.5% | |
| runtime.dotnet.threads.count | 21 ± (20 - 21) | 21 ± (21 - 21) | +0.4% | ✅⬆️ |
| **.NET Core 3.1 - CallTarget+Inlining+NGEN** | | | | |
| process.internal_duration_ms | 395.19 ± (393.57 - 396.80) ms | 397.34 ± (396.01 - 398.67) ms | +0.5% | ✅⬆️ |
| process.time_to_main_ms | 509.86 ± (508.72 - 511.00) ms | 512.37 ± (511.37 - 513.38) ms | +0.5% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 3 ± (3 - 3) | 3 ± (3 - 3) | +0.0% | |
| runtime.dotnet.mem.committed | 58.84 ± (58.64 - 59.04) MB | 59.27 ± (59.20 - 59.34) MB | +0.7% | ✅⬆️ |
| runtime.dotnet.threads.count | 30 ± (29 - 30) | 30 ± (29 - 30) | +0.0% | ✅⬆️ |
| **.NET 6 - Baseline** | | | | |
| process.internal_duration_ms | 191.28 ± (190.89 - 191.67) ms | 192.06 ± (191.76 - 192.36) ms | +0.4% | ✅⬆️ |
| process.time_to_main_ms | 71.14 ± (70.96 - 71.31) ms | 71.79 ± (71.59 - 71.99) ms | +0.9% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 4 ± (4 - 4) | 4 ± (4 - 4) | +0.0% | |
| runtime.dotnet.mem.committed | 16.15 ± (16.01 - 16.28) MB | 16.17 ± (16.03 - 16.30) MB | +0.1% | ✅⬆️ |
| runtime.dotnet.threads.count | 19 ± (19 - 19) | 18 ± (18 - 18) | -3.6% | |
| **.NET 6 - Bailout** | | | | |
| process.internal_duration_ms | 190.02 ± (189.79 - 190.25) ms | 191.14 ± (190.88 - 191.41) ms | +0.6% | ✅⬆️ |
| process.time_to_main_ms | 71.86 ± (71.74 - 71.98) ms | 72.45 ± (72.30 - 72.60) ms | +0.8% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 4 ± (4 - 4) | 4 ± (4 - 4) | +0.0% | |
| runtime.dotnet.mem.committed | 15.97 ± (15.81 - 16.12) MB | 16.23 ± (16.10 - 16.36) MB | +1.7% | ✅⬆️ |
| runtime.dotnet.threads.count | 19 ± (19 - 19) | 20 ± (20 - 20) | +3.4% | ✅⬆️ |
| **.NET 6 - CallTarget+Inlining+NGEN** | | | | |
| process.internal_duration_ms | 597.77 ± (595.21 - 600.32) ms | 601.48 ± (598.83 - 604.13) ms | +0.6% | ✅⬆️ |
| process.time_to_main_ms | 511.82 ± (511.01 - 512.63) ms | 513.75 ± (512.86 - 514.63) ms | +0.4% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 4 ± (4 - 4) | 4 ± (4 - 4) | +0.0% | |
| runtime.dotnet.mem.committed | 61.60 ± (61.51 - 61.70) MB | 61.82 ± (61.74 - 61.91) MB | +0.4% | ✅⬆️ |
| runtime.dotnet.threads.count | 30 ± (30 - 30) | 30 ± (30 - 30) | -0.0% | |
| **.NET 8 - Baseline** | | | | |
| process.internal_duration_ms | 187.26 ± (187.00 - 187.52) ms | 188.73 ± (188.50 - 188.97) ms | +0.8% | ✅⬆️ |
| process.time_to_main_ms | 69.76 ± (69.58 - 69.95) ms | 70.27 ± (70.09 - 70.46) ms | +0.7% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 4 ± (4 - 4) | 4 ± (4 - 4) | +0.0% | |
| runtime.dotnet.mem.committed | 11.77 ± (11.74 - 11.80) MB | 11.75 ± (11.72 - 11.77) MB | -0.2% | |
| runtime.dotnet.threads.count | 18 ± (18 - 18) | 18 ± (18 - 18) | +0.5% | ✅⬆️ |
| **.NET 8 - Bailout** | | | | |
| process.internal_duration_ms | 187.09 ± (186.88 - 187.31) ms | 188.28 ± (188.04 - 188.51) ms | +0.6% | ✅⬆️ |
| process.time_to_main_ms | 70.86 ± (70.74 - 70.99) ms | 71.44 ± (71.32 - 71.57) ms | +0.8% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 4 ± (4 - 4) | 4 ± (4 - 4) | +0.0% | |
| runtime.dotnet.mem.committed | 11.84 ± (11.81 - 11.87) MB | 11.74 ± (11.68 - 11.81) MB | -0.8% | |
| runtime.dotnet.threads.count | 19 ± (19 - 19) | 19 ± (19 - 19) | -0.7% | |
| **.NET 8 - CallTarget+Inlining+NGEN** | | | | |
| process.internal_duration_ms | 519.92 ± (517.35 - 522.48) ms | 517.49 ± (514.91 - 520.07) ms | -0.5% | |
| process.time_to_main_ms | 471.74 ± (470.84 - 472.64) ms | 472.71 ± (471.97 - 473.45) ms | +0.2% | ✅⬆️ |
| runtime.dotnet.exceptions.count | 4 ± (4 - 4) | 4 ± (4 - 4) | +0.0% | |
| runtime.dotnet.mem.committed | 50.78 ± (50.75 - 50.81) MB | 50.71 ± (50.68 - 50.74) MB | -0.1% | |
| runtime.dotnet.threads.count | 30 ± (30 - 30) | 30 ± (30 - 30) | -0.1% | |
Comparison explanation

Execution-time benchmarks measure the whole time it takes to execute a program, and are intended to measure the one-off costs. Cases where the execution time results for the PR are worse than latest master results are highlighted in **red**. The following thresholds were used for comparing the execution times:

  • Welch test with statistical test for significance of 5%
  • Only results indicating a difference greater than 5% and 5 ms are considered.
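A minimal sketch of the statistic behind such a comparison (a generic Welch's t implementation for illustration; this is not the benchmarking platform's actual code):

```python
import math

def welch_t(mean1, var1, n1, mean2, var2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom for
    two samples with unequal variances. The resulting t and df feed a
    significance test at the 5% level."""
    se2 = var1 / n1 + var2 / n2
    t = (mean1 - mean2) / math.sqrt(se2)
    df = se2 ** 2 / ((var1 / n1) ** 2 / (n1 - 1)
                     + (var2 / n2) ** 2 / (n2 - 1))
    return t, df
```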

Note that these results are based on a single point-in-time result for each branch. For full results, see the dashboard.

Graphs show the p99 interval based on the mean and StdDev of the test run, as well as the mean value of the run (shown as a diamond below the graph).

Duration charts
FakeDbCommand (.NET Framework 4.8)
gantt
    title Execution time (ms) FakeDbCommand (.NET Framework 4.8)
    dateFormat  x
    axisFormat %Q
    todayMarker off
    section Baseline
    This PR (8449) - mean (72ms)  : 70, 75
    master - mean (74ms)  : 70, 78

    section Bailout
    This PR (8449) - mean (79ms)  : 75, 84
    master - mean (80ms)  : 76, 83

    section CallTarget+Inlining+NGEN
    This PR (8449) - mean (1,082ms)  : 1027, 1138
    master - mean (1,088ms)  : 1028, 1148

FakeDbCommand (.NET Core 3.1)
gantt
    title Execution time (ms) FakeDbCommand (.NET Core 3.1)
    dateFormat  x
    axisFormat %Q
    todayMarker off
    section Baseline
    This PR (8449) - mean (116ms)  : 110, 123
    master - mean (116ms)  : 111, 122

    section Bailout
    This PR (8449) - mean (118ms)  : 112, 125
    master - mean (116ms)  : 111, 121

    section CallTarget+Inlining+NGEN
    This PR (8449) - mean (801ms)  : 775, 826
    master - mean (797ms)  : 770, 824

FakeDbCommand (.NET 6)
gantt
    title Execution time (ms) FakeDbCommand (.NET 6)
    dateFormat  x
    axisFormat %Q
    todayMarker off
    section Baseline
    This PR (8449) - mean (103ms)  : 98, 107
    master - mean (102ms)  : 98, 106

    section Bailout
    This PR (8449) - mean (105ms)  : 99, 111
    master - mean (105ms)  : 99, 111

    section CallTarget+Inlining+NGEN
    This PR (8449) - mean (945ms)  : 907, 982
    master - mean (943ms)  : 909, 978

FakeDbCommand (.NET 8)
gantt
    title Execution time (ms) FakeDbCommand (.NET 8)
    dateFormat  x
    axisFormat %Q
    todayMarker off
    section Baseline
    This PR (8449) - mean (100ms)  : 97, 103
    master - mean (101ms)  : 97, 104

    section Bailout
    This PR (8449) - mean (104ms)  : 98, 111
    master - mean (102ms)  : 97, 108

    section CallTarget+Inlining+NGEN
    This PR (8449) - mean (832ms)  : 779, 885
    master - mean (829ms)  : 786, 873

HttpMessageHandler (.NET Framework 4.8)
gantt
    title Execution time (ms) HttpMessageHandler (.NET Framework 4.8)
    dateFormat  x
    axisFormat %Q
    todayMarker off
    section Baseline
    This PR (8449) - mean (194ms)  : 189, 199
    master - mean (193ms)  : 189, 197

    section Bailout
    This PR (8449) - mean (197ms)  : 194, 199
    master - mean (195ms)  : 193, 197

    section CallTarget+Inlining+NGEN
    This PR (8449) - mean (1,157ms)  : 1116, 1198
    master - mean (1,153ms)  : 1112, 1194

HttpMessageHandler (.NET Core 3.1)
gantt
    title Execution time (ms) HttpMessageHandler (.NET Core 3.1)
    dateFormat  x
    axisFormat %Q
    todayMarker off
    section Baseline
    This PR (8449) - mean (278ms)  : 273, 282
    master - mean (276ms)  : 271, 281

    section Bailout
    This PR (8449) - mean (280ms)  : 273, 288
    master - mean (276ms)  : 273, 280

    section CallTarget+Inlining+NGEN
    This PR (8449) - mean (947ms)  : 925, 968
    master - mean (933ms)  : 912, 954

HttpMessageHandler (.NET 6)
gantt
    title Execution time (ms) HttpMessageHandler (.NET 6)
    dateFormat  x
    axisFormat %Q
    todayMarker off
    section Baseline
    This PR (8449) - mean (273ms)  : 268, 277
    master - mean (271ms)  : 266, 276

    section Bailout
    This PR (8449) - mean (272ms)  : 268, 276
    master - mean (270ms)  : 267, 273

    section CallTarget+Inlining+NGEN
    This PR (8449) - mean (1,145ms)  : 1106, 1185
    master - mean (1,140ms)  : 1103, 1177

HttpMessageHandler (.NET 8)
gantt
    title Execution time (ms) HttpMessageHandler (.NET 8)
    dateFormat  x
    axisFormat %Q
    todayMarker off
    section Baseline
    This PR (8449) - mean (269ms)  : 265, 273
    master - mean (267ms)  : 263, 270

    section Bailout
    This PR (8449) - mean (270ms)  : 266, 273
    master - mean (267ms)  : 264, 271

    section CallTarget+Inlining+NGEN
    This PR (8449) - mean (1,023ms)  : 981, 1065
    master - mean (1,028ms)  : 978, 1078


@pr-commenter

pr-commenter Bot commented Apr 13, 2026

Benchmarks

Benchmark execution time: 2026-04-21 15:57:12

Comparing candidate commit e80e6ec in PR branch steven/attempt-to-fix-otel-tests-failing-constantly with baseline commit e36a35c in branch master.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 27 metrics, 0 unstable metrics, 87 known flaky benchmarks.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'

Known flaky benchmarks

These benchmarks are marked as flaky and will not trigger a failure. Modify FLAKY_BENCHMARKS_REGEX to control which benchmarks are marked as flaky.

scenario:Benchmarks.Trace.ActivityBenchmark.StartStopWithChild net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.008%; +0.005%]
  • ignore execution_time [-1003.902µs; -33.429µs] or [-0.499%; -0.017%]
  • ignore throughput [+248.703op/s; +633.957op/s] or [+0.295%; +0.752%]

scenario:Benchmarks.Trace.ActivityBenchmark.StartStopWithChild net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.007%]
  • ignore execution_time [-1670.674µs; +1827.563µs] or [-0.834%; +0.912%]
  • 🟩 throughput [+10220.292op/s; +12448.841op/s] or [+8.591%; +10.464%]

scenario:Benchmarks.Trace.ActivityBenchmark.StartStopWithChild netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.007%]
  • ignore execution_time [+0.268ms; +2.425ms] or [+0.135%; +1.220%]
  • ignore throughput [-1735.544op/s; -566.817op/s] or [-1.765%; -0.576%]

scenario:Benchmarks.Trace.AgentWriterBenchmark.WriteAndFlushEnrichedTraces net472

  • ignore allocated_mem [-20 bytes; -19 bytes] or [-0.613%; -0.600%]
  • 🟥 execution_time [+306.810ms; +309.116ms] or [+152.250%; +153.394%]
  • ignore throughput [+7.493op/s; +11.566op/s] or [+1.348%; +2.081%]

scenario:Benchmarks.Trace.AgentWriterBenchmark.WriteAndFlushEnrichedTraces net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.009%; +0.002%]
  • 🟥 execution_time [+383.291ms; +386.103ms] or [+302.824%; +305.045%]
  • ignore throughput [+16.114op/s; +19.205op/s] or [+2.125%; +2.532%]

scenario:Benchmarks.Trace.AgentWriterBenchmark.WriteAndFlushEnrichedTraces netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.009%; +0.002%]
  • 🟥 execution_time [+394.895ms; +397.940ms] or [+349.467%; +352.162%]
  • ignore throughput [-3.156op/s; +0.489op/s] or [-0.446%; +0.069%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleMoreComplexBody net472

  • 🟥 allocated_mem [+1.308KB; +1.308KB] or [+27.529%; +27.541%]
  • ignore execution_time [-206.904µs; +446.050µs] or [-0.103%; +0.223%]
  • ignore throughput [-4378.679op/s; -3967.898op/s] or [-3.407%; -3.087%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleMoreComplexBody net6.0

  • 🟥 allocated_mem [+471 bytes; +472 bytes] or [+9.977%; +9.987%]
  • 🟩 execution_time [-15.650ms; -11.459ms] or [-7.309%; -5.352%]
  • ignore throughput [+4385.335op/s; +7190.647op/s] or [+3.201%; +5.249%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleMoreComplexBody netcoreapp3.1

  • 🟥 allocated_mem [+1.272KB; +1.272KB] or [+27.502%; +27.510%]
  • ignore execution_time [-12.516ms; -8.367ms] or [-5.960%; -3.984%]
  • ignore throughput [-1351.517op/s; +933.638op/s] or [-1.222%; +0.844%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleSimpleBody net472

  • 🟥 allocated_mem [+1.307KB; +1.307KB] or [+105.746%; +105.759%]
  • ignore execution_time [-1274.151µs; -650.030µs] or [-0.634%; -0.324%]
  • 🟥 throughput [-250109.907op/s; -245995.343op/s] or [-25.537%; -25.117%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleSimpleBody net6.0

  • 🟥 allocated_mem [+471 bytes; +472 bytes] or [+38.558%; +38.566%]
  • 🟩 execution_time [-26.245ms; -21.364ms] or [-11.704%; -9.527%]
  • ignore throughput [-68075.124op/s; -45147.732op/s] or [-7.273%; -4.823%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleSimpleBody netcoreapp3.1

  • 🟥 allocated_mem [+1.272KB; +1.272KB] or [+105.292%; +105.304%]
  • ignore execution_time [-4.518ms; -0.239ms] or [-2.255%; -0.119%]
  • 🟥 throughput [-133902.803op/s; -117734.751op/s] or [-19.239%; -16.916%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorMoreComplexBody net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.007%; +0.003%]
  • ignore execution_time [-575.912µs; +479.689µs] or [-0.287%; +0.239%]
  • ignore throughput [-792.328op/s; +64.213op/s] or [-0.533%; +0.043%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorMoreComplexBody net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.003%]
  • ignore execution_time [-2002.370µs; +1510.014µs] or [-1.010%; +0.762%]
  • 🟩 throughput [+11537.518op/s; +14463.442op/s] or [+7.341%; +9.203%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorMoreComplexBody netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.007%; +0.003%]
  • ignore execution_time [+1.745ms; +5.754ms] or [+0.889%; +2.934%]
  • 🟩 throughput [+7122.674op/s; +9773.142op/s] or [+5.674%; +7.786%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorSimpleBody net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.008%; +0.004%]
  • ignore execution_time [-287.757µs; -13.482µs] or [-0.144%; -0.007%]
  • ignore throughput [-43360.459op/s; -4793.008op/s] or [-1.319%; -0.146%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorSimpleBody net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.008%]
  • ignore execution_time [-2.506ms; -1.766ms] or [-1.239%; -0.873%]
  • 🟩 throughput [+478568.895op/s; +495565.097op/s] or [+15.958%; +16.524%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorSimpleBody netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.004%]
  • 🟩 execution_time [-19.315ms; -14.933ms] or [-8.904%; -6.884%]
  • 🟩 throughput [+216948.438op/s; +271159.826op/s] or [+8.611%; +10.763%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeArgs net472

  • ignore allocated_mem [+0 bytes; +2 bytes] or [-0.001%; +0.007%]
  • 🟥 execution_time [+299.729ms; +300.330ms] or [+149.764%; +150.065%]
  • ignore throughput [+136.886op/s; +155.781op/s] or [+1.512%; +1.721%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeArgs net6.0

  • ignore allocated_mem [-1 bytes; +2 bytes] or [-0.004%; +0.008%]
  • 🟥 execution_time [+299.917ms; +303.621ms] or [+151.249%; +153.117%]
  • ignore throughput [+343.124op/s; +560.153op/s] or [+2.624%; +4.284%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeArgs netcoreapp3.1

  • ignore allocated_mem [-1 bytes; +2 bytes] or [-0.004%; +0.008%]
  • 🟥 execution_time [+300.334ms; +302.951ms] or [+151.285%; +152.603%]
  • ignore throughput [+237.921op/s; +364.783op/s] or [+2.297%; +3.522%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeLegacyArgs net472

  • ignore allocated_mem [+2 bytes; +3 bytes] or [+0.137%; +0.150%]
  • 🟥 execution_time [+296.891ms; +297.749ms] or [+145.821%; +146.242%]
  • ignore throughput [-2.283op/s; +8.993op/s] or [-0.061%; +0.238%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeLegacyArgs net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.009%]
  • 🟥 execution_time [+295.974ms; +298.228ms] or [+144.691%; +145.793%]
  • ignore throughput [+155.682op/s; +189.548op/s] or [+2.262%; +2.754%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeLegacyArgs netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.009%]
  • 🟥 execution_time [+300.905ms; +302.566ms] or [+150.392%; +151.222%]
  • ignore throughput [-71.266op/s; -49.827op/s] or [-1.415%; -0.989%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmark net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [+5.090µs; +9.194µs] or [+1.045%; +1.888%]
  • ignore throughput [-37.664op/s; -20.876op/s] or [-1.834%; -1.017%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmark net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.000%; +0.010%]
  • ignore execution_time [+18.975µs; +45.632µs] or [+4.352%; +10.466%]
  • ignore throughput [-224.374op/s; -104.447op/s] or [-9.755%; -4.541%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmark netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.003%]
  • ignore execution_time [+6.083µs; +28.061µs] or [+1.303%; +6.012%]
  • ignore throughput [-139.524op/s; -58.691op/s] or [-6.441%; -2.709%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmarkWithAttack net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-4.711µs; -0.473µs] or [-1.272%; -0.128%]
  • ignore throughput [+4.499op/s; +34.922op/s] or [+0.167%; +1.293%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmarkWithAttack net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.007%]
  • 🟥 execution_time [+23.358µs; +46.961µs] or [+7.457%; +14.992%]
  • 🟥 throughput [-436.279op/s; -237.669op/s] or [-13.600%; -7.409%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmarkWithAttack netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.003%]
  • ignore execution_time [-15.914µs; +6.442µs] or [-4.354%; +1.762%]
  • ignore throughput [-81.465op/s; +51.906op/s] or [-2.923%; +1.863%]

scenario:Benchmarks.Trace.AspNetCoreBenchmark.SendRequest net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • 🟥 execution_time [+299.482ms; +300.136ms] or [+149.472%; +149.798%]
  • ignore throughput [+3686047.905op/s; +4251875.672op/s] or [+1.846%; +2.130%]

scenario:Benchmarks.Trace.AspNetCoreBenchmark.SendRequest net6.0

  • ignore allocated_mem [+83 bytes; +85 bytes] or [+0.465%; +0.476%]
  • unstable execution_time [+308.472ms; +359.182ms] or [+335.167%; +390.266%]
  • 🟩 throughput [+1046.827op/s; +1202.768op/s] or [+8.602%; +9.883%]

scenario:Benchmarks.Trace.AspNetCoreBenchmark.SendRequest netcoreapp3.1

  • ignore allocated_mem [+20 bytes; +22 bytes] or [+0.099%; +0.110%]
  • unstable execution_time [+276.710ms; +318.933ms] or [+210.103%; +242.163%]
  • 🟩 throughput [+705.768op/s; +905.779op/s] or [+6.832%; +8.769%]

scenario:Benchmarks.Trace.CIVisibilityProtocolWriterBenchmark.WriteAndFlushEnrichedTraces net472

  • 🟥 allocated_mem [+4.385KB; +4.390KB] or [+7.791%; +7.799%]
  • unstable execution_time [+271.652ms; +331.045ms] or [+124.903%; +152.211%]
  • 🟥 throughput [-521.867op/s; -456.670op/s] or [-47.286%; -41.379%]

scenario:Benchmarks.Trace.CIVisibilityProtocolWriterBenchmark.WriteAndFlushEnrichedTraces net6.0

  • ignore allocated_mem [-1.270KB; -1.268KB] or [-2.995%; -2.990%]
  • unstable execution_time [+205.049ms; +338.278ms] or [+87.383%; +144.160%]
  • 🟥 throughput [-741.473op/s; -658.036op/s] or [-49.456%; -43.891%]

scenario:Benchmarks.Trace.CIVisibilityProtocolWriterBenchmark.WriteAndFlushEnrichedTraces netcoreapp3.1

  • ignore allocated_mem [+954 bytes; +958 bytes] or [+2.255%; +2.262%]
  • 🟥 execution_time [+332.408ms; +339.507ms] or [+198.819%; +203.065%]
  • 🟥 throughput [-390.887op/s; -355.895op/s] or [-27.217%; -24.780%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSlice net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-47.150µs; -14.095µs] or [-2.373%; -0.709%]
  • ignore throughput [+4.610op/s; +12.927op/s] or [+0.916%; +2.569%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSlice net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [+1.550µs; +12.133µs] or [+0.107%; +0.834%]
  • ignore throughput [-5.517op/s; -0.559op/s] or [-0.803%; -0.081%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSlice netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-93.174µs; -81.206µs] or [-3.241%; -2.825%]
  • ignore throughput [+10.156op/s; +11.688op/s] or [+2.919%; +3.360%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSliceWithPool net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-12.773µs; -6.218µs] or [-1.103%; -0.537%]
  • ignore throughput [+4.832op/s; +9.724op/s] or [+0.559%; +1.126%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSliceWithPool net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-46.840µs; -34.233µs] or [-4.344%; -3.175%]
  • ignore throughput [+31.313op/s; +42.680op/s] or [+3.376%; +4.602%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSliceWithPool netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-19.956µs; +70.057µs] or [-1.069%; +3.753%]
  • ignore throughput [-16.782op/s; +32.457op/s] or [-3.133%; +6.058%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OriginalCharSlice net472

  • ignore allocated_mem [-43 bytes; +21 bytes] or [-0.007%; +0.003%]
  • ignore execution_time [-1.009µs; +14.301µs] or [-0.039%; +0.559%]
  • ignore throughput [-2.086op/s; +0.233op/s] or [-0.534%; +0.060%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OriginalCharSlice net6.0

  • ignore allocated_mem [-38 bytes; +46 bytes] or [-0.006%; +0.007%]
  • ignore execution_time [-104.332µs; -68.643µs] or [-5.285%; -3.477%]
  • ignore throughput [+19.769op/s; +28.840op/s] or [+3.903%; +5.693%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OriginalCharSlice netcoreapp3.1

  • ignore allocated_mem [-42 bytes; +23 bytes] or [-0.007%; +0.004%]
  • ignore execution_time [-134.484µs; -93.202µs] or [-3.410%; -2.364%]
  • ignore throughput [+6.408op/s; +9.029op/s] or [+2.527%; +3.560%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearch net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.001%; +0.008%]
  • 🟥 execution_time [+304.339ms; +305.790ms] or [+153.259%; +153.990%]
  • 🟩 throughput [+15974.957op/s; +17605.918op/s] or [+5.140%; +5.665%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearch net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.004%]
  • 🟥 execution_time [+302.207ms; +304.513ms] or [+151.436%; +152.592%]
  • ignore throughput [+14356.288op/s; +20502.023op/s] or [+2.263%; +3.232%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearch netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.004%]
  • 🟥 execution_time [+301.078ms; +304.193ms] or [+151.249%; +152.813%]
  • ignore throughput [+23169.026op/s; +30569.534op/s] or [+4.881%; +6.439%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearchAsync net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.008%; +0.004%]
  • 🟥 execution_time [+302.136ms; +303.412ms] or [+151.723%; +152.363%]
  • 🟩 throughput [+16830.899op/s; +18628.510op/s] or [+5.639%; +6.241%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearchAsync net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.007%; +0.006%]
  • 🟥 execution_time [+298.232ms; +300.546ms] or [+147.463%; +148.606%]
  • ignore throughput [+6539.187op/s; +11565.082op/s] or [+1.054%; +1.863%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearchAsync netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.004%]
  • 🟥 execution_time [+303.854ms; +307.330ms] or [+154.007%; +155.768%]
  • ignore throughput [+368.108op/s; +8689.776op/s] or [+0.079%; +1.876%]

scenario:Benchmarks.Trace.GraphQLBenchmark.ExecuteAsync net472

  • ignore allocated_mem [+0 bytes; +1 bytes] or [+0.108%; +0.119%]
  • 🟥 execution_time [+298.650ms; +300.161ms] or [+149.895%; +150.654%]
  • ignore throughput [+5768.334op/s; +8949.505op/s] or [+1.496%; +2.322%]

scenario:Benchmarks.Trace.GraphQLBenchmark.ExecuteAsync net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.006%]
  • 🟥 execution_time [+299.115ms; +305.766ms] or [+149.081%; +152.396%]
  • 🟩 throughput [+60850.482op/s; +68690.274op/s] or [+12.083%; +13.640%]

scenario:Benchmarks.Trace.GraphQLBenchmark.ExecuteAsync netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.006%]
  • 🟥 execution_time [+300.950ms; +303.962ms] or [+149.720%; +151.218%]
  • ignore throughput [-6142.034op/s; -655.965op/s] or [-1.454%; -0.155%]

scenario:Benchmarks.Trace.ILoggerBenchmark.EnrichedLog net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.006%]
  • ignore execution_time [-1062.709µs; -228.079µs] or [-0.528%; -0.113%]
  • ignore throughput [-3161.970op/s; -1489.311op/s] or [-1.272%; -0.599%]

scenario:Benchmarks.Trace.ILoggerBenchmark.EnrichedLog net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.003%]
  • 🟩 execution_time [-16.356ms; -12.673ms] or [-7.606%; -5.893%]
  • 🟩 throughput [+26505.966op/s; +33153.771op/s] or [+7.271%; +9.095%]

scenario:Benchmarks.Trace.ILoggerBenchmark.EnrichedLog netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.008%]
  • ignore execution_time [-0.503ms; +3.433ms] or [-0.252%; +1.722%]
  • ignore throughput [+4173.248op/s; +9913.326op/s] or [+1.523%; +3.619%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatAspectBenchmark net472

  • ignore allocated_mem [-4.459KB; -4.431KB] or [-1.623%; -1.613%]
  • unstable execution_time [+9.580µs; +50.569µs] or [+2.366%; +12.491%]
  • ignore throughput [-262.813op/s; -57.682op/s] or [-10.576%; -2.321%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatAspectBenchmark net6.0

  • 🟥 allocated_mem [+43.148KB; +43.170KB] or [+15.739%; +15.747%]
  • unstable execution_time [-13.597µs; +61.958µs] or [-2.687%; +12.246%]
  • unstable throughput [-125.452op/s; +91.431op/s] or [-6.260%; +4.562%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatAspectBenchmark netcoreapp3.1

  • 🟩 allocated_mem [-16.298KB; -16.282KB] or [-5.941%; -5.935%]
  • ignore execution_time [-50.940µs; +5.603µs] or [-8.828%; +0.971%]
  • ignore throughput [-4.397op/s; +150.152op/s] or [-0.251%; +8.578%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatBenchmark net472

  • ignore allocated_mem [-2 bytes; +2 bytes] or [-0.005%; +0.006%]
  • ignore execution_time [+0.813µs; +2.328µs] or [+1.408%; +4.033%]
  • ignore throughput [-653.450op/s; -228.282op/s] or [-3.771%; -1.317%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatBenchmark net6.0

  • ignore allocated_mem [-4 bytes; +0 bytes] or [-0.010%; -0.001%]
  • unstable execution_time [+6.735µs; +11.188µs] or [+15.919%; +26.445%]
  • 🟥 throughput [-4862.533op/s; -3029.844op/s] or [-20.470%; -12.755%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatBenchmark netcoreapp3.1

  • ignore allocated_mem [-1 bytes; +1 bytes] or [-0.002%; +0.002%]
  • unstable execution_time [-13.664µs; -6.244µs] or [-21.200%; -9.687%]
  • 🟩 throughput [+1649.640op/s; +3203.159op/s] or [+10.121%; +19.652%]

scenario:Benchmarks.Trace.Log4netBenchmark.EnrichedLog net472

  • ignore allocated_mem [+2 bytes; +3 bytes] or [+0.061%; +0.072%]
  • 🟥 execution_time [+302.111ms; +303.258ms] or [+152.703%; +153.284%]
  • ignore throughput [-127.136op/s; -105.491op/s] or [-2.124%; -1.763%]

scenario:Benchmarks.Trace.Log4netBenchmark.EnrichedLog net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.005%]
  • 🟥 execution_time [+303.302ms; +305.488ms] or [+154.380%; +155.493%]
  • ignore throughput [-129.619op/s; -53.280op/s] or [-1.608%; -0.661%]

scenario:Benchmarks.Trace.Log4netBenchmark.EnrichedLog netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.005%]
  • 🟥 execution_time [+300.818ms; +302.826ms] or [+150.597%; +151.602%]
  • ignore throughput [-170.009op/s; -106.996op/s] or [-2.166%; -1.363%]

scenario:Benchmarks.Trace.RedisBenchmark.SendReceive net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.005%]
  • ignore execution_time [-943.602µs; -356.937µs] or [-0.470%; -0.178%]
  • ignore throughput [-4146.464op/s; -1081.088op/s] or [-1.148%; -0.299%]

scenario:Benchmarks.Trace.RedisBenchmark.SendReceive net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.007%]
  • ignore execution_time [-171.079µs; +542.957µs] or [-0.086%; +0.271%]
  • 🟩 throughput [+39782.045op/s; +43880.874op/s] or [+7.530%; +8.306%]

scenario:Benchmarks.Trace.RedisBenchmark.SendReceive netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.006%]
  • ignore execution_time [+1.182ms; +4.860ms] or [+0.599%; +2.463%]
  • ignore throughput [+7071.397op/s; +15644.417op/s] or [+1.674%; +3.703%]

scenario:Benchmarks.Trace.SerilogBenchmark.EnrichedLog net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.006%]
  • 🟥 execution_time [+300.394ms; +301.981ms] or [+149.719%; +150.510%]
  • ignore throughput [-3273.756op/s; -2296.090op/s] or [-2.162%; -1.516%]

scenario:Benchmarks.Trace.SerilogBenchmark.EnrichedLog net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+0.000%; +0.009%]
  • 🟥 execution_time [+299.924ms; +301.240ms] or [+150.607%; +151.268%]
  • ignore throughput [+2184.239op/s; +3744.831op/s] or [+0.950%; +1.629%]

scenario:Benchmarks.Trace.SerilogBenchmark.EnrichedLog netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.003%]
  • 🟥 execution_time [+303.038ms; +305.260ms] or [+153.681%; +154.808%]
  • ignore throughput [+2566.094op/s; +4867.161op/s] or [+1.445%; +2.741%]

scenario:Benchmarks.Trace.SingleSpanAspNetCoreBenchmark.SingleSpanAspNetCore net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • 🟥 execution_time [+300.167ms; +301.114ms] or [+149.725%; +150.197%]
  • 🟩 throughput [+66181928.699op/s; +66472224.397op/s] or [+48.198%; +48.409%]

scenario:Benchmarks.Trace.SingleSpanAspNetCoreBenchmark.SingleSpanAspNetCore net6.0

  • ignore allocated_mem [+83 bytes; +85 bytes] or [+0.488%; +0.498%]
  • 🟥 execution_time [+419.511ms; +424.418ms] or [+521.736%; +527.839%]
  • 🟩 throughput [+1013.172op/s; +1180.960op/s] or [+7.832%; +9.129%]

scenario:Benchmarks.Trace.SingleSpanAspNetCoreBenchmark.SingleSpanAspNetCore netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • 🟥 execution_time [+299.557ms; +300.449ms] or [+149.412%; +149.857%]
  • ignore throughput [+1715269.191op/s; +2652860.306op/s] or [+0.760%; +1.175%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishScope net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.006%]
  • ignore execution_time [+0.676ms; +1.646ms] or [+0.338%; +0.823%]
  • ignore throughput [-17182.367op/s; -10135.369op/s] or [-1.918%; -1.131%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishScope net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.007%]
  • ignore execution_time [-4.979ms; -3.844ms] or [-2.439%; -1.883%]
  • 🟩 throughput [+109908.115op/s; +117766.857op/s] or [+10.262%; +10.996%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishScope netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.006%]
  • ignore execution_time [+3.157ms; +7.275ms] or [+1.598%; +3.681%]
  • 🟩 throughput [+59366.015op/s; +78223.839op/s] or [+6.871%; +9.054%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishSpan net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.000%; +0.009%]
  • ignore execution_time [+374.124µs; +1083.517µs] or [+0.187%; +0.541%]
  • ignore throughput [-5376.668op/s; +427.091op/s] or [-0.492%; +0.039%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishSpan net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.004%]
  • ignore execution_time [+6.340ms; +10.431ms] or [+3.303%; +5.434%]
  • 🟩 throughput [+96787.102op/s; +126414.852op/s] or [+7.491%; +9.785%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishSpan netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.001%; +0.008%]
  • ignore execution_time [-2.316ms; -0.645ms] or [-1.138%; -0.317%]
  • 🟩 throughput [+95848.095op/s; +104124.874op/s] or [+9.519%; +10.341%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishTwoScopes net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.008%; +0.002%]
  • ignore execution_time [-1.716ms; -0.494ms] or [-0.854%; -0.246%]
  • ignore throughput [+10726.550op/s; +13584.872op/s] or [+2.390%; +3.027%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishTwoScopes net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.009%]
  • ignore execution_time [-496.077µs; +1025.869µs] or [-0.248%; +0.512%]
  • 🟩 throughput [+53631.696op/s; +59370.072op/s] or [+9.739%; +10.781%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishTwoScopes netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.004%]
  • ignore execution_time [-0.870ms; +3.210ms] or [-0.437%; +1.613%]
  • 🟩 throughput [+29345.671op/s; +39050.053op/s] or [+6.569%; +8.741%]

scenario:Benchmarks.Trace.TraceAnnotationsBenchmark.RunOnMethodBegin net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.006%]
  • ignore execution_time [-1415.451µs; -313.137µs] or [-0.705%; -0.156%]
  • ignore throughput [-5608.681op/s; -1993.020op/s] or [-0.821%; -0.292%]

scenario:Benchmarks.Trace.TraceAnnotationsBenchmark.RunOnMethodBegin net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.007%]
  • ignore execution_time [-0.732ms; +2.814ms] or [-0.366%; +1.408%]
  • 🟩 throughput [+50584.396op/s; +68114.734op/s] or [+5.652%; +7.610%]

scenario:Benchmarks.Trace.TraceAnnotationsBenchmark.RunOnMethodBegin netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.005%]
  • ignore execution_time [+1.470ms; +5.382ms] or [+0.747%; +2.733%]
  • ignore throughput [+34931.450op/s; +49642.470op/s] or [+4.877%; +6.931%]

@bouwkast bouwkast changed the title Fix OTLP export CI timeout caused by forced HTTP/2 [OTEL] Fix OTLP export and test reliability Apr 14, 2026
@bouwkast bouwkast changed the title [OTEL] Fix OTLP export and test reliability Fix OTLP HTTP/protobuf export failures and improve OTLP integration test reliability Apr 14, 2026
@bouwkast bouwkast force-pushed the steven/attempt-to-fix-otel-tests-failing-constantly branch from b6d7dd5 to 96ecfea Compare April 14, 2026 21:18
<type fullname="System.Net.DnsEndPoint" />
<type fullname="System.Net.EndPoint" />
<type fullname="System.Net.HttpStatusCode" />
<type fullname="System.Net.HttpVersion" />
Collaborator Author

The removal of System.Net.HttpVersion appears to be due to the removal of DefaultRequestVersion = HttpVersion.Version20 and DefaultVersionPolicy = HttpVersionPolicy.RequestVersionOrHigher

DefaultRequestVersion = HttpVersion.Version20,
DefaultVersionPolicy = HttpVersionPolicy.RequestVersionOrHigher
};
return new HttpClient(handler);
Collaborator Author

I couldn't find any documentation / reason why it had to be HTTP/2 so I think this is fine, but feel free to request it to be reverted.

Member

I think this could be because gRPC requires HTTP/2 🤔 But standard HTTP certainly shouldn't. So maybe we need to condition the client on that? (assuming this client is actually used in the grpc code path)

Collaborator Author

Yeah, gRPC pins to HTTP/2 here. I've assumed that takes effect, and since there aren't test failures there I think that is good enough.

@bouwkast bouwkast force-pushed the steven/attempt-to-fix-otel-tests-failing-constantly branch 2 times, most recently from b8990e6 to 3a29f10 Compare April 15, 2026 18:21
@bouwkast bouwkast added type:bug area:tests unit tests, integration tests area:opentelemetry OpenTelemetry support labels Apr 20, 2026
andrewlock pushed a commit that referenced this pull request Apr 21, 2026
## Summary of changes

Skips `SubmitsOtlpLogs` and `SubmitsOtlpMetrics` as both have been
significantly more flaky recently and in some PRs (unrelated to them)
have been constantly failing and blocking builds.

## Reason for change

Skips `SubmitsOtlpLogs` and `SubmitsOtlpMetrics` as both have been
significantly more flaky recently and in some PRs (unrelated to them)
have been constantly failing and blocking builds.

## Implementation details

Marked as Skip

## Test coverage

Less

## Other details

I have been making progress on this in [Fix OTLP
HTTP/protobuf export failures and improve OTLP integration test
reliability](#8449), but haven't resolved it there yet. I also haven't
reproduced it locally, AND it seems to be getting significantly worse
recently - one theory is that our agents are under much higher load,
which is causing them to hit this.



Member

@andrewlock andrewlock left a comment

Yikes, what a mess, thanks for tackling all this 😅

Comment thread tracer/src/Datadog.Trace/Logging/DirectSubmission/Sink/OtlpSubmissionLogSink.cs Outdated
DefaultRequestVersion = HttpVersion.Version20,
DefaultVersionPolicy = HttpVersionPolicy.RequestVersionOrHigher
};
return new HttpClient(handler);
Member

I think this could be because gRPC requires HTTP/2 🤔 But standard HTTP certainly shouldn't. So maybe we need to condition the client on that? (assuming this client is actually used in the grpc code path)

// final flush. Pin the APM URL back to the in-process MockAgent.
if (useAgentHostBackup && agent is MockTracerAgent.TcpUdpAgent tcpAgent)
{
SetEnvironmentVariable("DD_TRACE_AGENT_URL", $"http://127.0.0.1:{tcpAgent.Port}");
Member

Uuurgh, good catch

Contributor

Wow, yeah good find

The vendored gRPC client already sets HTTP/2 per-request via
RequireHttp2, so forcing DefaultRequestVersion=HTTP/2 on the shared
HttpClient breaks HTTP/protobuf export against HTTP/1.1-only servers
(e.g., dd-apm-test-agent on port 4318).

Remove System.Net.HttpVersion from trimming XML since it is no longer
referenced.
Shut down the OtlpExporter in OtlpSubmissionLogSink.DisposeAsync()
so the underlying HttpClient is disposed after the final flush.
Replace fire-and-forget session clear with a retry helper that waits
for the test-agent HTTP endpoint to be ready.

Replace single-GET data fetch with a polling helper (WaitForTestAgentData)
that retries for up to 30 seconds, accounting for export flush timing
after process exit.

Fix incorrect env var OTEL_LOG_EXPORT_INTERVAL (does not exist) to
OTEL_BLRP_SCHEDULE_DELAY. Set to 500ms so the OTel SDK gets multiple
periodic exports before LoggerProviderSdk.Dispose() hits its 5s
shutdown timeout. This is especially important for gRPC, where the
first export warms the HTTP/2 connection.

Set OTEL_METRIC_EXPORT_INTERVAL to 60000ms so only the shutdown flush
fires, preventing duplicate metric batches from observable instruments
that broke snapshot comparison.

Remove [Flaky] attributes from SubmitsOtlpMetrics and SubmitsOtlpLogs.
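
The polling strategy described above can be sketched roughly as follows. This is an illustrative Python sketch (the test agent itself is an aiohttp server); `wait_for`, `fetch`, and the parameter names are hypothetical, not the actual helper added in the PR:

```python
import time


def wait_for(fetch, predicate, timeout_s=30.0, interval_s=0.5):
    """Poll `fetch()` until `predicate(data)` holds or the deadline passes.

    `fetch` is any callable returning the test agent's captured requests
    (e.g. a GET against its session-requests endpoint). Transient
    connection errors are treated as "agent not ready yet" and retried,
    which accounts for export flush timing after the traced process exits.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            data = fetch()
        except OSError:
            data = None  # endpoint not up yet; retry
        if data is not None and predicate(data):
            return data
        time.sleep(interval_s)
    raise TimeoutError("test agent returned no matching data before the deadline")
```

The single-shot GET this replaces would fail whenever the final OTLP flush landed after the test's one request; polling with a generous deadline absorbs that race.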
Comment on lines +253 to +258
var otlpGeneralProtocolName = OtlpGeneralProtocol switch
{
OtlpProtocol.Grpc => "grpc",
OtlpProtocol.HttpProtobuf => "http/protobuf",
_ => "grpc",
};
Collaborator Author

Unsure if these changes are needed, TBH - will double-check again.

Collaborator Author

Ah yeah, so the issue here was that in tests we only set SetEnvironmentVariable("OTEL_EXPORTER_OTLP_PROTOCOL", protocol); (protocol was http/protobuf), and we weren't setting the metric/logs-specific protocols, so instead of running with http/protobuf the tests actually ran gRPC.

Also the OpenTelemetry spec says (right at the top):

The following configuration options MUST be available to configure the OTLP exporter. Each configuration option MUST be overridable by a signal specific option.

https://opentelemetry.io/docs/specs/otel/protocol/exporter/#configuration-options
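
That precedence rule can be modeled in a few lines. This is an illustrative Python sketch of the spec behavior (the real implementation lives in the C# configuration builder; the function name is hypothetical):

```python
import os


def resolve_otlp_protocol(signal, env=os.environ, default="grpc"):
    """Resolve the OTLP protocol for one signal per the OTel exporter spec:
    the signal-specific variable overrides the general
    OTEL_EXPORTER_OTLP_PROTOCOL, which overrides the built-in default."""
    specific = env.get(f"OTEL_EXPORTER_OTLP_{signal.upper()}_PROTOCOL")
    general = env.get("OTEL_EXPORTER_OTLP_PROTOCOL")
    return specific or general or default
```

With only OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf set, logs and metrics should both resolve to http/protobuf - which is exactly the fallback the tests weren't getting before this change.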

Contributor

Can confirm, this new logic is correct 👍🏼

No implementation honored timeoutMilliseconds - all paths just dispose
the HttpClient. The real flush is bounded by the HTTP client timeout
during the preceding DisposeAsync/StopAsync, so Shutdown() does not
need to block. Drop the parameter and the now-unused _timeoutMs field
in MetricReader.
@bouwkast bouwkast force-pushed the steven/attempt-to-fix-otel-tests-failing-constantly branch from fdde571 to 32d2ea4 Compare April 21, 2026 11:41
@bouwkast
Collaborator Author

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Already looking forward to the next diff.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Contributor

@link04 link04 left a comment

Did notice some env vars I missed adding as well! Thank you for looking into this, man.

Contributor

@zacharycmontoya zacharycmontoya left a comment

All these changes look necessary and correct. If the CI flakiness is reduced as result, then let's merge this

@bouwkast bouwkast marked this pull request as ready for review April 21, 2026 18:21
@bouwkast bouwkast requested review from a team as code owners April 21, 2026 18:21
@bouwkast bouwkast requested a review from anna-git April 21, 2026 18:21
@bouwkast bouwkast merged commit de8518e into master Apr 21, 2026
137 of 139 checks passed
@bouwkast bouwkast deleted the steven/attempt-to-fix-otel-tests-failing-constantly branch April 21, 2026 18:22
@github-actions github-actions Bot added this to the vNext-v3 milestone Apr 21, 2026
@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: e80e6ecf96


.WithKeys(ConfigurationKeys.OpenTelemetry.ExporterOtlpLogsProtocol)
.GetAs(
defaultValue: new(OtlpProtocol.Grpc, "grpc"),
defaultValue: new(OtlpGeneralProtocol, otlpGeneralProtocolName),

P2: Propagate the general http/json protocol

For users who set only OTEL_EXPORTER_OTLP_PROTOCOL=http/json, this new fallback still resolves logs to the default gRPC protocol because OtlpGeneralProtocol does not parse http/json even though the logs-specific converter below accepts it and the configuration docs list it as valid. In that configuration logs will be sent to the gRPC default endpoint instead of the HTTP exporter unless OTEL_EXPORTER_OTLP_LOGS_PROTOCOL is also set explicitly.

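
The gap Codex describes can be seen in a small model of the two parsers. This is hypothetical Python mirroring the described C# behavior; both function names are illustrative:

```python
def parse_general(value):
    # The general-protocol parser recognizes only these two values;
    # anything else (including "http/json") falls through to None.
    return value if value in ("grpc", "http/protobuf") else None


def resolve_logs_protocol(general, logs_specific=None):
    # The logs-specific converter additionally accepts "http/json".
    if logs_specific in ("grpc", "http/protobuf", "http/json"):
        return logs_specific
    return parse_general(general) or "grpc"
```

Under this model, setting only the general variable to http/json silently falls back to gRPC, while setting the logs-specific variable to http/json works as expected - the asymmetry the P2 flags.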


Labels

  • AI Generated - Largely based on code generated by an AI or LLM. This label is the same across all dd-trace-* repos
  • area:opentelemetry - OpenTelemetry support
  • area:tests - unit tests, integration tests
  • type:bug
