Take advantage of the Span<T> namespace changes #8477

Open
andrewlock wants to merge 9 commits into andrew/update-vendors-8 from andrew/use-new-span-t

Conversation

@andrewlock
Member

@andrewlock andrewlock commented Apr 17, 2026

Summary of changes

Remove some branching code that's no longer required after #8476 moved Span<T> to System namespace
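As an illustration of the kind of branching this removes (a hypothetical sketch, not the actual diff; `Process` is a placeholder):

```csharp
// Before: Span<T> APIs were only usable on .NET Core targets,
// so .NET Framework needed a separate allocating fallback.
#if NETCOREAPP
    ReadOnlySpan<char> slice = input.AsSpan(start, length);
    Process(slice);
#else
    Process(input.Substring(start, length)); // allocates a new string
#endif

// After: with the vendored Span<T> in the System namespace,
// the span-based path compiles on every TFM.
ReadOnlySpan<char> slice = input.AsSpan(start, length);
Process(slice);
```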

Reason for change

This sort of cleanup is exactly why we made that change: it reduces our maintenance burden.

Implementation details

I set an AI agent 🤖 looking for possible cases, so this isn't exhaustive, but it gives a taste. I think most of these changes make sense; none is outstanding on its own, but it's the little things that add up.

Test coverage

Just a refactoring, so covered by existing tests.

Other details

By definition, we don't really expect to see performance improvements from this, other than potentially some reduced allocation in .NET Framework. The primary benefit is developer experience.

Depends on the stack of PRs updating our vendored system code.

@andrewlock added the type:refactor and AI Generated (largely based on code generated by an AI or LLM; this label is the same across all dd-trace-* repos) labels on Apr 17, 2026
@andrewlock force-pushed the andrew/use-new-span-t branch from 6b128fd to 49790cb on April 17, 2026 15:05
@pr-commenter

pr-commenter Bot commented Apr 17, 2026

Benchmarks

Benchmark execution time: 2026-04-21 11:24:48

Comparing candidate commit b1f8279 in PR branch andrew/use-new-span-t with baseline commit b821c57 in branch master.

Found 0 performance improvements and 1 performance regression! Performance is the same for 26 metrics, 0 unstable metrics, 87 known flaky benchmarks.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
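The rule the two diagrams describe can be sketched as a small predicate (assumed logic based on the explanation above, not the benchmarking platform's actual code):

```csharp
// A change is significant only when the whole confidence interval
// lies outside the ±threshold band around 0%.
static bool IsSignificant(double ciLower, double ciUpper, double threshold)
{
    // With threshold = 0.01 (1%):
    //   CI [+1.3%, +3.1%] => significant (entirely above +1%)
    //   CI [-0.6%, +1.2%] => not significant (straddles the band)
    return ciLower > threshold || ciUpper < -threshold;
}
```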

scenario:Benchmarks.Trace.DbCommandBenchmark.ExecuteNonQuery net472

  • 🟥 throughput [-31776.306op/s; -29037.415op/s] or [-8.950%; -8.178%]

Known flaky benchmarks

These benchmarks are marked as flaky and will not trigger a failure. Modify FLAKY_BENCHMARKS_REGEX to control which benchmarks are marked as flaky.

scenario:Benchmarks.Trace.ActivityBenchmark.StartStopWithChild net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.008%; +0.005%]
  • ignore execution_time [-1203.499µs; -275.412µs] or [-0.598%; -0.137%]
  • ignore throughput [-18.378op/s; +500.082op/s] or [-0.022%; +0.593%]

scenario:Benchmarks.Trace.ActivityBenchmark.StartStopWithChild net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.007%]
  • ignore execution_time [-1908.222µs; +1590.781µs] or [-0.952%; +0.794%]
  • 🟩 throughput [+8252.789op/s; +10446.434op/s] or [+6.937%; +8.781%]

scenario:Benchmarks.Trace.ActivityBenchmark.StartStopWithChild netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.007%]
  • ignore execution_time [+0.499ms; +2.667ms] or [+0.251%; +1.341%]
  • ignore throughput [+1150.579op/s; +2521.109op/s] or [+1.170%; +2.563%]

scenario:Benchmarks.Trace.AgentWriterBenchmark.WriteAndFlushEnrichedTraces net472

  • ignore allocated_mem [-20 bytes; -19 bytes] or [-0.613%; -0.600%]
  • 🟥 execution_time [+300.247ms; +301.783ms] or [+148.993%; +149.755%]
  • ignore throughput [+15.494op/s; +19.117op/s] or [+2.788%; +3.439%]

scenario:Benchmarks.Trace.AgentWriterBenchmark.WriteAndFlushEnrichedTraces net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.009%; +0.002%]
  • 🟥 execution_time [+383.671ms; +386.295ms] or [+303.124%; +305.196%]
  • ignore throughput [+19.014op/s; +21.748op/s] or [+2.507%; +2.867%]

scenario:Benchmarks.Trace.AgentWriterBenchmark.WriteAndFlushEnrichedTraces netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.009%; +0.002%]
  • 🟥 execution_time [+400.290ms; +402.015ms] or [+354.242%; +355.768%]
  • ignore throughput [+3.108op/s; +6.183op/s] or [+0.439%; +0.873%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleMoreComplexBody net472

  • 🟥 allocated_mem [+1.308KB; +1.308KB] or [+27.529%; +27.541%]
  • ignore execution_time [-463.450µs; +177.264µs] or [-0.232%; +0.089%]
  • ignore throughput [-4676.270op/s; -4265.742op/s] or [-3.638%; -3.319%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleMoreComplexBody net6.0

  • 🟥 allocated_mem [+471 bytes; +472 bytes] or [+9.977%; +9.987%]
  • 🟩 execution_time [-16.220ms; -12.042ms] or [-7.575%; -5.624%]
  • ignore throughput [+4277.415op/s; +7055.872op/s] or [+3.122%; +5.150%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleMoreComplexBody netcoreapp3.1

  • 🟥 allocated_mem [+1.272KB; +1.272KB] or [+27.502%; +27.510%]
  • ignore execution_time [-11.387ms; -7.195ms] or [-5.422%; -3.426%]
  • ignore throughput [-1232.580op/s; +1058.670op/s] or [-1.114%; +0.957%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleSimpleBody net472

  • 🟥 allocated_mem [+1.307KB; +1.307KB] or [+105.746%; +105.759%]
  • ignore execution_time [-924.225µs; -309.684µs] or [-0.460%; -0.154%]
  • 🟥 throughput [-253601.762op/s; -249260.165op/s] or [-25.894%; -25.451%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleSimpleBody net6.0

  • 🟥 allocated_mem [+471 bytes; +472 bytes] or [+38.558%; +38.566%]
  • 🟩 execution_time [-26.583ms; -21.720ms] or [-11.855%; -9.686%]
  • ignore throughput [-61768.947op/s; -38638.981op/s] or [-6.599%; -4.128%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleSimpleBody netcoreapp3.1

  • 🟥 allocated_mem [+1.272KB; +1.272KB] or [+105.292%; +105.304%]
  • ignore execution_time [-0.614ms; +3.632ms] or [-0.306%; +1.813%]
  • 🟥 throughput [-139341.837op/s; -123152.220op/s] or [-20.021%; -17.695%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorMoreComplexBody net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.007%; +0.003%]
  • ignore execution_time [-1126.079µs; -109.203µs] or [-0.561%; -0.054%]
  • ignore throughput [+381.652op/s; +1137.782op/s] or [+0.257%; +0.766%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorMoreComplexBody net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.003%]
  • ignore execution_time [-1358.457µs; +2140.248µs] or [-0.685%; +1.080%]
  • 🟩 throughput [+10019.448op/s; +12985.238op/s] or [+6.375%; +8.262%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorMoreComplexBody netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.007%; +0.003%]
  • ignore execution_time [+2.270ms; +6.251ms] or [+1.157%; +3.187%]
  • 🟩 throughput [+7412.008op/s; +10035.055op/s] or [+5.905%; +7.994%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorSimpleBody net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.008%; +0.004%]
  • ignore execution_time [-310.353µs; -103.636µs] or [-0.155%; -0.052%]
  • ignore throughput [+65526.877op/s; +79322.777op/s] or [+1.993%; +2.413%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorSimpleBody net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.008%]
  • ignore execution_time [-1.905ms; -0.265ms] or [-0.942%; -0.131%]
  • 🟩 throughput [+362922.254op/s; +386598.720op/s] or [+12.101%; +12.891%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorSimpleBody netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.004%]
  • 🟩 execution_time [-19.051ms; -14.693ms] or [-8.782%; -6.773%]
  • 🟩 throughput [+207835.488op/s; +261285.648op/s] or [+8.250%; +10.371%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeArgs net472

  • ignore allocated_mem [+0 bytes; +2 bytes] or [-0.001%; +0.007%]
  • 🟥 execution_time [+300.512ms; +301.030ms] or [+150.156%; +150.415%]
  • ignore throughput [+128.410op/s; +143.284op/s] or [+1.418%; +1.583%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeArgs net6.0

  • ignore allocated_mem [-1 bytes; +2 bytes] or [-0.004%; +0.008%]
  • 🟥 execution_time [+299.436ms; +302.691ms] or [+151.006%; +152.648%]
  • ignore throughput [+370.851op/s; +581.251op/s] or [+2.836%; +4.446%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeArgs netcoreapp3.1

  • ignore allocated_mem [-1 bytes; +2 bytes] or [-0.004%; +0.008%]
  • 🟥 execution_time [+300.015ms; +302.564ms] or [+151.124%; +152.408%]
  • ignore throughput [+122.601op/s; +251.490op/s] or [+1.184%; +2.428%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeLegacyArgs net472

  • ignore allocated_mem [+3 bytes; +4 bytes] or [+0.186%; +0.199%]
  • 🟥 execution_time [+296.706ms; +297.414ms] or [+145.730%; +146.078%]
  • ignore throughput [+8.481op/s; +14.959op/s] or [+0.225%; +0.397%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeLegacyArgs net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.009%]
  • 🟥 execution_time [+296.845ms; +300.356ms] or [+145.117%; +146.833%]
  • ignore throughput [+99.410op/s; +149.055op/s] or [+1.444%; +2.165%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeLegacyArgs netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.009%]
  • 🟥 execution_time [+302.458ms; +304.852ms] or [+151.168%; +152.365%]
  • ignore throughput [+67.830op/s; +98.072op/s] or [+1.346%; +1.947%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmark net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [+3.567µs; +7.651µs] or [+0.732%; +1.571%]
  • ignore throughput [-31.350op/s; -14.608op/s] or [-1.527%; -0.711%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmark net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.000%; +0.010%]
  • ignore execution_time [+14.650µs; +41.270µs] or [+3.360%; +9.465%]
  • ignore throughput [-204.423op/s; -84.639op/s] or [-8.887%; -3.680%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmark netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.003%]
  • ignore execution_time [+6.838µs; +28.786µs] or [+1.465%; +6.167%]
  • ignore throughput [-142.684op/s; -61.993op/s] or [-6.587%; -2.862%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmarkWithAttack net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-6.318µs; -2.414µs] or [-1.706%; -0.652%]
  • ignore throughput [+18.514op/s; +46.591op/s] or [+0.686%; +1.726%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmarkWithAttack net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.007%]
  • 🟥 execution_time [+22.480µs; +46.062µs] or [+7.177%; +14.705%]
  • 🟥 throughput [-429.001op/s; -230.554op/s] or [-13.373%; -7.187%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmarkWithAttack netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.003%]
  • ignore execution_time [-14.677µs; +7.683µs] or [-4.015%; +2.102%]
  • ignore throughput [-90.944op/s; +42.455op/s] or [-3.264%; +1.524%]

scenario:Benchmarks.Trace.AspNetCoreBenchmark.SendRequest net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • 🟥 execution_time [+299.956ms; +300.618ms] or [+149.709%; +150.039%]
  • ignore throughput [+3505114.258op/s; +4076692.983op/s] or [+1.756%; +2.042%]

scenario:Benchmarks.Trace.AspNetCoreBenchmark.SendRequest net6.0

  • ignore allocated_mem [+77 bytes; +79 bytes] or [+0.431%; +0.442%]
  • 🟥 execution_time [+414.424ms; +418.796ms] or [+450.289%; +455.039%]
  • 🟩 throughput [+1048.627op/s; +1177.215op/s] or [+8.617%; +9.673%]

scenario:Benchmarks.Trace.AspNetCoreBenchmark.SendRequest netcoreapp3.1

  • ignore allocated_mem [+48 bytes; +50 bytes] or [+0.232%; +0.243%]
  • unstable execution_time [+333.583ms; +362.101ms] or [+253.286%; +274.940%]
  • 🟩 throughput [+665.907op/s; +864.808op/s] or [+6.446%; +8.372%]

scenario:Benchmarks.Trace.CIVisibilityProtocolWriterBenchmark.WriteAndFlushEnrichedTraces net472

  • ignore allocated_mem [+2.771KB; +2.776KB] or [+4.923%; +4.931%]
  • unstable execution_time [+348.611ms; +431.895ms] or [+160.288%; +198.581%]
  • 🟥 throughput [-669.717op/s; -604.612op/s] or [-60.683%; -54.784%]

scenario:Benchmarks.Trace.CIVisibilityProtocolWriterBenchmark.WriteAndFlushEnrichedTraces net6.0

  • ignore allocated_mem [-1.275KB; -1.272KB] or [-3.007%; -3.001%]
  • unstable execution_time [+202.454ms; +335.663ms] or [+86.277%; +143.045%]
  • 🟥 throughput [-743.672op/s; -660.245op/s] or [-49.603%; -44.039%]

scenario:Benchmarks.Trace.CIVisibilityProtocolWriterBenchmark.WriteAndFlushEnrichedTraces netcoreapp3.1

  • ignore allocated_mem [+180 bytes; +183 bytes] or [+0.427%; +0.434%]
  • 🟥 execution_time [+341.417ms; +348.907ms] or [+204.207%; +208.687%]
  • 🟥 throughput [-400.424op/s; -365.474op/s] or [-27.881%; -25.447%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSlice net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-76.791µs; -55.380µs] or [-3.864%; -2.787%]
  • ignore throughput [+14.870op/s; +20.543op/s] or [+2.955%; +4.082%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSlice net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-17.608µs; +2.191µs] or [-1.210%; +0.151%]
  • ignore throughput [-0.201op/s; +9.028op/s] or [-0.029%; +1.314%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSlice netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-155.635µs; -58.681µs] or [-5.414%; -2.041%]
  • ignore throughput [+8.271op/s; +26.917op/s] or [+2.377%; +7.737%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSliceWithPool net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-13.908µs; -7.767µs] or [-1.201%; -0.671%]
  • ignore throughput [+5.971op/s; +10.580op/s] or [+0.691%; +1.225%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSliceWithPool net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [-49.303µs; -36.662µs] or [-4.573%; -3.400%]
  • ignore throughput [+33.646op/s; +44.890op/s] or [+3.628%; +4.840%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSliceWithPool netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • ignore execution_time [+40.232µs; +60.741µs] or [+2.155%; +3.254%]
  • ignore throughput [-16.512op/s; -11.060op/s] or [-3.082%; -2.064%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OriginalCharSlice net472

  • ignore allocated_mem [-43 bytes; +21 bytes] or [-0.007%; +0.003%]
  • ignore execution_time [-11.347µs; +0.206µs] or [-0.443%; +0.008%]
  • ignore throughput [+0.021op/s; +1.776op/s] or [+0.005%; +0.455%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OriginalCharSlice net6.0

  • ignore allocated_mem [-38 bytes; +46 bytes] or [-0.006%; +0.007%]
  • ignore execution_time [-113.349µs; -69.215µs] or [-5.742%; -3.506%]
  • ignore throughput [+20.651op/s; +31.813op/s] or [+4.077%; +6.280%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OriginalCharSlice netcoreapp3.1

  • ignore allocated_mem [-42 bytes; +23 bytes] or [-0.007%; +0.004%]
  • ignore execution_time [-122.254µs; -80.586µs] or [-3.100%; -2.044%]
  • ignore throughput [+5.536op/s; +8.224op/s] or [+2.183%; +3.243%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearch net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.001%; +0.008%]
  • 🟥 execution_time [+302.922ms; +304.603ms] or [+152.546%; +153.392%]
  • ignore throughput [+4353.233op/s; +5877.717op/s] or [+1.401%; +1.891%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearch net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.004%]
  • 🟥 execution_time [+302.019ms; +303.210ms] or [+151.342%; +151.939%]
  • ignore throughput [+12208.560op/s; +16252.572op/s] or [+1.925%; +2.562%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearch netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.004%]
  • 🟥 execution_time [+300.931ms; +304.034ms] or [+151.175%; +152.734%]
  • ignore throughput [+18355.102op/s; +26824.596op/s] or [+3.867%; +5.651%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearchAsync net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.008%; +0.004%]
  • 🟥 execution_time [+301.954ms; +303.287ms] or [+151.631%; +152.300%]
  • ignore throughput [+1912.830op/s; +3626.113op/s] or [+0.641%; +1.215%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearchAsync net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.007%; +0.006%]
  • 🟥 execution_time [+297.990ms; +299.713ms] or [+147.343%; +148.195%]
  • ignore throughput [-2627.594op/s; +2090.978op/s] or [-0.423%; +0.337%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearchAsync netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.004%]
  • 🟥 execution_time [+303.390ms; +306.958ms] or [+153.771%; +155.580%]
  • ignore throughput [-2101.567op/s; +6339.567op/s] or [-0.454%; +1.369%]

scenario:Benchmarks.Trace.GraphQLBenchmark.ExecuteAsync net472

  • ignore allocated_mem [+0 bytes; +1 bytes] or [+0.108%; +0.119%]
  • 🟥 execution_time [+299.238ms; +301.417ms] or [+150.190%; +151.284%]
  • ignore throughput [+5030.962op/s; +7231.685op/s] or [+1.305%; +1.876%]

scenario:Benchmarks.Trace.GraphQLBenchmark.ExecuteAsync net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.006%]
  • 🟥 execution_time [+298.271ms; +305.993ms] or [+148.661%; +152.510%]
  • 🟩 throughput [+36740.034op/s; +46165.861op/s] or [+7.295%; +9.167%]

scenario:Benchmarks.Trace.GraphQLBenchmark.ExecuteAsync netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.006%]
  • 🟥 execution_time [+299.665ms; +302.501ms] or [+149.081%; +150.492%]
  • ignore throughput [-15385.918op/s; -9905.300op/s] or [-3.642%; -2.345%]

scenario:Benchmarks.Trace.ILoggerBenchmark.EnrichedLog net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.006%]
  • ignore execution_time [-2.165ms; -1.279ms] or [-1.076%; -0.636%]
  • ignore throughput [-1996.665op/s; -803.776op/s] or [-0.803%; -0.323%]

scenario:Benchmarks.Trace.ILoggerBenchmark.EnrichedLog net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.003%]
  • 🟩 execution_time [-17.219ms; -13.560ms] or [-8.007%; -6.306%]
  • 🟩 throughput [+22334.869op/s; +29318.121op/s] or [+6.127%; +8.043%]

scenario:Benchmarks.Trace.ILoggerBenchmark.EnrichedLog netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.008%]
  • ignore execution_time [-0.508ms; +3.444ms] or [-0.255%; +1.727%]
  • ignore throughput [+9545.854op/s; +15207.868op/s] or [+3.484%; +5.551%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatAspectBenchmark net472

  • ignore allocated_mem [+11.930KB; +11.958KB] or [+4.342%; +4.352%]
  • unstable execution_time [+7.254µs; +47.781µs] or [+1.792%; +11.802%]
  • ignore throughput [-250.257op/s; -44.999op/s] or [-10.071%; -1.811%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatAspectBenchmark net6.0

  • 🟩 allocated_mem [-19.053KB; -19.033KB] or [-6.950%; -6.943%]
  • unstable execution_time [-21.452µs; +50.877µs] or [-4.240%; +10.056%]
  • unstable throughput [-96.074op/s; +121.641op/s] or [-4.794%; +6.070%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatAspectBenchmark netcoreapp3.1

  • 🟩 allocated_mem [-19.375KB; -19.359KB] or [-7.063%; -7.057%]
  • ignore execution_time [-59.448µs; -2.255µs] or [-10.302%; -0.391%]
  • ignore throughput [+24.364op/s; +180.757op/s] or [+1.392%; +10.327%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatBenchmark net472

  • ignore allocated_mem [-2 bytes; +2 bytes] or [-0.005%; +0.006%]
  • ignore execution_time [+0.311µs; +1.985µs] or [+0.539%; +3.439%]
  • ignore throughput [-536.945op/s; -74.316op/s] or [-3.098%; -0.429%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatBenchmark net6.0

  • ignore allocated_mem [-4 bytes; +0 bytes] or [-0.010%; -0.001%]
  • unstable execution_time [+7.891µs; +14.996µs] or [+18.652%; +35.445%]
  • 🟥 throughput [-5453.759op/s; -3544.516op/s] or [-22.959%; -14.921%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatBenchmark netcoreapp3.1

  • ignore allocated_mem [-1 bytes; +1 bytes] or [-0.002%; +0.002%]
  • unstable execution_time [-14.546µs; -7.516µs] or [-22.567%; -11.661%]
  • 🟩 throughput [+1930.962op/s; +3420.389op/s] or [+11.847%; +20.985%]

scenario:Benchmarks.Trace.Log4netBenchmark.EnrichedLog net472

  • ignore allocated_mem [+2 bytes; +3 bytes] or [+0.061%; +0.072%]
  • 🟥 execution_time [+302.449ms; +303.345ms] or [+152.874%; +153.327%]
  • ignore throughput [-114.146op/s; -93.613op/s] or [-1.907%; -1.564%]

scenario:Benchmarks.Trace.Log4netBenchmark.EnrichedLog net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.005%]
  • 🟥 execution_time [+303.158ms; +305.289ms] or [+154.307%; +155.391%]
  • ignore throughput [-140.906op/s; -65.198op/s] or [-1.748%; -0.809%]

scenario:Benchmarks.Trace.Log4netBenchmark.EnrichedLog netcoreapp3.1

  • ignore allocated_mem [-1 bytes; +0 bytes] or [-0.027%; -0.017%]
  • 🟥 execution_time [+300.056ms; +301.953ms] or [+150.215%; +151.165%]
  • ignore throughput [-178.499op/s; -114.570op/s] or [-2.274%; -1.459%]

scenario:Benchmarks.Trace.RedisBenchmark.SendReceive net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.005%]
  • ignore execution_time [-530.260µs; +191.307µs] or [-0.264%; +0.095%]
  • 🟥 throughput [-22490.441op/s; -20606.799op/s] or [-6.226%; -5.705%]

scenario:Benchmarks.Trace.RedisBenchmark.SendReceive net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.007%]
  • ignore execution_time [-655.156µs; +334.472µs] or [-0.327%; +0.167%]
  • 🟩 throughput [+42916.515op/s; +46247.693op/s] or [+8.123%; +8.754%]

scenario:Benchmarks.Trace.RedisBenchmark.SendReceive netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.006%]
  • ignore execution_time [+1.426ms; +5.139ms] or [+0.723%; +2.605%]
  • ignore throughput [-2366.536op/s; +6002.079op/s] or [-0.560%; +1.421%]

scenario:Benchmarks.Trace.SerilogBenchmark.EnrichedLog net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.006%]
  • 🟥 execution_time [+300.342ms; +302.123ms] or [+149.693%; +150.581%]
  • ignore throughput [-3232.522op/s; -2248.117op/s] or [-2.134%; -1.484%]

scenario:Benchmarks.Trace.SerilogBenchmark.EnrichedLog net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+0.000%; +0.009%]
  • 🟥 execution_time [+301.179ms; +302.519ms] or [+151.238%; +151.911%]
  • ignore throughput [+1878.287op/s; +3776.858op/s] or [+0.817%; +1.642%]

scenario:Benchmarks.Trace.SerilogBenchmark.EnrichedLog netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.003%]
  • 🟥 execution_time [+302.929ms; +305.374ms] or [+153.626%; +154.866%]
  • ignore throughput [-4079.413op/s; -2162.080op/s] or [-2.298%; -1.218%]

scenario:Benchmarks.Trace.SingleSpanAspNetCoreBenchmark.SingleSpanAspNetCore net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • 🟥 execution_time [+300.238ms; +301.088ms] or [+149.760%; +150.184%]
  • 🟩 throughput [+65826861.281op/s; +66122694.848op/s] or [+47.939%; +48.155%]

scenario:Benchmarks.Trace.SingleSpanAspNetCoreBenchmark.SingleSpanAspNetCore net6.0

  • ignore allocated_mem [+104 bytes; +106 bytes] or [+0.611%; +0.621%]
  • unstable execution_time [+355.300ms; +394.727ms] or [+441.878%; +490.913%]
  • 🟩 throughput [+932.580op/s; +1106.059op/s] or [+7.209%; +8.550%]

scenario:Benchmarks.Trace.SingleSpanAspNetCoreBenchmark.SingleSpanAspNetCore netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [+nan%; +nan%]
  • 🟥 execution_time [+299.651ms; +300.548ms] or [+149.459%; +149.907%]
  • ignore throughput [+1699538.227op/s; +2636884.027op/s] or [+0.753%; +1.168%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishScope net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.006%]
  • ignore execution_time [+434.280µs; +965.546µs] or [+0.217%; +0.483%]
  • ignore throughput [-17412.582op/s; -15041.976op/s] or [-1.943%; -1.679%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishScope net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.007%]
  • ignore execution_time [-4.730ms; -3.572ms] or [-2.317%; -1.750%]
  • 🟩 throughput [+97315.134op/s; +105987.500op/s] or [+9.086%; +9.896%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishScope netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.003%; +0.006%]
  • ignore execution_time [+0.577ms; +4.720ms] or [+0.292%; +2.388%]
  • 🟩 throughput [+62659.803op/s; +81879.779op/s] or [+7.253%; +9.477%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishSpan net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.000%; +0.009%]
  • ignore execution_time [-328.832µs; +10.513µs] or [-0.164%; +0.005%]
  • ignore throughput [+7056.291op/s; +9281.768op/s] or [+0.646%; +0.850%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishSpan net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.004%]
  • ignore execution_time [+6.348ms; +10.442ms] or [+3.308%; +5.440%]
  • 🟩 throughput [+73074.147op/s; +103211.628op/s] or [+5.656%; +7.989%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishSpan netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.001%; +0.008%]
  • ignore execution_time [-2.417ms; -0.855ms] or [-1.187%; -0.420%]
  • 🟩 throughput [+93486.570op/s; +101537.225op/s] or [+9.285%; +10.084%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishTwoScopes net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.008%; +0.002%]
  • ignore execution_time [-1270.107µs; -73.826µs] or [-0.632%; -0.037%]
  • ignore throughput [+6653.305op/s; +9666.212op/s] or [+1.482%; +2.154%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishTwoScopes net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.004%; +0.009%]
  • ignore execution_time [-1.943ms; -0.237ms] or [-0.970%; -0.119%]
  • 🟩 throughput [+47603.147op/s; +52047.012op/s] or [+8.644%; +9.451%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishTwoScopes netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.006%; +0.004%]
  • ignore execution_time [-1736.305µs; +2388.295µs] or [-0.872%; +1.200%]
  • 🟩 throughput [+30943.072op/s; +40866.800op/s] or [+6.926%; +9.147%]

scenario:Benchmarks.Trace.TraceAnnotationsBenchmark.RunOnMethodBegin net472

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.006%]
  • ignore execution_time [-530.824µs; +553.271µs] or [-0.265%; +0.276%]
  • ignore throughput [-15716.660op/s; -11882.099op/s] or [-2.300%; -1.739%]

scenario:Benchmarks.Trace.TraceAnnotationsBenchmark.RunOnMethodBegin net6.0

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.007%]
  • ignore execution_time [-1586.229µs; +1898.852µs] or [-0.793%; +0.950%]
  • 🟩 throughput [+68446.784op/s; +85967.919op/s] or [+7.647%; +9.605%]

scenario:Benchmarks.Trace.TraceAnnotationsBenchmark.RunOnMethodBegin netcoreapp3.1

  • ignore allocated_mem [+0 bytes; +0 bytes] or [-0.005%; +0.005%]
  • ignore execution_time [+0.904ms; +4.805ms] or [+0.459%; +2.440%]
  • ignore throughput [+25101.558op/s; +41098.679op/s] or [+3.505%; +5.738%]


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 25f6d5e446

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Diff context for the review comment below (a fragment of the changed file):

```csharp
{
#if NETCOREAPP
    // don't allocate this inside the loop (CA2014)
    Span<Guid> guidSpan = stackalloc Guid[2];
```


P2: Keep the partial-trust Guid byte path

When the net461 tracer runs in a partial-trust AppDomain, creating the shared ID generator now JITs a stackalloc/MemoryMarshal.Cast path. The removed fallback explicitly avoided unsafe-style reinterpretation because this code can be reached by manual instrumentation under partial trust; using stack allocation/reinterpretation here can fail before any trace/span IDs are generated. Keep the old Guid.ToByteArray() path for non-NETCOREAPP targets unless partial-trust support is being dropped.


Member Author


We explicitly don't support partial-trust, and haven't for a long time, so I don't think this matters? 🤔

andrewlock and others added 9 commits April 21, 2026 11:06
Remove the outer #if NETCOREAPP guard from ValueStringBuilder.
Now that Span<T> is in the System namespace for vendored types,
the ref struct compiles on .NET Framework too. All dependencies
(ArrayPool, MemoryMarshal, MemoryExtensions) are available via
vendored global usings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
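A minimal sketch of the ValueStringBuilder shape this refers to (illustrative only, not the vendored implementation): a ref struct over a caller-provided span, which now compiles on .NET Framework too.

```csharp
internal ref struct ValueStringBuilder
{
    private Span<char> _chars;
    private int _pos;

    public ValueStringBuilder(Span<char> initialBuffer)
    {
        _chars = initialBuffer;
        _pos = 0;
    }

    public void Append(char c)
    {
        if (_pos >= _chars.Length) Grow();
        _chars[_pos++] = c;
    }

    private void Grow()
    {
        // the real implementation rents from ArrayPool<char>; a plain
        // allocation keeps this sketch self-contained
        var newArray = new char[Math.Max(_chars.Length * 2, 16)];
        _chars.Slice(0, _pos).CopyTo(newArray);
        _chars = newArray;
    }

    public override string ToString() => _chars.Slice(0, _pos).ToString();
}
```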
Remove #if NETCOREAPP around Span<byte>/ReadOnlySpan<byte> overloads
of GenerateHash, GenerateV1Hash, and GenerateV1AHash. Delegate the
byte[] private methods to the ReadOnlySpan<byte> versions to remove
code duplication. The string-accepting methods keep their #if gate
as they depend on Encoding.GetBytes(string, Span<byte>).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
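The delegation described above can be sketched roughly as follows. This is a minimal FNV-1a illustration under assumed names and constants; the repo's actual `FnvHash64` signatures (which also take a `Version` parameter) may differ:

```csharp
using System;

static class FnvHash64Sketch
{
    private const ulong OffsetBasis = 0xcbf29ce484222325UL;
    private const ulong Prime = 0x100000001b3UL;

    // Single span-based implementation; compiles on all TFMs now that
    // Span<T> lives in the System namespace for vendored types.
    public static ulong GenerateHash(ReadOnlySpan<byte> data)
    {
        var hash = OffsetBasis;
        foreach (var b in data)
        {
            hash = (hash ^ b) * Prime;
        }

        return hash;
    }

    // The byte[] overload delegates to the span version,
    // removing the duplicated hashing loop.
    public static ulong GenerateHash(byte[] data) => GenerateHash(data.AsSpan());
}
```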
Remove #if NETCOREAPP3_1_OR_GREATER around all eight Span<byte>
overloads (WriteVarLong, WriteVarLongZigZag, WriteVarInt,
WriteVarIntZigZag, ReadVarLong, ReadVarLongZigZag, ReadVarInt,
ReadVarIntZigZag). These only index into the span and need no
.NET Core-specific BCL methods.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
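A minimal sketch of the span-based varint scheme these methods implement, assuming standard LEB128 encoding with zigzag for signed values (the vendored method signatures may differ). As the commit notes, the bodies only index into the span, so nothing here needs a .NET Core-specific BCL method:

```csharp
using System;

static class VarIntSketch
{
    // ZigZag maps signed values to unsigned so small magnitudes stay short:
    // 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
    public static ulong ZigZag(long value) => (ulong)((value << 1) ^ (value >> 63));

    // Writes up to 10 bytes; returns the number of bytes written.
    public static int WriteVarLong(Span<byte> buffer, ulong value)
    {
        var i = 0;
        while (value >= 0x80)
        {
            buffer[i++] = (byte)(value | 0x80); // set continuation bit
            value >>= 7;
        }

        buffer[i++] = (byte)value;
        return i;
    }

    public static ulong ReadVarLong(ReadOnlySpan<byte> buffer, out int bytesRead)
    {
        ulong result = 0;
        var shift = 0;
        bytesRead = 0;
        byte b;
        do
        {
            b = buffer[bytesRead++];
            result |= (ulong)(b & 0x7F) << shift;
            shift += 7;
        }
        while ((b & 0x80) != 0);
        return result;
    }
}
```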
Replace the two-step byte[] allocation path on .NET Framework with
a single stackalloc byte[16] + BinaryPrimitives.WriteUInt64LittleEndian
on all TFMs. BinaryPrimitives and FnvHash64 Span overloads are now
available everywhere via vendored global usings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
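The pattern described — a single `stackalloc byte[16]` plus `BinaryPrimitives.WriteUInt64LittleEndian` — looks roughly like this. `HashId` and the inline FNV-1a loop are illustrative stand-ins, not the repo's actual helpers:

```csharp
using System;
using System.Buffers.Binary;

static class TraceIdHashSketch
{
    // Writes the 128-bit id into a stack buffer instead of allocating a
    // byte[] on .NET Framework, then hashes it in place.
    public static ulong HashId(ulong upper, ulong lower)
    {
        Span<byte> bytes = stackalloc byte[16];
        BinaryPrimitives.WriteUInt64LittleEndian(bytes, lower);
        BinaryPrimitives.WriteUInt64LittleEndian(bytes.Slice(8), upper);
        return Fnv1a64(bytes);
    }

    // Stand-in for the FnvHash64 span overload mentioned in the commit.
    private static ulong Fnv1a64(ReadOnlySpan<byte> data)
    {
        var hash = 0xcbf29ce484222325UL;
        foreach (var b in data)
        {
            hash = (hash ^ b) * 0x100000001b3UL;
        }

        return hash;
    }
}
```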
Replace the byte[]-allocating Guid.ToByteArray() path with
stackalloc Guid[2] + MemoryMarshal.Cast<Guid, ulong> on all TFMs.
MemoryMarshal is available via vendored global using on .NET
Framework. Removes unused _buffer field and GetBuffer helper.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
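A sketch of the reinterpretation trick: two stack-allocated `Guid`s viewed as four `ulong`s with no copy and no `Guid.ToByteArray()` allocation. The `NextIds` method name is hypothetical; the real generator presumably feeds these values into trace/span IDs:

```csharp
using System;
using System.Runtime.InteropServices;

static class IdGeneratorSketch
{
    // MemoryMarshal.Cast reinterprets the 32 bytes of two Guids as a
    // Span<ulong> of length 4, without copying.
    public static (ulong, ulong, ulong, ulong) NextIds()
    {
        Span<Guid> guidSpan = stackalloc Guid[2];
        guidSpan[0] = Guid.NewGuid();
        guidSpan[1] = Guid.NewGuid();
        var ulongs = MemoryMarshal.Cast<Guid, ulong>(guidSpan);
        return (ulongs[0], ulongs[1], ulongs[2], ulongs[3]);
    }
}
```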
Use stackalloc byte[16] for MD5 hash and stackalloc char[36] for
hex formatting on all TFMs, avoiding two intermediate array
allocations. Now uses the unified Md5Helper.ComputeMd5Hash(string,
Span<byte>) signature.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
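Roughly, the allocation-free MD5-to-hex path looks like this. `Md5HexSketch`/`ToMd5Hex` are illustrative stand-ins for `Md5Helper.ComputeMd5Hash`, and this sketch formats 32 hex chars without separators rather than the commit's 36-char buffer:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class Md5HexSketch
{
    public static string ToMd5Hex(string input)
    {
        // Hash into a stack buffer rather than taking the byte[] that
        // ComputeHash would return. (TryComputeHash is a .NET Core /
        // Standard 2.1 API; the vendored helpers bridge the gap elsewhere.)
        Span<byte> hash = stackalloc byte[16];
        using var md5 = MD5.Create();
        md5.TryComputeHash(Encoding.UTF8.GetBytes(input), hash, out _);

        // Format into a stack char buffer; only the final string allocates.
        Span<char> chars = stackalloc char[32];
        for (var i = 0; i < hash.Length; i++)
        {
            hash[i].TryFormat(chars.Slice(i * 2), out _, "x2");
        }

        return chars.ToString();
    }
}
```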
Replace ArrayPool<char> rent/return for 2-char hex buffer with
stackalloc char[2] on all TFMs. The chars are written by
HexString.ToHexChars and appended to a StringBuilder, so no
ToString/ToArray overhead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
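The two-char stackalloc pattern, sketched below with `AppendHexByte` and an inline digit lookup as stand-ins for `HexString.ToHexChars`:

```csharp
using System;
using System.Text;

static class HexAppendSketch
{
    // Formats one byte into a 2-char stack buffer and appends it directly,
    // instead of renting a char[] from ArrayPool<char>.
    public static void AppendHexByte(StringBuilder sb, byte value)
    {
        const string Digits = "0123456789abcdef";
        Span<char> chars = stackalloc char[2];
        chars[0] = Digits[value >> 4];
        chars[1] = Digits[value & 0xF];

        // Append(ReadOnlySpan<char>) copies the chars without an
        // intermediate string or array.
        sb.Append(chars);
    }
}
```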
The conditional using for System.Buffers is already handled by
GlobalUsings.cs which imports the correct namespace (BCL or
vendored) based on the TFM.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace StringBuilderCache fallback with ValueStringBuilder and
stackalloc on all TFMs. AppendAsLowerInvariant lowercases each
segment inline, avoiding the extra string allocation from the
previous .ToLowerInvariant() call on the final result. Add a
string? overload to ValueStringBuilder.AppendAsLowerInvariant to
avoid needing explicit .AsSpan() at each call site.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
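The inline-lowercasing idea can be sketched as below. This is simplified: the real method appends into a `ValueStringBuilder` rather than a caller-supplied span, but the point is the same — each segment is lowercased as it is copied, so no second pass (and no extra string) is needed at the end:

```csharp
using System;

static class LowerInvariantSketch
{
    // Lowercases a segment while copying it into the destination buffer,
    // avoiding a trailing .ToLowerInvariant() over the final result.
    public static int AppendAsLowerInvariant(ReadOnlySpan<char> source, Span<char> destination)
    {
        for (var i = 0; i < source.Length; i++)
        {
            destination[i] = char.ToLowerInvariant(source[i]);
        }

        return source.Length;
    }
}
```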
@andrewlock andrewlock force-pushed the andrew/update-vendors-8 branch from c448c23 to ed790e1 Compare April 21, 2026 10:19
@andrewlock andrewlock requested review from a team as code owners April 21, 2026 10:19
@andrewlock andrewlock force-pushed the andrew/use-new-span-t branch from 25f6d5e to b1f8279 Compare April 21, 2026 10:19
/// </summary>
/// <returns>The 64-bit FNV hash of the data, as a <c>ulong</c></returns>
public static ulong GenerateHash(ReadOnlySpan<byte> data, Version version)
public static ulong GenerateHash(Span<byte> data, Version version)
Collaborator


Granted I'm reviewing these PRs out of order, but I'm trying to understand why these went from ReadOnlySpan to Span and I don't think I see why

Member Author


No, me neither, I think the AI was stupid and this should be fixed 👀

Member Author


Fixed locally, just switched to ReadOnlySpan<T> (I'll push later, to avoid PR rebase stampede 😄 )

