This document provides an overview of the benchmark implementation for the Microting.EntityFrameworkCore.MySql project.
File: docs/DatabaseVersions.md
A comprehensive table documenting all MySQL and MariaDB versions tested in the CI/CD pipeline, including:
- 8 MySQL versions (8.0.40 to 9.5.0)
- 12 MariaDB versions (10.5.27 to 12.1.2)
- Platform availability (Ubuntu/Windows)
- SQL modes used for each database type
- Benchmark performance columns with `TIME_PLACEHOLDER` markers that can be replaced with actual timing data after benchmarks complete:
  - Insert (ms) - Average insert operation times
  - Update (ms) - Average update operation times
  - Query (ms) - Average query operation times
Location: benchmark/EFCore.MySql.Benchmarks/
A complete .NET 10.0 console application using BenchmarkDotNet with:
- `EFCore.MySql.Benchmarks.csproj` - Project file with dependencies
- `Models.cs` - Entity models for benchmarking (SimpleEntity, ComplexEntity, RelatedEntity)
- `BenchmarkConfig.cs` - Configuration and base classes for benchmarks
- `InsertBenchmarks.cs` - 6 insert operation benchmarks
- `UpdateBenchmarks.cs` - 5 update operation benchmarks
- `QueryBenchmarks.cs` - 12 query/retrieval operation benchmarks
- `Program.cs` - Main entry point with CLI interface
- `README.md` - Comprehensive usage documentation
Insert Benchmarks (6 scenarios):
- Single simple entity insert
- Batch 10 simple entities
- Batch 100 simple entities
- Single complex entity with many columns
- Complex entity with related entities (foreign keys)
- Batch 10 complex entities
Update Benchmarks (5 scenarios):
- Single entity by primary key
- Batch 10 entities
- Single complex entity with many columns
- Entity found by query
- Batch 10 complex entities with filter
Query Benchmarks (12 scenarios):
- Find single by ID
- First with filter
- Top 10 rows
- Filter and order
- Complex entity with filter
- Complex entity with join (Include)
- GROUP BY with aggregations
- Multiple filter conditions
- COUNT query
- ANY/EXISTS query
- Complex aggregation (SUM, AVG, MAX)
- Query with navigation property filter
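A minimal sketch of how one of these scenarios might be implemented with BenchmarkDotNet. The type names (`BenchmarkContext`, `SimpleEntity`), connection string, and server version below are illustrative stand-ins, not the project's actual code:

```csharp
using System.Linq;
using BenchmarkDotNet.Attributes;
using Microsoft.EntityFrameworkCore;

// Illustrative model and context; the real ones live in Models.cs / BenchmarkConfig.cs.
public class SimpleEntity
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
}

public class BenchmarkContext : DbContext
{
    public DbSet<SimpleEntity> SimpleEntities => Set<SimpleEntity>();

    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseMySql(
            "server=localhost;database=pomelo_benchmark;user=root;password=Password12!",
            ServerVersion.Parse("8.0.40-mysql")); // assumed server version
}

[MemoryDiagnoser]
public class InsertBenchmarksSketch
{
    [Benchmark]
    public void SingleSimpleInsert()
    {
        using var db = new BenchmarkContext();
        db.SimpleEntities.Add(new SimpleEntity { Name = "bench" });
        db.SaveChanges();
    }

    [Benchmark]
    public void Batch100SimpleInsert()
    {
        using var db = new BenchmarkContext();
        db.SimpleEntities.AddRange(
            Enumerable.Range(0, 100).Select(i => new SimpleEntity { Name = $"bench-{i}" }));
        db.SaveChanges();
    }
}
```

Each `[Benchmark]` method is measured independently by BenchmarkDotNet, so single-row and batched inserts get separate timing and allocation figures.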
File: .github/workflows/pr-build.yml
Added benchmark step to the PR build workflow:
- Runs after all tests pass
- Executes for each database version in the matrix
- Enabled by default (can be disabled by setting `enableBenchmarks: false`)
- Uploads results as artifacts with 30-day retention
- Results saved per database version and OS combination
The benchmark project works on both:
- Linux: Tested and verified
- Windows: Full compatibility with PowerShell and Windows paths
Benchmarks are configured via environment variables:
- `BENCHMARK_DB_HOST` - Database server (default: localhost)
- `BENCHMARK_DB_PORT` - Port (default: 3306)
- `BENCHMARK_DB_USER` - Username (default: root)
- `BENCHMARK_DB_PASSWORD` - Password (default: Password12!)
- `BENCHMARK_DB_NAME` - Database name (default: pomelo_benchmark)
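Inside the benchmark app, these variables are presumably resolved with fallbacks along these lines (a sketch; the class and helper names are hypothetical, and the actual connection-string format may differ):

```csharp
using System;

static class BenchmarkSettings
{
    // Reads an environment variable, falling back to the documented default.
    static string Get(string name, string fallback) =>
        Environment.GetEnvironmentVariable(name) ?? fallback;

    // Builds a MySQL connection string from the documented variables.
    public static string ConnectionString =>
        $"Server={Get("BENCHMARK_DB_HOST", "localhost")};" +
        $"Port={Get("BENCHMARK_DB_PORT", "3306")};" +
        $"User ID={Get("BENCHMARK_DB_USER", "root")};" +
        $"Password={Get("BENCHMARK_DB_PASSWORD", "Password12!")};" +
        $"Database={Get("BENCHMARK_DB_NAME", "pomelo_benchmark")}";
}
```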
1. Start a database server:

   ```bash
   docker run --name mysql_benchmark -e MYSQL_ROOT_PASSWORD=Password12! -p 127.0.0.1:3306:3306 -d mysql:8.0
   ```

2. Navigate to the benchmark directory:

   ```bash
   cd benchmark/EFCore.MySql.Benchmarks
   ```

3. Run specific benchmarks:

   ```bash
   # Insert benchmarks
   dotnet run -c Release -- insert

   # Update benchmarks
   dotnet run -c Release -- update

   # Query benchmarks
   dotnet run -c Release -- query

   # All benchmarks
   dotnet run -c Release -- all
   ```

4. View results: results are saved to the `BenchmarkDotNet.Artifacts/` directory with detailed reports.
Benchmarks are enabled by default in GitHub Actions for PR builds. They will:
- Run after all tests pass
- Execute for each database version in the matrix
- Upload results as downloadable artifacts
To disable benchmarks in CI, set `enableBenchmarks: false` in the workflow environment variables.
Linux/macOS:

```bash
export BENCHMARK_DB_HOST=127.0.0.1
export BENCHMARK_DB_PORT=3307
export BENCHMARK_DB_PASSWORD=MyPassword
dotnet run -c Release -- all
```

Windows (PowerShell):

```powershell
$env:BENCHMARK_DB_HOST="127.0.0.1"
$env:BENCHMARK_DB_PORT="3307"
$env:BENCHMARK_DB_PASSWORD="MyPassword"
dotnet run -c Release -- all
```

The benchmarks are designed to:
- Detect regressions: Compare results across versions to identify performance issues
- Exercise realistic scenarios: Use patterns common in real applications
- Cover various complexities: From simple CRUD to complex joins and aggregations
- Profile memory: Track GC allocations and memory usage
- Insert benchmarks: ~2-5 minutes
- Update benchmarks: ~3-6 minutes (includes seeding)
- Query benchmarks: ~5-10 minutes (includes large dataset seeding)
- Total for all benchmarks: ~15-25 minutes per database version
- BenchmarkDotNet 0.14.0: Industry-standard .NET benchmarking library
- Entity Framework Core 10.0: Latest EF Core version
- MySqlConnector 2.5.0: MySQL ADO.NET driver
- Microting.EntityFrameworkCore.MySql: The provider being benchmarked
- Warmup iterations: 1 (for faster CI execution)
- Measurement iterations: 5 (balance between accuracy and speed)
- Memory diagnostics: Enabled for all benchmarks
- Job toolchain: In-process emit for reliability
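A configuration matching the settings above might look roughly like this in BenchmarkDotNet (the class name is illustrative, not necessarily what `BenchmarkConfig.cs` contains):

```csharp
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Diagnosers;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Toolchains.InProcess.Emit;

// Sketch of a CI-oriented BenchmarkDotNet configuration.
public class CiBenchmarkConfig : ManualConfig
{
    public CiBenchmarkConfig()
    {
        AddJob(Job.Default
            .WithWarmupCount(1)       // 1 warmup iteration for faster CI execution
            .WithIterationCount(5)    // 5 measurement iterations
            .WithToolchain(InProcessEmitToolchain.Instance)); // in-process emit toolchain
        AddDiagnoser(MemoryDiagnoser.Default); // memory diagnostics for all benchmarks
    }
}
```

Benchmark classes would opt in via `[Config(typeof(CiBenchmarkConfig))]` or by passing the config to `BenchmarkRunner.Run`.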
- SimpleEntity: Lightweight entity for basic CRUD tests
- ComplexEntity: Realistic entity with 10 columns of various types
- RelatedEntity: Child entity for testing relationships and joins
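As a rough illustration of the shapes involved (property names here are assumptions, not the actual model in `Models.cs`):

```csharp
using System;
using System.Collections.Generic;

// Lightweight entity for basic CRUD benchmarks.
public class SimpleEntity
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public int Value { get; set; }
}

// Child entity used to exercise relationships and joins.
public class RelatedEntity
{
    public int Id { get; set; }
    public string Description { get; set; } = string.Empty;
    public int ComplexEntityId { get; set; }    // foreign key
    public ComplexEntity? Parent { get; set; }  // navigation property
}

// Wider entity with several column types for more realistic workloads.
public class ComplexEntity
{
    public int Id { get; set; }
    public string Title { get; set; } = string.Empty;
    public decimal Amount { get; set; }
    public DateTime CreatedAt { get; set; }
    public bool IsActive { get; set; }
    public List<RelatedEntity> Children { get; set; } = new();
}
```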
- `.github/workflows/pr-build.yml` - Added benchmark steps with `enableBenchmarks` flag and aggregation job
- `Directory.Packages.props` - Added BenchmarkDotNet package version
- `Pomelo.EFCore.MySql.sln` - Added benchmark project to solution
- `docs/DatabaseVersions.md` - New documentation with benchmark time placeholders
- `benchmark/` - New directory with the entire benchmark project (9 files)
- `scripts/update-benchmark-times.ps1` - Helper script for manual benchmark time updates
- `scripts/aggregate-benchmark-results.ps1` - Automated script for CI aggregation
After benchmarks run in CI, the `TIME_PLACEHOLDER` values in `docs/DatabaseVersions.md` are automatically updated with actual performance data.
The GitHub Actions workflow includes an `AggregateBenchmarkResults` job that:
- Runs After All Tests: Depends on the `BuildAndTest` job completing for all database versions
- Downloads Artifacts: Collects all benchmark results from the matrix runs
- Parses Results: Extracts mean execution times from BenchmarkDotNet CSV reports
- Updates Documentation: Replaces `TIME_PLACEHOLDER` markers with actual timing data
- Uploads Updated File: Creates a `benchmark-summary-documentation` artifact containing:
  - `DatabaseVersions-Updated.md` - Updated documentation with real benchmark times
  - `benchmark-summary.md` - Summary of the aggregation process
After a PR build completes with benchmarks enabled:
- Go to the Actions tab for the PR
- Find the completed workflow run
- Download the `benchmark-summary-documentation` artifact
- Extract and review `DatabaseVersions-Updated.md` to see actual performance data
For local testing or manual updates, use the aggregation script:
```powershell
# After downloading individual benchmark artifacts
./scripts/aggregate-benchmark-results.ps1 -ArtifactsDir ./benchmark-artifacts -OutputFile docs/DatabaseVersions.md
```

The script:
- Parses BenchmarkDotNet CSV output files
- Calculates average Mean execution times per benchmark category
- Updates the markdown table with actual millisecond values
- Reports remaining placeholders and completion status
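The parsing step can be sketched in C#. BenchmarkDotNet CSV reports carry a `Mean` column whose cells include a unit suffix (e.g. `1.234 ms`); the CSV handling below is simplified for illustration (real reports can contain quoted fields and thousands separators that a naive split would mishandle):

```csharp
using System;
using System.Globalization;
using System.Linq;

static class BenchmarkCsv
{
    // Converts a mean cell such as "1.234 ms" or "850.1 us" to milliseconds.
    public static double ToMilliseconds(string cell)
    {
        var parts = cell.Trim().Split(' ');
        var value = double.Parse(parts[0], CultureInfo.InvariantCulture);
        return parts[1] switch
        {
            "ns" => value / 1_000_000,
            "us" => value / 1_000,
            "ms" => value,
            "s"  => value * 1_000,
            _    => throw new FormatException($"unknown unit: {parts[1]}")
        };
    }

    // Averages the Mean column of a (simplified) BenchmarkDotNet CSV report.
    public static double AverageMeanMs(string csv)
    {
        var lines = csv.Split('\n', StringSplitOptions.RemoveEmptyEntries);
        var header = lines[0].Split(',');
        var meanIdx = Array.IndexOf(header, "Mean");
        return lines.Skip(1)
            .Select(l => ToMilliseconds(l.Split(',')[meanIdx]))
            .Average();
    }
}
```

The averaged millisecond value per category is what would land in the Insert/Update/Query columns of the markdown table.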
Potential improvements for future versions:
- Add more specialized benchmarks (e.g., stored procedures, bulk operations)
- Implement performance regression detection and alerts
- Add comparison reports between database versions
- Create dashboard for historical performance trends
- Add benchmarks for JSON columns and spatial data (NTS)
Common issues and solutions are documented in:
- `benchmark/EFCore.MySql.Benchmarks/README.md` - Comprehensive troubleshooting guide
Key points:
- Always run in Release mode (`-c Release`)
- Ensure the database has sufficient resources
- The benchmark database will be deleted and recreated during setup
- First run may be slower due to cold caches