
feat(tracker-core): migrate SQLite and MySQL drivers to sqlx 0.8 #1718

Merged
josecelano merged 23 commits into torrust:develop from josecelano:1717-1525-05-migrate-sqlite-and-mysql-to-sqlx
Apr 30, 2026

Conversation


@josecelano josecelano commented Apr 29, 2026

Closes #1717

Migrates the SQLite and MySQL persistence drivers from the sync r2d2 stack to async sqlx 0.8.

Current status

Structural migration and post-migration benchmark validation are complete.

Completed on this branch:

  • SQLite and MySQL driver implementations use sqlx pools and async trait methods.
  • Schema initialization remains eager via initialize_database().
  • Schema creation still uses raw sqlx::query() DDL; sqlx::migrate!() is not introduced here.
  • Sync-to-async runtime bridge helpers have been removed; async initialization is propagated through all current call paths.
  • The temporary packages/tracker-core/src/databases/sqlx/ staging tree has been deleted; the canonical databases/driver/ and databases/traits/ directories are now the single persistence surface.
  • Legacy dependencies (r2d2, r2d2_sqlite, r2d2_mysql) have been removed from packages/tracker-core/Cargo.toml.
  • Legacy compatibility code has been removed: the Error::ConnectionPool variant and all From impls for r2d2::Error, rusqlite::Error, mysql::Error, and mysql::UrlError are gone from databases/error.rs. The From<rusqlite::Error> impl in authentication/key/mod.rs was replaced with From<sqlx::Error>.
  • Stale r2d2_* references in driver doc comments have been refreshed to describe the sqlx-based pools.
  • Validation passing: cargo machete, linter all, doc tests, and cargo test --workspace --all-targets.
  • Persistence benchmarks re-run and compared against the 2026-04-28 baseline. Report: packages/tracker-core/docs/benchmarking/runs/2026-04-30/REPORT.md. No regression detected: MySQL totals are ~13–16% faster (mysql 8.4: 7381 → 6231 ms; mysql 8.0: 7633 → 6678 ms); SQLite totals shift within expected per-run jitter on the 100-op suite.

See docs/issues/1717-1525-05-migrate-sqlite-and-mysql-to-sqlx.md for the full task breakdown and progress review.

2026-04-30 update

  • Task 7 (persistence benchmarking) is done. See the report linked above.
  • Bench harness change (42e3e5df): the persistence_benchmark MySQL setup now waits for the second "ready for connections" line on the container's stderr (the TCP listener, not the local socket) and then pings with SELECT 1 in a 30 × 500 ms retry loop before initializing the schema. This is needed because sqlx does not retry the first connect() (unlike the old r2d2 pool), so the harness must guarantee MySQL is accepting TCP connections before the benchmark proceeds.
  • Production-code follow-up (6510da53 / related): expanded the # Panics Rustdoc section of bittorrent_tracker_core::databases::setup::initialize_database to document that it will panic if the database is not yet accepting connections, or on any other underlying sqlx::Error. This makes the new sqlx-based connect-without-retry behavior explicit to callers.
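The readiness wait described above boils down to a bounded ping-retry loop. Below is a minimal, synchronous sketch of that pattern in plain Rust; the real harness is async and pings MySQL with SELECT 1 through sqlx, so `wait_until_ready` and the closure here are illustrative stand-ins, not crate code:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry `ping` up to `attempts` times, sleeping `delay` between tries.
/// Returns true as soon as one ping succeeds, false if every attempt fails.
fn wait_until_ready(mut ping: impl FnMut() -> bool, attempts: u32, delay: Duration) -> bool {
    for _ in 0..attempts {
        if ping() {
            return true;
        }
        sleep(delay);
    }
    false
}

fn main() {
    // Simulate a database that only accepts connections from the 3rd attempt on.
    let mut calls = 0;
    let ready = wait_until_ready(
        || {
            calls += 1;
            calls >= 3
        },
        30,
        Duration::from_millis(1), // 500 ms in the real harness
    );
    assert!(ready);
    assert_eq!(calls, 3);
    println!("ready after {} attempts", calls);
}
```

The 30 × 500 ms bound mirrors the harness numbers quoted above: it caps the wait at roughly 15 seconds instead of hanging forever on a container that never comes up.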

2026-04-30 update — Copilot review addressed

Copilot performed an automated review on this PR (15 inline comments). All actionable items have been resolved across 6 GPG-signed commits (5512ac30..c1cce750); 1 item was already obsolete.

Commit   | Subject                                                                          | Addresses
5512ac30 | fix(tracker-core): accept no-op MySQL upserts as success                         | #3, #4
2c190249 | fix(tracker-core): drop torrent_aggregate_metrics table on schema teardown      | #5, #6
61e44277 | fix(tracker-core): reject negative valid_until and propagate row-count overflow | #8, #11, #12, #13, #14, #15
53297d3e | fix(tracker-core): build SQLite connection options from filesystem path         | #10
78c63a4d | test(tracker-core): bound persistence retry with timeout                        | #1
c1cce750 | docs(bootstrap): restore #[must_use] and fix typo on setup()                     | #2, #9

Highlights:

  • MySQL upsert semantics: removed the rows_affected() == 0 check on INSERT … ON DUPLICATE KEY UPDATE paths — a no-op update legitimately reports 0 affected rows and was being misclassified as InsertFailed.
  • Data integrity: negative valid_until timestamps now return Error::MalformedDatabaseRecord instead of being silently flipped positive via unsigned_abs(). Row-count → usize conversions now propagate errors instead of unwrap_or(0).
  • SQLite path handling: switched from SqliteConnectOptions::from_str("sqlite://{path}") to SqliteConnectOptions::new().filename(path) so relative paths like ./storage/... aren't reinterpreted as URL authority.
  • Schema teardown: drop_database_tables() in both drivers now drops the torrent_aggregate_metrics table that was being created but never cleaned up.
  • Test determinism: replaced the fixed-iteration yield_now() retry loop in it_should_persist_the_number_of_completed_peers_for_each_torrent_into_the_database with tokio::time::timeout(5s, ...) + sleep(50ms).
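The data-integrity rule above can be illustrated with a small std-only sketch; `RecordError` and `parse_valid_until` are hypothetical stand-ins for the crate's real `Error::MalformedDatabaseRecord` flow, not actual crate code:

```rust
use std::convert::TryFrom;
use std::time::Duration;

/// Hypothetical error type standing in for Error::MalformedDatabaseRecord.
#[derive(Debug, PartialEq)]
enum RecordError {
    MalformedDatabaseRecord,
}

/// Convert a raw i64 `valid_until` column into a duration since the Unix
/// epoch, rejecting negative values instead of silently flipping their sign
/// with unsigned_abs().
fn parse_valid_until(raw: i64) -> Result<Duration, RecordError> {
    u64::try_from(raw)
        .map(Duration::from_secs)
        .map_err(|_| RecordError::MalformedDatabaseRecord)
}

fn main() {
    assert_eq!(parse_valid_until(60), Ok(Duration::from_secs(60)));
    assert_eq!(parse_valid_until(-60), Err(RecordError::MalformedDatabaseRecord));
}
```

The point is that a negative timestamp can only come from corruption or a manual edit, so surfacing it as an error is safer than inventing a positive expiry.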

Item #7 (sqlx subdir not wired in) was already resolved in commit a4dbc63a earlier in this PR.

Validation: linter all exit 0, full workspace test suite passing, MySQL-gated tests passing.


codecov Bot commented Apr 29, 2026

Codecov Report

❌ Patch coverage is 49.18567% with 468 lines in your changes missing coverage. Please review.
✅ Project coverage is 81.16%. Comparing base (7cff141) to head (66f1246).
⚠️ Report is 24 commits behind head on develop.

Files with missing lines Patch % Lines
...tence_benchmark/driver_bench/operations/torrent.rs 0.00% 142 Missing ⚠️
...sistence_benchmark/driver_bench/operations/keys.rs 0.00% 56 Missing ⚠️
...nce_benchmark/driver_bench/operations/whitelist.rs 0.00% 55 Missing ⚠️
...-core/src/databases/driver/mysql/auth_key_store.rs 0.00% 33 Missing ⚠️
...ges/tracker-core/src/databases/driver/mysql/mod.rs 0.00% 27 Missing ⚠️
...rc/databases/driver/mysql/torrent_metrics_store.rs 0.00% 25 Missing ⚠️
...rsistence_benchmark/driver_bench/database/mysql.rs 0.00% 23 Missing ⚠️
...core/src/databases/driver/mysql/whitelist_store.rs 0.00% 16 Missing ⚠️
...core/src/databases/driver/sqlite/auth_key_store.rs 60.60% 7 Missing and 6 partials ⚠️
...c/databases/driver/sqlite/torrent_metrics_store.rs 52.00% 9 Missing and 3 partials ⚠️
... and 19 more
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1718      +/-   ##
===========================================
- Coverage    81.39%   81.16%   -0.23%     
===========================================
  Files          348      348              
  Lines        25017    24917     -100     
  Branches     25017    24917     -100     
===========================================
- Hits         20362    20225     -137     
- Misses        4395     4471      +76     
+ Partials       260      221      -39     


Copilot AI left a comment

Pull request overview

Migrates the tracker’s SQLite/MySQL persistence layer from sync r2d2 drivers to async sqlx 0.8, propagating async DB access through containers, services, tests, and benchmarking utilities.

Changes:

  • Converted persistence traits and implementations to async fn and updated call sites to .await.
  • Reworked SQLite/MySQL drivers to use sqlx pools and raw DDL schema management.
  • Updated tests/benchmarks/environments to initialize and use async containers and persistence APIs.

Reviewed changes

Copilot reviewed 90 out of 91 changed files in this pull request and generated 7 comments.

Show a summary per file
File Description
src/container.rs Makes app container initialization async
src/bootstrap/jobs/tracker_apis.rs Awaits async HTTP API container init in tests
src/bootstrap/jobs/http_tracker.rs Awaits async HTTP tracker container init in tests
src/bootstrap/app.rs Makes bootstrap setup async
src/app.rs Awaits async bootstrap setup
packages/udp-tracker-server/src/server/mod.rs Awaits UDP tracker core container init in tests
packages/udp-tracker-server/src/handlers/scrape.rs Awaits async test service initializers
packages/udp-tracker-server/src/handlers/mod.rs Makes test service initializers async; awaits DB init
packages/udp-tracker-server/src/handlers/announce.rs Awaits async test service initializers; awaits DB init
packages/udp-tracker-server/src/environment.rs Makes environment/container init async; removes block_on
packages/udp-tracker-core/src/container.rs Makes UDP core container initialization async
packages/tracker-core/tests/integration.rs Updates integration test for async DB load path
packages/tracker-core/tests/common/test_env.rs Makes test env/container init async
packages/tracker-core/src/whitelist/test_helpers.rs Makes whitelist test helpers async
packages/tracker-core/src/whitelist/repository/persisted.rs Converts persisted whitelist repository methods/tests to async
packages/tracker-core/src/whitelist/mod.rs Awaits async whitelist test helpers
packages/tracker-core/src/whitelist/manager.rs Awaits async DB whitelist operations; updates tests
packages/tracker-core/src/torrent/manager.rs Makes torrent DB load async; updates tests
packages/tracker-core/src/test_helpers.rs Makes handler test initializer async
packages/tracker-core/src/statistics/persisted/mod.rs Awaits async global downloads load
packages/tracker-core/src/statistics/persisted/downloads.rs Makes downloads metric repository async
packages/tracker-core/src/statistics/event/handler.rs Awaits async persistence updates during event handling
packages/tracker-core/src/lib.rs Awaits async handler initializers in tests
packages/tracker-core/src/databases/traits/whitelist.rs Converts whitelist trait to async via async_trait
packages/tracker-core/src/databases/traits/torrent_metrics.rs Converts torrent metrics trait to async via async_trait
packages/tracker-core/src/databases/traits/schema.rs Converts schema migrator trait to async via async_trait
packages/tracker-core/src/databases/traits/auth_keys.rs Converts auth key trait to async via async_trait
packages/tracker-core/src/databases/sqlx/traits/whitelist.rs Adds parallel sqlx async whitelist trait (unused)
packages/tracker-core/src/databases/sqlx/traits/torrent_metrics.rs Adds parallel sqlx async torrent metrics trait (unused)
packages/tracker-core/src/databases/sqlx/traits/schema.rs Adds parallel sqlx async schema trait (unused)
packages/tracker-core/src/databases/sqlx/traits/mod.rs Adds parallel sqlx trait exports (unused)
packages/tracker-core/src/databases/sqlx/traits/database.rs Adds parallel sqlx aggregate async DB trait (unused)
packages/tracker-core/src/databases/sqlx/traits/auth_keys.rs Adds parallel sqlx async auth keys trait (unused)
packages/tracker-core/src/databases/sqlx/mod.rs Adds parallel sqlx module root (unused)
packages/tracker-core/src/databases/sqlx/driver/sqlite/whitelist_store.rs Adds parallel sqlx sqlite whitelist store impl (unused)
packages/tracker-core/src/databases/sqlx/driver/sqlite/torrent_metrics_store.rs Adds parallel sqlx sqlite torrent metrics store impl (unused)
packages/tracker-core/src/databases/sqlx/driver/sqlite/schema_migrator.rs Adds parallel sqlx sqlite schema migrator impl (unused)
packages/tracker-core/src/databases/sqlx/driver/sqlite/mod.rs Adds parallel sqlx sqlite driver + tests (unused)
packages/tracker-core/src/databases/sqlx/driver/sqlite/auth_key_store.rs Adds parallel sqlx sqlite auth key store impl (unused)
packages/tracker-core/src/databases/sqlx/driver/mysql/whitelist_store.rs Adds parallel sqlx mysql whitelist store impl (unused)
packages/tracker-core/src/databases/sqlx/driver/mysql/torrent_metrics_store.rs Adds parallel sqlx mysql torrent metrics store impl (unused)
packages/tracker-core/src/databases/sqlx/driver/mysql/schema_migrator.rs Adds parallel sqlx mysql schema migrator impl (unused)
packages/tracker-core/src/databases/sqlx/driver/mysql/mod.rs Adds parallel sqlx mysql driver + container tests (unused)
packages/tracker-core/src/databases/sqlx/driver/mysql/auth_key_store.rs Adds parallel sqlx mysql auth key store impl (unused)
packages/tracker-core/src/databases/sqlx/driver/mod.rs Adds parallel sqlx shared driver tests (unused)
packages/tracker-core/src/databases/setup.rs Makes initialize_database async and eagerly creates tables
packages/tracker-core/src/databases/error.rs Adds sqlx::Error conversion; broadens ConnectionError source type
packages/tracker-core/src/databases/driver/sqlite/whitelist_store.rs Ports sqlite whitelist store to async sqlx
packages/tracker-core/src/databases/driver/sqlite/torrent_metrics_store.rs Ports sqlite torrent metrics store to async sqlx
packages/tracker-core/src/databases/driver/sqlite/schema_migrator.rs Ports sqlite schema migrator to async sqlx
packages/tracker-core/src/databases/driver/sqlite/mod.rs Ports sqlite driver to async sqlx pool
packages/tracker-core/src/databases/driver/sqlite/auth_key_store.rs Ports sqlite auth key store to async sqlx
packages/tracker-core/src/databases/driver/mysql/whitelist_store.rs Ports mysql whitelist store to async sqlx
packages/tracker-core/src/databases/driver/mysql/torrent_metrics_store.rs Ports mysql torrent metrics store to async sqlx
packages/tracker-core/src/databases/driver/mysql/schema_migrator.rs Ports mysql schema migrator to async sqlx
packages/tracker-core/src/databases/driver/mysql/mod.rs Ports mysql driver to async sqlx pool
packages/tracker-core/src/databases/driver/mysql/auth_key_store.rs Ports mysql auth key store to async sqlx
packages/tracker-core/src/databases/driver/mod.rs Updates driver test harness to await async trait methods
packages/tracker-core/src/container.rs Makes tracker-core container init async (awaits DB init)
packages/tracker-core/src/bin/persistence_benchmark/driver_bench/sampling.rs Adds async benchmarking sampling helper
packages/tracker-core/src/bin/persistence_benchmark/driver_bench/operations/whitelist.rs Converts whitelist benchmarks to async
packages/tracker-core/src/bin/persistence_benchmark/driver_bench/operations/torrent.rs Converts torrent benchmarks to async; refactors into helpers
packages/tracker-core/src/bin/persistence_benchmark/driver_bench/operations/mod.rs Makes benchmark dispatch async
packages/tracker-core/src/bin/persistence_benchmark/driver_bench/operations/keys.rs Converts auth key benchmarks to async
packages/tracker-core/src/bin/persistence_benchmark/driver_bench/mod.rs Awaits async benchmark operations
packages/tracker-core/src/bin/persistence_benchmark/driver_bench/database/sqlite.rs Makes sqlite benchmark DB init async
packages/tracker-core/src/bin/persistence_benchmark/driver_bench/database/mysql.rs Awaits async DB init for mysql benchmark
packages/tracker-core/src/bin/persistence_benchmark/driver_bench/database/mod.rs Awaits async schema ops during reset; async sqlite init
packages/tracker-core/src/authentication/mod.rs Updates authentication tests for async DB init
packages/tracker-core/src/authentication/key/repository/persisted.rs Makes key repository persistence async; updates tests
packages/tracker-core/src/authentication/handler.rs Awaits persisted key repository operations; updates mocks/tests
packages/tracker-core/src/announce_handler.rs Makes downloads-metric load helper async
packages/tracker-core/Cargo.toml Adds sqlx + async-trait; machete ignore for proc-macro-only dep
packages/rest-tracker-api-core/src/statistics/services.rs Awaits async tracker-core container init in tests
packages/rest-tracker-api-core/src/container.rs Makes REST API core container init async
packages/http-tracker-core/src/services/scrape.rs Updates scrape service tests for async DB init
packages/http-tracker-core/src/services/announce.rs Updates announce service tests for async DB init
packages/http-tracker-core/src/container.rs Makes HTTP tracker core container init async
packages/http-tracker-core/benches/helpers/util.rs Makes bench helper init async
packages/http-tracker-core/benches/helpers/sync.rs Awaits async bench helper init
packages/axum-rest-tracker-api-server/tests/server/v1/contract/context/whitelist.rs Awaits async force_database_error
packages/axum-rest-tracker-api-server/tests/server/v1/contract/context/auth_key.rs Awaits async force_database_error
packages/axum-rest-tracker-api-server/tests/server/mod.rs Makes force_database_error async
packages/axum-rest-tracker-api-server/src/server.rs Awaits async REST API container init in tests
packages/axum-rest-tracker-api-server/src/environment.rs Makes environment/container init async; removes block_on
packages/axum-http-tracker-server/src/v1/handlers/announce.rs Updates handler tests for async DB init
packages/axum-http-tracker-server/src/server.rs Makes test container initializer async
packages/axum-http-tracker-server/src/environment.rs Makes environment/container init async; removes block_on
docs/issues/1717-1525-05-migrate-sqlite-and-mysql-to-sqlx.md Updates implementation notes/decisions for sqlx migration
.github/skills/dev/git-workflow/open-pull-request/SKILL.md Updates PR workflow doc with upstream repo resolution


Comment on lines +80 to +99
        // Load torrents from the database to ensure the completed stats are persisted.
        let mut restored = false;
        for _ in 0..10 {
            test_env
                .tracker_core_container
                .torrents_manager
                .load_torrents_from_database()
                .await
                .unwrap();

            if let Some(swarm_metadata) = test_env.get_swarm_metadata(&info_hash).await {
                assert!(swarm_metadata.downloads() == 1);
                restored = true;
                break;
            }

            tokio::task::yield_now().await;
        }

        assert!(restored);
Copilot AI Apr 30, 2026

This retry loop uses yield_now() with a fixed 10-iteration bound but no time-based backoff/timeout. Because persistence is performed by background event listeners, this can still be flaky depending on scheduler timing. Prefer a bounded tokio::time::timeout plus sleep/interval retry (or a helper that waits until the DB write is observed) to make the test deterministic.
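A sketch of the bounded-retry idea this comment suggests, written in synchronous std-only Rust so it runs anywhere; the actual fix uses tokio::time::timeout plus sleep, and `wait_for` here is an illustrative stand-in:

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Poll `check` every `interval` until it returns Some(value) or `timeout`
/// elapses, whichever comes first.
fn wait_for<T>(
    timeout: Duration,
    interval: Duration,
    mut check: impl FnMut() -> Option<T>,
) -> Option<T> {
    let deadline = Instant::now() + timeout;
    loop {
        if let Some(value) = check() {
            return Some(value);
        }
        if Instant::now() >= deadline {
            return None;
        }
        sleep(interval);
    }
}

fn main() {
    // Simulate a background writer that becomes observable on the 4th poll.
    let mut polls = 0;
    let result = wait_for(Duration::from_secs(5), Duration::from_millis(1), || {
        polls += 1;
        (polls == 4).then_some(polls)
    });
    assert_eq!(result, Some(4));
}
```

Bounding by elapsed time rather than iteration count makes the test's failure condition independent of scheduler timing, which is the determinism the review asks for.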

Comment thread src/bootstrap/app.rs
Comment on lines 24 to 29
 /// # Panics
 ///
 /// Setup can file if the configuration is invalid.
-#[must_use]
 #[instrument(skip())]
-pub fn setup() -> (Configuration, AppContainer) {
+pub async fn setup() -> (Configuration, AppContainer) {
 #[cfg(not(test))]
Copilot AI Apr 30, 2026

setup() was previously annotated #[must_use]. This is still useful on an async fn because it warns if the returned future is created but never awaited/used. Consider keeping #[must_use] to prevent accidental omission of .await/usage at call sites.

Comment on lines +59 to +77
    async fn save_torrent_downloads(&self, info_hash: &InfoHash, completed: u32) -> Result<(), Error> {
        let insert = ::sqlx::query(
            "INSERT INTO torrents (info_hash, completed) VALUES (?, ?) ON DUPLICATE KEY UPDATE completed = VALUES(completed)",
        )
        .bind(info_hash.to_string())
        .bind(i64::from(completed))
        .execute(&self.pool)
        .await
        .map_err(|e| (e, DRIVER))?
        .rows_affected();

        if insert == 0 {
            Err(Error::InsertFailed {
                location: std::panic::Location::caller(),
                driver: DRIVER,
            })
        } else {
            Ok(())
        }
Copilot AI Apr 30, 2026

save_torrent_downloads() treats rows_affected() == 0 as an insert failure. With MySQL INSERT ... ON DUPLICATE KEY UPDATE, rows_affected() can legitimately be 0 when the row exists and the UPDATE is a no-op (same value), which would incorrectly raise Error::InsertFailed. Consider treating 0 affected rows as success for this upsert (or use a different success criterion / statement that guarantees a non-zero affected row count).
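For reference, MySQL reports rows_affected() = 1 for a fresh insert, 2 for an update that changed the row, and 0 for a no-op update that left the same value in place; only the last case was being misclassified. A std-only sketch of the corrected classification (`UpsertOutcome` and `classify_upsert` are hypothetical helpers, not crate code):

```rust
/// Classify the rows_affected() value of a MySQL
/// `INSERT ... ON DUPLICATE KEY UPDATE` statement.
#[derive(Debug, PartialEq)]
enum UpsertOutcome {
    Inserted,        // MySQL reports 1 affected row for a fresh insert
    Updated,         // 2 for an update that changed the row
    NoOp,            // 0 when the row already held the same value
    Unexpected(u64), // anything else is worth surfacing
}

fn classify_upsert(rows_affected: u64) -> UpsertOutcome {
    match rows_affected {
        0 => UpsertOutcome::NoOp, // legitimate success, not InsertFailed
        1 => UpsertOutcome::Inserted,
        2 => UpsertOutcome::Updated,
        n => UpsertOutcome::Unexpected(n),
    }
}

fn main() {
    // All three of these outcomes are successes for an upsert.
    assert_eq!(classify_upsert(0), UpsertOutcome::NoOp);
    assert_eq!(classify_upsert(1), UpsertOutcome::Inserted);
    assert_eq!(classify_upsert(2), UpsertOutcome::Updated);
}
```

Real failures (constraint violations, connection loss) still surface as an Err from execute() itself, so dropping the zero-rows check loses no error coverage.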

Comment on lines +53 to +71
    async fn save_torrent_aggregate_metric(&self, metric_name: &str, completed: NumberOfDownloads) -> Result<(), Error> {
        let insert = ::sqlx::query(
            "INSERT INTO torrent_aggregate_metrics (metric_name, value) VALUES (?, ?) ON DUPLICATE KEY UPDATE value = VALUES(value)",
        )
        .bind(metric_name)
        .bind(i64::from(completed))
        .execute(&self.pool)
        .await
        .map_err(|e| (e, DRIVER))?
        .rows_affected();

        if insert == 0 {
            Err(Error::InsertFailed {
                location: std::panic::Location::caller(),
                driver: DRIVER,
            })
        } else {
            Ok(())
        }
Copilot AI Apr 30, 2026

save_torrent_aggregate_metric() treats rows_affected() == 0 as an insert failure. With MySQL INSERT ... ON DUPLICATE KEY UPDATE, 0 affected rows can be a valid outcome for a no-op update (same value), so this can incorrectly return Error::InsertFailed. Adjust the logic so no-op upserts are considered success.

Comment on lines +57 to +78
 async fn drop_database_tables(&self) -> Result<(), Error> {
     let drop_whitelist_table = "
-        DROP TABLE whitelist;"
-        .to_string();
+        DROP TABLE whitelist;";

     let drop_torrents_table = "
-        DROP TABLE torrents;"
-        .to_string();
+        DROP TABLE torrents;";

     let drop_keys_table = "
-        DROP TABLE keys;"
-        .to_string();
-
-    let conn = self.pool.get().map_err(|e| (e, DRIVER))?;
+        DROP TABLE keys;";

-    conn.execute(&drop_whitelist_table, [])
-        .and_then(|_| conn.execute(&drop_torrents_table, []))
-        .and_then(|_| conn.execute(&drop_keys_table, []))?;
+    ::sqlx::query(drop_whitelist_table)
+        .execute(&self.pool)
+        .await
+        .map_err(|e| (e, DRIVER))?;
+    ::sqlx::query(drop_torrents_table)
+        .execute(&self.pool)
+        .await
+        .map_err(|e| (e, DRIVER))?;
+    ::sqlx::query(drop_keys_table)
+        .execute(&self.pool)
+        .await
+        .map_err(|e| (e, DRIVER))?;
Copilot AI Apr 30, 2026

create_database_tables() creates the torrent_aggregate_metrics table, but drop_database_tables() does not drop it. This makes schema reset incomplete (e.g., tests/helpers calling drop_database_tables() may leave this table and its data behind). Drop torrent_aggregate_metrics as part of the schema teardown to mirror creation.
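One way to keep teardown mirroring creation is to derive both from a single table list, so a table added to the schema is automatically dropped too. A std-only sketch of that design choice (the `TABLES` constant and `drop_statements` helper are illustrative, not crate code):

```rust
/// One shared list of tables keeps creation and teardown in sync:
/// a table added here is created and dropped together. Names are
/// illustrative stand-ins for the tracker's actual schema.
const TABLES: &[&str] = &["whitelist", "torrents", "keys", "torrent_aggregate_metrics"];

/// Build DROP TABLE statements in reverse creation order, in case
/// later tables ever depend on earlier ones.
fn drop_statements() -> Vec<String> {
    TABLES
        .iter()
        .rev()
        .map(|table| format!("DROP TABLE {};", table))
        .collect()
}

fn main() {
    let stmts = drop_statements();
    // Teardown covers exactly the tables that creation produced.
    assert_eq!(stmts.len(), TABLES.len());
    assert_eq!(stmts[0], "DROP TABLE torrent_aggregate_metrics;");
}
```

With this shape, the bug the review found (a table created but never dropped) cannot recur silently: the two code paths cannot drift apart because there is only one list.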

Comment on lines +63 to +84
 async fn drop_database_tables(&self) -> Result<(), Error> {
     let drop_whitelist_table = "
-        DROP TABLE `whitelist`;"
-        .to_string();
+        DROP TABLE `whitelist`;";

     let drop_torrents_table = "
-        DROP TABLE `torrents`;"
-        .to_string();
+        DROP TABLE `torrents`;";

     let drop_keys_table = "
-        DROP TABLE `keys`;"
-        .to_string();
-
-    let mut conn = self.pool.get().map_err(|e| (e, DRIVER))?;
+        DROP TABLE `keys`;";

-    conn.query_drop(&drop_whitelist_table)?;
-    conn.query_drop(&drop_torrents_table)?;
-    conn.query_drop(&drop_keys_table)?;
+    ::sqlx::query(drop_whitelist_table)
+        .execute(&self.pool)
+        .await
+        .map_err(|e| (e, DRIVER))?;
+    ::sqlx::query(drop_torrents_table)
+        .execute(&self.pool)
+        .await
+        .map_err(|e| (e, DRIVER))?;
+    ::sqlx::query(drop_keys_table)
+        .execute(&self.pool)
+        .await
+        .map_err(|e| (e, DRIVER))?;
Copilot AI Apr 30, 2026

create_database_tables() creates the torrent_aggregate_metrics table, but drop_database_tables() does not drop it. This makes schema reset incomplete (e.g., tests/helpers calling drop_database_tables() may leave this table and its data behind). Drop torrent_aggregate_metrics as part of the schema teardown to mirror creation.

Comment on lines +1 to +2
pub mod driver;
pub mod traits;
Copilot AI Apr 30, 2026

The databases/sqlx/ module tree is added in this PR but is not referenced from packages/tracker-core/src/databases/mod.rs (or anywhere else), so it will not be compiled and its tests won’t run. If the migration is complete, remove this dead module; if it’s still needed, wire it into the crate’s module tree and ensure it’s covered by CI/tests.

…mporary sqlx staging tree

Closes Task 6 of subissue 1717-1525-05.

- Delete dead temporary staging tree under packages/tracker-core/src/databases/sqlx/
  (canonical async drivers and traits live in databases/driver/ and databases/traits/).
- Remove obsolete dependencies r2d2, r2d2_sqlite (rusqlite re-export), and r2d2_mysql
  from packages/tracker-core/Cargo.toml.
- Drop legacy compatibility surface from databases/error.rs:
  remove ConnectionPool variant and From impls for r2d2::Error, r2d2_sqlite::rusqlite::Error,
  r2d2_mysql::mysql::Error and r2d2_mysql::mysql::UrlError; update tests and module docs.
- Replace From<r2d2_sqlite::rusqlite::Error> with From<sqlx::Error> in
  authentication/key/mod.rs and update the related test.
- Refresh stale r2d2_* doc comments in the canonical SQLite and MySQL driver modules.
- Update subissue spec to mark Task 6 acceptance items as done.
The persistence benchmark used to fail intermittently when running the
MySQL driver because sqlx does not retry the first connection. The
official mysql container emits 'ready for connections' twice (first on
the unix socket during init, then on TCP), so we now wait for the
second occurrence on stderr and additionally ping with SELECT 1 in a
short retry loop before initializing the schema.

Add the new technical terms (finalises, mysqld, syscall, testcontainer)
to the project dictionary so cspell stays happy across follow-up
benchmark documentation.
Compare the post-sqlx persistence benchmark against the 2026-04-28
baseline on the same hardware. MySQL totals are ~13-16% faster (mysql
8.4: 7381->6231 ms, mysql 8.0: 7633->6678 ms), with notable per-op wins
on the whitelist/keys removal paths. SQLite totals shift within
expected jitter on a 100-op suite. Conclusion: no regression introduced
by the SQLx migration.

Also document in initialize_database that the function will panic if
the underlying database is not yet accepting connections (sqlx does
not retry the first query) or if any other sqlx::Error occurs while
creating the schema.
Copilot AI left a comment

Pull request overview

Copilot reviewed 81 out of 82 changed files in this pull request and generated 8 comments.



Comment on lines +92 to 99
    if insert == 0 {
        Err(Error::InsertFailed {
            location: std::panic::Location::caller(),
            driver: DRIVER,
        })
    } else {
        Ok(usize::try_from(insert).unwrap_or(0))
    }
Copilot AI Apr 30, 2026

rows_affected() is a u64, but this converts it to usize via usize::try_from(insert).unwrap_or(0). On 32-bit targets (or overflow), this would silently return 0 even though the insert succeeded, which breaks the method contract. Prefer propagating a conversion error (e.g. as Error::MalformedDatabaseRecord / InvalidQuery) rather than defaulting to 0; same consideration applies to the similar conversion used for delete error codes.
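A std-only sketch of the suggested propagation, with `DbError` as a hypothetical stand-in for the crate's real error variant:

```rust
use std::convert::TryFrom;

/// Hypothetical error type standing in for the crate's real error enum.
#[derive(Debug, PartialEq)]
enum DbError {
    MalformedRowCount(u64),
}

/// Convert sqlx's u64 rows_affected() into usize, propagating overflow
/// instead of silently collapsing it to 0 with unwrap_or(0). On 64-bit
/// targets the conversion always succeeds; on 32-bit targets a huge row
/// count surfaces as an error rather than a bogus zero.
fn rows_to_usize(rows: u64) -> Result<usize, DbError> {
    usize::try_from(rows).map_err(|_| DbError::MalformedRowCount(rows))
}

fn main() {
    assert_eq!(rows_to_usize(0), Ok(0));
    assert_eq!(rows_to_usize(3), Ok(3));
}
```

The value of this shape is that the method contract ("returns the number of affected rows") is never violated: the caller either gets the true count or an explicit error.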

Comment thread src/bootstrap/app.rs Outdated
@@ -24,9 +24,8 @@ use crate::container::AppContainer;
/// # Panics
///
/// Setup can file if the configuration is invalid.
Copilot AI Apr 30, 2026

Doc comment typo: "Setup can file" should be "Setup can fail".

Comment on lines +29 to +31
let options = SqliteConnectOptions::from_str(&format!("sqlite://{db_path}"))
.map_err(|e| (e, DRIVER))?
.create_if_missing(true);
Copilot AI Apr 30, 2026

SQLite connection string is built as sqlite://{db_path}. For relative paths (e.g. default config uses ./storage/...), the // form can be parsed as having an authority component and effectively turn the path into an absolute /storage/... path or otherwise mis-resolve. Consider constructing SqliteConnectOptions from the filesystem path directly (or using the sqlite: URL form that preserves relative paths) so the configured path/to/db semantics remain correct.

Comment on lines +30 to +35
    Ok(match valid_until {
        Some(value) => authentication::PeerKey {
            key: parsed_key,
            valid_until: Some(DurationSinceUnixEpoch::from_secs(value.unsigned_abs())),
        },
        None => authentication::PeerKey {
Copilot AI Apr 30, 2026

valid_until is read as i64 and then converted with unsigned_abs(). If the DB ever contains a negative timestamp (corrupt/manual edit), this will silently turn it into a positive value instead of rejecting it as malformed. Consider explicitly checking for value < 0 and returning Error::MalformedDatabaseRecord (and apply the same rule in get_key_from_keys below).

Comment on lines +28 to +33
 Ok(match valid_until {
-    Some(valid_until) => authentication::PeerKey {
-        key,
-        valid_until: Some(Duration::from_secs(valid_until.unsigned_abs())),
+    Some(value) => authentication::PeerKey {
+        key: parsed_key,
+        valid_until: Some(DurationSinceUnixEpoch::from_secs(value.unsigned_abs())),
     },
     None => authentication::PeerKey {
Copilot AI Apr 30, 2026

valid_until is read as i64 and then converted with unsigned_abs(). If the DB ever contains a negative timestamp (corrupt/manual edit), this will silently turn it into a positive value instead of rejecting it as malformed. Consider explicitly checking for value < 0 and returning Error::MalformedDatabaseRecord (and apply the same rule in get_key_from_keys below).

Comment on lines 57 to 64
 if insert == 0 {
     Err(Error::InsertFailed {
         location: Location::caller(),
         driver: DRIVER,
     })
 } else {
-    Ok(insert)
+    Ok(usize::try_from(insert).unwrap_or(0))
 }
Copilot AI Apr 30, 2026

rows_affected() is a u64, but this converts it to usize via usize::try_from(insert).unwrap_or(0). On 32-bit targets (or overflow), this would silently return 0 even though the insert succeeded, which breaks the method contract. Prefer propagating a conversion error (e.g. as Error::MalformedDatabaseRecord / InvalidQuery) rather than defaulting to 0; same consideration applies to the similar conversion used for delete error codes.

Comment on lines +57 to +64
    if insert == 0 {
        Err(Error::InsertFailed {
            location: Location::caller(),
            driver: DRIVER,
        })
    } else {
        Ok(usize::try_from(insert).unwrap_or(0))
    }
Copilot AI Apr 30, 2026

rows_affected() is a u64, but this converts it to usize via usize::try_from(insert).unwrap_or(0). On 32-bit targets (or overflow), this would silently return 0 even though the insert succeeded, which breaks the method contract. Prefer propagating a conversion error (e.g. as Error::MalformedDatabaseRecord / InvalidQuery) rather than defaulting to 0; same consideration applies to the similar conversion used for delete error codes.

Copilot uses AI. Check for mistakes.
Comment on lines 94 to 101
if insert == 0 {
    Err(Error::InsertFailed {
        location: Location::caller(),
        driver: DRIVER,
    })
} else {
    Ok(usize::try_from(insert).unwrap_or(0))
}
Copilot AI Apr 30, 2026

rows_affected() is a u64, but this converts it to usize via usize::try_from(insert).unwrap_or(0). On 32-bit targets (or overflow), this would silently return 0 even though the insert succeeded, which breaks the method contract. Prefer propagating a conversion error (e.g. as Error::MalformedDatabaseRecord / InvalidQuery) rather than defaulting to 0; same consideration applies to the similar conversion used for delete error codes.

… changes

When a commit set has to be split into multiple focused commits and
project-words.txt has been touched, do not try to assign each new
dictionary entry to the commit that introduced its term. Commit all
project-words.txt changes first (as a single commit, or folded into
the first logical commit), then create the focused commits for the
real changes. This keeps the cspell linter green at every commit and
keeps the substantive commits focused on their real intent.
INSERT ... ON DUPLICATE KEY UPDATE legitimately reports rows_affected() == 0 when the row already exists with the same value (no-op update). Treating that as a failure produced spurious InsertFailed errors. Drop the rows_affected() == 0 check for save_torrent_aggregate_metric and save_torrent_downloads in the MySQL driver; real failures still surface as Err from execute().

Addresses Copilot review items torrust#3 and torrust#4 on PR torrust#1718.
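The no-op behavior this commit relies on is documented MySQL semantics: with `ON DUPLICATE KEY UPDATE` (and without `CLIENT_FOUND_ROWS`), the affected-rows count is 1 for a fresh insert, 2 when an existing row is changed, and 0 when the existing row already holds the new value. A hypothetical query of the shape involved (table and column names here are illustrative, not the tracker's actual schema):

```sql
-- rows_affected: 1 = inserted, 2 = updated, 0 = row already held this value
INSERT INTO torrent_metrics (info_hash, downloads)
VALUES ('abc123', 42)
ON DUPLICATE KEY UPDATE downloads = VALUES(downloads);
```

Because 0 is a legitimate success outcome, the only reliable failure signal is `execute()` returning `Err`.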
…rdown

The schema teardown for both the SQLite and MySQL drivers was missing a DROP TABLE for torrent_aggregate_metrics, leaving the table behind after rollback. Add the missing statement so test/setup teardown leaves a clean schema.

Addresses Copilot review items torrust#5 and torrust#6 on PR torrust#1718.
…t overflow

Replace silent unsigned_abs() coercion of i64 valid_until timestamps loaded from the database with a parse_valid_until helper that rejects negative values as Error::MalformedDatabaseRecord (timestamps before the Unix epoch are not representable as DurationSinceUnixEpoch).

Replace usize::try_from(rows_affected).unwrap_or(0) in the auth-key and whitelist stores (SQLite + MySQL) with proper error propagation as Error::MalformedDatabaseRecord, so a u64 row count that does not fit in usize is no longer silently squashed to 0.

Addresses Copilot review items torrust#8, torrust#11, torrust#12, #13, #14 and torrust#15 on PR torrust#1718.
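A minimal sketch of the validating helper described above, with a stand-in `StoreError` type and `std::time::Duration` in place of the crate's `DurationSinceUnixEpoch` alias:

```rust
use std::time::Duration;

// Stand-in for the crate's database error type.
#[derive(Debug, PartialEq)]
enum StoreError {
    MalformedDatabaseRecord,
}

// Reject pre-epoch (negative) timestamps instead of silently flipping
// their sign with unsigned_abs().
fn parse_valid_until(value: i64) -> Result<Duration, StoreError> {
    u64::try_from(value)
        .map(Duration::from_secs)
        .map_err(|_| StoreError::MalformedDatabaseRecord)
}

fn main() {
    assert_eq!(parse_valid_until(60), Ok(Duration::from_secs(60)));
    assert_eq!(parse_valid_until(-1), Err(StoreError::MalformedDatabaseRecord));
}
```

`u64::try_from` fails exactly for negative inputs, so the happy path stays a single conversion while corrupt rows surface as a distinct error.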
SqliteConnectOptions::from_str(&format!("sqlite://{db_path}")) reinterprets the leading segment of a relative path (e.g. ./storage/...) as the URL authority, which mangles the resolved file path. Build the options directly with SqliteConnectOptions::new().filename(db_path) so the path is preserved verbatim.

Addresses Copilot review item torrust#10 on PR torrust#1718.
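The mangling happens because, in URL syntax, the text between `//` and the next `/` is the authority (host), so a relative path's leading `.` segment is consumed as a host. A toy model of that parsing rule (not sqlx's actual parser) shows the effect:

```rust
// Toy model of URL parsing: after "sqlite://", everything up to the
// next '/' is read as the authority (host), so a relative path's
// leading "." is swallowed and the remainder looks absolute.
fn url_style_path(url: &str) -> &str {
    let rest = url.strip_prefix("sqlite://").unwrap_or(url);
    match rest.find('/') {
        Some(i) => &rest[i..],
        None => "",
    }
}

fn main() {
    // "./storage/tracker.db" loses its "." segment:
    assert_eq!(url_style_path("sqlite://./storage/tracker.db"), "/storage/tracker.db");
    // An absolute path (empty authority) survives:
    assert_eq!(url_style_path("sqlite:///data/tracker.db"), "/data/tracker.db");
}
```

Building the options with `SqliteConnectOptions::new().filename(db_path)` sidesteps URL parsing entirely, which is why the commit takes that route.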
Replace the fixed 10-iteration yield_now() loop in it_should_persist_the_number_of_completed_peers_for_each_torrent_into_the_database with a tokio::time::timeout(5s, ...) wrapper plus a 50 ms sleep between attempts. The test now fails loudly on a stalled system rather than silently after an arbitrary burst of immediate retries.

Addresses Copilot review item #1 on PR torrust#1718.
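The shape of the fix is a deadline that bounds the whole wait, with a short sleep between polls. A synchronous, std-only analogue of that pattern (the test itself uses `tokio::time::timeout` and async sleeps; names here are illustrative):

```rust
use std::time::{Duration, Instant};

// Poll `ready` until it returns true or the deadline passes, sleeping
// between attempts; the timeout bounds the total wait, not the number
// of retries.
fn wait_until(mut ready: impl FnMut() -> bool, timeout: Duration, poll: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    loop {
        if ready() {
            return true;
        }
        if Instant::now() >= deadline {
            return false;
        }
        std::thread::sleep(poll);
    }
}

fn main() {
    // Condition becomes true on the third poll.
    let mut calls = 0;
    let ok = wait_until(
        || {
            calls += 1;
            calls >= 3
        },
        Duration::from_secs(1),
        Duration::from_millis(1),
    );
    assert!(ok);
    assert_eq!(calls, 3);
}
```

Unlike a fixed burst of `yield_now()` calls, a stalled system here produces a clear `false` (and hence a loud test failure) after the deadline rather than an arbitrary number of instant retries.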
Re-add the #[must_use] attribute on setup() (lost in an earlier refactor) so callers cannot accidentally drop the returned (Configuration, AppContainer) tuple, and fix the "Setup can file" typo in the panics section to "Setup can fail".

Addresses Copilot review items #2 and torrust#9 on PR torrust#1718.
@josecelano josecelano marked this pull request as ready for review April 30, 2026 12:02
@josecelano josecelano requested a review from a team as a code owner April 30, 2026 12:02
@josecelano josecelano requested a review from Copilot April 30, 2026 12:02

Copilot AI left a comment


Pull request overview

Copilot reviewed 82 out of 83 changed files in this pull request and generated 2 comments.

Comments suppressed due to low confidence (1)

packages/tracker-core/src/databases/driver/sqlite/mod.rs:77

  • Similar to save_torrent_downloads, this SQLite INSERT ... ON CONFLICT ... DO UPDATE can report rows_affected() == 0 on a no-op update, despite succeeding. Returning InsertFailed when insert == 0 can produce false failures; consider treating 0 as success and only erroring on execute() failure.


Comment on lines 59 to 77
@@ -65,34 +77,32 @@ impl TorrentMetricsStore for Sqlite {
}
Copilot AI Apr 30, 2026

For SQLite upserts, rows_affected() can legitimately be 0 when the ON CONFLICT ... DO UPDATE clause results in a no-op update (same value), even though the query succeeded. Treating insert == 0 as InsertFailed can therefore surface false errors. Prefer relying on execute() returning Err for failure and return Ok(()) regardless of rows_affected() here (or use a more robust success criterion).

Comment on lines +93 to +96
if let Some(swarm_metadata) = test_env.get_swarm_metadata(&info_hash).await {
assert!(swarm_metadata.downloads() == 1);
break true;
}
Copilot AI Apr 30, 2026

The retry loop will still fail immediately if swarm metadata exists but downloads() is not yet 1: the assert!(swarm_metadata.downloads() == 1) will panic instead of continuing to wait. Consider checking downloads() == 1 and only breaking when it matches (otherwise sleep and retry), so the timeout actually bounds the wait for the desired state rather than asserting on intermediate states.

`ON CONFLICT ... DO UPDATE` may legitimately report `rows_affected() == 0`
when the existing row already holds the same value (no-op update). The
previous code treated that as an `InsertFailed` error, which could turn
benign re-saves into spurious failures.

Drop the `rows_affected() == 0` check on the SQLite `save_torrent_aggregate_metric`
and `save_torrent_downloads` upserts (mirroring the MySQL fix in 5512ac3).
A real failure still surfaces as `Err` from `execute()`. Also remove the
now-unused `std::panic::Location` import in `sqlite/mod.rs`.

Addresses Copilot review comment #16 on PR torrust#1718.
…ermediate state

In `it_should_persist_the_number_of_completed_peers_for_each_torrent_into_the_database`,
the retry loop previously asserted `swarm_metadata.downloads() == 1` on the
first observation. If the background event listener has stored the row but
not yet updated the in-memory `downloads` counter, that assertion would
panic the test instead of letting the bounded `tokio::time::timeout` wait
for the desired state.

Change the check to `if downloads() == 1 { break true }` so the timeout
actually governs the wait and intermediate observations are tolerated.

Addresses Copilot review comment #17 on PR torrust#1718.
@josecelano
Member Author

ACK 66f1246

@josecelano josecelano merged commit b6a545a into torrust:develop Apr 30, 2026
22 of 23 checks passed
Successfully merging this pull request may close these issues.

1525-05: Migrate SQLite and MySQL drivers to sqlx
