
Commit 4e106b5 (1 parent: 50d4789)

graylikemeclaude committed

fix: prevent dataset_metadata row accumulation across scraper runs

The `ON CONFLICT DO NOTHING` on `dataset_metadata` had no unique constraint to trigger on, so every scraper run inserted a new row. Changed to delete-then-insert for the version being imported. Cleaned stale rows from the seed dump (8 rows → 1).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

File tree: 2 files changed (+5, −2 lines)


crates/scraper/src/seed.rs (5 additions, 2 deletions)

```diff
@@ -172,10 +172,13 @@ pub async fn seed_factions(pool: &PgPool) -> anyhow::Result<usize> {
 }
 
 pub async fn seed_metadata(pool: &PgPool, version: &str) -> anyhow::Result<()> {
+    sqlx::query("DELETE FROM dataset_metadata WHERE version = $1")
+        .bind(version)
+        .execute(pool)
+        .await?;
     sqlx::query(
         r#"INSERT INTO dataset_metadata (version, schema_version, description)
-           VALUES ($1, 1, 'Imported from MegaMek ' || $1)
-           ON CONFLICT DO NOTHING"#,
+           VALUES ($1, 1, 'Imported from MegaMek ' || $1)"#,
     )
     .bind(version)
     .execute(pool)
```
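The same fix could alternatively be made at the schema level, which is the route the original `ON CONFLICT` clause implied. A minimal sketch, assuming `dataset_metadata` should key rows by `version`; the constraint name and upsert form below are hypothetical and not part of this commit:

```sql
-- Hypothetical migration: give ON CONFLICT a unique constraint to trigger on,
-- then upsert instead of delete-then-insert.
ALTER TABLE dataset_metadata
    ADD CONSTRAINT dataset_metadata_version_key UNIQUE (version);

-- Repeated imports of the same version now update in place
-- rather than inserting a duplicate row.
INSERT INTO dataset_metadata (version, schema_version, description)
VALUES ($1, 1, 'Imported from MegaMek ' || $1)
ON CONFLICT (version) DO UPDATE
    SET schema_version = EXCLUDED.schema_version,
        description    = EXCLUDED.description;
```

With the constraint in place, a single statement keeps one row per version; the delete-then-insert in this commit reaches the same end state without a migration, at the cost of two statements (which could be wrapped in a transaction to avoid a brief window where the row is absent).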

seed/data.sql.gz (1.96 KB, binary file not shown)
