Problem
- The DB errors out when the file or tables don’t exist.
Fix
On `new QueueDB(config)`:
- Check if the DB file exists
- If not → create it
- Auto-run internal schema setup
Behavior

```js
const qdb = new QueueDB({
  path: "./data/qdb.sqlite"
});
```

No setup scripts. Zero friction.
Fix
Internal `__qdb_metatable`:

id | key | value

Used for:
- schema_version
- created_at
- last_migration
- data_lifetime_enabled
This prevents silent corruption and enables migrations.
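A minimal sketch of how the metatable rows might look and be read on startup (the row shapes and helper name here are illustrative assumptions, not QueueDB's actual API):

```typescript
// Each row of __qdb_metatable is a simple key/value pair.
type MetaRow = { id: number; key: string; value: string };

// Hypothetical seed rows written on first initialization.
const metaRows: MetaRow[] = [
  { id: 1, key: "schema_version", value: "1" },
  { id: 2, key: "created_at", value: String(Date.now()) },
  { id: 3, key: "last_migration", value: "" },
  { id: 4, key: "data_lifetime_enabled", value: "false" },
];

// Read a metadata value; migrations compare schema_version against
// the version the library expects and refuse to run on a mismatch.
function getMeta(rows: MetaRow[], key: string): string | undefined {
  return rows.find((r) => r.key === key)?.value;
}
```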
`__qdb_queue`
- id (uuid)
- topic
- payload (json/text)
- status (pending | processing | done | failed)
- priority
- created_at
- updated_at

API:
- enqueue(topic, payload)
- dequeue(topic, clientId)
- ack(id)
- fail(id, reason)
Critical detail
- Use transactions + row locking
- One consumer gets one job
- No double-processing
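The claim semantics can be sketched in memory (a stand-in for the real transactional `UPDATE … WHERE status = 'pending'`; the names and shapes are illustrative):

```typescript
type Job = {
  id: string;
  topic: string;
  status: "pending" | "processing" | "done" | "failed";
  claimedBy?: string;
};

// Atomically claim one pending job for one consumer. In SQLite this
// would be a transaction that flips status pending -> processing and
// returns the row; here a synchronous scan plays the same role.
function claimOne(jobs: Job[], topic: string, clientId: string): Job | undefined {
  const job = jobs.find((j) => j.topic === topic && j.status === "pending");
  if (!job) return undefined;
  job.status = "processing"; // the "lock": no other consumer can match it now
  job.claimedBy = clientId;
  return job;
}
```

Because the claim flips the status in the same step that selects the row, a second consumer asking for the same topic gets nothing rather than a duplicate.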
This makes QueueDB usable for:
- background jobs
- realtime sync
- offline-first apps
You already chose the right approach.
In-memory registry

```ts
Map<
  socketId,
  {
    topics: string[]
    routes: string[]
  }
>
```

```js
qdb.subscribe(socket, {
  topic: "orders",
  route: "/orders/live"
});
```

When data changes:
Only emit to sockets subscribed to:
- that topic
- that route
This avoids broadcast spam and keeps it fast.
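The filtered fan-out can be sketched as a scan over the registry (registry shape and function name are hypothetical):

```typescript
// Hypothetical shape of the in-memory subscription registry.
type Subscription = { topics: string[]; routes: string[] };
const registry = new Map<string, Subscription>();

registry.set("sock-1", { topics: ["orders"], routes: ["/orders/live"] });
registry.set("sock-2", { topics: ["users"], routes: ["/users"] });

// On a change, collect only the socket ids subscribed to that
// topic AND that route — every other socket is skipped entirely.
function targets(topic: string, route: string): string[] {
  const out: string[] = [];
  for (const [socketId, sub] of registry) {
    if (sub.topics.includes(topic) && sub.routes.includes(route)) out.push(socketId);
  }
  return out;
}
```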
Instead of bloating the DB, you use a log.
Example:

```js
{
  "id": "data-id",
  "table": "orders",
  "createdAt": 1730000000,
  "expiresAt": 1730864000
}
```

Stored as:
- JSONL (append-only)
- OR a lightweight SQLite log table (optional)
- Internal scheduler (node-cron or setInterval)
- Reads the log
- Deletes expired data
- Cleans the log entry after delete

```js
qdb.enableDataLifetime({
  defaultTTL: "7d",
  checkInterval: "1h"
});
```

No runtime DB scanning. Very efficient.
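A sketch of the two pieces the scheduler needs: parsing TTL strings like `"7d"` into milliseconds, and the expiry check against a log entry (the accepted units are an assumption):

```typescript
// Parse duration strings like "7d" or "1h" into milliseconds.
// (Illustrative; the exact units QueueDB accepts are an assumption.)
function parseTTL(ttl: string): number {
  const units: Record<string, number> = { s: 1000, m: 60_000, h: 3_600_000, d: 86_400_000 };
  const match = /^(\d+)([smhd])$/.exec(ttl);
  if (!match) throw new Error(`bad TTL: ${ttl}`);
  return Number(match[1]) * units[match[2]];
}

// Each sweep only reads the log: an entry is due for deletion
// once its expiresAt timestamp has passed.
function isExpired(expiresAt: number, now: number): boolean {
  return now >= expiresAt;
}
```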
- Prevent massive SQLite files
- Allow safe upgrades
- Allow partial data offloading
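One way to meet those goals is a version-stepping runner that applies `up` functions in order and records the version reached in the metatable (a sketch under assumed signatures, not the library's actual internals):

```typescript
type Migration = {
  fromVersion: number;
  toVersion: number;
  up: (db: unknown) => void;
};

// Apply every migration whose fromVersion matches the current
// schema version, in ascending order; returns the final version,
// which would also be persisted to __qdb_metatable.
function runMigrations(current: number, migrations: Migration[], db: unknown): number {
  let version = current;
  for (const m of [...migrations].sort((a, b) => a.fromVersion - b.fromVersion)) {
    if (m.fromVersion === version) {
      m.up(db); // forward migration step
      version = m.toVersion;
    }
  }
  return version;
}
```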
```js
qdb.migrate({
  fromVersion: 1,
  toVersion: 2,
  up(db) {},
  down(db) {}
});
```

- Move old data → an archive DB file
- Keep primary DB lean
- Optional gzip archive
```js
qdb.archive({
  olderThan: "30d",
  to: "./archive/"
});
```

This alone is a huge differentiator.
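The core of archiving is just partitioning rows by age; a sketch (the threshold logic and row shape are assumptions, with `olderThanMs` standing in for a parsed `"30d"`):

```typescript
type Row = { id: string; created_at: number };

// Split rows into those old enough to archive and those to keep:
// archive rows move to the secondary DB file, keep rows stay primary.
function splitForArchive(rows: Row[], olderThanMs: number, now: number) {
  const archive: Row[] = [];
  const keep: Row[] = [];
  for (const row of rows) {
    (now - row.created_at >= olderThanMs ? archive : keep).push(row);
  }
  return { archive, keep };
}
```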
This is your killer feature.
```js
app.use(
  "/db/management",
  qdb.dbManager({
    auth: {
      table: "admins",
      usernameField: "username",
      passwordField: "password",
      hash: "bcrypt"
    }
  })
);
```

- You do NOT own auth
- You just read it
Supports:
- Any table
- Any field names
- Any hashing algorithm
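The read-only bridge can be sketched as a lookup over the host app's rows with an injected compare function (standing in for bcrypt or whatever the config names; all identifiers here are hypothetical):

```typescript
// The bridge never writes the host app's table; it only reads a row
// and delegates the password check to the configured algorithm.
type AuthConfig = {
  usernameField: string;
  passwordField: string;
  compare: (plain: string, stored: string) => boolean;
};

function verifyLogin(
  rows: Record<string, string>[],
  config: AuthConfig,
  username: string,
  password: string
): boolean {
  const user = rows.find((r) => r[config.usernameField] === username);
  if (!user) return false;
  return config.compare(password, user[config.passwordField]);
}
```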
Optional:

```js
auth: {
  users: [
    { username: "admin", password: "hashed" }
  ]
}
```

- Login screen (phpMyAdmin-like)
- Tables list
- Row viewer
- JSON editor
- Queue inspector
- Live updates (via socket)
- Delete / truncate / export
No SQL console in v2 (security)
Best options:
- Vite + React
- Embedded static build
- Served internally by QueueDB
No external hosting required.
Since full multi-DB is v3, v2 should do this:

```ts
interface QDBAdapter {
  query(sql, params)
  transaction(fn)
}
```

- SQLite adapter (current)

```js
new QueueDB({
  adapter: myAdapter
});
```

Enough to prepare for v3 without overengineering.
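Fleshed out slightly, the seam could look like this: QueueDB talks only to the interface, so a v3 Postgres/MySQL adapter slots in without core changes (the typed signatures and toy adapter are assumptions):

```typescript
interface QDBAdapter {
  query(sql: string, params: unknown[]): unknown[];
  transaction<T>(fn: () => T): T;
}

// Toy in-memory adapter, just enough to show the contract.
class MemoryAdapter implements QDBAdapter {
  public log: string[] = [];
  query(sql: string, params: unknown[]): unknown[] {
    this.log.push(sql); // record what core QueueDB asked for
    return [];
  }
  transaction<T>(fn: () => T): T {
    // A real adapter would BEGIN here, COMMIT on return, ROLLBACK on throw.
    return fn();
  }
}
```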
```shell
npx queuedb init
npx queuedb migrate
npx queuedb ui
```

- Strong TS types for payloads
- Topic-based generics:

```ts
qdb.enqueue<"orders">({ id, amount })
```

QueueDB becomes:
“Firebase-lite but local”
- DB
- Queue
- Realtime
- Admin UI
- Auth bridge
On one server, zero vendor lock-in.
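The topic-based generics mentioned above can be sketched with a payload map type, so the compiler rejects a payload that doesn't match its topic (the map and standalone `enqueue` are hypothetical examples):

```typescript
// Hypothetical topic → payload map; real payload types are up to the app.
type Payloads = {
  orders: { id: string; amount: number };
  emails: { to: string; subject: string };
};

// enqueue is generic over the topic, so the payload is checked at
// compile time: enqueue("orders", { to: "x" }) would not compile.
function enqueue<T extends keyof Payloads>(topic: T, payload: Payloads[T]): Payloads[T] {
  // a real implementation would insert into __qdb_queue here
  return payload;
}
```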
```js
qdb.sync("orders").toSockets();
```

You’re basically building:
- SQLite + Redis + Firebase
- in one package
That’s not common. At all.
- Real queue
- UI hosting
- Auth tables
- Subscriptions
- Data lifetime
- Migrations
- UI permissions
- Export/import
- Metrics
- Multi-DB adapters
- Distributed queues