This guide covers deploying HotCRM to production using Docker, Kubernetes, or Vercel.
- Node.js ≥ 20.9.0
- Docker ≥ 24.0 (for Docker/K8s deployments)
- kubectl ≥ 1.28 (for Kubernetes deployments)
- PostgreSQL 16+ (for Docker/K8s deployments)
- Redis 7+ (for Docker/K8s deployments)
```bash
# Clone the repository
git clone https://github.com/objectstack-ai/hotcrm.git
cd hotcrm

# Build and start all services
docker compose up -d

# Verify services are running
docker compose ps
```

| Variable | Default | Description |
|---|---|---|
| `PORT` | `3000` | Application server port |
| `DB_PASSWORD` | `hotcrm` | PostgreSQL password |
| `DB_PORT` | `5432` | Exposed PostgreSQL port |
| `REDIS_PORT` | `6379` | Exposed Redis port |
| `LOG_LEVEL` | `info` | Logging level (`debug`, `info`, `warn`, `error`) |
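With Docker Compose, these variables can be overridden through a `.env` file placed next to the compose file, which `docker compose` reads automatically for variable substitution. The values below are illustrative and assume the compose file references these variables:

```
# .env (illustrative values)
PORT=8080
DB_PASSWORD=use-a-strong-password
LOG_LEVEL=debug
```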
```bash
# Production build
docker build -t hotcrm/app:latest .

# Verify the image
docker run --rm hotcrm/app:latest node -e "console.log('HotCRM ready')"
```

The application exposes a `/health` endpoint for container orchestration:

```bash
curl http://localhost:3000/health
```

For Kubernetes deployments, first create the namespace:

```bash
kubectl apply -f k8s/namespace.yaml
```

Then create the secrets the application needs:

```bash
kubectl create secret generic hotcrm-secrets \
  --namespace hotcrm \
  --from-literal=database-url='postgresql://hotcrm:PASSWORD@db-host:5432/hotcrm' \
  --from-literal=redis-url='redis://redis-host:6379'
```

```bash
# Deploy app, service, and ingress
kubectl apply -f k8s/deployment.yaml

# Deploy horizontal pod autoscaler
kubectl apply -f k8s/hpa.yaml
```

```bash
# Check pod status
kubectl get pods -n hotcrm

# Check service
kubectl get svc -n hotcrm

# Check ingress
kubectl get ingress -n hotcrm

# View logs
kubectl logs -n hotcrm -l app.kubernetes.io/name=hotcrm --tail=100
```

The HPA automatically scales between 2–10 replicas based on CPU (70%) and memory (80%) utilization.

Manual scaling:

```bash
kubectl scale deployment hotcrm-app -n hotcrm --replicas=5
```

The Ingress expects a TLS secret named `hotcrm-tls`. Create it with:

```bash
kubectl create secret tls hotcrm-tls \
  --namespace hotcrm \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
```

Or use cert-manager for automatic certificate provisioning.
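If cert-manager is used instead, a `Certificate` resource can provision and renew the same secret automatically. A minimal sketch, assuming cert-manager is installed and a `ClusterIssuer` named `letsencrypt-prod` exists (the issuer name and hostname are illustrative placeholders):

```yaml
# Sketch: have cert-manager create and renew the hotcrm-tls secret.
# "letsencrypt-prod" and "crm.example.com" are placeholder values.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: hotcrm-tls
  namespace: hotcrm
spec:
  secretName: hotcrm-tls
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
  dnsNames:
    - crm.example.com
```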
```text
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Ingress   │────▶│ HotCRM App  │────▶│ PostgreSQL  │
│   (nginx)   │     │  (Node.js)  │     │  (Primary)  │
└─────────────┘     └──────┬──────┘     └─────────────┘
                           │
                    ┌──────▼──────┐
                    │    Redis    │
                    │   (Cache)   │
                    └─────────────┘
```
| Component | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---|---|---|---|---|
| HotCRM App | 250m | 1000m | 512Mi | 1Gi |
| PostgreSQL | 500m | 2000m | 1Gi | 4Gi |
| Redis | 100m | 500m | 256Mi | 512Mi |
HotCRM outputs structured JSON logs to stdout, compatible with:

- Kubernetes: `kubectl logs` or any log aggregator (Fluentd, Loki)
- Docker: `docker compose logs -f`
Pod CrashLoopBackOff

```bash
kubectl describe pod -n hotcrm <pod-name>
kubectl logs -n hotcrm <pod-name> --previous
```

Database Connection Refused

- Verify the `DATABASE_URL` secret is correct
- Check that PostgreSQL is accessible from within the cluster

High Memory Usage

- Review the Redis `maxmemory` setting (default: 256MB)
- Check for memory leaks with `kubectl top pods -n hotcrm`
HotCRM can be deployed to Vercel as a serverless application using Hono + Turso Driver mode.
In this mode the full ObjectStack kernel runs server-side inside a Vercel Serverless Function. When
TURSO_DATABASE_URL is set, data persists in a Turso cloud database across cold starts. Without it,
the driver falls back to :memory: (ephemeral SQLite, same behaviour as the old InMemoryDriver).
- A catch-all Vercel Serverless Function (`api/[[...route]].ts`) bootstraps the ObjectStack kernel on first request.
- `@objectstack/driver-turso` provides a persistent Turso/libSQL data store. When `TURSO_DATABASE_URL` is set, data persists across cold starts; otherwise it falls back to `:memory:`.
- `HonoHttpServer` from `@objectstack/plugin-hono-server` handles HTTP routing (without a TCP listener).
- `createRestApiPlugin()` auto-generates CRUD endpoints for all 65+ business objects.
- The kernel instance is reused across warm invocations (Vercel Fluid Compute).
- Console UI is served at `/console/` (default redirect from `/`) for data browsing and management.
- Studio UI is served at `/_studio/` for metadata inspection and development.
```text
Vercel Serverless Function
┌───────────────────────────────────────────────┐
│ Request → Hono (Web Standard fetch)           │
│           ↓                                   │
│     ObjectStack Kernel                        │
│     ├── Auth Plugin (better-auth)             │
│     ├── REST API Plugin (auto CRUD)           │
│     ├── Dispatcher Plugin (auth, graphql)     │
│     ├── 6 Business Plugins                    │
│     ├── Console UI (/console/)                │
│     ├── Studio UI (/_studio/)                 │
│     └── TursoDriver (Turso cloud / :memory:)  │
│           ↓                                   │
│           Response                            │
└───────────────────────────────────────────────┘
```
- A Vercel account
- Node.js ≥ 20.9.0, pnpm ≥ 9
| Variable | Required | Default | Description |
|---|---|---|---|
| `AUTH_SECRET` | Yes (production) | Dev fallback | Secret key used by the Auth Plugin (`@objectstack/plugin-auth`) for signing sessions and tokens. In production, set a strong random value (e.g. `openssl rand -base64 32`). A development fallback is used when not set. |
| `TURSO_DATABASE_URL` | No (recommended for prod) | `:memory:` | Turso database URL (e.g. `libsql://my-db-orgname.turso.io`). When set, data persists across cold starts. |
| `TURSO_AUTH_TOKEN` | When using a remote URL | — | JWT auth token for the Turso database. Required when `TURSO_DATABASE_URL` points to a remote Turso cloud database. |
```bash
# 1. Install dependencies
pnpm install

# 2. Preview locally with Vercel CLI
npx vercel dev

# 3. Test API endpoints
curl http://localhost:3000/api/v1/account
```

The repository includes a `vercel.json` that configures the deployment automatically.
Option A — Git Integration (recommended)

- Push the repository to GitHub.
- Import the repository in the Vercel Dashboard.
- Vercel detects `vercel.json` and deploys the serverless function automatically.
- Every push to the default branch triggers a new deployment.

⚠️ IMPORTANT: Verify Vercel project settings. After importing the repository, go to Project Settings → General → Build & Development Settings in the Vercel Dashboard and ensure:

- Build Command: Leave blank or set to `pnpm --filter @hotcrm/ai build` — do NOT set a custom override. Vercel project settings override `vercel.json`, so any stale value here (e.g. an old `echo 'Server mode — no build step required'`) will prevent the correct build from running.
- Output Directory: Leave blank (auto-detected).
- Install Command: Leave blank or set to `pnpm install`.
- Framework Preset: "Other" (no framework).

If these fields have custom values from a previous configuration, clear them so that `vercel.json` is used as the source of truth.
Option B — Vercel CLI

```bash
# Install the Vercel CLI
npm i -g vercel

# Deploy from the project root
vercel deploy
```

The `.npmrc` configures `node-linker=hoisted` to create a flat `node_modules` without pnpm symlinks — this is required because Vercel rejects symlinked directories when packaging serverless functions. The `patch-console-plugin.cjs` script runs as an additional safety net to dereference any remaining symlinks before the build. The 6 business plugin packages (CRM, Finance, Marketing, Products, Support, HR) plus the AI utility library are built from TypeScript source to `dist/`.
Build pipeline:

```text
pnpm install                               ← install all workspace deps (flat / hoisted)
node scripts/patch-console-plugin.cjs      ← dereference any remaining pnpm symlinks
pnpm --filter @hotcrm/{ai,crm,...} build   ← compile all business plugins → dist/
@vercel/node compiles api/[[...route]].ts  ← bundles function + TS imports
```
| Field | Value | Purpose |
|---|---|---|
| `installCommand` | `pnpm install` | Installs all workspace dependencies |
| `buildCommand` | `bash scripts/build-vercel.sh` | Runs the Vercel build script that builds core, patches the console plugin, compiles all business plugins, and copies SPA static assets to `public/` |
| `functions.memory` | 1024 MB | Memory allocated to the serverless function |
| `functions.maxDuration` | 60 s | Maximum execution time per request (Pro plan) |
| `functions.includeFiles` | `{packages/*/dist,node_modules/@object-ui/console/dist,node_modules/@objectstack/*/dist,node_modules/@libsql,node_modules/better-sqlite3,node_modules/@opentelemetry/api}/**` | Bundles business plugin `dist/`, all `@objectstack` packages (including Auth, Studio, TursoDriver), libSQL/better-sqlite3 native deps, and OpenTelemetry with the function |
| `headers` | `/console/assets/*`, `/_studio/assets/*` | Sets `Cache-Control: public, max-age=31536000, immutable` on content-hashed static assets |
| `rewrites` | `/(.*)` → `/api/[[...route]]` | Routes non-static requests to the catch-all handler (static files in `public/` are served by Vercel's CDN before rewrites are evaluated) |
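Assembled from the table above, the corresponding `vercel.json` is roughly shaped as follows. This is an abbreviated sketch, not the repository's exact file: `includeFiles` is shortened, only the Console headers rule is shown, and the functions glob is assumed to match the catch-all route file:

```json
{
  "installCommand": "pnpm install",
  "buildCommand": "bash scripts/build-vercel.sh",
  "functions": {
    "api/**/*.ts": {
      "memory": 1024,
      "maxDuration": 60,
      "includeFiles": "{packages/*/dist,node_modules/@objectstack/*/dist}/**"
    }
  },
  "headers": [
    {
      "source": "/console/assets/(.*)",
      "headers": [
        { "key": "Cache-Control", "value": "public, max-age=31536000, immutable" }
      ]
    }
  ],
  "rewrites": [
    { "source": "/(.*)", "destination": "/api/[[...route]]" }
  ]
}
```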
SPA static assets (JS, CSS, fonts, images) are served directly from Vercel's CDN edge network instead of routing through the serverless function:
- Build step: `scripts/build-vercel.sh` copies Console assets to `public/console/assets/` and Studio assets to `public/_studio/assets/` after building all packages.
- Vercel routing order: redirects → headers → filesystem → rewrites. Files in `public/` (the output directory) are matched at the filesystem step, before rewrites, so they bypass the `api/[[...route]]` serverless function entirely.
- Cache headers: Content-hashed assets (e.g., `index-DDpLaQOV.js`) get `Cache-Control: public, max-age=31536000, immutable` for optimal CDN and browser caching.
- SPA fallback: Only `index.html` and client-side routes still go through the API handler. The static SPA plugin in the serverless function handles these requests, plus serves as the fallback for non-Vercel deployments (Docker, local dev).
The serverless function at api/[[...route]].ts uses a singleton bootstrap pattern:
- First request (cold start): bootstraps the ObjectStack kernel, registers all plugins, creates the Hono app
- Subsequent requests (warm): reuses the existing kernel and in-memory data (Vercel Fluid Compute)
- Cold start after idle: data resets — a fresh kernel is bootstrapped
Timeout Protection — Every kernel.use() call has a 10 s per-plugin timeout, kernel.bootstrap() has a 30 s timeout, and the entire bootstrap function has a 50 s budget (10 s margin for Vercel's 60 s function limit). If any step exceeds its timeout, the handler immediately returns 503 Service Unavailable with a JSON error body instead of silently hanging until the Vercel 60 s limit. Timestamped diagnostic logs ([HotCRM] [<elapsed>ms] …) are emitted at every bootstrap step so the blocking plugin can be identified from the Vercel Function Logs tab.
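The per-step budgets described above amount to racing each bootstrap step against a timer. A minimal sketch of such a helper (illustrative only, not the repository's actual implementation):

```typescript
// Sketch: race a bootstrap step against a deadline. Names and shapes
// are illustrative; the real handler applies 10 s / 30 s / 50 s budgets.
function withTimeout<T>(step: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Bootstrap step "${label}" exceeded ${ms}ms`)),
      ms,
    );
  });
  // Whichever settles first wins; clear the timer either way so a
  // pending timeout cannot keep the function instance alive.
  return Promise.race([step, deadline]).finally(() => clearTimeout(timer!));
}

// e.g. await withTimeout(kernel.use(plugin), 10_000, plugin.name);
```

On timeout, the handler can map the rejection to the 503 JSON response instead of letting Vercel hit its 60 s limit.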
The function does not use HonoServerPlugin (which binds to a TCP port). Instead, it manually creates
a HonoHttpServer, registers it as the http.server service, and uses the handle() adapter from
@hono/node-server/vercel to convert the Hono app into a standard Node.js serverless handler
((IncomingMessage, ServerResponse) => void) that Vercel's @vercel/node runtime recognises.
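The warm-reuse behaviour comes down to caching the bootstrap at module scope. A self-contained sketch of the singleton pattern (the expensive kernel bootstrap is simulated here; the real function builds the ObjectStack kernel and wraps the Hono app with `handle()`):

```typescript
// Sketch of the singleton bootstrap pattern. `createKernel` stands in
// for the real ObjectStack bootstrap; caching the *promise* at module
// scope means warm invocations, and even concurrent cold-start
// requests, share a single bootstrap instead of racing.
type Kernel = { bootCount: number };

let boots = 0;
async function createKernel(): Promise<Kernel> {
  boots += 1; // expensive work: runs once per function instance
  return { bootCount: boots };
}

let kernelPromise: Promise<Kernel> | null = null;

export function getKernel(): Promise<Kernel> {
  kernelPromise ??= createKernel();
  return kernelPromise;
}
```

A cold start after idle discards module state along with the instance, which is why data in `:memory:` mode resets.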
- With `TURSO_DATABASE_URL` (production recommended): data persists in a Turso cloud database across cold starts and instance recycling.
- Without `TURSO_DATABASE_URL` (local dev / demo): falls back to `:memory:` (ephemeral SQLite) — records persist only while the function instance is warm; data resets on cold start.
- Warm invocations share data — records created in one request are visible in the next.
- Data is ephemeral without Turso — without `TURSO_DATABASE_URL`, records persist only while the function instance is warm; data resets on cold start. Set `TURSO_DATABASE_URL` + `TURSO_AUTH_TOKEN` for persistent production data.
- No authentication — this mode does not enforce permissions or user sessions out of the box.
- No WebSocket — Vercel Serverless Functions do not support persistent connections.
- Cold start latency — the first request after idle may take 2–5 seconds for kernel bootstrap.
- Demo / staging only — designed for demos, design reviews, and CI previews, not production workloads.
For production deployments with persistent data, see Docker Deployment or Kubernetes Deployment above.
Deployment shows 404 for all routes

- Check build command override: Go to Vercel Dashboard → Project Settings → General → Build & Development Settings. If the Build Command field has a custom value, clear it so `vercel.json` is used. The most common cause of 404 is an overridden build command that prevents `@hotcrm/ai` from being compiled.
- Verify the build log: The build log should show:

  ```text
  Running "pnpm --filter @hotcrm/ai build"
  > @hotcrm/ai@1.0.0 build
  > tsc
  ```

  If it instead shows `Server mode — no build step required` or any other message, the `vercel.json` `buildCommand` is being overridden by project settings.
- Check function deployment: After deployment, go to the Vercel Dashboard → Project → Functions tab. You should see `api/[[...route]]` listed as a serverless function. If it's not listed, the function was not compiled — check for TypeScript errors in the build log.
- Cold start timeout: The first request after deployment can take 2–5 seconds. If you get a 504 timeout, consider increasing `maxDuration` in `vercel.json`.

Build command fails

- The `@hotcrm/ai` package must build successfully before the function can be bundled. If it fails, check for TypeScript errors in `packages/ai/src/`.
- Do NOT use `pnpm -r build` — several packages have TypeScript errors that prevent compilation. Only `@hotcrm/ai` needs to be built.
The previous MSW deployment mode (fully client-side with a Service Worker) is still available via:

```bash
pnpm -w run build:msw
npx serve dist/studio
```

This produces a static SPA where all API calls are intercepted by a browser Service Worker. Data is stored in browser memory and lost on page refresh. See `scripts/build-msw.sh` for details.