Privacy-first contribution verification for developers with private repositories.
ContributionPulse connects to GitLab, Azure DevOps, and GitHub using read-only tokens, computes daily contribution aggregates, and exposes proof-style dashboards/reports without storing source-level metadata.
- Verify contribution activity from private repositories.
- Keep user data privacy-safe by design.
- Support shareable public reports and PDF export.
- Scale sync via background jobs and retries.
- Store only day-level aggregate counts (`commitCount`, `mergeCount`, `prCount`, `pipelineCount`).
- Never store code, diffs, commit messages, repository names, or raw provider payloads.
- Encrypt provider credentials at rest with AES-256-GCM.
- Keep all secret handling server-side only.
- Frontend/App shell: Next.js 14 (App Router), TypeScript, Tailwind, shadcn/ui
- Auth: Supabase Auth (email magic link)
- Database: PostgreSQL + Prisma
- Queue/worker: BullMQ + Redis
- Charts: Recharts
- Client data/state: React Query (server communication), Zustand (UI state)
```mermaid
flowchart LR
  U[User Browser] --> W[Next.js App]
  W --> SA[Supabase Auth]
  W --> API[App Route Handlers]
  API --> DB[(Postgres via Prisma)]
  API --> Q[(BullMQ Queue on Redis)]
  WRK[Worker Process] --> Q
  WRK --> GL[GitLab APIs]
  WRK --> AZ[Azure DevOps APIs]
  WRK --> GH[GitHub APIs]
  WRK --> DB
  WRK --> PUB[(Redis Pub/Sub)]
  API --> PUB
  U <-->|SSE /api/sync/events| API
```
- Web app process: UI rendering, API route handling, auth checks, report generation.
- Worker process: all sync jobs, provider API calls, aggregation, upserts.
- Redis: queue broker + realtime pub/sub channel.
- Postgres: tenant-scoped persistent storage.
- User signs in via Supabase magic link.
- Server calls `requireAppUser()` for protected pages/routes.
- App user record is upserted by `supabaseUserId`.
- All operations are scoped by `appUser.id`.
- User submits provider credentials (GitLab/GitHub token, Azure token + org).
- API encrypts the token using AES-256-GCM (`MASTER_KEY`).
- Integration is upserted per unique `(userId, provider)`.
- Optional author-email aliases are stored for commit matching.
- User clicks Sync now or queues historical backfill.
- API enqueues a BullMQ `sync-user` job.
- Worker pulls the job and iterates the user's integrations.
- For each integration:
  - mark `syncState=RUNNING`
  - decrypt the token in memory
  - fetch provider events with pagination + retries + pacing
  - aggregate to UTC day buckets
  - upsert into `DailyActivity`
  - set `syncState=IDLE`, `lastSyncedAt=now`
- On failure: mark `syncState=FAILED` and emit sanitized error logs.
- Worker publishes sync lifecycle events (`started`/`completed`/`failed`) to Redis pub/sub.
- Browser listens through the SSE endpoint `/api/sync/events`.
- Dashboard shows toasts and refreshes updated data.
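The realtime path boils down to translating each Redis pub/sub message into a Server-Sent Events frame. A minimal sketch, assuming an illustrative `SyncEvent` shape and `toSseFrame` helper (not names from the codebase):

```typescript
// Format a sync lifecycle message (as received from Redis pub/sub) into an
// SSE frame for the /api/sync/events stream. Types and names are illustrative.
type SyncEvent = {
  type: "sync_started" | "sync_completed" | "sync_failed";
  userId: string;
};

function toSseFrame(event: SyncEvent): string {
  // An SSE frame is an "event:" line plus a "data:" line, ended by a blank line.
  return `event: ${event.type}\ndata: ${JSON.stringify({ userId: event.userId })}\n\n`;
}

const frame = toSseFrame({ type: "sync_started", userId: "u1" });
```

The SSE route would write such frames to the open response stream as messages arrive on the pub/sub channel.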
- Dashboard reads only aggregate rows (`DailyActivity`).
- React Query handles all API calls (sync, backfill, shares, highlights, settings actions).
- Public report uses tokenized, read-only route; no private metadata is exposed.
- PDF export renders aggregate-only proof.
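The "aggregate to UTC day buckets" step can be sketched as a pure function. The `RawEvent`/`DayAggregate` types and function name below are illustrative assumptions; the real worker derives events from provider APIs:

```typescript
// Group provider events into per-UTC-day aggregate counts (illustrative sketch).
type RawEvent = { occurredAt: string; kind: "commit" | "merge" | "pr" | "pipeline" };
type DayAggregate = { commitCount: number; mergeCount: number; prCount: number; pipelineCount: number };

function aggregateByUtcDay(events: RawEvent[]): Map<string, DayAggregate> {
  const buckets = new Map<string, DayAggregate>();
  for (const e of events) {
    // Truncate the timestamp to its UTC calendar date (YYYY-MM-DD).
    const day = new Date(e.occurredAt).toISOString().slice(0, 10);
    const agg = buckets.get(day) ?? { commitCount: 0, mergeCount: 0, prCount: 0, pipelineCount: 0 };
    if (e.kind === "commit") agg.commitCount++;
    else if (e.kind === "merge") agg.mergeCount++;
    else if (e.kind === "pr") agg.prCount++;
    else agg.pipelineCount++;
    buckets.set(day, agg);
  }
  return buckets;
}
```

Normalizing to UTC means an event at `23:30-02:00` lands in the next UTC day, which keeps aggregates consistent regardless of the user's timezone.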
```mermaid
sequenceDiagram
  autonumber
  participant UI as Browser UI
  participant API as Next.js API
  participant Q as BullMQ/Redis
  participant W as Worker
  participant P as Provider APIs
  participant DB as Postgres
  participant SSE as SSE Stream
  UI->>API: POST /api/sync (or /api/sync/backfill)
  API->>Q: enqueue sync-user {userId, options}
  API-->>UI: 200 {ok:true}
  W->>Q: consume job
  W->>SSE: publish sync_started
  SSE-->>UI: event: sync_started
  W->>DB: load user + integrations
  loop each provider integration
    W->>DB: set syncState=RUNNING
    W->>P: fetch paginated events with retry/rate pacing
    W->>W: aggregate events by UTC day
    W->>DB: upsert DailyActivity(user,provider,date)
    W->>DB: set syncState=IDLE,lastSyncedAt
  end
  W->>SSE: publish sync_completed
  SSE-->>UI: event: sync_completed
  UI->>API: refresh dashboard data
```
- Retries transient failures (`429`, `5xx`) with exponential backoff.
- Supports both page-based and continuation-token pagination.
- Applies a per-host minimum interval (`minIntervalMs`) between outbound requests.
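The per-host pacing can be reduced to computing how long to wait before the next outbound call. A minimal sketch, assuming an illustrative helper name and map-based state (not the actual implementation):

```typescript
// Track when the last paced request to each host fired and compute the delay
// needed so calls stay at least minIntervalMs apart. Names are illustrative.
const lastSlotAt = new Map<string, number>();

function reservePaceSlot(host: string, minIntervalMs: number, nowMs: number): number {
  const last = lastSlotAt.get(host);
  // Delay just enough so this request fires minIntervalMs after the last one.
  const delayMs = last === undefined ? 0 : Math.max(0, last + minIntervalMs - nowMs);
  lastSlotAt.set(host, nowMs + delayMs); // the time this paced request will fire
  return delayMs;
}
```

The caller would `await new Promise(r => setTimeout(r, delayMs))` before issuing the request.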
- Backfill submits a fixed year range (`from=Jan 1`, `to=Dec 31`) as job options.
- Jobs can be listed, retried, deleted, or cleaned from the dashboard.
```mermaid
erDiagram
  User ||--o{ Integration : has
  User ||--o{ DailyActivity : has
  User ||--o{ ManualHighlight : has
  User ||--o{ PublicShare : has
  User ||--o{ SyncJob : has
  User {
    string id PK
    string supabaseUserId UK
    string email
  }
  Integration {
    string id PK
    string userId FK
    enum provider
    string encryptedToken
    string tokenIv
    string tokenTag
    string gitlabBaseUrl
    string azureOrg
    string[] authorEmails
    enum syncState
    datetime lastSyncedAt
  }
  DailyActivity {
    string id PK
    string userId FK
    enum provider
    datetime date
    int commitCount
    int mergeCount
    int prCount
    int pipelineCount
  }
  ManualHighlight {
    string id PK
    string userId FK
    datetime date
    string note
  }
  PublicShare {
    string id PK
    string userId FK
    string token UK
    datetime expiresAt
    datetime revokedAt
  }
  SyncJob {
    string id PK
    string userId FK
    enum provider NULL
    datetime from NULL
    datetime to NULL
    int backfillYear NULL
    enum status
    int attemptCount
    int maxAttempts
    datetime availableAt
    datetime lockedAt NULL
    datetime startedAt NULL
    datetime finishedAt NULL
    string errorMessage NULL
    datetime createdAt
    datetime updatedAt
  }
```
- `Integration` keeps encrypted secrets only.
- `DailyActivity` is aggregate-only.
- `SyncJob` stores orchestration metadata only (no raw provider payloads, no repo names, no commit messages).
- No model stores repository names or commit messages.
- `QUEUED`: job waiting for processing (`availableAt <= now` means runnable).
- `RUNNING`: worker holds the lock and is executing the provider sync.
- `COMPLETED`: sync finished successfully, `finishedAt` set.
- `FAILED`: retries exhausted or non-retryable failure, `errorMessage` recorded.
Retry/backoff behavior:
- `attemptCount` increments when a worker locks a queued job.
- If `attemptCount < maxAttempts`, the job is re-queued with a future `availableAt`.
- Backoff is exponential and capped (the current implementation caps at 60 seconds).
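The capped exponential backoff above can be sketched as a single function. The base delay of 1 second and jitter-free doubling are assumptions; the 60-second cap matches the description:

```typescript
// Compute the re-queue delay for a failed sync job: exponential in the
// attempt count, capped at 60 seconds. Base delay is an illustrative assumption.
function backoffMs(attemptCount: number, baseMs = 1000, capMs = 60_000): number {
  // attempt 1 -> 2s, attempt 2 -> 4s, attempt 3 -> 8s, ... capped at 60s
  return Math.min(capMs, baseMs * 2 ** attemptCount);
}
```

The worker would then set `availableAt = new Date(Date.now() + backoffMs(job.attemptCount))` when re-queueing.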
- `/api/integrations/*` -> connect/update/disconnect providers
- `/api/sync` + `/api/sync/backfill*` -> queue operations
- `/api/sync/events` -> SSE stream for realtime sync status
- `/api/share` -> create/revoke public report links
- `/api/highlights` -> manual highlight CRUD (currently create)
- `/api/report/pdf/[token]` -> PDF export
- `/api/account/delete` -> account/data deletion
- React Query handles all client-initiated API communication.
- Mutations update local UI state and trigger selective refresh.
- Zustand handles non-server UI state (chart filters/year, etc.).
- Algorithm: AES-256-GCM
- Key source: `MASTER_KEY` env var (base64, 32 bytes)
- Tokens are encrypted on write and decrypted only in the worker sync path.
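A minimal sketch of the AES-256-GCM scheme, using Node's built-in `crypto` module and storing ciphertext, IV, and auth tag separately (mirroring the `encryptedToken`/`tokenIv`/`tokenTag` columns); function names are illustrative:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a provider token with AES-256-GCM using a random 12-byte IV.
function encryptToken(plaintext: string, masterKey: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    encryptedToken: ciphertext.toString("base64"),
    tokenIv: iv.toString("base64"),
    tokenTag: cipher.getAuthTag().toString("base64"), // GCM authentication tag
  };
}

// Decrypt; throws if the key, IV, or auth tag does not match (tamper detection).
function decryptToken(
  rec: { encryptedToken: string; tokenIv: string; tokenTag: string },
  masterKey: Buffer
): string {
  const decipher = createDecipheriv("aes-256-gcm", masterKey, Buffer.from(rec.tokenIv, "base64"));
  decipher.setAuthTag(Buffer.from(rec.tokenTag, "base64"));
  return Buffer.concat([
    decipher.update(Buffer.from(rec.encryptedToken, "base64")),
    decipher.final(),
  ]).toString("utf8");
}
```

GCM's auth tag means a tampered ciphertext fails loudly at decrypt time rather than yielding garbage.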
- All structured logs go through sanitization.
- Token-like values are redacted before output.
- Provider secrets are never sent to browser.
- Sensitive operations run in server routes/worker only.
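Token redaction before logging can be sketched as a pattern-based sanitizer. The specific patterns below (GitLab/GitHub PAT prefixes, bearer headers) are illustrative assumptions, not the project's actual rule set:

```typescript
// Redact token-like values from a log message before output.
// Patterns are illustrative examples of common credential shapes.
const TOKEN_PATTERNS: RegExp[] = [
  /glpat-[A-Za-z0-9_-]+/g,  // GitLab personal access tokens
  /ghp_[A-Za-z0-9]+/g,      // GitHub personal access tokens
  /Bearer\s+\S+/g,          // Authorization headers
];

function sanitizeLog(message: string): string {
  return TOKEN_PATTERNS.reduce((msg, re) => msg.replace(re, "[REDACTED]"), message);
}
```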
- Tokenized URL
- Optional expiration timestamp
- Revocation support
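The share-link properties above can be sketched with Node's `crypto` module. Field names mirror the `PublicShare` model; the helper names and 24-byte token size are illustrative assumptions:

```typescript
import { randomBytes } from "node:crypto";

// Create an unguessable, URL-safe share token with an optional expiry.
function createShareToken(ttlDays?: number) {
  return {
    token: randomBytes(24).toString("base64url"), // 192 bits of entropy, URL-safe
    expiresAt: ttlDays ? new Date(Date.now() + ttlDays * 86_400_000) : null,
  };
}

// A share is active unless it has been revoked or has expired.
function isShareActive(
  share: { expiresAt: Date | null; revokedAt: Date | null },
  now = new Date()
): boolean {
  if (share.revokedAt) return false;
  return share.expiresAt === null || share.expiresAt > now;
}
```

Revocation would simply set `revokedAt`, so old links die immediately without deleting the row.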
- Primary tenant key is `userId`.
- Unique constraints enforce per-tenant isolation (`userId + provider`, `userId + provider + date`).
- Every query/mutation path uses the authenticated `appUser.id`.
- Supports `gitlab.com` and self-hosted base URLs.
- Uses the events API to derive commit/merge/pipeline aggregates.
- Requires PAT + organization name.
- Traverses projects/repositories/commits.
- Skips inaccessible repos (403/404) without failing whole sync.
- Uses user/repos, commits, pulls, and workflow runs endpoints.
- Handles empty/inaccessible repositories gracefully (e.g., 409/404/403 skip paths).
- Node.js 20+ (project currently targets modern runtime)
- PostgreSQL
- Redis
- Supabase project (URL + anon + service role key)
Copy `.env.example` to `.env.local` and set:

- `DATABASE_URL`
- `DIRECT_URL`
- `NEXT_PUBLIC_SUPABASE_URL`
- `NEXT_PUBLIC_SUPABASE_ANON_KEY`
- `SUPABASE_SERVICE_ROLE_KEY`
- `MASTER_KEY` (base64-encoded 32-byte key)
- `REDIS_URL`
- `SYNC_QUEUE_BACKEND` (`bull` or `supabase`)
- `CRON_SECRET` (required when `SYNC_QUEUE_BACKEND=supabase`)
- `APP_URL`
- `NEXT_PUBLIC_APP_NAME` (optional, default `ContributionPulse`)
- `NEXT_PUBLIC_APP_SLUG` (optional, default derived from the app name)
Generate `MASTER_KEY`:

```bash
openssl rand -base64 32
```

Install dependencies and prepare the database:

```bash
npm install
npm run prisma:generate
npx prisma migrate deploy
```

Run the app and the worker (separate terminals):

```bash
npm run dev
npm run worker
npm run worker:nightly
```

`worker:nightly` registers the repeatable nightly sync schedule; run it once per environment.
Replace the BullMQ/Redis queue in low-cost environments:

- Set:

```bash
SYNC_QUEUE_BACKEND=supabase
CRON_SECRET=<strong-random-secret>
```

- Do not run `npm run worker` for queue processing.
- Trigger queue processing by calling:

```http
POST /api/internal/sync/process
Authorization: Bearer <CRON_SECRET>
```

Process manually in dev:

```bash
npm run queue:process
```

- Schedule this endpoint with Supabase Cron (example: every minute):
```sql
select
  cron.schedule(
    'process-contribution-sync-queue',
    '* * * * *',
    $$
    select
      net.http_post(
        url := 'https://contribution-pulse.vercel.app/api/internal/sync/process?limit=10',
        headers := jsonb_build_object(
          'Content-Type', 'application/json',
          'Authorization', 'Bearer CRON_SECRET'
        ),
        body := '{}'::jsonb
      );
    $$
  );
```

This keeps the BullMQ code in the project for future scale while allowing Redis-free operation today.
```bash
npm test
```

Current automated tests include:
- encryption helper correctness (AES-256-GCM)
- daily aggregation/upsert logic
- Web service: Next.js app
- Worker service: BullMQ worker process
- Scheduler service/job: `worker:nightly` (or a one-time bootstrap)
- Managed Postgres (e.g., Supabase Postgres)
- Managed Redis
Set these in GitHub repo settings -> Secrets and variables -> Actions:
- `VERCEL_TOKEN`
- `VERCEL_ORG_ID`
- `VERCEL_PROJECT_ID`
- `DATABASE_URL`
- `DIRECT_URL`
- `NEXT_PUBLIC_SUPABASE_URL`
- `NEXT_PUBLIC_SUPABASE_ANON_KEY`
- `SUPABASE_SERVICE_ROLE_KEY`
- `MASTER_KEY`
- `REDIS_URL`
- `APP_URL`
- `Dockerfile` -> production image for the Next.js web app
- `docker-compose.yml` -> web + worker + Redis local/prod-like stack
- `.dockerignore` -> optimized build context
```mermaid
flowchart LR
  U[Browser] --> WEB[web container<br/>Next.js]
  WEB --> DB[(Postgres)]
  WEB --> R[(Redis)]
  WEB --> SA[Supabase Auth]
  WEB --> Q[(BullMQ queue)]
  WRK[worker container<br/>npm run worker] --> Q
  WRK --> DB
  WRK --> GL[GitLab API]
  WRK --> AZ[Azure DevOps API]
  WRK --> GH[GitHub API]
  WRK --> R
```
```bash
docker compose up --build -d
```

Services:

- `web` -> app on `http://localhost:3000`
- `worker` -> background sync worker
- `redis` -> queue broker + pub/sub
Stop:

```bash
docker compose down
```

- `docker-compose.yml` uses `.env` for app secrets.
- Keep `DATABASE_URL` pointing to your managed Postgres (or add a Postgres service if desired).
- Run `npm run worker:nightly` once per environment to register the repeatable nightly sync schedule.