Part of Connapse — open-source AI knowledge management platform.
This guide covers deploying Connapse in various environments.
- Quick Start (Docker Compose)
- First-Time Setup
- Local Development
- Production Deployment
- Configuration Reference
- Backup and Restore
- Troubleshooting
The fastest way to run Connapse with all dependencies.
- Docker 24+ and Docker Compose 2.20+
- Git (to clone the repository)
- 8GB RAM minimum (16GB recommended for Ollama)
- 20GB disk space (for databases, models, and uploaded files)
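A quick pre-flight check of the RAM and disk requirements above (sketch; Linux-only, since it reads /proc/meminfo):

```shell
# Print total RAM in GB and free disk on the current volume
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "RAM: $((total_kb / 1024 / 1024)) GB"
df -h . | awk 'NR==2 {print "Free disk on this volume: " $4}'
```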
- Clone the repository:
git clone https://github.com/Destrayon/Connapse.git
cd Connapse
- Create environment file:
cp .env.example .env
Edit .env and set your passwords:
POSTGRES_PASSWORD=your_secure_password_here
MINIO_ROOT_USER=connapse_admin
MINIO_ROOT_PASSWORD=your_secure_minio_password_here
# Auth (required for v0.2.0+)
CONNAPSE_ADMIN_EMAIL=admin@example.com
CONNAPSE_ADMIN_PASSWORD=YourSecureAdminPassword123!
# JWT signing secret — generate with: openssl rand -base64 64
Identity__Jwt__Secret=replace-with-a-long-random-secret-at-least-64-chars
Important: Identity__Jwt__Secret must be at least 32 characters and kept secret. It signs all JWT tokens.
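A quick sketch for generating a secret and confirming it clears the 32-character minimum (this uses /dev/urandom plus base64; `openssl rand -base64 64` from the comment above works equally well):

```shell
# Generate a 64-character random secret (48 bytes -> 64 base64 chars, no padding)
SECRET=$(head -c 48 /dev/urandom | base64 | tr -d '=\n')
# Enforce the documented 32-character minimum before using it
if [ "${#SECRET}" -ge 32 ]; then
  echo "OK: secret is ${#SECRET} characters"
else
  echo "Secret too short, regenerate" >&2
  exit 1
fi
```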
- Start services:
# Without Ollama (use external embedding/LLM services)
docker compose up -d
# With Ollama (includes local embedding + LLM)
docker compose --profile with-ollama up -d
Compose file layout: docker-compose.yml is the production base (no host-exposed ports, services isolated on an internal network). docker-compose.dev.yml is a development overlay — see Local Development.
- Wait for services to be ready:
docker compose ps
# All services should show "healthy" status
- Pull Ollama models (if using Ollama):
docker compose exec ollama ollama pull nomic-embed-text
docker compose exec ollama ollama pull llama2  # Optional: for LLM features
- Initialize the database:
The database schema is created automatically on first startup via EF Core migrations.
- Access the application:
- Web UI: http://localhost:5001 (shows the container list on the home page)
- MinIO Console: http://localhost:9001 (login: connapse_admin / your password)
- Ollama API: http://localhost:11434
- Open http://localhost:5001 — you are redirected to the login page
- If CONNAPSE_ADMIN_EMAIL / CONNAPSE_ADMIN_PASSWORD were set, log in with those credentials
- If no env vars were set, the login page shows a First-Time Setup form — create the admin account there
- After login, open Settings and click "Test Connection" for PostgreSQL, MinIO, and Ollama
- All tests should show ✅ Success
After starting services for the first time, you need to create the initial admin account. This applies to both Docker Compose and manual deployments.
If CONNAPSE_ADMIN_EMAIL and CONNAPSE_ADMIN_PASSWORD environment variables are set, the admin account is seeded automatically on first startup. Otherwise:
- Open the Web UI (e.g., http://localhost:5001) — you are redirected to the login page
- When no users exist in the database, the login page automatically shows a First-Time Setup form instead of the standard login
- Fill in your email and password to create the admin account
- The first user created this way is assigned the Admin role
After the initial admin account exists, registration is invite-only:
- Log in as an admin and navigate to /admin/users
- Create an invitation for the new user
- Share the generated link with the user — it follows the format /register?token=<token>
- The invited user visits that link to set their password and complete registration
There is no open registration endpoint. All new users must be invited by an admin.
Run the application directly on your machine for development.
- .NET 10 SDK (Download)
- Docker & Docker Compose (for backing services)
- Ollama (optional — run natively or via Docker)
Run infrastructure in Docker, application in .NET runtime for hot reload and debugging.
- Start backing services using the dev override (exposes ports to the host):
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
Why the extra -f? docker-compose.yml is the production base — it keeps all services on an internal network with no exposed ports. docker-compose.dev.yml overlays port bindings so dotnet run can reach PostgreSQL (5432), MinIO (9000/9001), and Ollama (11434) on localhost.
- Run the application (appsettings.json already points at localhost defaults):
dotnet run --project src/Connapse.Web
- Open your browser: https://localhost:5001
Install all dependencies on your machine.
- Install PostgreSQL 17:
  - macOS: brew install postgresql@17
  - Ubuntu: sudo apt install postgresql-17
  - Windows: Download installer
- Install pgvector extension:
# macOS/Linux
brew install pgvector  # or build from source
sudo apt install postgresql-17-pgvector  # Ubuntu
# Windows: Download from https://github.com/pgvector/pgvector/releases
- Create database:
psql -U postgres
CREATE DATABASE connapse;
CREATE USER connapse WITH PASSWORD 'connapse_dev';
GRANT ALL PRIVILEGES ON DATABASE connapse TO connapse;
\c connapse
CREATE EXTENSION IF NOT EXISTS vector;
\q
- Install MinIO:
# macOS
brew install minio
# Linux
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo mv minio /usr/local/bin/
# Windows: Download from https://min.io/download
- Start MinIO:
mkdir -p ~/minio/data
export MINIO_ROOT_USER=connapse_dev
export MINIO_ROOT_PASSWORD=connapse_dev_secret
minio server ~/minio/data --console-address ":9001"
- Install Ollama: https://ollama.ai/download
- Pull models:
ollama pull nomic-embed-text
ollama pull llama2
- Verify:
ollama list
- Run the application:
dotnet run --project src/Connapse.Web
Recommended IDE:
- Visual Studio 2022 (17.8+) with .NET 10 workload
- JetBrains Rider 2024.1+
- VS Code with C# Dev Kit
Useful Commands:
# Build
dotnet build
# Run tests
dotnet test
# Run specific test category
dotnet test --filter "Category=Unit"
dotnet test --filter "Category=Integration"
# Watch mode (auto-rebuild on changes)
dotnet watch --project src/Connapse.Web
# EF Core migrations — Storage DbContext (documents, containers, etc.)
dotnet ef migrations add MigrationName --project src/Connapse.Storage --startup-project src/Connapse.Web
# EF Core migrations — Identity DbContext (users, PATs, audit logs, agents)
dotnet ef migrations add MigrationName --project src/Connapse.Identity --startup-project src/Connapse.Web
# Format code
dotnet format
Install the standalone connapse-cli and point it at your local dev server:
dotnet tool install -g Connapse.CLI
connapse auth login --server https://localhost:5001
NEVER commit secrets to git. Use environment variables or secret managers.
Azure Key Vault:
# Install provider
dotnet add package Azure.Extensions.AspNetCore.Configuration.Secrets
# Configure in Program.cs
builder.Configuration.AddAzureKeyVault(
    new Uri("https://your-keyvault.vault.azure.net/"),
    new DefaultAzureCredential());
AWS Secrets Manager:
dotnet add package Amazon.Extensions.Configuration.SystemsManager
builder.Configuration.AddSystemsManager("/connapse/production");
Environment Variables (Kubernetes, Docker):
# Kubernetes Secret
apiVersion: v1
kind: Secret
metadata:
  name: connapse-secrets
type: Opaque
data:
  postgres-password: <base64-encoded-password>
  minio-secret-key: <base64-encoded-key>
Option A: Reverse Proxy (Recommended)
Use nginx or Traefik with Let's Encrypt:
# /etc/nginx/sites-available/connapse
server {
    listen 443 ssl http2;
    server_name connapse.example.com;

    ssl_certificate /etc/letsencrypt/live/connapse.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/connapse.example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:5001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # WebSocket support (SignalR)
    location /hubs/ {
        proxy_pass http://localhost:5001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Option B: Kestrel Direct HTTPS
# Generate self-signed cert (dev only)
dotnet dev-certs https -ep ./cert.pfx -p YourPassword
# Production: Use real certificate
export ASPNETCORE_Kestrel__Certificates__Default__Path=/path/to/cert.pfx
export ASPNETCORE_Kestrel__Certificates__Default__Password=YourPassword
export ASPNETCORE_URLS="https://+:443;http://+:80"
-- Create read-only user for analytics
CREATE ROLE connapse_readonly;
GRANT CONNECT ON DATABASE connapse TO connapse_readonly;
GRANT USAGE ON SCHEMA public TO connapse_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO connapse_readonly;
-- Create app user with limited permissions
CREATE USER connapse_app WITH PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE connapse TO connapse_app;
GRANT USAGE, CREATE ON SCHEMA public TO connapse_app;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO connapse_app;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO connapse_app;
-- Enable SSL
ALTER SYSTEM SET ssl = on;
SELECT pg_reload_conf();
Connection string with SSL:
Host=postgres.example.com;Database=connapse;Username=connapse_app;Password=***;SSL Mode=Require;Trust Server Certificate=false
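Before shipping a connection string, it is worth asserting that it actually demands TLS; a minimal sketch (plain string check, nothing Connapse-specific):

```shell
# Fail fast if the connection string does not require TLS
CONN='Host=postgres.example.com;Database=connapse;Username=connapse_app;Password=***;SSL Mode=Require;Trust Server Certificate=false'
case "$CONN" in
  *'SSL Mode=Require'*) echo 'TLS required: OK' ;;
  *) echo 'WARNING: connection string does not require TLS' >&2; exit 1 ;;
esac
```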
# Create dedicated user (not root)
mc admin user add myminio connapse_app <secure-password>
# Create policy with minimal permissions
cat > /tmp/connapse-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::knowledge-files/*",
        "arn:aws:s3:::knowledge-files"
      ]
    }
  ]
}
EOF
mc admin policy create myminio connapse-policy /tmp/connapse-policy.json
mc admin policy attach myminio connapse-policy --user connapse_app
Multi-stage build for minimal image size:
# Build stage
FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
WORKDIR /src
# Copy project files
COPY ["src/Connapse.Web/Connapse.Web.csproj", "Connapse.Web/"]
COPY ["src/Connapse.Core/Connapse.Core.csproj", "Connapse.Core/"]
COPY ["src/Connapse.Identity/Connapse.Identity.csproj", "Connapse.Identity/"]
COPY ["src/Connapse.Ingestion/Connapse.Ingestion.csproj", "Connapse.Ingestion/"]
COPY ["src/Connapse.Search/Connapse.Search.csproj", "Connapse.Search/"]
COPY ["src/Connapse.Storage/Connapse.Storage.csproj", "Connapse.Storage/"]
# Restore dependencies
RUN dotnet restore "Connapse.Web/Connapse.Web.csproj"
# Copy everything else
COPY src/ .
# Build and publish
WORKDIR "/src/Connapse.Web"
RUN dotnet publish -c Release -o /app/publish --no-restore
# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:10.0 AS runtime
WORKDIR /app
# Create non-root user, copy the published output owned by it, then drop privileges
RUN groupadd -r connapse && useradd -r -g connapse connapse
COPY --from=build --chown=connapse:connapse /app/publish .
USER connapse
EXPOSE 8080
ENTRYPOINT ["dotnet", "Connapse.Web.dll"]
Build and push:
docker build -t yourregistry/connapse:v1.0.0 .
docker push yourregistry/connapse:v1.0.0
# docker-compose.prod.yml
services:
  postgres:
    image: pgvector/pgvector:pg17
    environment:
      POSTGRES_DB: connapse
      POSTGRES_USER: connapse_app
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password
    secrets:
      - postgres_password
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - backend
    restart: unless-stopped
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER_FILE: /run/secrets/minio_user
      MINIO_ROOT_PASSWORD_FILE: /run/secrets/minio_password
    secrets:
      - minio_user
      - minio_password
    volumes:
      - miniodata:/data
    networks:
      - backend
    restart: unless-stopped
  web:
    image: yourregistry/connapse:v1.0.0
    environment:
      ASPNETCORE_ENVIRONMENT: Production
      ConnectionStrings__DefaultConnection: "Host=postgres;Database=connapse;Username=connapse_app;Password_FILE=/run/secrets/postgres_password;SSL Mode=Require"
      Knowledge__Storage__MinIO__Endpoint: "minio:9000"
      Knowledge__Storage__MinIO__AccessKey_FILE: /run/secrets/minio_user
      Knowledge__Storage__MinIO__SecretKey_FILE: /run/secrets/minio_password
    secrets:
      - postgres_password
      - minio_user
      - minio_password
    networks:
      - frontend
      - backend
    depends_on:
      - postgres
      - minio
    restart: unless-stopped
  nginx:
    image: nginx:alpine
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    networks:
      - frontend
    depends_on:
      - web
    restart: unless-stopped
secrets:
  postgres_password:
    file: ./secrets/postgres_password.txt
  minio_user:
    file: ./secrets/minio_user.txt
  minio_password:
    file: ./secrets/minio_password.txt
networks:
  frontend:
  backend:
volumes:
  pgdata:
  miniodata:
All settings can be overridden via environment variables using the hierarchy:
appsettings.json < appsettings.{Env}.json < User Secrets < Environment Variables < CLI Args
Use double underscores __ to represent nested keys:
# appsettings.json:
# {
# "Knowledge": {
# "Embedding": {
# "BaseUrl": "http://ollama:11434"
# }
# }
# }
# Equivalent environment variable:
export Knowledge__Embedding__BaseUrl="http://ollama:11434"

| Variable | Description | Default |
|---|---|---|
| `ASPNETCORE_ENVIRONMENT` | Environment (Development, Staging, Production) | `Production` |
| `ASPNETCORE_URLS` | Bind URLs | `http://+:8080` |
| `ConnectionStrings__DefaultConnection` | PostgreSQL connection string | (required) |
| `Knowledge__Storage__MinIO__Endpoint` | MinIO endpoint | `minio:9000` |
| `Knowledge__Storage__MinIO__AccessKey` | MinIO access key | (required) |
| `Knowledge__Storage__MinIO__SecretKey` | MinIO secret key | (required) |
| `Knowledge__Storage__MinIO__UseSSL` | Use HTTPS for MinIO | `false` |
| `Knowledge__Storage__MinIO__BucketName` | MinIO bucket name | `knowledge-files` |
| `Knowledge__Embedding__BaseUrl` | Ollama/OpenAI base URL | `http://ollama:11434` |
| `Knowledge__Embedding__Model` | Embedding model name | `nomic-embed-text` |
| `Knowledge__Embedding__Dimensions` | Expected vector dimensions | `768` |
| `Knowledge__Chunking__Strategy` | Default chunking strategy | `Semantic` |
| `Knowledge__Chunking__MaxTokens` | Max tokens per chunk | `512` |
| `Knowledge__Chunking__Overlap` | Overlap tokens between chunks | `50` |
| `Knowledge__Search__Mode` | Default search mode | `Hybrid` |
| `Knowledge__Search__TopK` | Default result count | `10` |
| `Knowledge__Search__FusionMethod` | Fusion method: ConvexCombination or DBSF | `ConvexCombination` |
| `Knowledge__Search__FusionAlpha` | Semantic weight (0.0-1.0) | `0.5` |
| `Knowledge__Search__AutoCut` | Auto-trim after largest score gap | `false` |
| `Knowledge__Search__MinimumScore` | Minimum similarity score floor | `0` |
| `Knowledge__Upload__MaxFileSizeBytes` | Max upload size | `104857600` (100MB) |
| `Knowledge__Upload__ConcurrentIngestions` | Parallel ingestion workers | `4` |
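The double-underscore flattening rule described above can be sketched as a tiny helper (the function name `to_env_var` is illustrative, not part of Connapse):

```shell
# Convert a colon-delimited .NET config path to its environment-variable form
to_env_var() {
  printf '%s\n' "$1" | sed 's/:/__/g'
}
to_env_var 'Knowledge:Embedding:BaseUrl'        # prints Knowledge__Embedding__BaseUrl
to_env_var 'Knowledge:Upload:MaxFileSizeBytes'  # prints Knowledge__Upload__MaxFileSizeBytes
```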
| Variable | Description | Default |
|---|---|---|
| `CONNAPSE_ADMIN_EMAIL` | Admin email for auto-seed on first startup | (optional) |
| `CONNAPSE_ADMIN_PASSWORD` | Admin password for auto-seed on first startup | (optional) |
| `Identity__Jwt__Secret` | HS256 JWT signing secret (min 32 chars) | (required in production) |
| `Identity__Jwt__Issuer` | JWT issuer claim | `Connapse` |
| `Identity__Jwt__Audience` | JWT audience claim | `Connapse` |
| `Identity__Jwt__AccessTokenExpiryMinutes` | JWT access token lifetime | `90` |
| `Identity__Jwt__RefreshTokenExpiryDays` | JWT refresh token lifetime | `30` |
| `Identity__Cookie__SlidingExpirationDays` | Cookie session lifetime | `14` |
| Variable | Description | Default |
|---|---|---|
| `Knowledge__Llm__Provider` | LLM provider (Ollama, OpenAI, AzureOpenAI, Anthropic) | `Ollama` |
| `Knowledge__Llm__BaseUrl` | Ollama/OpenAI base URL | `http://ollama:11434` |
| `Knowledge__Llm__Model` | LLM model name | `llama2` |
| `Knowledge__Llm__ApiKey` | API key for OpenAI/AzureOpenAI/Anthropic | (optional) |
| `Knowledge__Llm__AzureDeploymentName` | Azure OpenAI deployment name | (optional) |
| `Knowledge__Llm__Temperature` | Generation temperature | `0.7` |
| `Knowledge__Llm__MaxTokens` | Max output tokens | `2048` |
| Variable | Description | Default |
|---|---|---|
| `Knowledge__Embedding__Provider` | Embedding provider (Ollama, OpenAI, AzureOpenAI) | `Ollama` |
| `Knowledge__Embedding__ApiKey` | API key for OpenAI/AzureOpenAI | (optional) |
| `Knowledge__Embedding__AzureDeploymentName` | Azure OpenAI deployment name | (optional) |
| Variable | Description | Default |
|---|---|---|
| `Identity__AwsSso__IssuerUrl` | AWS IAM Identity Center issuer URL | (optional) |
| `Identity__AwsSso__Region` | AWS region for SSO | (optional) |
| `Identity__AzureAd__ClientId` | Azure AD app registration client ID | (optional) |
| `Identity__AzureAd__TenantId` | Azure AD tenant ID | (optional) |
| `Identity__AzureAd__ClientSecret` | Azure AD client secret (confidential client) | (optional) |
Note: Search is now scoped to containers. There is no global search endpoint; all search requests require a container ID.
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning",
      "Npgsql": "Warning"
    }
  },
  "ConnectionStrings": {
    "DefaultConnection": "Host=postgres;Database=connapse;Username=connapse;Password=***"
  },
  "Knowledge": {
    "Embedding": {
      "Provider": "Ollama",
      "BaseUrl": "http://ollama:11434",
      "Model": "nomic-embed-text",
      "Dimensions": 768,
      "Timeout": 30,
      "BatchSize": 4
    },
    "Chunking": {
      "Strategy": "Semantic",
      "MaxTokens": 512,
      "Overlap": 50,
      "SimilarityThreshold": 0.8
    },
    "Search": {
      "Mode": "Hybrid",
      "TopK": 10,
      "FusionMethod": "ConvexCombination",
      "FusionAlpha": 0.5,
      "AutoCut": false
    },
    "Llm": {
      "Provider": "Ollama",
      "BaseUrl": "http://ollama:11434",
      "Model": "llama2",
      "Temperature": 0.7,
      "MaxTokens": 2048
    },
    "Storage": {
      "VectorStoreProvider": "PgVector",
      "DocumentStoreProvider": "Postgres",
      "FileStorageProvider": "MinIO",
      "MinioEndpoint": "minio:9000",
      "MinioAccessKey": "connapse_dev",
      "MinioSecretKey": "connapse_dev_secret",
      "MinioUseSSL": false,
      "MinioBucketName": "connapse-files"
    },
    "Upload": {
      "MaxFileSizeBytes": 104857600,
      "AllowedExtensions": [".txt", ".md", ".pdf", ".docx", ".pptx", ".csv", ".json", ".xml", ".yaml"],
      "ConcurrentIngestions": 4,
      "QueueCapacity": 1000
    }
  },
  "Identity": {
    "AwsSso": {
      "IssuerUrl": "",
      "Region": "us-east-1"
    },
    "AzureAd": {
      "ClientId": "",
      "TenantId": "",
      "ClientSecret": ""
    }
  }
}
Manual Backup:
# Dump entire database
docker compose exec -T postgres pg_dump -U connapse connapse > backup_$(date +%Y%m%d_%H%M%S).sql
# Dump specific tables
docker compose exec -T postgres pg_dump -U connapse connapse -t documents -t chunks > backup_tables.sql
# Custom format (compressed)
docker compose exec -T postgres pg_dump -U connapse -Fc connapse > backup.dump
Automated Backup (cron):
# /etc/cron.d/connapse-backup
0 2 * * * docker compose -f /opt/connapse/docker-compose.yml exec -T postgres pg_dump -U connapse -Fc connapse > /backups/connapse_$(date +\%Y\%m\%d).dump
Restore:
# From SQL file
docker compose exec -T postgres psql -U connapse connapse < backup.sql
# From custom format
docker compose exec -T postgres pg_restore -U connapse -d connapse < backup.dump
Using mc (MinIO Client):
# Install mc
brew install minio/stable/mc # macOS
# Or download from https://min.io/docs/minio/linux/reference/minio-mc.html
# Configure alias
mc alias set myminio http://localhost:9000 connapse_dev connapse_dev_secret
# Mirror to local directory
mc mirror myminio/knowledge-files /backups/minio/knowledge-files
# Mirror to S3
mc mirror myminio/knowledge-files s3/my-backup-bucket/connapse-minio/
Automated Backup (cron):
# /etc/cron.d/connapse-minio-backup
0 3 * * * mc mirror myminio/knowledge-files /backups/minio/knowledge-files
#!/bin/bash
# backup.sh
BACKUP_DIR="/backups/connapse/$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BACKUP_DIR"
# Backup PostgreSQL
docker compose exec -T postgres pg_dump -U connapse -Fc connapse > "$BACKUP_DIR/postgres.dump"
# Backup MinIO (incremental)
mc mirror myminio/knowledge-files "$BACKUP_DIR/minio"
# Backup settings and compose files
cp .env "$BACKUP_DIR/"
cp docker-compose.yml "$BACKUP_DIR/"
# Create tarball
tar czf "$BACKUP_DIR.tar.gz" "$BACKUP_DIR"
rm -rf "$BACKUP_DIR"
echo "Backup completed: $BACKUP_DIR.tar.gz"
- Deploy fresh infrastructure (Docker Compose or cloud)
- Restore PostgreSQL:
docker compose exec -T postgres pg_restore -U connapse -d connapse < postgres.dump
- Restore MinIO files:
mc mirror /backups/minio/knowledge-files myminio/knowledge-files
- Restart services:
docker compose restart
Symptoms: Application won't start, logs show Npgsql connection error
Solutions:
# Check if Postgres is running
docker compose ps postgres
# Check logs
docker compose logs postgres
# Test connection
docker compose exec postgres psql -U connapse -d connapse -c "SELECT 1;"
# Verify connection string
docker compose exec web env | grep ConnectionStrings
Symptoms: Upload fails with The specified bucket does not exist
Solutions:
# Create bucket
mc alias set local http://localhost:9000 connapse_dev connapse_dev_secret
mc mb local/knowledge-files
# Or via Docker
docker compose exec minio mc mb /data/knowledge-files
Symptoms: Ingestion fails during embedding phase
Solutions:
# List available models
docker compose exec ollama ollama list
# Pull missing model
docker compose exec ollama ollama pull nomic-embed-text
# Verify model works
curl http://localhost:11434/api/embeddings -d '{
"model": "nomic-embed-text",
"prompt": "test"
}'
Symptoms: Upload returns 429 Too Many Requests
Solutions:
- Wait for current jobs to complete
- Increase queue capacity:
{
  "Knowledge": {
    "Upload": {
      "QueueCapacity": 2000,
      "ConcurrentIngestions": 8
    }
  }
}
Symptoms: Application crashes or becomes unresponsive
Solutions:
- Increase Docker memory limit:
# docker-compose.yml
services:
  web:
    deploy:
      resources:
        limits:
          memory: 4G
- Reduce concurrent ingestions
- Reduce chunk size (smaller chunks mean less memory per operation)
Enable detailed logging:
{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Connapse": "Trace",
      "Microsoft.AspNetCore": "Information"
    }
  }
}
Check service health:
# PostgreSQL
docker compose exec postgres pg_isready -U connapse
# MinIO
curl http://localhost:9000/minio/health/live
# Ollama
curl http://localhost:11434/api/tags
# Application (once implemented)
curl https://localhost:5001/health
-- Increase shared buffers (25% of RAM)
ALTER SYSTEM SET shared_buffers = '2GB';
-- Increase work_mem for complex queries
ALTER SYSTEM SET work_mem = '64MB';
-- Enable parallel queries
ALTER SYSTEM SET max_parallel_workers_per_gather = 4;
-- Optimize for SSD
ALTER SYSTEM SET random_page_cost = 1.1;
-- Reload config
SELECT pg_reload_conf();
-- HNSW index (faster search, more memory)
CREATE INDEX ON chunk_vectors USING hnsw (embedding vector_cosine_ops)
WITH (m = 16, ef_construction = 64);
-- IVFFLAT index (less memory, more setup time)
CREATE INDEX ON chunk_vectors USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);
Prometheus + Grafana dashboard:
# docker-compose.monitoring.yml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - promdata:/prometheus
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: admin
    volumes:
      - grafana-data:/var/lib/grafana
volumes:
  promdata:
  grafana-data:
- Ingestion: docs/min, avg latency, queue depth
- Search: queries/sec, p50/p95/p99 latency, cache hit rate
- Storage: DB size, MinIO storage used, vector index size
- System: CPU, memory, disk I/O, network
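The monitoring compose file above mounts a prometheus.yml; a minimal starting point might look like the following (the `web:8080` target and a /metrics endpoint on the application are assumptions, not confirmed endpoints):

```shell
# Write a minimal Prometheus scrape config next to docker-compose.monitoring.yml
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: connapse
    metrics_path: /metrics
    static_configs:
      - targets: ['web:8080']
EOF
grep -c 'job_name' prometheus.yml   # prints 1
```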
- architecture.md — System architecture
- api.md — API reference
- connectors.md — Connector types and configuration
- aws-sso-setup.md — AWS IAM Identity Center integration
- azure-identity-setup.md — Azure AD OAuth2+PKCE integration
- Docker Compose Docs
- PostgreSQL Docs
- pgvector GitHub
- MinIO Docs