Complete environment variable reference for MetaHuman Engine.
Create `.env.local` in the project root.
| Variable | Default | Description |
|---|---|---|
| `VITE_API_BASE_URL` | (empty) | Backend API URL. Leave empty for mock mode. |
| `VITE_CHAT_TRANSPORT` | `auto` | Transport mode: `http`, `sse`, `websocket`, or `auto` |
| Mode | Description | Use Case |
|---|---|---|
| `auto` | Auto-detect capability (WebSocket → SSE → HTTP) | Recommended |
| `http` | Standard HTTP requests | Simple setups |
| `sse` | Server-Sent Events for streaming | One-way streaming |
| `websocket` | Full-duplex WebSocket | Real-time bidirectional |
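The `auto` fallback order in the table above can be sketched as a simple preference scan. This is an illustration only (written in Python; the real detection happens in the client), and `pick_transport` and its arguments are hypothetical names, not the engine's API:

```python
# Assumed fallback order for `auto`: prefer WebSocket, then SSE, then HTTP.
PREFERENCE = ["websocket", "sse", "http"]

def pick_transport(requested: str, supported: set[str]) -> str:
    """Return the transport to use for a given VITE_CHAT_TRANSPORT value."""
    if requested != "auto":
        return requested  # explicit modes are used as-is
    for mode in PREFERENCE:
        if mode in supported:
            return mode
    return "http"  # last-resort fallback
```

Explicit values bypass detection entirely, which is why `auto` is the recommended default.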
Example:

```bash
# Backend API (leave empty for mock mode)
VITE_API_BASE_URL=http://localhost:8000

# Transport selection
VITE_CHAT_TRANSPORT=auto
```

Create `server/.env` in the server directory.
| Variable | Default | Required | Description |
|---|---|---|---|
| `OPENAI_API_KEY` | (empty) | No | OpenAI API key for AI responses |
| `OPENAI_BASE_URL` | (empty) | No | Custom OpenAI-compatible endpoint |
| `OPENAI_MODEL` | `gpt-3.5-turbo` | No | Model to use |
| `LLM_PROVIDER` | `openai` | No | LLM provider: `openai` or `azure` |
| Variable | Default | Description |
|---|---|---|
| `TTS_PROVIDER` | `edge` | TTS provider: `edge` or `openai` |
| `ASR_PROVIDER` | `whisper` | ASR provider: `whisper` |
| Variable | Default | Description |
|---|---|---|
| `RATE_LIMIT_RPM` | `60` | Requests per minute per IP |
| Variable | Default | Description |
|---|---|---|
| `CORS_ALLOW_ORIGINS` | (empty) | Comma-separated list of allowed origins |

Example:

```bash
CORS_ALLOW_ORIGINS=http://localhost:5173,https://myapp.com
```

| Variable | Default | Description |
|---|---|---|
| `HOST` | `0.0.0.0` | Server bind address |
| `PORT` | `8000` | Server port |
| `LOG_LEVEL` | `info` | Logging level: `debug`, `info`, `warning`, or `error` |
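Resolving these server settings with their documented defaults might look like the sketch below. `server_settings` is a hypothetical helper for illustration; only the variable names, defaults, and the allowed `LOG_LEVEL` values come from the table:

```python
def server_settings(env: dict[str, str]) -> dict[str, object]:
    """Apply the documented defaults for HOST, PORT, and LOG_LEVEL,
    rejecting log levels outside the documented set."""
    level = env.get("LOG_LEVEL", "info").lower()
    if level not in {"debug", "info", "warning", "error"}:
        raise ValueError(f"unsupported LOG_LEVEL: {level}")
    return {
        "host": env.get("HOST", "0.0.0.0"),
        "port": int(env.get("PORT", "8000")),
        "log_level": level,
    }
```

Coercing `PORT` to an integer early surfaces misconfiguration at startup rather than at bind time.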
Example:

```bash
# AI Configuration
OPENAI_API_KEY=sk-your-api-key-here
OPENAI_MODEL=gpt-4

# Rate Limiting
RATE_LIMIT_RPM=100

# CORS
CORS_ALLOW_ORIGINS=http://localhost:5173,https://mydomain.com

# Logging
LOG_LEVEL=info
```

For OpenAI-compatible services such as Azure OpenAI, LocalAI, or Ollama:
```bash
OPENAI_API_KEY=your-key
OPENAI_BASE_URL=https://your-endpoint.com/v1
OPENAI_MODEL=your-model-name
```

Development:
```bash
# .env.local
VITE_API_BASE_URL=http://localhost:8000
VITE_CHAT_TRANSPORT=auto
```

Staging:
```bash
# .env.staging
VITE_API_BASE_URL=https://staging-api.example.com
VITE_CHAT_TRANSPORT=websocket
```

Production:
```bash
# .env.production
VITE_API_BASE_URL=https://api.example.com
VITE_CHAT_TRANSPORT=websocket
```

Use with Vite modes:

```bash
npm run dev -- --mode staging
npm run build -- --mode production
```

Vision processing runs entirely in the browser; no backend configuration is needed.
Browser requirements:
- WebGL 2.0 enabled
- Camera permission granted
- HTTPS or localhost
Uses the Web Speech API (browser-native).
Supported browsers:
- Chrome 25+
- Edge 79+
- Safari 14.1+ (limited)
No backend configuration required.
Edge TTS (Default):
- No configuration needed
- Works offline
- Multiple voices available
OpenAI TTS:
```bash
TTS_PROVIDER=openai
OPENAI_API_KEY=sk-...
```

- Never commit `.env` files to version control
- Use environment-specific keys
- Rotate keys regularly
- Use minimal permissions
- Restrict origins in production:

  ```bash
  # Good
  CORS_ALLOW_ORIGINS=https://myapp.com

  # Bad
  CORS_ALLOW_ORIGINS=*
  ```

- No trailing slashes in origins
- Include protocol (http/https)
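The origin rules above can be enforced up front when splitting the comma-separated value. This is a sketch under the stated rules; `parse_origins` is a hypothetical helper, not part of the server:

```python
def parse_origins(raw: str) -> list[str]:
    """Split CORS_ALLOW_ORIGINS and enforce the documented rules:
    every origin carries a protocol and has no trailing slash."""
    origins = [o.strip() for o in raw.split(",") if o.strip()]
    for origin in origins:
        if not origin.startswith(("http://", "https://")):
            raise ValueError(f"origin missing protocol: {origin}")
        if origin.endswith("/"):
            raise ValueError(f"origin has trailing slash: {origin}")
    return origins
```

Failing fast here matters because browsers compare origins for an exact match, so `https://myapp.com/` silently never matches `https://myapp.com`.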
Adjust based on your infrastructure:

```bash
# Cloud deployment (higher)
RATE_LIMIT_RPM=1000

# Self-hosted (lower)
RATE_LIMIT_RPM=60
```

Frontend (Vite), highest priority first:

- `.env.[mode].local` (highest)
- `.env.[mode]`
- `.env.local`
- `.env` (lowest)

Backend:

- Environment variables (system)
- `.env` file
- Default values (lowest)
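The backend lookup order can be expressed as a scan over layers, stopping at the first one that defines the variable. A sketch only; `resolve` is a hypothetical helper illustrating the precedence list, not the server's loader:

```python
def resolve(name: str, system_env: dict, dotenv: dict, defaults: dict):
    """Look a setting up in precedence order: system environment,
    then the .env file, then the built-in default."""
    for layer in (system_env, dotenv, defaults):
        if name in layer:
            return layer[name]
    return None  # unknown setting
```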
Frontend:

```bash
# Restart dev server
npm run dev
```

Backend:

```bash
# Restart uvicorn
# (Auto-reloads on code changes, not env changes)
```

CORS errors:

- Check `CORS_ALLOW_ORIGINS` for an exact match
- Verify protocol consistency
- Check for trailing slashes
API key errors:

- Verify the key format: `sk-...`
- Check for extra whitespace
- Test with curl:

```bash
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```
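The first two checklist items above can be automated with a small sanity check. `check_api_key` is a hypothetical helper; the `sk-` prefix check reflects the format shown in this document, not official key validation:

```python
def check_api_key(key: str) -> list[str]:
    """Flag the two common key problems from the checklist:
    stray whitespace and an unexpected prefix."""
    problems = []
    if key != key.strip():
        problems.append("extra whitespace")
    if not key.strip().startswith("sk-"):
        problems.append("does not start with sk-")
    return problems
```

Whitespace usually sneaks in when a key is pasted into a `.env` file with a trailing newline or space.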
Mock mode (no backend):

```bash
# .env.local (Frontend)
# Empty = mock mode, no backend needed

# server/.env (Backend)
# Not needed without AI features
```

Local development with AI:

```bash
# .env.local
VITE_API_BASE_URL=http://localhost:8000
VITE_CHAT_TRANSPORT=auto

# server/.env
OPENAI_API_KEY=sk-...
CORS_ALLOW_ORIGINS=http://localhost:5173
RATE_LIMIT_RPM=60
```

Production:

```bash
# .env.production
VITE_API_BASE_URL=https://api.mydomain.com
VITE_CHAT_TRANSPORT=websocket

# server/.env (on server)
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4
CORS_ALLOW_ORIGINS=https://mydomain.com
RATE_LIMIT_RPM=1000
LOG_LEVEL=warning
```
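A pre-deploy script can catch the most common production misconfigurations. This is a sketch under assumptions: the required-variable list and the wildcard check below are suggestions based on the security notes in this document, not an official checklist:

```python
REQUIRED_IN_PROD = ["OPENAI_API_KEY", "CORS_ALLOW_ORIGINS"]  # assumed minimum

def missing_prod_vars(env: dict[str, str]) -> list[str]:
    """Return required production variables that are unset or empty,
    and flag a wildcard CORS origin (disallowed per the security notes)."""
    problems = [k for k in REQUIRED_IN_PROD if not env.get(k)]
    if env.get("CORS_ALLOW_ORIGINS") == "*":
        problems.append("CORS_ALLOW_ORIGINS must not be a wildcard in production")
    return problems
```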