Complete installation instructions for MetaHuman Engine.
Minimum requirements:

| Component | Specification |
|---|---|
| OS | Windows 10+, macOS 11+, or Linux (Ubuntu 20.04+) |
| RAM | 4 GB |
| Disk | 2 GB free space |
| Browser | Chrome 90+, Edge 90+, Firefox 90+, Safari 15+ |

Recommended:

| Component | Specification |
|---|---|
| RAM | 8 GB |
| GPU | WebGL 2.0 compatible (for optimal 3D performance) |
| Network | Broadband (for streaming responses) |
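The disk requirement from the table above can be checked from a shell on Linux/macOS; a minimal sketch using POSIX df:

```shell
# Check free space in the current directory against the 2 GB requirement
free_kb=$(df -Pk . | awk 'NR==2 {print $4}')
required_kb=$((2 * 1024 * 1024))   # 2 GB in KiB
if [ "$free_kb" -ge "$required_kb" ]; then
  echo "disk: ok (${free_kb} KiB free)"
else
  echo "disk: less than 2 GB free"
fi
```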
macOS (Homebrew):

```shell
brew install node@18
```

Ubuntu/Debian:

```shell
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs
```

Windows: Download from nodejs.org (LTS version)

Verify installation:

```shell
node --version   # Should show v18.x.x
npm --version    # Should show 9.x.x or higher
```

```shell
# Clone the repository
git clone https://github.com/LessUp/meta-human.git
cd meta-human

# Install dependencies
npm install

# Verify installation
npm run typecheck
```

Create .env.local:

```shell
# Backend API URL (optional - uses mock if not set)
VITE_API_BASE_URL=http://localhost:8000

# Transport mode: http, sse, websocket, auto
VITE_CHAT_TRANSPORT=auto
```

Start the dev server:

```shell
npm run dev
```

Access at http://localhost:5173
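The same .env.local can be written non-interactively, e.g. from a setup script; a sketch using the default values shown above:

```shell
# Write .env.local with the defaults from this guide
cat > .env.local <<'EOF'
VITE_API_BASE_URL=http://localhost:8000
VITE_CHAT_TRANSPORT=auto
EOF

# Sanity-check: both variables are present
grep -c '^VITE_' .env.local   # → 2
```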
Step 1: Install Python 3.10+

macOS:

```shell
brew install python@3.10
```

Ubuntu:

```shell
sudo apt-get update
sudo apt-get install python3.10 python3.10-venv
```

Windows: Download from python.org

Verify:

```shell
python --version   # Should show 3.10.x or higher
```

Step 2: Setup Virtual Environment
```shell
cd server

# Create virtual environment
python -m venv venv

# Activate
source venv/bin/activate   # Linux/Mac
# or: venv\Scripts\activate   # Windows

# Install dependencies
pip install -r requirements.txt
```

Step 3: Configure Environment
Create server/.env:

```shell
# Required for AI responses (optional - mock mode works without)
OPENAI_API_KEY=sk-your-api-key-here

# Optional: Custom endpoint
OPENAI_BASE_URL=https://api.openai.com/v1

# Optional: Model selection
OPENAI_MODEL=gpt-3.5-turbo

# Rate limiting (requests per minute)
RATE_LIMIT_RPM=60

# CORS origins (comma-separated)
CORS_ALLOW_ORIGINS=http://localhost:5173,https://yourdomain.com
```

Step 4: Start Backend

```shell
uvicorn app.main:app --reload --port 8000
```

Backend runs at http://localhost:8000
API docs at http://localhost:8000/docs
Step 1: Install Docker

See docker.com for platform-specific instructions.

Step 2: Build and Run

```shell
# Build image
docker build -t metahuman-backend ./server

# Run container
docker run -p 8000:8000 \
  -e OPENAI_API_KEY=sk-... \
  -e CORS_ALLOW_ORIGINS=http://localhost:5173 \
  metahuman-backend
```

Or use docker-compose:

```yaml
version: '3.8'
services:
  backend:
    build: ./server
    ports:
      - "8000:8000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - CORS_ALLOW_ORIGINS=http://localhost:5173
```
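The compose file reads ${OPENAI_API_KEY} from the shell environment; docker-compose also picks up a .env file placed next to docker-compose.yml, which keeps the key out of shell history. A minimal sketch (the key value is a placeholder):

```shell
# Write a .env file for docker-compose (placeholder key)
cat > .env <<'EOF'
OPENAI_API_KEY=sk-your-api-key-here
EOF

# Confirm the variable is present before running docker compose up
grep -q '^OPENAI_API_KEY=' .env && echo "OPENAI_API_KEY set"
```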
Step 1: Build

```shell
npm run build:pages
```

Step 2: Configure Repository
- Go to GitHub repository → Settings → Pages
- Set source to "GitHub Actions"
Step 3: Set Environment Variables
Go to Settings → Secrets and variables → Actions:
| Variable | Value |
|---|---|
| VITE_API_BASE_URL | Your backend URL |
Step 4: Deploy
```shell
git push origin main
```

Or trigger the "Deploy Pages" workflow manually.
Step 1: Create Blueprint Service
Import from GitHub using render.yaml in project root.
Step 2: Configure Service
| Setting | Value |
|---|---|
| Root Directory | server |
| Build Command | pip install -r requirements.txt |
| Start Command | uvicorn app.main:app --host 0.0.0.0 --port $PORT |
| Health Check | /health |
Step 3: Set Environment Variables
```shell
CORS_ALLOW_ORIGINS=https://your-frontend-domain.com
OPENAI_API_KEY=sk-...
```
Step 4: Connect Frontend
Update the GitHub repository variable:

```shell
VITE_API_BASE_URL=https://your-backend.onrender.com
```
Verify the frontend:

```shell
# Should return the Vite dev server page
curl http://localhost:5173
```

Verify the backend:

```shell
# Should return JSON status
curl http://localhost:8000/health

# Expected response:
# {
#   "status": "ok",
#   "services": {
#     "chat": "available",
#     "llm": "mock_mode",
#     "tts": "available",
#     "asr": "unavailable"
#   }
# }
```
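For scripted checks, the health response can be parsed with python3 (already installed for the backend); the sample JSON below stands in for `curl -s http://localhost:8000/health`:

```shell
# Fail unless the /health response reports status "ok"
# (replace the sample with: response=$(curl -s http://localhost:8000/health))
response='{"status":"ok","services":{"llm":"mock_mode"}}'
echo "$response" | python3 -c '
import json, sys
data = json.load(sys.stdin)
assert data["status"] == "ok", data
print("backend healthy, llm:", data["services"]["llm"])
'
```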
Test the chat endpoint:

```shell
curl -X POST http://localhost:8000/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"userText": "Hello"}'
```

If npm install fails, clear the cache and retry:

```shell
npm cache clean --force
rm -rf node_modules package-lock.json
npm install
```

If pip install fails, upgrade pip:
```shell
pip install --upgrade pip setuptools
pip install -r requirements.txt
```

If a port is already in use, free it. Frontend (port 5173):
```shell
# Find process
lsof -ti:5173

# Kill process
lsof -ti:5173 | xargs kill -9
```

Backend (port 8000):

```shell
lsof -ti:8000 | xargs kill -9
```

For CORS errors:

- Verify CORS_ALLOW_ORIGINS includes the frontend URL
- Ensure no trailing slash mismatches
- Check protocol (http vs https)
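The trailing-slash check can be scripted; a sketch that normalizes a browser origin and tests membership in the comma-separated list (values are examples from this guide):

```shell
# Example values from this guide
CORS_ALLOW_ORIGINS="http://localhost:5173,https://yourdomain.com"
origin="http://localhost:5173/"   # note the trailing slash

normalized="${origin%/}"          # strip a trailing slash before comparing
case ",$CORS_ALLOW_ORIGINS," in
  *",$normalized,"*) echo "origin allowed" ;;
  *)                 echo "origin NOT allowed - check CORS_ALLOW_ORIGINS" ;;
esac
```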