This document provides tips and best practices for optimizing Evolution API performance on Dokku.
- Performance Overview
- Resource Optimization
- Database Optimization
- Caching with Redis
- Storage Optimization
- Network Optimization
- Monitoring and Alerting
- Scaling Strategies
| Team Size | Expected Performance | Configuration |
|---|---|---|
| 1-10 users | 100-500 msg/day | Minimal (256MB RAM, 0.5 CPU) |
| 10-50 users | 500-2000 msg/day | Small (512MB RAM, 1 CPU) |
| 50-200 users | 2000-10000 msg/day | Medium (1GB RAM, 2 CPU) + Optional Redis |
| 200+ users | 10000+ msg/day | High (2GB+ RAM, 4+ CPU) + Optional Redis + Optimizations |
Note: Redis is optional even for larger teams. Many deployments with 50-200 users work perfectly with PostgreSQL-only setup.
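The sizing tiers above can be condensed into a small helper that prints the matching resource command for a given user count. This is an illustrative sketch only; the `sizing` function is ours, not part of Dokku or Evolution API:

```shell
# Hypothetical helper: print the resource-limit command for a team size,
# using the thresholds from the table above.
sizing() {
  users=$1
  if   [ "$users" -le 10 ];  then echo "dokku resource:limit evo --memory 256m --cpu 0.5"
  elif [ "$users" -le 50 ];  then echo "dokku resource:limit evo --memory 512m --cpu 1"
  elif [ "$users" -le 200 ]; then echo "dokku resource:limit evo --memory 1024m --cpu 2"
  else                            echo "dokku resource:limit evo --memory 2048m --cpu 4"
  fi
}

sizing 75   # → dokku resource:limit evo --memory 1024m --cpu 2
```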
Monitor these metrics to ensure optimal performance:
- Response Time: < 100ms for API calls
- CPU Usage: < 70% average
- RAM Usage: < 80% of allocated
- Database Size: Monitor growth rate
- Disk I/O: < 50% of available
- Network Latency: < 50ms
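As a rough way to check the response-time target, you can compare curl's `time_total` against 0.1 s. In this sketch the timing value is hard-coded; in practice it would come from `curl -o /dev/null -s -w '%{time_total}'` against your own instance URL:

```shell
# Sketch: flag API calls slower than the 100 ms target.
# In practice: t=$(curl -o /dev/null -s -w '%{time_total}' https://evo.example.com)
t="0.245"
if awk -v t="$t" 'BEGIN { exit !(t > 0.1) }'; then
  echo "SLOW: ${t}s exceeds the 100ms target"
fi
```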
Check current usage:

```shell
# View current resource usage
dokku ps:report evo

# Monitor in real-time
docker stats $(dokku ps:inspect evo | jq -r '.[].Id')
```

Windows (PowerShell):

```shell
ssh your-server "dokku ps:report evo"
```

For 1-10 users (default):

```shell
dokku resource:limit evo --memory 256m --cpu 0.5
```

For 10-50 users:

```shell
dokku resource:limit evo --memory 512m --cpu 1
```

For 50-200 users:

```shell
dokku resource:limit evo --memory 1024m --cpu 2
```

For 200+ users:

```shell
dokku resource:limit evo --memory 2048m --cpu 4
```

Increase swap memory (if needed):
```shell
# On the host server
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

For high-volume deployments, disable unnecessary data logging:
```shell
# Disable message history (saves ~70% database space)
dokku config:set evo DATABASE_SAVE_DATA_HISTORIC=false

# Disable message updates (saves ~20% database space)
dokku config:set evo DATABASE_SAVE_MESSAGE_UPDATE=false

# Disable label tracking (saves ~5% database space)
dokku config:set evo DATABASE_SAVE_DATA_LABELS=false
```

Recommended for production:
# Keep essential data only
dokku config:set evo DATABASE_SAVE_DATA_INSTANCE=true
dokku config:set evo DATABASE_SAVE_DATA_NEW_MESSAGE=true
dokku config:set evo DATABASE_SAVE_DATA_CONTACTS=true
dokku config:set evo DATABASE_SAVE_DATA_CHATS=true
dokku config:set evo DATABASE_SAVE_DATA_HISTORIC=false
dokku config:set evo DATABASE_SAVE_MESSAGE_UPDATE=false
dokku config:set evo DATABASE_SAVE_DATA_LABELS=falseRegular vacuum (optimize database):
```shell
# Weekly maintenance
dokku postgres:connect evo -c "VACUUM ANALYZE;"

# Full vacuum (slower and locks tables while running, but reclaims more disk space)
dokku postgres:connect evo -c "VACUUM FULL;"
```

Automated vacuum script (/root/vacuum-evolution.sh):
```shell
#!/bin/bash
echo "Starting database vacuum at $(date)"
dokku postgres:connect evo -c "VACUUM ANALYZE;" >> /var/log/evolution-vacuum.log 2>&1
echo "Vacuum completed at $(date)"
```

Add to cron (run weekly on Sunday at 3 AM):
```shell
chmod +x /root/vacuum-evolution.sh
(crontab -l 2>/dev/null; echo "0 3 * * 0 /root/vacuum-evolution.sh") | crontab -
```

Note: piping `echo` alone into `crontab -` would replace your entire crontab; prepending `crontab -l` preserves existing entries.

Manually clean old data:
```shell
# Clean messages older than 30 days
dokku postgres:connect evo << EOF
DELETE FROM "Message" WHERE "datetime" < NOW() - INTERVAL '30 days';
EOF

# Clean old sessions
dokku postgres:connect evo << EOF
DELETE FROM "Session" WHERE "updatedAt" < NOW() - INTERVAL '90 days';
EOF
```

Automated cleanup script (/root/cleanup-evolution-db.sh):
```shell
#!/bin/bash
echo "Starting database cleanup at $(date)"
dokku postgres:connect evo << EOF
DELETE FROM "Message" WHERE "datetime" < NOW() - INTERVAL '30 days';
DELETE FROM "Session" WHERE "updatedAt" < NOW() - INTERVAL '90 days';
VACUUM ANALYZE;
EOF
echo "Cleanup completed at $(date)"
```

Add to cron (run monthly on the 1st at 2 AM):

```shell
chmod +x /root/cleanup-evolution-db.sh
(crontab -l 2>/dev/null; echo "0 2 1 * * /root/cleanup-evolution-db.sh") | crontab -
```

Check for missing indexes:
```shell
dokku postgres:connect evo -c "
SELECT schemaname, tablename, attname, n_distinct, correlation
FROM pg_stats
WHERE schemaname = 'public'
ORDER BY n_distinct DESC LIMIT 20;"
```
⚠️ Important: Redis is NOT required for Evolution API to function. This section is ONLY for teams with 50+ users experiencing performance issues. See the complete Redis Integration Guide for detailed instructions, monitoring, and troubleshooting.
Only add Redis if you meet ALL of these criteria:
- ✅ Team has 50+ active users
- ✅ Message volume exceeds 2000 messages/day
- ✅ API response times consistently exceed 200ms
- ✅ Database queries are causing bottlenecks
- ✅ You're willing to add infrastructure complexity
Do NOT add Redis if:
- ❌ Team has fewer than 50 users (a PostgreSQL-only setup is sufficient)
- ❌ Current performance is satisfactory
- ❌ You want to minimize complexity and costs
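The decision rule above can be sketched in a few lines; the `needs_redis` helper is illustrative only, not part of Evolution API or Dokku:

```shell
# Hypothetical helper encoding the first two criteria above:
# 50+ active users AND more than 2000 messages/day.
needs_redis() {
  users=$1; msgs=$2
  if [ "$users" -ge 50 ] && [ "$msgs" -gt 2000 ]; then
    echo "consider Redis"
  else
    echo "stay PostgreSQL-only"
  fi
}

needs_redis 80 3500   # → consider Redis
needs_redis 20 500    # → stay PostgreSQL-only
```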
For teams that have determined Redis is needed, here's a quick setup:

```shell
# Install Redis plugin (if not installed)
dokku plugin:install https://github.com/dokku/dokku-redis.git redis

# Create Redis instance
dokku redis:create evo

# Link to application
dokku redis:link evo evo

# Enable Redis cache
dokku config:set evo CACHE_REDIS_ENABLED=true
dokku config:set evo CACHE_REDIS_URI="$(dokku config:get evo REDIS_URL)"

# Restart application
dokku ps:restart evo
```

📖 Complete Guide: For detailed Redis setup, configuration, monitoring, troubleshooting, and removal instructions, see the Redis Integration Guide.
Set Redis key prefix:

```shell
dokku config:set evo CACHE_REDIS_PREFIX_KEY="evo"
```

Check Redis usage:

```shell
dokku redis:info evo
```

Monitor Redis:

```shell
dokku redis:connect evo
> INFO memory
> DBSIZE
> INFO stats
```

Best practices:

- Monitor memory usage: Redis should not exceed 80% of allocated memory
- Set expiration times for cached data
- Use appropriate data structures (hashes, lists, sets)
- Regular backups: dokku redis:backup evo backup-$(date +%Y%m%d)
- Measure before and after: ensure Redis actually improves performance
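To apply the 80% memory rule, you can parse `used_memory` and `maxmemory` from the `INFO memory` output. In this sketch a sample INFO snippet stands in for the live output you would get via `dokku redis:connect evo`:

```shell
# Sketch: check used_memory against maxmemory (80% threshold).
# The $info text below is sample data standing in for real `INFO memory` output.
info='used_memory:850000
maxmemory:1000000'
used=$(printf '%s\n' "$info" | awk -F: '/^used_memory:/ {print $2}')
max=$(printf '%s\n' "$info" | awk -F: '/^maxmemory:/ {print $2}')
pct=$((used * 100 / max))
if [ "$pct" -gt 80 ]; then
  echo "WARNING: Redis at ${pct}% of maxmemory"
fi
```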
Manual cleanup:

```shell
# Enter container
dokku enter evo web

# Find files older than 30 days
find /evolution/instances -type f -mtime +30

# Delete files older than 30 days
find /evolution/instances -type f -mtime +30 -delete

# Check storage usage
du -sh /evolution/instances
```

Automated cleanup script (/root/cleanup-evolution-media.sh):
```shell
#!/bin/bash
echo "Starting media cleanup at $(date)"
dokku enter evo web bash -c "find /evolution/instances -type f -mtime +30 -delete"
echo "Media cleanup completed at $(date)"
du -sh /var/lib/dokku/data/storage/evo
```

Add to cron (run weekly on Saturday at 4 AM):

```shell
chmod +x /root/cleanup-evolution-media.sh
(crontab -l 2>/dev/null; echo "0 4 * * 6 /root/cleanup-evolution-media.sh") | crontab -
```

Check storage usage:
```shell
# Total storage
du -sh /var/lib/dokku/data/storage/evo

# Breakdown by folder
du -h --max-depth=1 /var/lib/dokku/data/storage/evo | sort -h

# Check available space
df -h /var/lib/dokku/data/storage
```

Set up storage alerts:
```shell
# Create alert script
cat > /root/check-storage.sh << 'EOF'
#!/bin/bash
USAGE=$(df /var/lib/dokku/data/storage | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$USAGE" -gt 80 ]; then
  echo "WARNING: Storage usage is at ${USAGE}%" | mail -s "Evolution API Storage Alert" you@example.com
fi
EOF
chmod +x /root/check-storage.sh
(crontab -l 2>/dev/null; echo "0 */6 * * * /root/check-storage.sh") | crontab -
```

Compress old media:
```shell
dokku enter evo web bash -c "
find /evolution/instances -type f -name '*.jpg' -mtime +7 -exec mogrify -quality 80 {} \;
find /evolution/instances -type f -name '*.png' -mtime +7 -exec mogrify -quality 80 {} \;
"
```

HTTP/2 improves performance for multiple concurrent requests:
```shell
# Ensure HTTPS is enabled first
dokku letsencrypt:enable evo

# HTTP/2 is automatically enabled with HTTPS on modern Dokku/nginx
```

For high-volume media delivery, consider using a CDN:
- Cloudflare (free tier available)
- Amazon CloudFront
- Fastly
Benefits:
- Faster media delivery
- Reduced server load
- Better global performance
Increase nginx buffer size (if needed):
Create /home/dokku/evo/nginx.conf.d/buffers.conf:
```nginx
client_body_buffer_size 16k;
client_max_body_size 50m;
```

Reload nginx:

```shell
dokku nginx:build-config evo
dokku ps:restart evo
```

Install monitoring tools:
```shell
# Install htop for interactive process monitoring
apt-get install htop

# Install netdata for comprehensive monitoring
bash <(curl -Ss https://my-netdata.io/kickstart.sh)
```

Access monitoring:

- Netdata: http://your-server:19999
- htop: run htop in a terminal
Monitor logs:
```shell
# Real-time logs
dokku logs evo -t

# Save logs to file
dokku logs evo > /var/log/evolution-$(date +%Y%m%d).log
```

Set up log rotation:
Create /etc/logrotate.d/evolution:
```
/var/log/evolution-*.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
}
```
Create /root/monitor-evolution.sh:
```shell
#!/bin/bash
echo "=== Evolution API Performance Report ==="
echo "Date: $(date)"
echo ""
echo "=== Resource Usage ==="
dokku ps:report evo | grep -E "memory|cpu"
echo ""
echo "=== Database Size ==="
dokku postgres:connect evo -c "SELECT pg_size_pretty(pg_database_size('evo'));"
echo ""
echo "=== Storage Usage ==="
du -sh /var/lib/dokku/data/storage/evo
echo ""
echo "=== Disk Space ==="
df -h /var/lib/dokku/data/storage | grep -v "Filesystem"
echo ""
echo "=== Application Status ==="
curl -s -I https://evo.example.com | head -1
echo ""
```

Make it executable and run it daily:
```shell
chmod +x /root/monitor-evolution.sh
(crontab -l 2>/dev/null; echo "0 8 * * * /root/monitor-evolution.sh | mail -s 'Evolution API Daily Report' you@example.com") | crontab -
```

CPU Alert:
```shell
cat > /root/check-cpu.sh << 'EOF'
#!/bin/bash
CPU=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | sed 's/%us,//')
if (( $(echo "$CPU > 80" | bc -l) )); then
  echo "CPU usage is high: ${CPU}%" | mail -s "Evolution API CPU Alert" you@example.com
fi
EOF
chmod +x /root/check-cpu.sh
(crontab -l 2>/dev/null; echo "*/15 * * * * /root/check-cpu.sh") | crontab -
```

Memory Alert:
```shell
cat > /root/check-memory.sh << 'EOF'
#!/bin/bash
MEM=$(free | grep Mem | awk '{print ($3/$2) * 100}')
if (( $(echo "$MEM > 80" | bc -l) )); then
  echo "Memory usage is high: ${MEM}%" | mail -s "Evolution API Memory Alert" you@example.com
fi
EOF
chmod +x /root/check-memory.sh
(crontab -l 2>/dev/null; echo "*/15 * * * * /root/check-memory.sh") | crontab -
```

Increase resources as needed:
```shell
# Start with minimal
dokku resource:limit evo --memory 256m --cpu 0.5

# Scale up as the user base grows
dokku resource:limit evo --memory 512m --cpu 1    # 10-50 users
dokku resource:limit evo --memory 1024m --cpu 2   # 50-200 users
dokku resource:limit evo --memory 2048m --cpu 4   # 200+ users
```

Scale to multiple instances:
```shell
# Scale to 2 instances
dokku ps:scale evo web=2

# Scale to 4 instances (for high volume)
dokku ps:scale evo web=4
```

Requirements for horizontal scaling:
- Redis for shared caching
- PostgreSQL for shared database
- Load balancer (nginx automatically configured by Dokku)
Check current scaling:

```shell
dokku ps:report evo | grep -i scale
```

Dokku automatically load balances between multiple instances using nginx.
Verify load balancing:

```shell
# Check nginx configuration
dokku nginx:show-config evo

# Monitor instance distribution
watch -n 1 'docker stats --no-stream | grep evo'
```

Simple test:
```shell
# Test response time
time curl -H "apikey: YOUR_API_KEY" https://evo.example.com
```

Load testing with Apache Bench:
```shell
# Install Apache Bench
apt-get install apache2-utils

# Test with 1000 requests, 100 at a time
ab -n 1000 -c 100 -H "apikey: YOUR_API_KEY" https://evo.example.com/
```

Advanced load testing with wrk:
```shell
# Install wrk
apt-get install wrk

# Test with 10 threads, 100 connections for 30 seconds
wrk -t10 -c100 -d30s -H "apikey: YOUR_API_KEY" https://evo.example.com/
```

- ✅ Set appropriate resource limits based on team size
- ✅ Enable automatic backups
- ✅ Configure persistent storage
- ✅ Enable HTTPS with Let's Encrypt
- ✅ Monitor logs regularly
- ✅ Enable Redis caching (only if the criteria above are met)
- ✅ Disable unnecessary database logging
- ✅ Set up automated database vacuum
- ✅ Implement storage cleanup
- ✅ Configure monitoring and alerts
- ✅ Horizontal scaling (multiple instances)
- ✅ Database connection pooling
- ✅ CDN for media delivery
- ✅ Separate media storage server
- ✅ Advanced monitoring with Netdata or Prometheus
- ✅ Load balancing with multiple servers
Check:

- CPU usage: dokku ps:report evo
- RAM usage: free -h
- Database size: dokku postgres:connect evo -c "SELECT pg_size_pretty(pg_database_size('evo'));"
- Disk I/O: iostat -x 1
Solutions:
- Increase resources
- Enable Redis
- Optimize database queries
- Clean old data
Check:
- Number of active instances
- Message processing volume
- Database queries
Solutions:
- Scale horizontally
- Enable Redis caching
- Optimize database indexes
- Disable unnecessary features
Check:
- Memory leaks in logs
- Number of connections
- Cache size
Solutions:
- Increase RAM allocation
- Restart application regularly
- Limit number of instances
- Enable Redis to offload memory
Check:
- Database size
- Number of connections
- Query performance
Solutions:
- Run VACUUM ANALYZE
- Clean old data
- Disable unnecessary logging
- Add database indexes
- Useful Commands - Management commands reference
- Configuration - Advanced configuration options
- System Requirements - Hardware recommendations