- Installation Issues
- Runtime Errors
- Performance Problems
- Agent Issues
- Integration Problems
- Common Error Messages
## Installation Issues

### Ollama Service Not Found

**Symptom:**

```text
Error: Ollama service not found or not running
```

**Solutions:**

- Check if Ollama is installed:

  ```bash
  ollama --version
  ```

- Install Ollama if missing:
  - Visit https://ollama.ai/
  - Download and install for your OS

- Start the Ollama service:

  ```bash
  ollama serve
  ```

- Verify Ollama is running:

  ```bash
  curl http://localhost:11434/api/tags
  ```
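The two verification steps above can also be scripted. This is a minimal sketch, assuming Ollama's default endpoint (`http://localhost:11434`) and the documented `/api/tags` response shape (`{"models": [{"name": ...}]}`); the helper names are illustrative, not part of CodeGenie:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port


def list_model_names(tags_json: str) -> list:
    """Extract model names from an /api/tags response body."""
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]


def check_ollama(url: str = OLLAMA_URL) -> list:
    """Return installed model names; raises URLError if the service is down."""
    with urllib.request.urlopen(url + "/api/tags", timeout=5) as resp:
        return list_model_names(resp.read().decode())
```

If `check_ollama()` raises, start the service with `ollama serve`; if it returns an empty list, pull a model first.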
### No Models Installed

**Symptom:**

```text
Error: No models found. Please install at least one model.
```

**Solutions:**

- List installed models:

  ```bash
  ollama list
  ```

- Install the recommended models:

  ```bash
  ollama pull llama3.1:8b
  ollama pull codellama:7b
  ```

- Verify the model installation:

  ```bash
  ollama list
  ```
### Python Version Too Old

**Symptom:**

```text
Error: Python 3.9 or higher required
```

**Solutions:**

- Check your Python version:

  ```bash
  python --version
  ```

- Install Python 3.9+:
  - Ubuntu/Debian:

    ```bash
    sudo apt install python3.9
    ```

  - macOS:

    ```bash
    brew install python@3.9
    ```

  - Windows: download from python.org

- Use a virtual environment with the correct version:

  ```bash
  python3.9 -m venv venv
  source venv/bin/activate
  ```
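Scripts that wrap CodeGenie can fail fast with a clearer message by asserting the interpreter version up front; this sketch is plain Python, not a CodeGenie API:

```python
import sys


def require_python(min_version=(3, 9)):
    """Raise with a clear message if the running interpreter is too old."""
    if sys.version_info[:2] < min_version:
        found = ".".join(map(str, sys.version_info[:3]))
        wanted = ".".join(map(str, min_version))
        raise RuntimeError("Python %s+ required, found %s" % (wanted, found))


require_python()  # no-op on Python 3.9+
```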
### Dependency Installation Fails

**Symptom:**

```text
Error: Failed to install dependencies
```

**Solutions:**

- Update pip:

  ```bash
  pip install --upgrade pip
  ```

- Install with verbose output:

  ```bash
  pip install -e . -v
  ```

- Install system dependencies (Ubuntu/Debian):

  ```bash
  sudo apt install python3-dev build-essential
  ```

- Install system dependencies (macOS):

  ```bash
  brew install python@3.9
  xcode-select --install
  ```
## Runtime Errors

### Model Loading Failure

**Symptom:**

```text
Error: Failed to load model 'llama3.1:8b'
```

**Solutions:**

- Verify the model exists:

  ```bash
  ollama list
  ```

- Re-pull the model:

  ```bash
  ollama pull llama3.1:8b
  ```

- Check the Ollama logs:

  ```bash
  journalctl -u ollama -f            # Linux
  ~/Library/Logs/Ollama/server.log   # macOS
  ```

- Try a different model:

  ```bash
  codegenie --model codellama:7b
  ```
### Out of Memory

**Symptom:**

```text
Error: Out of memory
MemoryError: Unable to allocate array
```

**Solutions:**

- Use smaller models:

  ```bash
  codegenie --model llama3.1:8b  # Instead of 70b
  ```

- Close other applications:
  - Free up RAM by closing unnecessary programs

- Increase swap space (Linux):

  ```bash
  sudo fallocate -l 8G /swapfile
  sudo chmod 600 /swapfile
  sudo mkswap /swapfile
  sudo swapon /swapfile
  ```

- Configure model parameters:

  ```yaml
  # ~/.config/codegenie/config.yaml
  models:
    default: "llama3.1:8b"
    context_length: 2048  # Reduce from default 4096
  ```
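To pick a model that fits your machine, a rough back-of-envelope estimate helps. The sketch below assumes a 4-bit-quantized model (roughly 0.6 bytes per parameter) plus about 20% overhead for the KV cache and runtime; these are ballpark figures, not CodeGenie internals:

```python
def estimated_ram_gb(params_billion: float,
                     bytes_per_param: float = 0.6,
                     overhead: float = 1.2) -> float:
    """Ballpark RAM needed for a quantized model of the given size."""
    return params_billion * bytes_per_param * overhead


# An 8B model wants roughly 6 GB of free RAM; a 70B model roughly 50 GB,
# which is why dropping from 70b to llama3.1:8b usually cures OOM errors.
```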
### Permission Denied

**Symptom:**

```text
PermissionError: [Errno 13] Permission denied
```

**Solutions:**

- Check file permissions:

  ```bash
  ls -la /path/to/project
  ```

- Fix ownership:

  ```bash
  sudo chown -R $USER:$USER /path/to/project
  ```

- Run with appropriate permissions:

  ```bash
  # Don't use sudo unless necessary
  codegenie
  ```

- Check config directory permissions:

  ```bash
  chmod 755 ~/.config/codegenie
  chmod 644 ~/.config/codegenie/config.yaml
  ```
### Connection Error

**Symptom:**

```text
ConnectionError: Failed to connect to Ollama service
```

**Solutions:**

- Check that Ollama is running:

  ```bash
  ps aux | grep ollama
  ```

- Verify the port is accessible:

  ```bash
  curl http://localhost:11434/api/tags
  ```

- Check firewall settings:

  ```bash
  # Linux
  sudo ufw status
  sudo ufw allow 11434
  # macOS
  # System Preferences > Security & Privacy > Firewall
  ```

- Configure a custom Ollama URL:

  ```yaml
  # ~/.config/codegenie/config.yaml
  ollama:
    url: "http://localhost:11434"
  ```
## Performance Problems

### Slow Response Times

**Symptom:**

- Responses take more than 30 seconds
- Agent appears frozen

**Solutions:**

- Use faster models:

  ```bash
  codegenie --model llama3.1:8b  # Faster than 70b
  ```

- Enable caching:

  ```yaml
  # ~/.config/codegenie/config.yaml
  cache:
    enabled: true
    ttl: 3600
  ```

- Reduce context length:

  ```yaml
  models:
    context_length: 2048
  ```

- Check system resources:

  ```bash
  top  # or htop
  ```

- Close resource-intensive applications
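The cache settings above describe a standard time-to-live cache: entries expire after `ttl` seconds. A minimal sketch of the idea (not CodeGenie's actual implementation):

```python
import time


class TTLCache:
    """Dict-like cache whose entries expire after `ttl` seconds."""

    def __init__(self, ttl: float = 3600):
        self.ttl = ttl
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict stale entries
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```

With `ttl: 3600`, a repeated prompt within an hour is answered from the cache instead of re-querying the model, which is why enabling caching helps most with recurring tasks.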
### High Memory Usage

**Symptom:**

- System becomes slow
- Swap usage increases significantly

**Solutions:**

- Monitor memory usage:

  ```bash
  codegenie --debug
  # Check logs for memory usage
  ```

- Use smaller models:

  ```yaml
  models:
    default: "llama3.1:8b"
    code_generation: "codellama:7b"
  ```

- Limit concurrent operations:

  ```yaml
  execution:
    max_concurrent_tasks: 2
  ```

- Clear the cache periodically:

  ```bash
  rm -rf ~/.cache/codegenie
  ```
### High CPU Usage

**Symptom:**

- CPU at 100% constantly
- System becomes unresponsive

**Solutions:**

- Limit CPU threads:

  ```yaml
  # ~/.config/codegenie/config.yaml
  performance:
    max_threads: 4
  ```

- Use GPU acceleration (if available):

  ```bash
  # Ensure Ollama uses GPU
  ollama run llama3.1:8b --gpu
  ```

- Reduce parallel processing:

  ```yaml
  execution:
    parallel_execution: false
  ```
## Agent Issues

### Agent Not Responding

**Symptom:**

- Agent doesn't respond to commands
- Stuck on "Thinking..."

**Solutions:**

- Check agent status:

  ```bash
  codegenie --status
  ```

- Restart the agent:

  ```bash
  # Press Ctrl+C to stop
  codegenie
  ```

- Clear the session state:

  ```bash
  rm -rf ~/.codegenie/sessions/current
  ```

- Check the logs:

  ```bash
  tail -f ~/.codegenie/logs/codegenie.log
  ```
### Poor Code Quality

**Symptom:**

- Generated code doesn't work
- Code doesn't match requirements

**Solutions:**

- Provide more context:

  ```text
  You: Create a user registration endpoint with email validation,
  password hashing using bcrypt, and PostgreSQL database storage
  ```

- Review and correct:

  ```text
  You: The password hashing is incorrect. Use bcrypt with salt rounds of 12
  ```

- Use specialized agents:

  ```text
  You: @developer Create the endpoint
  You: @security Review the security
  ```

- Enable learning mode:

  ```yaml
  learning:
    learn_from_corrections: true
  ```
### Agent Conflicts

**Symptom:**

```text
Warning: Conflicting recommendations from agents
```

**Solutions:**

- Review the conflict details:

  ```text
  You: Explain the conflict
  ```

- Choose a preferred approach:

  ```text
  You: Use the Security Agent's recommendation
  ```

- Configure agent priorities:

  ```yaml
  agents:
    priority_order:
      - security
      - performance
      - developer
  ```
### Autonomous Mode Problems

**Symptom:**

- Autonomous execution fails
- Unexpected behavior in autonomous mode

**Solutions:**

- Enable intervention points:

  ```yaml
  autonomous:
    intervention_points: true
  ```

- Reduce the autonomous scope:

  ```yaml
  autonomous:
    max_steps: 10  # Limit number of steps
  ```

- Review the execution plan first:

  ```text
  You: Show me the execution plan before starting
  ```

- Use manual mode for complex tasks:

  ```text
  You: /autonomous off
  ```
## Integration Problems

### IDE Extension Issues

**Symptom:**

- VS Code extension not responding
- IntelliJ plugin errors

**Solutions:**

- Verify the extension installation:
  - VS Code: check the Extensions panel
  - IntelliJ: check the Plugins settings

- Check the CodeGenie service:

  ```bash
  codegenie --service status
  ```

- Restart the IDE:
  - Close and reopen your IDE

- Check the extension logs:
  - VS Code: Developer Tools > Console
  - IntelliJ: Help > Show Log

- Reinstall the extension:

  ```bash
  # VS Code
  code --uninstall-extension codegenie.vscode
  code --install-extension codegenie.vscode
  ```
### Git Integration Problems

**Symptom:**

- Can't commit changes
- Git operations fail

**Solutions:**

- Verify the Git installation:

  ```bash
  git --version
  ```

- Check the Git configuration:

  ```bash
  git config --list
  ```

- Configure Git credentials:

  ```bash
  git config --global user.name "Your Name"
  git config --global user.email "your@email.com"
  ```

- Check the repository status:

  ```bash
  git status
  ```
### CI/CD Pipeline Issues

**Symptom:**

- GitHub Actions failing
- Jenkins builds not triggered

**Solutions:**

- Verify the webhook configuration:

  ```bash
  curl -X GET https://api.github.com/repos/owner/repo/hooks \
    -H "Authorization: token YOUR_TOKEN"
  ```

- Check the API credentials:

  ```yaml
  integrations:
    github:
      token: "your_token"
      webhook_secret: "your_secret"
  ```

- Review the CI/CD logs:
  - GitHub: Actions tab
  - Jenkins: build console output

- Test the webhook manually:

  ```bash
  curl -X POST https://your-codegenie-instance/webhook \
    -H "Content-Type: application/json" \
    -d '{"event": "push", "repository": "owner/repo"}'
  ```
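If your CodeGenie instance validates the `webhook_secret`, a hand-rolled test delivery must be signed the way GitHub signs it: an HMAC-SHA256 of the raw request body, sent as `X-Hub-Signature-256: sha256=<hex>`. A sketch of the verification side:

```python
import hashlib
import hmac


def verify_github_signature(secret: str, payload: bytes,
                            signature_header: str) -> bool:
    """True if the X-Hub-Signature-256 header matches the payload."""
    expected = "sha256=" + hmac.new(
        secret.encode(), payload, hashlib.sha256
    ).hexdigest()
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(expected, signature_header)
```

A delivery rejected here usually means the `webhook_secret` in the config and the secret configured on the GitHub hook have drifted apart.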
## Common Error Messages

### Context Length Exceeded

**Cause:** Too much context for the model to handle

**Solution:**

```yaml
models:
  context_length: 2048  # Reduce context
context:
  max_history: 10  # Limit conversation history
```
### Rate Limit Errors

**Cause:** Too many requests to Ollama

**Solution:**

```yaml
rate_limiting:
  enabled: true
  requests_per_minute: 30
```
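The `requests_per_minute` setting corresponds to a sliding-window limiter; this sketch shows the idea, not CodeGenie's internals:

```python
import time
from collections import deque


class RateLimiter:
    """Allow at most `limit` calls in any trailing `window` seconds."""

    def __init__(self, limit: int = 30, window: float = 60.0):
        self.limit = limit
        self.window = window
        self._calls = deque()

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have left the window
        while self._calls and now - self._calls[0] >= self.window:
            self._calls.popleft()
        if len(self._calls) < self.limit:
            self._calls.append(now)
            return True
        return False
```

Requests beyond the limit are simply refused until old timestamps age out, so lowering `requests_per_minute` trades throughput for a calmer Ollama backend.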
### Model Not Installed

**Cause:** Specified model not installed

**Solution:**

```bash
ollama pull llama3.1:8b
# Or specify a different model
codegenie --model codellama:7b
```
### Invalid Configuration

**Cause:** Syntax error in the config file

**Solution:**

```bash
# Validate the config
codegenie --validate-config
# Reset to defaults
mv ~/.config/codegenie/config.yaml ~/.config/codegenie/config.yaml.bak
codegenie  # Will create a new default config
```
### Checkpoint Not Found

**Cause:** Trying to roll back to a non-existent checkpoint

**Solution:**

```bash
# List available checkpoints
codegenie --list-checkpoints
# Roll back to a specific checkpoint
codegenie --rollback checkpoint_id
```
### Agent Timeout

**Cause:** Agent took too long to respond

**Solution:**

```yaml
agents:
  timeout: 300  # Increase timeout to 5 minutes
```
## Debug Mode

Enable debug mode for detailed troubleshooting:

```bash
codegenie --debug
```

Debug output includes:

- Model requests and responses
- Agent communication
- File operations
- Performance metrics
- Error stack traces
## Log Files

Check the log files for detailed error information:

```bash
# Main log
tail -f ~/.codegenie/logs/codegenie.log
# Agent logs
tail -f ~/.codegenie/logs/agents.log
# Performance logs
tail -f ~/.codegenie/logs/performance.log
# Error logs
tail -f ~/.codegenie/logs/errors.log
```
## Getting Help

If you can't resolve the issue:

1. **Check the documentation:**
   - User Guide: docs/USER_GUIDE.md
   - API Reference: docs/API_REFERENCE.md
2. **Search existing issues:**
   - GitHub Issues: https://github.com/your-org/codegenie/issues
3. **Create a bug report:**
   - Include error messages
   - Attach relevant logs
   - Describe steps to reproduce
4. **Join the community:**
   - Discord: https://discord.gg/codegenie
   - Forum: https://community.codegenie.dev
5. **Contact support:**
   - Email: support@codegenie.dev
   - Include debug logs and system info
## Reporting Issues

When reporting issues, include:

```bash
# Generate system info
codegenie --system-info
# Output includes:
# - OS and version
# - Python version
# - CodeGenie version
# - Ollama version
# - Installed models
# - Configuration summary
```
## Emergency Reset

If CodeGenie is completely broken:

```bash
# 1. Stop all processes
pkill -f codegenie
# 2. Back up the current state
cp -r ~/.codegenie ~/.codegenie.backup
# 3. Reset to defaults
rm -rf ~/.codegenie
rm -rf ~/.cache/codegenie
# 4. Reinstall
pip uninstall codegenie
pip install codegenie
# 5. Restart
codegenie
```
## Performance Tuning

Optimize CodeGenie performance:

```yaml
# ~/.config/codegenie/config.yaml

# Use faster models for simple tasks
models:
  default: "llama3.1:8b"
  simple_tasks: "codellama:7b"

# Enable aggressive caching
cache:
  enabled: true
  aggressive: true
  ttl: 7200

# Optimize execution
execution:
  parallel_execution: true
  max_concurrent_tasks: 4

# Reduce context
context:
  max_history: 10
  max_file_size: "1MB"

# Performance monitoring
monitoring:
  enabled: true
  metrics_interval: 60
```