
CodeGenie Troubleshooting Guide

Table of Contents

  1. Installation Issues
  2. Runtime Errors
  3. Performance Problems
  4. Agent Issues
  5. Integration Problems
  6. Common Error Messages

Installation Issues

Ollama Not Found

Symptom:

Error: Ollama service not found or not running

Solutions:

  1. Check if Ollama is installed:
ollama --version
  2. Install Ollama if missing (official installer from ollama.com):
curl -fsSL https://ollama.com/install.sh | sh
  3. Start the Ollama service:
ollama serve
  4. Verify Ollama is running:
curl http://localhost:11434/api/tags
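The checks above can be combined into one preflight script. This is a convenience sketch, not a built-in command; `CODEGENIE_OLLAMA_URL` is a hypothetical override variable for non-default setups:

```shell
#!/bin/sh
# Preflight check for the Ollama prerequisites described above.
# CODEGENIE_OLLAMA_URL is an assumed override; defaults to the standard port.
OLLAMA_URL="${CODEGENIE_OLLAMA_URL:-http://localhost:11434}"

# 1. Is the binary installed?
if command -v ollama >/dev/null 2>&1; then
  echo "ok: ollama binary found ($(ollama --version 2>/dev/null))"
else
  echo "missing: ollama binary not on PATH"
fi

# 2. Is the API responding?
if curl -sf "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  echo "ok: Ollama API reachable at $OLLAMA_URL"
else
  echo "not running: start it with 'ollama serve'"
fi
```

Either failure line tells you which step above to repeat.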

No Models Available

Symptom:

Error: No models found. Please install at least one model.

Solutions:

  1. List installed models:
ollama list
  2. Install recommended models:
ollama pull llama3.1:8b
ollama pull codellama:7b
  3. Verify model installation:
ollama list

Python Version Incompatibility

Symptom:

Error: Python 3.9 or higher required

Solutions:

  1. Check the Python version:
python --version
  2. Install Python 3.9+:
  • Ubuntu/Debian: sudo apt install python3.9
  • macOS: brew install python@3.9
  • Windows: Download from python.org
  3. Use a virtual environment with the correct version:
python3.9 -m venv venv
source venv/bin/activate

Dependency Installation Failures

Symptom:

Error: Failed to install dependencies

Solutions:

  1. Update pip:
pip install --upgrade pip
  2. Install with verbose output:
pip install -e . -v
  3. Install system dependencies (Ubuntu/Debian):
sudo apt install python3-dev build-essential
  4. Install system dependencies (macOS):
brew install python@3.9
xcode-select --install

Runtime Errors

Model Loading Failures

Symptom:

Error: Failed to load model 'llama3.1:8b'

Solutions:

  1. Verify the model exists:
ollama list
  2. Re-pull the model:
ollama pull llama3.1:8b
  3. Check Ollama logs:
journalctl -u ollama -f  # Linux
tail -f ~/Library/Logs/Ollama/server.log  # macOS
  4. Try a different model:
codegenie --model codellama:7b

Memory Errors

Symptom:

Error: Out of memory
MemoryError: Unable to allocate array

Solutions:

  1. Use smaller models:
codegenie --model llama3.1:8b  # Instead of 70b
  2. Close other applications:
  • Free up RAM by closing unnecessary programs
  3. Increase swap space (Linux):
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
  4. Configure model parameters:
# ~/.config/codegenie/config.yaml
models:
  default: "llama3.1:8b"
  context_length: 2048  # Reduce from default 4096

Permission Errors

Symptom:

PermissionError: [Errno 13] Permission denied

Solutions:

  1. Check file permissions:
ls -la /path/to/project
  2. Fix ownership:
sudo chown -R $USER:$USER /path/to/project
  3. Run with appropriate permissions:
# Don't use sudo unless necessary
codegenie
  4. Check config directory permissions:
chmod 755 ~/.config/codegenie
chmod 644 ~/.config/codegenie/config.yaml

Connection Errors

Symptom:

ConnectionError: Failed to connect to Ollama service

Solutions:

  1. Check Ollama is running:
ps aux | grep ollama
  2. Verify the port is accessible:
curl http://localhost:11434/api/tags
  3. Check firewall settings:
# Linux
sudo ufw status
sudo ufw allow 11434

# macOS
# System Preferences > Security & Privacy > Firewall
  4. Configure a custom Ollama URL:
# ~/.config/codegenie/config.yaml
ollama:
  url: "http://localhost:11434"

Performance Problems

Slow Response Times

Symptom:

  • Responses take more than 30 seconds
  • Agent appears frozen

Solutions:

  1. Use faster models:
codegenie --model llama3.1:8b  # Faster than 70b
  2. Enable caching:
# ~/.config/codegenie/config.yaml
cache:
  enabled: true
  ttl: 3600
  3. Reduce context length:
models:
  context_length: 2048
  4. Check system resources:
top  # or htop
  5. Close resource-intensive applications

High Memory Usage

Symptom:

  • System becomes slow
  • Swap usage increases significantly

Solutions:

  1. Monitor memory usage:
codegenie --debug
# Check logs for memory usage
  2. Use smaller models:
models:
  default: "llama3.1:8b"
  code_generation: "codellama:7b"
  3. Limit concurrent operations:
execution:
  max_concurrent_tasks: 2
  4. Clear cache periodically:
rm -rf ~/.cache/codegenie
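Rather than deleting the whole cache, a gentler sketch removes only stale entries. The 7-day cutoff here is an arbitrary choice for illustration, not a CodeGenie default:

```shell
# Age-based cache cleanup: delete cached files untouched for 7+ days,
# then report what remains. The default path matches the command above.
CACHE_DIR="${CACHE_DIR:-$HOME/.cache/codegenie}"
if [ -d "$CACHE_DIR" ]; then
  find "$CACHE_DIR" -type f -mtime +7 -delete
  du -sh "$CACHE_DIR"
else
  echo "no cache directory at $CACHE_DIR"
fi
```

This keeps recent cache entries warm, so response times do not regress immediately after cleanup.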

High CPU Usage

Symptom:

  • CPU at 100% constantly
  • System becomes unresponsive

Solutions:

  1. Limit CPU threads:
# ~/.config/codegenie/config.yaml
performance:
  max_threads: 4
  2. Use GPU acceleration (if available):
# Ollama uses a detected GPU automatically; confirm with:
ollama ps  # PROCESSOR column shows GPU vs. CPU
  3. Reduce parallel processing:
execution:
  parallel_execution: false

Agent Issues

Agent Not Responding

Symptom:

  • Agent doesn't respond to commands
  • Stuck on "Thinking..."

Solutions:

  1. Check agent status:
codegenie --status
  2. Restart the agent:
# Press Ctrl+C to stop
codegenie
  3. Clear session state:
rm -rf ~/.codegenie/sessions/current
  4. Check logs:
tail -f ~/.codegenie/logs/codegenie.log

Incorrect Code Generation

Symptom:

  • Generated code doesn't work
  • Code doesn't match requirements

Solutions:

  1. Provide more context:
You: Create a user registration endpoint with email validation,
password hashing using bcrypt, and PostgreSQL database storage
  2. Review and correct:
You: The password hashing is incorrect. Use bcrypt with salt rounds of 12
  3. Use specialized agents:
You: @developer Create the endpoint
You: @security Review the security
  4. Enable learning mode:
learning:
  learn_from_corrections: true

Agent Conflicts

Symptom:

Warning: Conflicting recommendations from agents

Solutions:

  1. Review conflict details:
You: Explain the conflict
  2. Choose a preferred approach:
You: Use the Security Agent's recommendation
  3. Configure agent priorities:
agents:
  priority_order:
    - security
    - performance
    - developer

Autonomous Mode Issues

Symptom:

  • Autonomous execution fails
  • Unexpected behavior in autonomous mode

Solutions:

  1. Enable intervention points:
autonomous:
  intervention_points: true
  2. Reduce autonomous scope:
autonomous:
  max_steps: 10  # Limit number of steps
  3. Review the execution plan first:
You: Show me the execution plan before starting
  4. Use manual mode for complex tasks:
You: /autonomous off

Integration Problems

IDE Integration Not Working

Symptom:

  • VS Code extension not responding
  • IntelliJ plugin errors

Solutions:

  1. Verify extension installation:
  • VS Code: Check Extensions panel
  • IntelliJ: Check Plugins settings
  2. Check the CodeGenie service:
codegenie --service status
  3. Restart the IDE:
  • Close and reopen your IDE
  4. Check extension logs:
  • VS Code: Developer Tools > Console
  • IntelliJ: Help > Show Log
  5. Reinstall the extension:
# VS Code
code --uninstall-extension codegenie.vscode
code --install-extension codegenie.vscode

Git Integration Issues

Symptom:

  • Can't commit changes
  • Git operations fail

Solutions:

  1. Verify Git installation:
git --version
  2. Check Git configuration:
git config --list
  3. Configure Git credentials:
git config --global user.name "Your Name"
git config --global user.email "your@email.com"
  4. Check repository status:
git status

CI/CD Integration Failures

Symptom:

  • GitHub Actions failing
  • Jenkins builds not triggered

Solutions:

  1. Verify webhook configuration:
curl -X GET https://api.github.com/repos/owner/repo/hooks \
  -H "Authorization: token YOUR_TOKEN"
  2. Check API credentials:
integrations:
  github:
    token: "your_token"
    webhook_secret: "your_secret"
  3. Review CI/CD logs:
  • GitHub: Actions tab
  • Jenkins: Build console output
  4. Test the webhook manually:
curl -X POST https://your-codegenie-instance/webhook \
  -H "Content-Type: application/json" \
  -d '{"event": "push", "repository": "owner/repo"}'
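GitHub signs each delivery with the webhook secret, so a receiver can reject forged payloads. This sketch shows the HMAC-SHA256 check using openssl; the secret and payload are illustrative, and in a real receiver the received value comes from the X-Hub-Signature-256 header:

```shell
# Compute the signature GitHub would send for a payload, then compare
# it to the received value in constant form (sha256=<hex>).
secret="your_secret"
payload='{"event": "push", "repository": "owner/repo"}'
expected="sha256=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')"
received="$expected"  # stand-in; normally taken from X-Hub-Signature-256
if [ "$expected" = "$received" ]; then
  echo "signature valid"
else
  echo "signature mismatch"
fi
```

A mismatch here usually means the webhook_secret in the config above does not match the secret configured on the GitHub hook.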

Common Error Messages

"Context length exceeded"

Cause: Too much context for the model to handle

Solution:

models:
  context_length: 2048  # Reduce context
  
context:
  max_history: 10  # Limit conversation history

"Rate limit exceeded"

Cause: Too many requests to Ollama

Solution:

rate_limiting:
  enabled: true
  requests_per_minute: 30
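On the client side, transient rate-limit failures can also be absorbed with a retry loop. This is a generic exponential-backoff sketch against the Ollama tags endpoint, not a built-in CodeGenie feature:

```shell
# Retry a request with exponential backoff (1s, 2s, 4s between attempts).
url="http://localhost:11434/api/tags"
delay=1
for attempt in 1 2 3; do
  if curl -sf "$url" >/dev/null 2>&1; then
    echo "attempt $attempt: succeeded"
    break
  fi
  echo "attempt $attempt: failed, waiting ${delay}s"
  sleep "$delay"
  delay=$((delay * 2))
done
```

Doubling the delay between attempts gives the service time to drain its queue instead of compounding the overload.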

"Model not found"

Cause: Specified model not installed

Solution:

ollama pull llama3.1:8b
# Or specify different model
codegenie --model codellama:7b

"Invalid configuration"

Cause: Syntax error in config file

Solution:

# Validate config
codegenie --validate-config

# Reset to defaults
mv ~/.config/codegenie/config.yaml ~/.config/codegenie/config.yaml.bak
codegenie  # Will create new default config
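Before restoring a backed-up config, its YAML syntax can be checked with any YAML parser. This sketch assumes Python 3 with PyYAML is available and writes a sample file so it is self-contained; point it at ~/.config/codegenie/config.yaml in practice:

```shell
# Syntax-check a YAML config file; prints 'config OK' or the parse error.
cat > /tmp/codegenie_sample.yaml <<'EOF'
models:
  default: "llama3.1:8b"
EOF
python3 - /tmp/codegenie_sample.yaml <<'PY'
import sys
try:
    import yaml  # PyYAML; assumed installed
except ImportError:
    print("PyYAML not installed; run: pip install pyyaml")
    sys.exit(0)
try:
    yaml.safe_load(open(sys.argv[1]))
    print("config OK")
except yaml.YAMLError as exc:
    print("invalid config:", exc)
PY
```

The parse error includes the offending line and column, which is usually enough to fix the file without resetting to defaults.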

"Checkpoint not found"

Cause: Trying to rollback to non-existent checkpoint

Solution:

# List available checkpoints
codegenie --list-checkpoints

# Rollback to specific checkpoint
codegenie --rollback checkpoint_id

"Agent timeout"

Cause: Agent took too long to respond

Solution:

agents:
  timeout: 300  # Increase timeout to 5 minutes

Debug Mode

Enable debug mode for detailed troubleshooting:

codegenie --debug

Debug output includes:

  • Model requests and responses
  • Agent communication
  • File operations
  • Performance metrics
  • Error stack traces

Log Files

Check log files for detailed error information:

# Main log
tail -f ~/.codegenie/logs/codegenie.log

# Agent logs
tail -f ~/.codegenie/logs/agents.log

# Performance logs
tail -f ~/.codegenie/logs/performance.log

# Error logs
tail -f ~/.codegenie/logs/errors.log
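To scan all of the logs above for recent errors in one pass (a convenience sketch, not a built-in command):

```shell
# Collect the last 20 error lines across every CodeGenie log file.
matches=$(grep -ih "error" "$HOME"/.codegenie/logs/*.log 2>/dev/null | tail -n 20)
if [ -n "$matches" ]; then
  printf '%s\n' "$matches"
else
  echo "no errors found in $HOME/.codegenie/logs"
fi
```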

Getting Help

If you can't resolve the issue:

  1. Check documentation:

    • User Guide: docs/USER_GUIDE.md
    • API Reference: docs/API_REFERENCE.md
  2. Search existing issues:

  3. Create a bug report:

    • Include error messages
    • Attach relevant logs
    • Describe steps to reproduce
  4. Join community:

  5. Contact support:

System Information

When reporting issues, include:

# Generate system info
codegenie --system-info

# Output includes:
# - OS and version
# - Python version
# - CodeGenie version
# - Ollama version
# - Installed models
# - Configuration summary

Emergency Recovery

If CodeGenie is completely broken:

# 1. Stop all processes
pkill -f codegenie

# 2. Backup current state
cp -r ~/.codegenie ~/.codegenie.backup

# 3. Reset to defaults
rm -rf ~/.codegenie
rm -rf ~/.cache/codegenie

# 4. Reinstall
pip uninstall codegenie
pip install codegenie

# 5. Restart
codegenie

Performance Tuning

Optimize CodeGenie performance:

# ~/.config/codegenie/config.yaml

# Use faster models for simple tasks
models:
  default: "llama3.1:8b"
  simple_tasks: "codellama:7b"
  
# Enable aggressive caching
cache:
  enabled: true
  aggressive: true
  ttl: 7200
  
# Optimize execution
execution:
  parallel_execution: true
  max_concurrent_tasks: 4
  
# Reduce context
context:
  max_history: 10
  max_file_size: "1MB"
  
# Performance monitoring
monitoring:
  enabled: true
  metrics_interval: 60