CodeGenie is an advanced AI coding agent that helps developers build software faster and better. It uses local AI models (via Ollama) to provide intelligent code generation, autonomous development workflows, multi-agent collaboration, and comprehensive code intelligence—all while maintaining complete privacy.
CodeGenie stands out with:
- Complete Privacy: Runs locally with Ollama, no data sent to cloud
- Autonomous Workflows: Can complete complex multi-step tasks independently
- Multi-Agent System: Specialized agents for architecture, security, performance, etc.
- Advanced Code Intelligence: Deep understanding of your codebase
- Learning System: Adapts to your coding style and preferences
- Open Source: Free and customizable
No! CodeGenie runs completely offline once you have:
- Ollama installed
- AI models downloaded
- CodeGenie installed
You only need internet to initially download models and install CodeGenie.
No. All processing happens locally on your machine. Your code never leaves your computer.
Minimum:
- Python 3.9+
- 8GB RAM
- 10GB free disk space
- Ollama installed
Recommended:
- Python 3.10+
- 16GB+ RAM
- 50GB free disk space (for multiple models)
- GPU with 8GB+ VRAM (optional, for faster inference)
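A quick way to sanity-check the Python and disk-space requirements above (RAM is easiest to check with your OS tools):

```python
import shutil
import sys

# Verify the minimum Python version stated above
assert sys.version_info >= (3, 9), "CodeGenie needs Python 3.9+"

# Check free disk space on the current directory's filesystem
free_gb = shutil.disk_usage(".").free / 1e9
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: OK")
print(f"Free disk: {free_gb:.1f} GB ({'OK' if free_gb >= 10 else 'below the 10GB minimum'})")
```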
For most users:
- `llama3.1:8b` - General purpose, good balance
- `codellama:7b` - Code-specific tasks
For better performance (requires more RAM):
- `llama3.1:70b` - Complex reasoning
- `deepseek-coder:33b` - Advanced code generation
For limited resources:
- `llama3.1:7b` - Smaller, faster
- `codellama:7b` - Code-specific, efficient
```bash
pip install --upgrade codegenie
```

Yes! CodeGenie works with any project. Just navigate to your project directory and run:

```bash
cd /path/to/your/project
codegenie
```

```bash
# In your project directory
codegenie

# Or specify a path
codegenie /path/to/project

# With specific model
codegenie --model llama3.1:70b
```

Almost anything related to software development:
- Generate code
- Debug errors
- Refactor code
- Write tests
- Create documentation
- Design architecture
- Review security
- Optimize performance
- Explain code
- And much more!
You: /autonomous on
You: Build a complete REST API with authentication
CodeGenie will break down the task and execute it step-by-step with minimal supervision.
Yes! Press Ctrl+C at any time to pause execution. You can then:
- Review what's been done
- Make changes
- Continue execution
- Rollback changes
Prefix your request with the agent name:
You: @architect Design the system architecture
You: @security Review this code for vulnerabilities
You: @performance Optimize these database queries
You: @documentation Generate API docs
You: /undo
Or rollback to a specific checkpoint:
You: /rollback checkpoint_id
CodeGenie supports all major programming languages:
- Python, JavaScript, TypeScript
- Java, C#, C++, C
- Go, Rust, Ruby, PHP
- Swift, Kotlin, Scala
- And many more!
Yes! CodeGenie understands popular frameworks:
- Web: FastAPI, Django, Flask, Express, React, Vue, Angular
- Mobile: React Native, Flutter, Swift UI
- Desktop: Electron, Qt, Tkinter
- Data: Pandas, NumPy, TensorFlow, PyTorch
- And many more!
Yes! CodeGenie can:
- Generate unit tests
- Create integration tests
- Write end-to-end tests
- Generate test data
- Create test fixtures
- Run tests and fix failures
Absolutely! CodeGenie can:
- Analyze error messages
- Find root causes
- Suggest fixes
- Implement fixes
- Verify fixes work
- Explain what went wrong
Yes! CodeGenie:
- Analyzes your project structure
- Understands your patterns and conventions
- Learns your coding style
- Maintains context across sessions
- Tracks project evolution
Yes! CodeGenie can:
- Design database schemas
- Create migrations
- Optimize queries
- Add indexes
- Design relationships
- Suggest improvements
Common causes:
- Large model: Try a smaller model (8b instead of 70b)
- Limited RAM: Close other applications
- No GPU: CPU inference is slower
- Large context: Reduce context length in config
- Use smaller models for simple tasks
- Enable caching in configuration
- Use GPU if available
- Reduce context length
- Close unnecessary applications
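The caching and context-length suggestions above would be set in the configuration file. As a sketch only: the key names below are illustrative assumptions, not a documented schema, so check your version's configuration reference before copying.

```yaml
# Illustrative tuning values — key names are assumptions
models:
  default: "llama3.1:8b"   # smaller default model for routine tasks
performance:
  context_length: 4096     # shorter context lowers RAM use
  cache_enabled: true      # reuse responses for repeated queries
```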
- 8GB: Can run 7b-8b models (basic usage)
- 16GB: Can run 13b models comfortably
- 32GB+: Can run 70b models
Yes! If you have an NVIDIA GPU:
- Install CUDA toolkit
- Ollama will automatically use GPU
- Much faster inference (5-10x)
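To see whether the NVIDIA tooling is visible on your system, a small check (this only confirms the driver utilities are installed, not that Ollama is actually using the GPU):

```python
import shutil

# Look for the NVIDIA driver utility on PATH; Ollama picks up the GPU
# automatically when a working CUDA-capable driver is present
if shutil.which("nvidia-smi"):
    print("nvidia-smi found: GPU driver tooling is installed")
else:
    print("nvidia-smi not found: Ollama will fall back to CPU inference")
```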
- 7b models: ~4GB
- 8b models: ~5GB
- 13b models: ~8GB
- 33b models: ~20GB
- 70b models: ~40GB
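Using the approximate sizes above, you can estimate the disk space a planned set of models will need (actual sizes vary by quantization and version):

```python
# Approximate on-disk sizes (GB) taken from the list above
MODEL_SIZES_GB = {
    "codellama:7b": 4,
    "llama3.1:8b": 5,
    "deepseek-coder:33b": 20,
    "llama3.1:70b": 40,
}

planned = ["llama3.1:8b", "codellama:7b", "deepseek-coder:33b"]
total = sum(MODEL_SIZES_GB[m] for m in planned)
print(f"Planned models need roughly {total} GB")  # → roughly 29 GB
```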
Global config: ~/.config/codegenie/config.yaml
Project config: .codegenie.yaml in project root
Edit `~/.config/codegenie/config.yaml`:

```yaml
models:
  default: "llama3.1:8b"
```

Yes! You can customize:
- Model preferences
- Coding style preferences
- Autonomous mode settings
- Agent behavior
- UI preferences
- And much more!
See the User Guide for details.
CodeGenie learns automatically, but you can also:
- Provide feedback:
You: Use type hints for all functions
You: Prefer async/await over callbacks
- Configure preferences:
```yaml
coding_style:
  type_hints: true
  async_preferred: true
  docstring_style: "google"
```

- Correct mistakes:
You: Use argon2 instead of bcrypt for passwords
CodeGenie will remember and apply your preferences.
- Check Ollama is running: `ollama list`
- Check Python version: `python --version` (need 3.9+)
- Reinstall: `pip uninstall codegenie && pip install codegenie`
- Check logs: `~/.codegenie/logs/codegenie.log`
```bash
# List installed models
ollama list

# Install missing model
ollama pull llama3.1:8b
```

- Use smaller model: `codegenie --model llama3.1:8b`
- Close other applications
- Reduce context length in config
- Restart your computer
- Provide more context:
You: This is a FastAPI project using PostgreSQL. Create a user endpoint.
- Review and correct:
You: The validation is incorrect. Email should use regex pattern.
- Ask for explanation:
You: Explain why you chose this approach
- Provide explicit instructions:
You: Use PostgreSQL, not SQLite. Use bcrypt for passwords.
- Use specialized agents:
You: @security Review this and fix any issues
- Enable intervention points:
```yaml
autonomous:
  intervention_points: true
```

Yes! CodeGenie:
- Runs completely locally
- Never sends code to external servers
- Stores data only on your machine
- Encrypts sensitive data
- Configuration: `~/.config/codegenie/`
- Cache: `~/.cache/codegenie/`
- Logs: `~/.codegenie/logs/`
- Session data: `~/.codegenie/sessions/`
Yes:
```bash
# Delete all CodeGenie data
rm -rf ~/.config/codegenie
rm -rf ~/.cache/codegenie
rm -rf ~/.codegenie
```

No. CodeGenie does not collect any usage data or telemetry by default. You can optionally enable anonymous usage statistics to help improve CodeGenie, but this is opt-in.
Yes! See the Advanced Guide for details.
Yes! CodeGenie provides:
- VS Code extension
- IntelliJ plugin
- Vim plugin
- Language Server Protocol support
Yes! CodeGenie can:
- Review pull requests
- Run automated checks
- Generate reports
- Suggest improvements
Yes! CodeGenie supports:
- Shared knowledge bases
- Team configurations
- Collaborative workflows
- Code review automation
Yes! CodeGenie has a plugin system. See Plugin Development Guide.
Yes! CodeGenie is open source and free to use under the MIT license.
Yes! The MIT license allows commercial use.
- Documentation: Check docs/ directory
- GitHub Issues: Report bugs or request features
- Community Forum: Ask questions
- Discord: Chat with other users
- Email: support@codegenie.dev
We welcome contributions!
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
See CONTRIBUTING.md for details.
GitHub Issues: https://github.com/your-org/codegenie/issues
Please include:
- Error message
- Steps to reproduce
- System information
- Relevant logs
| Feature | CodeGenie | GitHub Copilot |
|---|---|---|
| Privacy | ✅ Local | ❌ Cloud |
| Cost | ✅ Free | ❌ Paid |
| Autonomous Mode | ✅ Yes | ❌ No |
| Multi-Agent | ✅ Yes | ❌ No |
| Code Intelligence | ✅ Advanced | |
| Learning | ✅ Adaptive | |
| Feature | CodeGenie | ChatGPT/Claude |
|---|---|---|
| Code Execution | ✅ Yes | ❌ No |
| Project Context | ✅ Full | |
| File Operations | ✅ Yes | ❌ No |
| Autonomous Workflows | ✅ Yes | ❌ No |
| Privacy | ✅ Local | ❌ Cloud |
| Specialized Agents | ✅ Yes | ❌ No |
| Feature | CodeGenie | Cursor |
|---|---|---|
| Privacy | ✅ Local | ❌ Cloud |
| Cost | ✅ Free | ❌ Paid |
| Multi-Agent | ✅ Yes | ❌ No |
| Autonomous Mode | ✅ Yes | |
| IDE Integration | ✅ Multiple | ✅ Built-in |
- Be specific: Provide clear, detailed requirements
- Provide context: Mention framework, language, constraints
- Iterate: Refine through conversation
- Use agents: Leverage specialized agents
- Review: Always review generated code
- Provide feedback: Help CodeGenie learn
Good:
Create a FastAPI endpoint for user registration with email validation,
password hashing using bcrypt, and PostgreSQL storage. Include input
validation and error handling.
Less effective:
Make a user endpoint
- Start small: Begin with simple tasks
- Build incrementally: Add features one at a time
- Test frequently: Verify each change works
- Use autonomous mode: For complex, well-defined tasks
- Review regularly: Check generated code
- Provide feedback: Correct mistakes immediately
Good for:
- Well-defined features
- Repetitive tasks
- Standard implementations
- Complete workflows
Not ideal for:
- Exploratory work
- Novel solutions
- Critical systems (without review)
- Learning new concepts
- Check the User Guide
- Read the Tutorials
- Visit the Community Forum
- Join our Discord
- Email us: support@codegenie.dev