Thank you for your interest in contributing! This document covers development setup, testing, and our contribution workflow.
```bash
# Clone and install
git clone https://github.com/christopher-igweze/python-odoo-mcp.git
cd python-odoo-mcp
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt

# Run the server
python -m src.server
# Server starts on http://localhost:3000
```

Or run with Docker:

```bash
docker-compose up --build
# Server starts on http://localhost:3000
```

The project includes comprehensive unit and integration tests using pytest:
```bash
# Run all tests
pytest tests/

# Run with coverage report
pytest tests/ --cov=src --cov-report=html --cov-report=term-missing

# Run specific test file
pytest tests/unit/test_encryption.py -v

# Run tests matching pattern
pytest -k "scope_validator" -v

# Run with markers
pytest -m unit         # Unit tests only
pytest -m integration  # Integration tests only
pytest -m slow         # Slow/e2e tests
```

The test suite is organized as follows:

```
tests/
├── unit/                          # Unit tests (no external dependencies)
│   ├── test_config.py             # Config management & encryption key validation
│   ├── test_encryption.py         # Credential encryption/decryption
│   ├── test_scope_validator.py    # Scope parsing and permission checking
│   ├── test_odoo_client.py        # OdooClient read/write/delete operations
│   ├── test_connection_pool.py    # Connection pooling and caching
│   ├── test_connection_manager.py # Connection authentication and pooling
│   ├── test_header_parser.py      # X-Auth-Credentials header parsing
│   └── test_tools.py              # Tool implementations with mocking
├── integration/                   # Integration tests (with app instance)
│   ├── test_auth_endpoints.py     # /auth/generate and /auth/validate
│   ├── test_health.py             # Health check endpoints
│   └── test_tools_endpoints.py    # /tools/list and /tools/call
└── conftest.py                    # Shared pytest fixtures
```
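To illustrate what the scope-validation tests exercise, here is a minimal, hypothetical sketch of wildcard scope checking; the real `scope_validator` module's scope format and API may well differ:

```python
# Hypothetical scope checker, for illustration only; the project's real
# scope_validator module may use a different format and API.

def parse_scopes(scope_string: str) -> dict[str, set[str]]:
    """Parse e.g. 'read:res.partner write:*' into {operation: {models}}."""
    scopes: dict[str, set[str]] = {}
    for item in scope_string.split():
        op, _, model = item.partition(":")
        scopes.setdefault(op, set()).add(model)
    return scopes

def is_allowed(scopes: dict[str, set[str]], op: str, model: str) -> bool:
    """A wildcard '*' grants the operation on every model."""
    models = scopes.get(op, set())
    return "*" in models or model in models
```

A unit test in this style would assert, for example, that `write:*` permits writing any model while `read:res.partner` denies reading other models.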
Current Status:

- Overall coverage: 75.37% ✅ (target: 75%+)
- 142 tests passing
- High coverage areas:
  - Authentication & encryption: 89%
  - Scope validation: 89%
  - Connection pool: 100%
  - Header parser: 100%
- Lower coverage areas:
  - Tool error paths: 82%
  - Odoo client integration: 88%
  - Server endpoints: 30% (requires live Odoo)
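As background for the fully covered pooling code, here is a toy sketch of the pattern such tests typically exercise (hypothetical; not the project's actual implementation):

```python
import queue

class SimplePool:
    """Toy connection pool: hand out idle connections before creating new ones."""

    def __init__(self, factory, maxsize: int = 4):
        self._factory = factory            # callable that opens a new connection
        self._idle = queue.Queue(maxsize)  # bounded store of reusable connections

    def acquire(self):
        try:
            return self._idle.get_nowait()  # reuse an idle connection if available
        except queue.Empty:
            return self._factory()          # otherwise open a fresh one

    def release(self, conn):
        try:
            self._idle.put_nowait(conn)     # keep the connection for reuse
        except queue.Full:
            pass                            # pool is full: drop the connection
```

A pooling test would typically assert that a released connection is handed back on the next `acquire` rather than the factory being called again.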
Coverage Monitoring:
This project uses Codecov for continuous coverage tracking:
- View detailed coverage dashboard: https://codecov.io/gh/christopher-igweze/python-odoo-mcp
- Coverage reports generated automatically on every push via GitHub Actions
- Pull requests show coverage impact
Understanding Coverage Reports:
- Line Coverage - Percentage of source code lines executed during tests
- Branch Coverage - All code paths (if/else) tested
- Uncovered Lines - Listed with file paths for targeting improvement
- Trending - See if coverage increases or decreases with each commit
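The line-versus-branch distinction can be made concrete with a one-line function where a single test gives 100% line coverage but leaves a branch untested (illustrative example, not project code):

```python
def is_admin(user: dict) -> bool:
    # A single test with user = {"role": "admin"} executes this line, so line
    # coverage is 100%, but the short-circuited `user.get("superuser")` branch
    # is never evaluated; branch coverage reports the untested path.
    return user.get("role") == "admin" or bool(user.get("superuser"))
```

Running coverage with branch measurement enabled (`--cov-branch` for pytest-cov) surfaces such partially tested conditions.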
Generate Local Reports:

```bash
# Run tests with coverage
pytest tests/ --cov=src --cov-report=html --cov-report=term-missing

# View in browser
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
```

You can also smoke-test a running server directly:

```bash
curl http://localhost:3000/health
curl -X POST http://localhost:3000/tools/list
```

We provide a test script for quick integration testing:

```bash
./test_server.sh
```

We follow the micro-commit philosophy - each commit should be a single, focused change:
```bash
# ✅ GOOD: One focused change
git commit -m "🔧 fix(scope): handle wildcard permission override"

# ✅ GOOD: Another focused change
git commit -m "✨ feat(tools): add search_count operation"

# ❌ AVOID: Multiple unrelated changes
git commit -m "🔧 fix multiple bugs and refactor"
```

Use the glassbear emoji + conventional commit format:

```
{emoji} {type}({scope}): {description}
```
Emoji Guide:

- ✨ `feat` - New feature
- 🐛 `fix` - Bug fix
- 🧪 `test` - Tests or test improvements
- 📝 `docs` - Documentation changes
- 🔧 `chore` - Configuration, dependencies, non-code changes
- 🎨 `style` - Code style/formatting (no logic change)
- ♻️ `refactor` - Code refactoring (no feature/fix)
Examples:

```
✨ feat(scope): add support for explicit permission denial
🐛 fix(connection): handle disconnection in pool
🧪 test(tools): add comprehensive error handling tests
📝 docs(readme): update n8n integration guide
🔧 chore(deps): upgrade cryptography library
```
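For illustration, the format can be checked mechanically. This is a hypothetical validator, not something the project ships; the type list comes from the emoji guide above:

```python
import re

# Hypothetical commit-message validator for the
# "{emoji} {type}({scope}): {description}" format.
COMMIT_RE = re.compile(
    r"^\S+ "                                      # leading emoji
    r"(feat|fix|test|docs|chore|style|refactor)"  # type from the emoji guide
    r"\([a-z0-9_-]+\): "                          # (scope):
    r".+$"                                        # description
)

def is_valid_commit(message: str) -> bool:
    return COMMIT_RE.match(message) is not None
```

Such a check could run in a `commit-msg` git hook to reject malformed messages before they land.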
```bash
# Check formatting
black --check src tests

# Format code
isort src tests
black src tests

# All together
./test_server.sh  # runs checks before tests
```

1. Branch from main:

   ```bash
   git checkout main
   git pull origin main
   git checkout -b feature/your-feature-name
   ```

2. Make focused commits:

   ```bash
   git commit -m "✨ feat(feature): implement thing"
   git commit -m "🧪 test(feature): add tests"
   ```

3. Push and create PR:

   ```bash
   git push origin feature/your-feature-name
   ```

4. PR should include:
   - Clear description of what changed and why
   - Link to any related issues
   - Test coverage for new code
   - Updated documentation if needed
When implementing new features:

1. Write tests alongside implementation
   - Don't write code first and add tests later
   - Use a TDD approach when possible

2. Aim for 80%+ coverage on new code
   - Run `pytest --cov` to see your coverage
   - Target maintaining 75%+ overall coverage

3. Before pushing:

   ```bash
   pytest tests/ --cov=src --cov-report=term-missing
   # Check that your new code is tested
   ```

4. After pushing:
   - Codecov will comment on your PR with coverage impact
   - Address any coverage regressions before merging
Before starting:

- Create an issue describing the feature
- Discuss API design if adding endpoints
- Consider test approach before coding
```bash
# Feature branch
git checkout -b feature/my-new-feature
```

Write the test first (TDD approach):

```python
# File: tests/unit/test_my_feature.py
class TestMyFeature:
    def test_something(self):
        assert True
```

Implement the feature in `src/my_module.py`, then run the tests and check coverage:

```bash
# Run tests to verify
pytest tests/unit/test_my_feature.py -v

# Check coverage
pytest --cov=src tests/unit/test_my_feature.py
```

Update documentation as needed:

```bash
# Update README if adding public features
# Update docstrings in code
git commit -m "📝 docs(readme): document new feature"
```

Include in PR description:
- What does this add?
- Why was it needed?
- How does someone use it?
- Any breaking changes?
Enable debug logging:

```bash
# Set environment variable
export LOG_LEVEL=DEBUG

# Run server
python -m src.server
```

Run a focused subset of tests:

```bash
# Test only authentication
pytest tests/unit/test_encryption.py -v

# Test with print statements
pytest tests/unit/test_something.py -v -s
```

Drop into the debugger:

```python
# In your test or code
import pdb; pdb.set_trace()
```

```bash
# Then run pytest with -s flag
pytest tests/ -s
```

Solution: Make sure you're running pytest from the repo root:

```bash
cd /path/to/python-odoo-mcp
pytest tests/
```

Solution: Make sure pytest-cov is installed:

```bash
pip install pytest-cov
```

Solution: Kill the process or use a different port:

```bash
# Kill process using port 3000
lsof -i :3000
kill -9 <PID>

# Or use a different port
PORT=3001 python -m src.server
```

We track performance metrics. To test local performance:
```bash
# Time test execution
time pytest tests/integration/

# Profile specific test
pytest tests/integration/test_tools_endpoints.py --profile

# Check connection pool efficiency
pytest tests/unit/test_connection_pool.py -v -s
```

Getting help:

- Questions? Open an issue with the `[question]` label
- Bug reports? Include: Python version, error message, steps to reproduce
- Feature requests? Describe use case and expected behavior
By contributing, you agree that your contributions will be licensed under the MIT License.
Thank you for contributing to make Python Odoo MCP Server better! 🙏