This guide covers the end-to-end (E2E) testing suite for all MCP (Model Context Protocol) tools in the GitHub Project Manager.
The E2E test suite covers:
- 40+ GitHub Project Management Tools - Complete CRUD operations for projects, milestones, issues, sprints, etc.
- 8 AI Task Management Tools - PRD generation, task parsing, complexity analysis, etc.
- Complex Workflow Integration - Multi-tool workflows and real-world scenarios
- Real API Testing - Optional testing with actual GitHub and AI APIs
```
src/__tests__/e2e/tools/
├── github-project-tools.e2e.ts         # GitHub project management tools
├── ai-task-tools.e2e.ts                # AI-powered task management tools
├── tool-integration-workflows.e2e.ts   # Complex multi-tool workflows
└── utils/
    └── MCPToolTestUtils.ts             # Test utilities and helpers
```
Tests all GitHub-related MCP tools:
Project Tools:
- `create_project`, `list_projects`, `get_project`, `update_project`, `delete_project`
- `create_project_field`, `list_project_fields`, `update_project_field`
- `create_project_view`, `list_project_views`, `update_project_view`
- `add_project_item`, `remove_project_item`, `list_project_items`
- `set_field_value`, `get_field_value`
Milestone Tools:
`create_milestone`, `list_milestones`, `update_milestone`, `delete_milestone`
Issue Tools:
`create_issue`, `list_issues`, `get_issue`, `update_issue`
Sprint Tools:
`create_sprint`, `list_sprints`, `get_current_sprint`, `update_sprint`, `add_issues_to_sprint`, `remove_issues_from_sprint`
Roadmap and Planning Tools:
`create_roadmap`, `plan_sprint`, `get_milestone_metrics`, `get_sprint_metrics`, `get_overdue_milestones`, `get_upcoming_milestones`
Label Tools:
`create_label`, `list_labels`
Tests all AI-powered MCP tools:
PRD Generation:
- `generate_prd` - Generate Product Requirements Documents from ideas
- `enhance_prd` - Enhance existing PRDs with additional details
- `parse_prd` - Parse PRDs and generate tasks
Task Management:
- `get_next_task` - Get task recommendations for teams
- `analyze_task_complexity` - Analyze and score task complexity
- `expand_task` - Break down tasks into subtasks
Feature Management:
- `add_feature` - Add new features to existing projects
Requirements Traceability:
- `create_traceability_matrix` - Create comprehensive traceability matrices
Tests complex workflows combining multiple tools:
Complete Project Setup Workflow:
- Generate PRD from project idea
- Create GitHub project
- Parse PRD to generate tasks
- Create milestones and issues
- Plan sprints with task assignments
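The five steps above chain the output of one tool into the next. A minimal sketch of that chaining, with `callTool` standing in for `MCPToolTestUtils.callTool` (the argument names and response shapes used here, such as `prd.content` and `tasks.items`, are illustrative assumptions, not the server's actual schema):

```typescript
// Sketch of the project-setup workflow as chained tool calls.
// Tool names come from this suite; payload shapes are assumed for illustration.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<any>;

async function runProjectSetup(callTool: ToolCall) {
  // 1. Generate a PRD from the project idea
  const prd = await callTool('generate_prd', { idea: 'Team task tracker' });
  // 2. Create the GitHub project
  const project = await callTool('create_project', { title: 'Task Tracker', visibility: 'private' });
  // 3. Parse the PRD into tasks
  const tasks = await callTool('parse_prd', { prdContent: prd.content });
  // 4. Create a milestone and one issue per parsed task
  const milestone = await callTool('create_milestone', { title: 'MVP' });
  for (const task of tasks.items) {
    await callTool('create_issue', { title: task.title, milestoneId: milestone.id });
  }
  // 5. Plan a sprint over the new issues
  return callTool('plan_sprint', { projectId: project.id });
}
```

In mock mode the same function can be driven by a stubbed `callTool`, which is also a convenient way to assert call ordering in a workflow test.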
AI-Enhanced Project Management:
- Enhance PRDs with technical details
- Add new features dynamically
- Generate comprehensive traceability matrices
- Optimize task recommendations
Metrics and Monitoring:
- Track milestone progress
- Monitor sprint performance
- Identify overdue items
- Generate team recommendations
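Conceptually, the overdue check behind `get_overdue_milestones` reduces to a date comparison on open milestones. A self-contained sketch (the `Milestone` shape here is an assumption for illustration, not the tool's real response type):

```typescript
// Assumed milestone shape for this sketch.
interface Milestone {
  title: string;
  dueOn: string;                 // ISO 8601 due date
  state: 'open' | 'closed';
}

// A milestone is overdue if it is still open and its due date has passed.
function findOverdueMilestones(milestones: Milestone[], now: Date = new Date()): Milestone[] {
  return milestones.filter(m => m.state === 'open' && new Date(m.dueOn) < now);
}
```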
```bash
# Run all E2E tool tests with mocked APIs
npm run test:e2e:tools

# Run specific test categories
npm run test:e2e:tools:github      # GitHub tools only
npm run test:e2e:tools:ai          # AI tools only
npm run test:e2e:tools:workflows   # Integration workflows only

# Run all E2E tool tests with real APIs
npm run test:e2e:tools:real

# Run specific categories with real APIs
npm run test:e2e:tools:real:github
npm run test:e2e:tools:real:ai
npm run test:e2e:tools:real:workflows

# Run complete test suite (unit + integration + E2E)
npm run test:all:real
```

For GitHub API Testing:
```bash
GITHUB_TOKEN=ghp_your_github_token
GITHUB_OWNER=your-github-username
GITHUB_REPO=your-test-repository
```

For AI API Testing:

```bash
# At least one AI API key is required
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
OPENAI_API_KEY=sk-your-openai-key
GOOGLE_API_KEY=your-google-ai-key
PERPLEXITY_API_KEY=pplx-your-perplexity-key

# AI model configuration (optional)
AI_MAIN_MODEL=claude-3-5-sonnet-20241022
AI_RESEARCH_MODEL=perplexity-llama-3.1-sonar-large-128k-online
AI_FALLBACK_MODEL=gpt-4o
AI_PRD_MODEL=claude-3-5-sonnet-20241022
```

For Real API Testing:

```bash
E2E_REAL_API=true # Enable real API calls
```

For real API testing, your GitHub token needs these permissions:
- `repo` (full repository access)
- `project` (full project access)
- `workflow` (workflow access)
- `write:org` (organization write access)
- `admin:org` (organization admin access)
- ✅ 40+ GitHub Tools - Complete CRUD operations
- ✅ 8 AI Tools - Full AI workflow testing
- ✅ Schema Validation - Argument validation for all tools
- ✅ Error Handling - Graceful error handling and recovery
- ✅ Real API Integration - Optional real API testing
- ✅ Tool Registration Validation - Verify all tools are properly registered
- ✅ Schema Compliance - Validate tool schemas match MCP specification
- ✅ Response Format Validation - Ensure responses follow expected formats
- ✅ Workflow Integration - Test complex multi-tool workflows
- ✅ Performance Testing - Monitor tool execution performance
- ✅ Credential Management - Graceful handling of missing credentials
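The Performance Testing item can be as little as a timing wrapper around each tool call. A hedged sketch (not the suite's actual instrumentation), logging to stderr to match the suite's logging channel:

```typescript
// Wrap any async tool call and record its wall-clock duration.
// This is an illustrative helper, not part of MCPToolTestUtils.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = Date.now();
  const result = await fn();
  const ms = Date.now() - start;
  console.error(`[perf] ${label}: ${ms}ms`);  // stderr keeps stdout clean for MCP traffic
  return { result, ms };
}
```

Usage would look like `const { result, ms } = await timed('create_project', () => utils.callTool('create_project', args));`.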
The MCPToolTestUtils class provides:
- Server Lifecycle Management - Start/stop MCP server for testing
- Tool Execution - Call tools through actual MCP interface
- Response Validation - Validate tool responses and formats
- Error Testing - Test tool validation and error handling
- Test Data Generation - Generate realistic test data
- Credential Detection - Skip tests when credentials are missing
- Test Timeout: 60 seconds for comprehensive E2E tests
- Concurrency: Sequential execution to avoid conflicts
- Coverage: Disabled for E2E tests (focused on integration)
- Reporting: JUnit XML reports for CI/CD integration
- Mock Mode: Default - uses mocked APIs for fast, reliable testing
- Real API Mode: Optional - uses actual GitHub and AI APIs
- Credential Detection: Automatically skips tests when credentials are missing
- Graceful Degradation: Continues testing even when some services are unavailable
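Credential detection amounts to checking the environment variables listed earlier. A sketch of such a guard (the helper name and exact logic are hypothetical; the real suite implements this inside `MCPToolTestUtils`):

```typescript
// Real API mode requires the opt-in flag, full GitHub credentials,
// and at least one AI provider key. Variable names are those from this guide.
function hasRealApiCredentials(env: Record<string, string | undefined>): boolean {
  const githubReady = Boolean(env.GITHUB_TOKEN && env.GITHUB_OWNER && env.GITHUB_REPO);
  const aiReady = Boolean(
    env.ANTHROPIC_API_KEY || env.OPENAI_API_KEY || env.GOOGLE_API_KEY || env.PERPLEXITY_API_KEY
  );
  return env.E2E_REAL_API === 'true' && githubReady && aiReady;
}
```

A test file can then call this once at the top and skip its real-API cases when it returns `false`, which is the graceful degradation described above.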
- Start with Mock Tests: Always run mock tests first to verify basic functionality
- Use Real APIs Sparingly: Only use real API tests when necessary
- Set Up Test Repository: Use a dedicated test repository for real API tests
- Monitor Rate Limits: Be aware of GitHub API rate limits during real API testing
- Mock Tests in CI: Run mock tests in all CI builds
- Real API Tests Nightly: Run real API tests on a schedule
- Credential Management: Use secure environment variable management
- Test Reporting: Leverage JUnit XML reports for test result tracking
- Check Credentials: Verify all required environment variables are set
- Review Logs: Check stderr output for detailed error messages
- Test Individual Tools: Use specific test patterns to isolate issues
- Validate Server Build: Ensure `npm run build` completes successfully
```typescript
// Test a specific tool
const utils = new MCPToolTestUtils();
await utils.startServer();

const response = await utils.callTool('create_project', {
  title: 'Test Project',
  visibility: 'private'
});
expect(response.id).toBeDefined();

await utils.stopServer();
```

```typescript
// Test complete workflow
const prdResponse = await utils.callTool('generate_prd', { /* args */ });
const parseResponse = await utils.callTool('parse_prd', {
  prdContent: MCPToolTestUtils.extractContent(prdResponse)
});
const projectResponse = await utils.callTool('create_project', { /* args */ });
```

Server Build Errors:
```bash
npm run build # Ensure server builds successfully
```

Missing Dependencies:

```bash
npm install jest-junit # Install test reporting dependency
```

Permission Errors:
- Verify GitHub token has required permissions
- Check repository access and organization membership
API Rate Limits:
- Use mock tests for frequent testing
- Implement delays between real API calls
- Monitor GitHub API rate limit headers
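One way to implement delays between real API calls is to scale the pause by the remaining budget reported in GitHub's `x-ratelimit-remaining` response header. A sketch with assumed thresholds (tune them to your token's actual limits):

```typescript
// Choose a pause based on how much rate-limit budget is left.
// Thresholds and delays are illustrative assumptions.
function pacingDelayMs(remaining: number, baseMs = 250): number {
  if (remaining > 1000) return 0;       // plenty of budget: no delay
  if (remaining > 100) return baseMs;   // getting low: light throttling
  return baseMs * 10;                   // near the limit: back off hard
}

// Sleep for the computed delay, then run the call.
async function pacedCall<T>(remaining: number, fn: () => Promise<T>): Promise<T> {
  const delay = pacingDelayMs(remaining);
  if (delay > 0) await new Promise(resolve => setTimeout(resolve, delay));
  return fn();
}
```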
- Check Test Logs: Review detailed error messages in test output
- Validate Environment: Ensure all required environment variables are set
- Test Individual Components: Use specific test patterns to isolate issues
- Review Documentation: Check tool-specific documentation for requirements
When adding new tools or modifying existing ones:
- Add Tool Tests: Create comprehensive tests for new tools
- Update Workflows: Include new tools in integration workflows
- Validate Schemas: Ensure tool schemas are properly tested
- Document Changes: Update this guide with new testing procedures
- Test Both Modes: Verify tests work in both mock and real API modes