---
applyTo: '**/test/**'
---
This guide provides comprehensive instructions for AI agents on the complete testing workflow: writing tests, running them, diagnosing failures, and fixing issues. Use this guide whenever working with test files or when users request testing tasks.
This guide covers the full testing lifecycle:
- 📝 Writing Tests - Create comprehensive test suites
- ▶️ Running Tests - Execute tests using VS Code tools
- 🔍 Diagnosing Issues - Analyze failures and errors
- 🛠️ Fixing Problems - Resolve compilation and runtime issues
- ✅ Validation - Ensure coverage and resilience
User Requests Testing:
- "Write tests for this function"
- "Run the tests"
- "Fix the failing tests"
- "Test this code"
- "Add test coverage"
File Context Triggers:
- Working in `**/test/**` directories
- Files ending in `.test.ts` or `.unit.test.ts`
- Test failures or compilation errors
- Coverage reports or test output analysis
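These file-context triggers can be sketched as a simple predicate. The helper name and exact patterns below are illustrative, not part of any tool API:

```typescript
// Hypothetical predicate: does this file path activate the testing workflow?
function isTestContext(filePath: string): boolean {
    // Matches **/test/** directories, plus *.test.ts and *.unit.test.ts files
    // (the second pattern also covers .unit.test.ts, which ends in .test.ts)
    return /(^|\/)test\//.test(filePath) || /\.test\.ts$/.test(filePath);
}
```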
When implementing tests as an AI agent, choose between two main types:
Unit Tests (`*.unit.test.ts`):

- Fast isolated testing - Mock all external dependencies
- Use for: Pure functions, business logic, data transformations
- Execute with: `runTests` tool with specific file patterns
- Mock everything - VS Code APIs automatically mocked via `/src/test/unittests.ts`

Extension Tests (`*.test.ts`):

- Full VS Code integration - Real environment with actual APIs
- Use for: Command registration, UI interactions, extension lifecycle
- Execute with: VS Code launch configurations or `runTests` tool
- Slower but comprehensive - Tests complete user workflows
Use the `runTests` tool to execute tests programmatically:
```typescript
// Run specific test files
await runTests({
    files: ['/absolute/path/to/test.unit.test.ts'],
    mode: 'run',
});

// Run tests with coverage
await runTests({
    files: ['/absolute/path/to/test.unit.test.ts'],
    mode: 'coverage',
    coverageFiles: ['/absolute/path/to/source.ts'],
});

// Run specific test names
await runTests({
    files: ['/absolute/path/to/test.unit.test.ts'],
    testNames: ['should handle edge case', 'should validate input'],
});
```

Before running tests, ensure compilation:
```typescript
// Start watch mode for auto-compilation
await run_in_terminal({
    command: 'npm run watch-tests',
    isBackground: true,
    explanation: 'Start test compilation in watch mode',
});

// Or compile manually
await run_in_terminal({
    command: 'npm run compile-tests',
    isBackground: false,
    explanation: 'Compile TypeScript test files',
});
```

For targeted test runs when the `runTests` tool is unavailable:
```typescript
// Run specific test suite
await run_in_terminal({
    command: 'npm run unittest -- --grep "Suite Name"',
    isBackground: false,
    explanation: 'Run targeted unit tests',
});
```

Compilation Errors:
```typescript
// Missing imports
if (error.includes('Cannot find module')) {
    await addMissingImports(testFile);
}

// Type mismatches
if (error.includes("Type '") && error.includes("' is not assignable")) {
    await fixTypeIssues(testFile);
}
```

Runtime Errors:
```typescript
// Mock setup issues
if (error.includes('stub') || error.includes('mock')) {
    await fixMockConfiguration(testFile);
}

// Assertion failures
if (error.includes('AssertionError')) {
    await analyzeAssertionFailure(error);
}
```

```typescript
interface TestFailureAnalysis {
    type: 'compilation' | 'runtime' | 'assertion' | 'timeout';
    message: string;
    location: { file: string; line: number; col: number };
    suggestedFix: string;
}

function analyzeFailure(failure: TestFailure): TestFailureAnalysis {
    if (failure.message.includes('Cannot find module')) {
        return {
            type: 'compilation',
            message: failure.message,
            location: failure.location,
            suggestedFix: 'Add missing import statement',
        };
    }
    // ... other failure patterns

    // Fallback for unrecognized failures (illustrative default)
    return {
        type: 'runtime',
        message: failure.message,
        location: failure.location,
        suggestedFix: 'Inspect the runtime error output',
    };
}
```

Choose Unit Tests (`*.unit.test.ts`) when analyzing:
- Functions with clear inputs/outputs and no VS Code API dependencies
- Data transformation, parsing, or utility functions
- Business logic that can be isolated with mocks
- Error handling scenarios with predictable inputs
Choose Extension Tests (`*.test.ts`) when analyzing:
- Functions that register VS Code commands or use `vscode.*` APIs
- UI components, tree views, or command palette interactions
- File system operations requiring workspace context
- Extension lifecycle events (activation, deactivation)
Agent Implementation Pattern:
```typescript
function determineTestType(functionCode: string): 'unit' | 'extension' {
    if (
        functionCode.includes('vscode.') ||
        functionCode.includes('commands.register') ||
        functionCode.includes('window.') ||
        functionCode.includes('workspace.')
    ) {
        return 'extension';
    }
    return 'unit';
}
```

As an AI agent, analyze the target function systematically:
```typescript
interface FunctionAnalysis {
    name: string;
    inputs: string[]; // Parameter types and names
    outputs: string; // Return type
    dependencies: string[]; // External modules/APIs used
    sideEffects: string[]; // Logging, file system, network calls
    errorPaths: string[]; // Exception scenarios
    testType: 'unit' | 'extension';
}
```

- Read function source using the `read_file` tool
- Identify imports - look for `vscode.*`, `child_process`, `fs`, etc.
- Map data flow - trace inputs through transformations to outputs
- Catalog dependencies - external calls that need mocking
- Document side effects - logging, file operations, state changes
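The steps above can be illustrated with a hedged example of a filled-in analysis record. The function name, parameters, and details here are hypothetical, not taken from the codebase:

```typescript
// Shape of the analysis record (mirrors the interface above)
interface FunctionAnalysis {
    name: string;
    inputs: string[];
    outputs: string;
    dependencies: string[];
    sideEffects: string[];
    errorPaths: string[];
    testType: 'unit' | 'extension';
}

// Hypothetical analysis of a small path-merging utility
const analysis: FunctionAnalysis = {
    name: 'mergeSearchPaths',
    inputs: ['globalPaths: string[]', 'workspacePaths: string[]'],
    outputs: 'string[]',
    dependencies: ['node:path'],
    sideEffects: ['traceLog call on completion'],
    errorPaths: ['invalid (non-absolute) path entries'],
    testType: 'unit', // no vscode.* usage, so a unit test suffices
};
```

A record like this feeds directly into scenario generation: each entry in `errorPaths` and `sideEffects` becomes at least one test scenario.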
For unit tests:

```typescript
// Mock VS Code APIs - handled automatically by unittests.ts
import * as sinon from 'sinon';
import * as workspaceApis from '../../common/workspace.apis'; // Wrapper functions

// Stub wrapper functions, not VS Code APIs directly
const mockGetConfiguration = sinon.stub(workspaceApis, 'getConfiguration');
```

For extension tests:

```typescript
// Use real VS Code APIs
import * as vscode from 'vscode';

// Real VS Code APIs available - no mocking needed
const config = vscode.workspace.getConfiguration('python');
```

Based on function analysis, automatically generate comprehensive test scenarios:
```typescript
interface TestScenario {
    category: 'happy-path' | 'edge-case' | 'error-handling' | 'side-effects';
    description: string;
    inputs: Record<string, any>;
    expectedOutput?: any;
    expectedSideEffects?: string[];
    shouldThrow?: boolean;
}
```

- Happy Path: Normal execution with typical inputs
- Edge Cases: Boundary conditions, empty/null inputs, unusual but valid data
- Error Scenarios: Invalid inputs, dependency failures, exception paths
- Side Effects: Verify logging calls, file operations, state changes
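As a concrete sketch of the edge-case category, a generator for a single string parameter might enumerate empty, whitespace, and missing values. The helper name and the specific edge values are illustrative assumptions:

```typescript
// Minimal scenario shape (subset of the TestScenario interface above)
interface TestScenario {
    category: 'happy-path' | 'edge-case' | 'error-handling' | 'side-effects';
    description: string;
    inputs: Record<string, unknown>;
    shouldThrow?: boolean;
}

// Hypothetical helper: edge-case scenarios for one string parameter
function edgeCasesForStringParam(paramName: string): TestScenario[] {
    const values: Array<[string, unknown]> = [
        ['empty string', ''],
        ['whitespace only', '   '],
        ['undefined value', undefined],
    ];
    return values.map<TestScenario>(([description, value]) => ({
        category: 'edge-case',
        description: `${paramName}: ${description}`,
        inputs: { [paramName]: value },
    }));
}
```

The same pattern extends to arrays (empty array, single element, duplicates) and objects (missing optional fields).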
```typescript
function generateTestScenarios(analysis: FunctionAnalysis): TestScenario[] {
    const scenarios: TestScenario[] = [];

    // Generate happy path for each input combination
    scenarios.push(...generateHappyPathScenarios(analysis));

    // Generate edge cases for boundary conditions
    scenarios.push(...generateEdgeCaseScenarios(analysis));

    // Generate error scenarios for each dependency
    scenarios.push(...generateErrorScenarios(analysis));

    return scenarios;
}
```

- ✅ Happy path scenarios - normal expected usage
- ✅ Alternative paths - different configuration combinations
- ✅ Integration scenarios - multiple features working together
- 🔸 Boundary conditions - empty inputs, missing data
- 🔸 Error scenarios - network failures, permission errors
- 🔸 Data validation - invalid inputs, type mismatches
- ✅ Fresh install - clean slate
- ✅ Existing user - migration scenarios
- ✅ Power user - complex configurations
- 🔸 Error recovery - graceful degradation
## Test Categories
### 1. Configuration Migration Tests
- No legacy settings exist
- Legacy settings already migrated
- Fresh migration needed
- Partial migration required
- Migration failures
### 2. Configuration Source Tests
- Global search paths
- Workspace search paths
- Settings precedence
- Configuration errors
### 3. Path Resolution Tests
- Absolute vs relative paths
- Workspace folder resolution
- Path validation and filtering
### 4. Integration Scenarios
- Combined configurations
- Deduplication logic
- Error handling flows

```typescript
// 1. Imports - group logically
import assert from 'node:assert';
import * as sinon from 'sinon';
import { Uri } from 'vscode';
import * as logging from '../../../common/logging';
import * as pathUtils from '../../../common/utils/pathUtils';
import * as workspaceApis from '../../../common/workspace.apis';

// 2. Function under test
import { getAllExtraSearchPaths } from '../../../managers/common/nativePythonFinder';

// 3. Mock interfaces
interface MockWorkspaceConfig {
    get: sinon.SinonStub;
    inspect: sinon.SinonStub;
    update: sinon.SinonStub;
}
```

```typescript
suite('Function Integration Tests', () => {
    // 1. Declare all mocks
    let mockGetConfiguration: sinon.SinonStub;
    let mockGetWorkspaceFolders: sinon.SinonStub;
    let mockTraceLog: sinon.SinonStub;
    let mockTraceError: sinon.SinonStub;
    let mockTraceWarn: sinon.SinonStub;

    // 2. Mock complex objects
    let pythonConfig: MockWorkspaceConfig;
    let envConfig: MockWorkspaceConfig;

    setup(() => {
        // 3. Initialize all mocks
        mockGetConfiguration = sinon.stub(workspaceApis, 'getConfiguration');
        mockGetWorkspaceFolders = sinon.stub(workspaceApis, 'getWorkspaceFolders');
        mockTraceLog = sinon.stub(logging, 'traceLog');
        mockTraceError = sinon.stub(logging, 'traceError');
        mockTraceWarn = sinon.stub(logging, 'traceWarn');

        // 4. Set up default behaviors
        mockGetWorkspaceFolders.returns(undefined);

        // 5. Create mock configuration objects
        pythonConfig = {
            get: sinon.stub(),
            inspect: sinon.stub(),
            update: sinon.stub(),
        };
        envConfig = {
            get: sinon.stub(),
            inspect: sinon.stub(),
            update: sinon.stub(),
        };
    });

    teardown(() => {
        sinon.restore(); // Always clean up!
    });
});
```

```typescript
test('Description of what this tests', async () => {
    // Mock → Clear description of the scenario
    pythonConfig.inspect.withArgs('venvPath').returns({ globalValue: '/path' });
    envConfig.inspect.withArgs('globalSearchPaths').returns({ globalValue: [] });
    mockGetWorkspaceFolders.returns([{ uri: Uri.file('/workspace') }]);

    // Run
    const result = await getAllExtraSearchPaths();

    // Assert - use set-based comparison for order-agnostic testing
    const expected = new Set(['/expected', '/paths']);
    const actual = new Set(result);
    assert.strictEqual(actual.size, expected.size, 'Should have correct number of paths');
    assert.deepStrictEqual(actual, expected, 'Should contain exactly the expected paths');

    // Verify side effects
    assert(mockTraceLog.calledWith(sinon.match(/completion/i)), 'Should log completion');
});
```

```typescript
// ❌ Brittle - depends on order
assert.deepStrictEqual(result, ['/path1', '/path2', '/path3']);

// ✅ Resilient - order doesn't matter
const expected = new Set(['/path1', '/path2', '/path3']);
const actual = new Set(result);
assert.strictEqual(actual.size, expected.size, 'Should have correct number of paths');
assert.deepStrictEqual(actual, expected, 'Should contain exactly the expected paths');
```

```typescript
// ❌ Brittle - exact text matching
assert(mockTraceError.calledWith('Error during legacy python settings migration:'));

// ✅ Resilient - pattern matching
assert(mockTraceError.calledWith(sinon.match.string, sinon.match.instanceOf(Error)), 'Should log migration error');

// ✅ Resilient - key terms with regex
assert(mockTraceError.calledWith(sinon.match(/migration.*error/i)), 'Should log migration error');
```

```typescript
// For functions that call the same mock multiple times
envConfig.inspect.withArgs('globalSearchPaths').returns({ globalValue: [] });
envConfig.inspect
    .withArgs('globalSearchPaths')
    .onSecondCall()
    .returns({
        globalValue: ['/migrated/paths'],
    });
```

- Test different setting combinations
- Test setting precedence (workspace > user > default)
- Test configuration errors and recovery
- Test how data moves through the system
- Test transformations (path resolution, filtering)
- Test state changes (migrations, updates)
- Test graceful degradation
- Test error logging
- Test fallback behaviors
- Test multiple features together
- Test real-world scenarios
- Test edge case combinations
- Clear naming - test names describe the scenario and expected outcome
- Good coverage - main flows, edge cases, error scenarios
- Resilient assertions - won't break due to minor changes
- Readable structure - follows Mock → Run → Assert pattern
- Isolated tests - each test is independent
- Fast execution - tests run quickly with proper mocking
- ❌ Testing implementation details instead of behavior
- ❌ Brittle assertions that break on cosmetic changes
- ❌ Order-dependent tests that fail due to processing changes
- ❌ Tests that don't clean up mocks properly
- ❌ Overly complex test setup that's hard to understand
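The first anti-pattern - testing implementation details instead of behavior - is easiest to see with a small example. The function under test here is hypothetical:

```typescript
// Hypothetical function under test (not from the codebase)
function normalizePaths(paths: string[]): string[] {
    return [...new Set(paths.map((p) => p.trim()).filter((p) => p.length > 0))];
}

// ❌ Implementation detail: spying on internal trim/filter call counts,
//    which breaks if the function is refactored to a single loop.
// ✅ Behavior: assert only on the observable result.
const result = normalizePaths([' /a ', '/a', '', '/b']);
```

A behavior-focused assertion (`deepStrictEqual(result, ['/a', '/b'])`) survives any internal refactor that preserves the output.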
- Always use dynamic path construction with the Node.js `path` module when testing functions that resolve paths against workspace folders to ensure cross-platform compatibility (1)
- Use the `runTests` tool for programmatic test execution rather than terminal commands for better integration and result parsing (1)
- Mock wrapper functions (e.g., `workspaceApis.getConfiguration()`) instead of VS Code APIs directly to avoid stubbing issues (2)
- Start compilation with `npm run watch-tests` before test execution to ensure TypeScript files are built (1)
- Use `sinon.match()` patterns for resilient assertions that don't break on minor output changes (2)
- Fix test issues iteratively - run tests, analyze failures, apply fixes, repeat until passing (1)
- When fixing mock environment creation, use `null` to truly omit properties rather than `undefined` (1)
- Always recompile TypeScript after making import/export changes before running tests, as stubs won't work if they're applied to old compiled JavaScript that doesn't have the updated imports (2)
- Create proxy abstraction functions for Node.js APIs like `cp.spawn` to enable clean testing - use function overloads to preserve Node.js's intelligent typing while making the functions mockable (1)
- When unit tests fail with VS Code API errors like `TypeError: X is not a constructor` or `Cannot read properties of undefined (reading 'Y')`, check if VS Code APIs are properly mocked in `/src/test/unittests.ts` - add missing Task-related APIs (`Task`, `TaskScope`, `ShellExecution`, `TaskRevealKind`, `TaskPanelKind`) and namespace mocks (`tasks`) following the existing pattern of `mockedVSCode.X = vscodeMocks.vscMockExtHostedTypes.X` (1)
- Create minimal mock objects with only required methods and use TypeScript type assertions (e.g., `mockApi as PythonEnvironmentApi`) to satisfy interface requirements instead of implementing all interface methods when only specific methods are needed for the test (1)
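The first learning - dynamic path construction - can be sketched as follows. The directory names are illustrative; the point is that `path.join` and `path.sep` keep the same assertion valid on Windows and POSIX:

```typescript
import * as path from 'node:path';

// Build expected values with path.join instead of hardcoding '/' or '\\',
// so the test is separator-agnostic across platforms.
const workspaceRoot = path.join('home', 'user', 'project');
const expectedVenv = path.join(workspaceRoot, '.venv');
```

In assertions, compare against `path.join(...)`-built values (or split on `path.sep`) rather than literal strings like `'home/user/project/.venv'`.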
TypeError: X is not a constructororCannot read properties of undefined (reading 'Y'), check if VS Code APIs are properly mocked in/src/test/unittests.ts- add missing Task-related APIs (Task,TaskScope,ShellExecution,TaskRevealKind,TaskPanelKind) and namespace mocks (tasks) following the existing pattern ofmockedVSCode.X = vscodeMocks.vscMockExtHostedTypes.X(1) - Create minimal mock objects with only required methods and use TypeScript type assertions (e.g., mockApi as PythonEnvironmentApi) to satisfy interface requirements instead of implementing all interface methods when only specific methods are needed for the test (1)