The `test_all_examples.py` script automatically runs every example script in the course and reports which ones succeed and which fail, so you can quickly spot issues across the whole codebase without running each example by hand.
```bash
# Run all examples (default 60s timeout per script)
python3 test_all_examples.py

# Run all examples and continue even if some fail
python3 test_all_examples.py --continue

# Run only a specific module
python3 test_all_examples.py --module module1_fundamentals
```
- **No arguments**: Run all examples with default settings
  ```bash
  python3 test_all_examples.py
  ```
- `--module MODULE`: Run only examples from a specific module
  ```bash
  python3 test_all_examples.py --module module5_error_correction
  ```
- `--timeout SECONDS`: Set the maximum execution time per script (default: 60)
  ```bash
  python3 test_all_examples.py --timeout 120
  ```
- `--verbose`, `-v`: Show detailed output from each script
  ```bash
  python3 test_all_examples.py --verbose
  ```
- `--continue`, `-c`: Continue testing even if a test fails
  ```bash
  python3 test_all_examples.py --continue
  ```
- `--save-logs`: Save logs for all tests (not just failures)
  ```bash
  python3 test_all_examples.py --save-logs
  ```
- `--quick`: Run quick tests only (30s timeout, useful for rapid iteration)
  ```bash
  python3 test_all_examples.py --quick
  ```
- **Test Everything Before Commit**
  ```bash
  python3 test_all_examples.py --continue
  ```
  Runs all tests and shows you all failures at once.
- **Debug a Specific Module**
  ```bash
  python3 test_all_examples.py --module module3_programming --verbose
  ```
  Shows detailed output for debugging.
- **Quick Sanity Check**
  ```bash
  python3 test_all_examples.py --quick --continue
  ```
  A fast check to see if anything is obviously broken.
- **Test After a Dependency Update**
  ```bash
  python3 test_all_examples.py --timeout 120 --continue --save-logs
  ```
  A comprehensive run with a longer timeout and full logging.
The script shows:
- A progress bar with the current test number
- Real-time pass/fail status with colored indicators:
  - ✓ (green) = passed
  - ✗ (red) = failed
- Execution time for each test
- Brief error messages for failures
Example:

```
Testing module1_fundamentals (8 examples)
--------------------------------------------------------------------------------
[1/54] ✓ 01_classical_vs_quantum_bits.py (2.34s)
[2/54] ✓ 02_quantum_gates_circuits.py (1.87s)
[3/54] ✗ 03_superposition_measurement.py (0.45s)
       Error: ModuleNotFoundError: No module named 'qiskit'
       Full log: test_logs/module1_fundamentals_03_superposition_measurement.log
```
After all tests complete, you'll see:
- Total tests run
- Number passed/failed
- Pass rate percentage
- Total execution time
- Fastest and slowest tests
- List of failed tests with log file locations
- `test_logs/` directory: Contains detailed logs for failed tests
  - Format: `{module_name}_{script_name}.log`
  - Includes full stdout and stderr output
  - Created automatically, but only for failures (unless `--save-logs` is used)
- `test_results.json`: Machine-readable test results
  - JSON format with all test results
  - Useful for CI/CD integration or further analysis
  - Includes timestamps, execution times, and pass/fail status
The script's exit code reflects the overall result:
- `0`: All tests passed ✅
- `1`: One or more tests failed ❌

This makes it easy to use in CI/CD pipelines:

```bash
python3 test_all_examples.py && echo "All tests passed!" || echo "Tests failed!"
```
- **Missing Dependencies**
  - Error: `ModuleNotFoundError: No module named 'qiskit'`
  - Solution: Install the dependencies:
    ```bash
    pip3 install -r requirements.txt
    ```
- **Timeout Errors**
  - Error: `TIMEOUT: Script exceeded 60s time limit`
  - Solution: Increase the timeout, or use `--quick` to skip intensive tests:
    ```bash
    python3 test_all_examples.py --timeout 120
    ```
- **Hardware/API Errors**
  - Error: `IBMAccountError: Could not connect to IBM Quantum`
  - Solution: Some examples require API keys or specific hardware access. These can be skipped or run separately.
- **Import Errors**
  - Error: `ImportError: cannot import name 'X' from 'qiskit'`
  - Solution: Version mismatch; update the dependencies:
    ```bash
    pip3 install --upgrade -r requirements.txt
    ```
- **Before Pushing Code**: Always run with `--continue` to see all failures
  ```bash
  python3 test_all_examples.py --continue
  ```
- **When Debugging**: Use `--verbose` to see the full output
  ```bash
  python3 test_all_examples.py --module module1_fundamentals --verbose
  ```
- **For CI/CD**: Use a longer timeout and continue on errors
  ```bash
  python3 test_all_examples.py --timeout 180 --continue --save-logs
  ```
- **Quick Iteration**: Use `--quick` during development
  ```bash
  python3 test_all_examples.py --quick --module module1_fundamentals
  ```
- **Check Logs**: When a test fails, check the log file for details
  ```bash
  cat test_logs/module1_fundamentals_01_classical_vs_quantum_bits.log
  ```
The test runner automatically discovers examples in these modules:
- `module1_fundamentals/` - Basic quantum computing concepts
- `module2_mathematics/` - Mathematical foundations
- `module3_programming/` - Quantum programming
- `module4_algorithms/` - Quantum algorithms
- `module5_error_correction/` - Error correction and mitigation
- `module6_machine_learning/` - Quantum machine learning
- `module7_hardware/` - Hardware integration
- `module8_applications/` - Real-world applications

The `utils/` directory is automatically excluded from testing.
If all tests time out, you might need to:
- Check whether your system is under heavy load
- Increase the timeout significantly: `--timeout 300`
- Run modules separately to identify the problematic one

If you get "Permission denied", make the script executable:

```bash
chmod +x test_all_examples.py
```

Some quantum simulations require significant memory. If you run out:
- Close other applications
- Use `--quick` mode
- Run modules one at a time
- Skip intensive examples (like large VQE or deep circuits)
If you don't see colors in the output:
- Your terminal might not support ANSI colors
- Try a different terminal (most modern terminals support colors)
- The functionality still works; it's just less pretty!
Example GitHub Actions workflow:

```yaml
name: Test Quantum Examples
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: pip install -r examples/requirements.txt
      - name: Run tests
        run: python3 examples/test_all_examples.py --continue --timeout 180
      - name: Upload logs on failure
        if: failure()
        uses: actions/upload-artifact@v2
        with:
          name: test-logs
          path: examples/test_logs/
```

When adding new examples:
- Place them in the appropriate `moduleX_*/` directory
- Name them with a number prefix: `01_example_name.py`
- Ensure they run without user interaction
- Run the test suite before committing
- Add appropriate error handling

The test runner will automatically discover and test new examples!
If you encounter issues with the test runner itself (not the examples):
- Check this guide for solutions
- Review the test logs in `test_logs/`
- Check the JSON results in `test_results.json`
- Run with `--verbose` for more details
Happy testing! 🚀