All notable changes to neat-python will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Canonical fitness sharing option via `fitness_sharing = canonical` in the `[DefaultReproduction]` config section. Default `normalized` preserves existing behavior. (NEAT paper compliance)
- Interspecies crossover via `interspecies_crossover_prob` parameter in `[DefaultReproduction]`. Default 0.0 (disabled). (NEAT paper compliance)
- Dynamic compatibility threshold via `target_num_species` in `[DefaultSpeciesSet]`. Default `none` (static threshold). Related parameters: `threshold_adjust_rate`, `threshold_min`, `threshold_max`. (NEAT paper compliance)
- Separate excess gene coefficient via `compatibility_excess_coefficient` in `[DefaultGenome]`. Defaults to `compatibility_disjoint_coefficient` (set to `auto`). (NEAT paper compliance)
- Configurable node gene distance via `compatibility_include_node_genes` in `[DefaultGenome]`. Default `True` (current behavior). Set to `False` for the canonical NEAT distance formula. (NEAT paper compliance)
- Configurable enabled-state penalty via `compatibility_enable_penalty` in `[DefaultGenome]`. Default 1.0 (current behavior). Set to 0.0 for the canonical NEAT distance formula. (NEAT paper compliance)
- Canonical spawn allocation via `spawn_method = proportional` in `[DefaultReproduction]`. Default `smoothed` (current behavior). (NEAT paper compliance)
- GPU-accelerated evaluation for CTRNN and Izhikevich spiking networks via optional CuPy dependency
  - `GPUCTRNNEvaluator` and `GPUIZNNEvaluator` in `neat.gpu.evaluator`
  - Batch-evaluates entire populations on the GPU using padded tensor operations
  - Install with `pip install 'neat-python[gpu]'`
  - Custom CUDA kernel supporting 11 activation functions
  - Requires sum aggregation; other aggregation functions raise `ValueError`
  - `import neat` never loads CuPy; all GPU imports are lazy
  - Benchmark script in `benchmarks/gpu_benchmark.py`
  - GPU comparison examples in `examples/signal-tracking-gpu/` and `examples/spike-timing-gpu/`
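Taken together, the NEAT-paper-compliance options above could be enabled in a config file along these lines (an illustrative fragment only; the section and option names come from the entries above, while all numeric values are placeholders, not recommendations):

```ini
[DefaultReproduction]
fitness_sharing = canonical
interspecies_crossover_prob = 0.001
spawn_method = proportional

[DefaultSpeciesSet]
target_num_species = 15
threshold_adjust_rate = 0.1
threshold_min = 0.5
threshold_max = 10.0

[DefaultGenome]
compatibility_excess_coefficient = auto
compatibility_include_node_genes = False
compatibility_enable_penalty = 0.0
```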
- Distance function now matches genes by innovation number, consistent with crossover behavior. Previously used tuple keys (endpoint pairs). This affects speciation when the same connection endpoints receive different innovation numbers in different generations (uncommon but possible). (NEAT paper compliance)
- Dangling nodes are now pruned after `mutate_delete_node` and `mutate_delete_connection`. Hidden nodes that become disconnected from all outputs are automatically removed along with their connections. This reduces structural bloat in long evolution runs.
- CTRNN integration method changed from forward Euler to exponential Euler (ETD1)
  - Integrates the linear decay term `-y/tau` exactly
  - Unconditionally stable regardless of the `dt/tau` ratio (forward Euler required `dt < 2*tau`)
  - Same per-step cost (one `math.exp` call per node)
  - Numerical results differ from previous versions for the same `dt`; both methods converge to the same continuous solution as `dt` decreases
  - The `time_constant_min_value` constraint is relaxed: values well below the integration timestep are now safe
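The stability difference can be seen in a minimal sketch of the two update rules for `dy/dt = (S - y) / tau` with the input `S` held constant over a step (an illustration of the scheme described above, not the library's actual CTRNN code):

```python
import math

def forward_euler_step(y, s, tau, dt):
    # Explicit Euler: the update factor is (1 - dt/tau), so it diverges
    # whenever dt > 2 * tau.
    return y + (dt / tau) * (s - y)

def exponential_euler_step(y, s, tau, dt):
    # ETD1: integrates the linear decay term exactly, so the state always
    # contracts toward s regardless of the dt/tau ratio.
    decay = math.exp(-dt / tau)
    return s + (y - s) * decay
```

With `tau = 1.0` and `dt = 5.0`, the forward Euler factor `1 - dt/tau = -4` makes each step grow fourfold in magnitude, while the exponential update shrinks the state by `exp(-5)` per step.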
- Checkpoint timing moved from `end_generation` to `post_evaluate`. Checkpoints now save the evaluated population (with fitness values) rather than the unevaluated post-reproduction population. Restoring a checkpoint no longer re-runs the last generation's fitness evaluation. Checkpoint file `N` now means "generation N has been evaluated." Old 5-tuple checkpoint files are still loadable for backward compatibility.
- Reporter species output moved from `end_generation` to `post_evaluate`. The `StdOutReporter` species detail table now appears alongside fitness statistics for the same generation, eliminating the previous mismatch where species sizes from the next generation were printed under the current generation's banner.
- `fitness_criterion = min` now works correctly. Previously, only the termination check honored this setting; best-genome tracking, stagnation detection, elite selection, crossover parent selection, spawn allocation, and statistics reporting all hardcoded "higher is better." All fitness comparisons now use direction-aware methods on the `Config` object (`is_better_fitness`, `meets_threshold`, `worst_fitness`).
- 75% disable rule now matches the NEAT paper specification (Stanley & Miikkulainen, 2002, p. 111). Previously, the rule was applied after random attribute inheritance, producing an effective ~87.5% disable rate (0.5 + 0.5 × 0.75 = 0.875). Now correctly produces 75%. This may affect evolution dynamics in existing configurations.
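The direction-aware comparison can be sketched in stand-alone form (an illustration of the idea only; in neat-python these methods live on the `Config` object and take their direction from `fitness_criterion`):

```python
def is_better_fitness(a, b, criterion="max"):
    # "Better" depends on the configured fitness criterion:
    # lower wins under "min", higher wins otherwise.
    if criterion == "min":
        return a < b
    return a > b

def meets_threshold(fitness, threshold, criterion="max"):
    # The termination check honors the same direction.
    if criterion == "min":
        return fitness <= threshold
    return fitness >= threshold
```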
- Two double-buffering bugs in the CTRNN `advance` method fixed. Incorrect buffer swapping could cause state corruption during multi-step CTRNN evaluation.
- Aggregation validation for built-ins and callables fixed.
- **Reproducibility Support**: Evolution can now be made deterministic by setting a random seed
  - Optional `seed` parameter in the `[NEAT]` config section
  - Optional `seed` parameter in `Population.__init__()`
  - Optional `seed` parameter in `ParallelEvaluator.__init__()` for reproducible parallel evaluation
  - Per-genome deterministic seeding in parallel mode (seed + `genome.key`)
  - Comprehensive documentation in `docs/reproducibility.rst`
  - Complete test coverage in `tests/test_reproducibility.py` (9 tests)
  - The seed parameter controls Python's `random` module
  - The checkpoint system already preserved random state (unchanged)
  - Fully backward compatible: existing code works without changes
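Per-genome deterministic seeding can be illustrated like this (a sketch of the scheme described above, assuming each stream is derived from the base seed plus the genome key; not the library's actual code):

```python
import random

def genome_rng(base_seed, genome_key):
    # Each genome gets its own deterministic random stream, so results do
    # not depend on which worker evaluates it or in what order.
    return random.Random(base_seed + genome_key)
```

Because the stream depends only on the seed and the genome key, the same seed reproduces identical evaluations regardless of worker count.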
- **Reproducibility Examples**: Demonstration scripts for reproducible NEAT evolution
  - Serial reproducibility example: `examples/xor/evolve-feedforward-reproducible.py`
    - Tests reproducibility (same seed → identical results)
    - Tests seed effect (different seeds → different evolution)
    - Tests backward compatibility (no seed parameter works)
  - Parallel reproducibility example: `examples/parallel-reproducible/`
    - `evolve-parallel.py` - demonstrates parallel evaluation with reproducibility
    - `config-parallel` - configuration file with seed parameter
    - `README.md` - comprehensive documentation with best practices and troubleshooting
    - Tests parallel reproducibility (same seed + multiple workers → identical results)
    - Tests worker count independence (results consistent across worker counts)
- **New Example**: Inverted Double Pendulum example using Gymnasium
  - Complete example in `examples/inverted-double-pendulum/`
  - Uses the `InvertedDoublePendulum-v5` environment (MuJoCo-based)
  - Demonstrates continuous control with a 9-dimensional observation space
  - Includes an evolution script with parallel evaluation
  - Includes a test/visualization script for trained controllers
  - Full documentation in the example README with usage tips
- **New Example**: Improved Lunar Lander example
- Switch to `pyproject.toml` instead of `setup.py`.
- Dropped support for Python 3.6 and 3.7; neat-python now requires Python 3.8 or newer.
- Modernized the internal implementation in `neat/` and `examples/` to use Python 3 features such as f-strings, comprehensions, dataclasses for internal helpers, and type hints, without changing the public API.
- **Add-node mutation bias behavior**: Newly inserted nodes created by `mutate_add_node` now start with zero bias so that splitting a connection is as neutral as possible with respect to the original signal flow. This makes the structural mutation less disruptive while preserving the existing weight-preserving semantics (incoming weight 1.0, outgoing weight equal to the original connection).
- **Checkpoint Generation Semantics**: Clarified and corrected how checkpoint generation numbers are labeled and interpreted. (Note: checkpoint timing was further improved in v2.1 to save after evaluation rather than after reproduction, eliminating wasted work on restore.)
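The neutrality of the add-node split described above can be checked with a small sketch (illustrative only; with an identity activation and zero bias, the split reproduces the original connection's contribution exactly):

```python
def identity(x):
    return x

def original_contribution(x, w):
    # Signal through the original connection of weight w.
    return w * x

def split_contribution(x, w, new_bias=0.0, activation=identity):
    # After the split: incoming weight 1.0 into the new node, outgoing
    # weight equal to the original connection's weight w.
    hidden = activation(new_bias + 1.0 * x)
    return w * hidden
```

For non-linear activations the split is only approximately neutral, which is why minimizing the new node's bias matters.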
- **Population Size Drift**: Fixed small mismatches between the actual population size and the configured `pop_size`
  - `DefaultReproduction.reproduce()` now strictly enforces `len(population) == config.pop_size` for every non-extinction generation
  - New `_adjust_spawn_exact` helper adjusts per-species spawn counts after `compute_spawn()` to correct rounding/clamping drift
  - When adding individuals, extra genomes are allocated to smaller species first; when removing, genomes are taken from larger species first
  - Per-species minima (`min_species_size` and elitism) are always respected; invalid configurations (e.g., `pop_size < num_species * min_species_size`) raise a clear error
  - New tests in `tests/test_reproduction.py` ensure `DefaultReproduction.reproduce()` preserves the exact population size over multiple generations
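The drift-correction rule described above (grow smallest species first, shrink largest first, respect per-species minima) can be sketched as follows; this is an illustration of the allocation policy, not the library's actual `_adjust_spawn_exact` implementation:

```python
def adjust_spawn_exact(spawn_counts, target_total, min_size=1):
    # Correct rounding/clamping drift so the counts sum to target_total.
    counts = list(spawn_counts)
    diff = target_total - sum(counts)
    while diff > 0:
        # Extra genomes go to the smallest species first.
        i = counts.index(min(counts))
        counts[i] += 1
        diff -= 1
    while diff < 0:
        # Genomes are removed from the largest species first, never
        # dropping a species below its per-species minimum.
        candidates = [i for i, c in enumerate(counts) if c > min_size]
        if not candidates:
            raise ValueError("pop_size too small for per-species minima")
        i = max(candidates, key=lambda j: counts[j])
        counts[i] -= 1
        diff += 1
    return counts
```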
- **Orphaned Nodes Bug**: Fixed silent failure when nodes have no incoming connections after deletion mutations
  - `feed_forward_layers()` now correctly handles orphaned nodes (nodes with no incoming connections)
  - Orphaned nodes are treated as "bias neurons" that output `activation(bias)` independent of inputs
  - Placed in the first evaluation layer, as they are always ready
  - `required_for_output()` now includes orphaned nodes that feed into outputs
  - Aggregation functions (`max`, `min`, `maxabs`, `mean`, `median`) now handle empty inputs (return 0.0)
  - Comprehensive test coverage in `tests/test_graphs.py` (5 new tests) and `tests/test_nn.py` (integration test)
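The empty-input behavior can be sketched like this (an illustration of the rule above, not the library's actual aggregation module):

```python
def mean_aggregation(values):
    # An orphaned node has no incoming connections, so its aggregation
    # receives an empty list; return 0.0 instead of raising
    # ZeroDivisionError.
    vals = list(values)
    if not vals:
        return 0.0
    return sum(vals) / len(vals)

def max_aggregation(values):
    # Same guard for max: an empty input yields 0.0 rather than ValueError.
    vals = list(values)
    return max(vals) if vals else 0.0
```

With an empty input, the node's output then reduces to `activation(bias + response * 0.0)`, i.e. the "bias neuron" behavior described above.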
- **Network Export**: JSON export capability for all network types (FeedForwardNetwork, RecurrentNetwork, CTRNN, IZNN)
  - New `neat.export` module with an `export_network_json()` function
  - Framework-agnostic JSON format designed for conversion to ONNX, TensorFlow, PyTorch, etc.
  - Comprehensive format documentation in `docs/network-json-format.md`
  - Built-in function detection (activation/aggregation) vs. custom functions
  - Metadata support (fitness, generation, genome_id, custom fields)
  - Example demonstrating the export workflow in `examples/export/export_example.py`
  - Full test suite in `tests/test_export.py`
  - No additional dependencies required (uses only the Python standard library)
- **Innovation Number Tracking**: Full implementation of innovation numbers as described in the original NEAT paper (Stanley & Miikkulainen, 2002, Section 3.2)
  - Global innovation counter that increments across all generations
  - Same-generation deduplication of identical mutations
  - Innovation-based gene matching during crossover
  - Proper historical marking of genes for speciation
  - New `InnovationTracker` class in `neat/innovation.py`
  - Comprehensive unit tests in `tests/test_innovation.py` (19 tests)
  - Integration tests in `tests/test_innovation_integration.py` (6 tests)
  - Innovation tracking documentation in `docs/innovation_numbers.rst`
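The two core behaviors above (a global counter plus same-generation deduplication) can be sketched as follows; this is an illustrative stand-in, not neat-python's actual `InnovationTracker`:

```python
class InnovationTracker:
    """Sketch: hand out innovation numbers for new connection genes."""

    def __init__(self):
        self._next = 0
        # (in_node, out_node) -> innovation number, within one generation
        self._seen = {}

    def get_innovation(self, in_node, out_node):
        # Identical mutations in the same generation share one number.
        key = (in_node, out_node)
        if key not in self._seen:
            self._seen[key] = self._next
            self._next += 1
        return self._seen[key]

    def new_generation(self):
        # Clear the dedup table; the counter keeps increasing across
        # generations, so historical markings stay unique.
        self._seen.clear()
```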
- BREAKING: `DefaultConnectionGene.__init__()` now requires a mandatory `innovation` parameter
- BREAKING: All connection gene creation must include innovation numbers
- BREAKING: Crossover now matches genes primarily by innovation number, not tuple keys
- BREAKING: Old checkpoints from pre-1.0 versions are incompatible with 1.0.0+
- `DefaultGenome.configure_crossover()` updated to match genes by innovation number per NEAT paper Figure 4
- All genome initialization methods assign innovation numbers to connections
- `DefaultReproduction` now creates and manages an `InnovationTracker` instance
- Checkpoint format updated to preserve innovation tracker state
- `ParallelEvaluator` now implements the context manager protocol (`__enter__`/`__exit__`) for proper resource cleanup
- Improved resource management in `ParallelEvaluator` to prevent multiprocessing pool leaks
- Fixed `ParallelEvaluator.__del__()` to properly clean up resources without calling `terminate()` unnecessarily
- BREAKING: `ThreadedEvaluator` has been removed
  - Reason: minimal utility due to Python's Global Interpreter Lock (GIL)
  - Had implementation issues, including unreliable cleanup and potential deadlocks
  - Migration: use `ParallelEvaluator` for CPU-bound tasks
- BREAKING: `DistributedEvaluator` has been removed
  - Reason: marked as beta/unstable with known reliability issues
  - Overly complex implementation (574 lines) with fragile error handling
  - Migration: use established frameworks like Ray or Dask for distributed computing
- Removed `neat/threaded.py` module
- Removed `neat/distributed.py` module
- Removed example files: `examples/xor/evolve-feedforward-threaded.py` and `examples/xor/evolve-feedforward-distributed.py`
- Removed test files: `tests/test_distributed.py` and `tests/test_xor_example_distributed.py`
- Context manager support for `ParallelEvaluator` - recommended usage pattern
- `ParallelEvaluator.close()` method for explicit resource cleanup
- New tests for `ParallelEvaluator` context manager functionality
- `MIGRATION.md` guide for users migrating from removed evaluators
- See `MIGRATION.md` for detailed guidance on updating existing code
- `ParallelEvaluator` remains fully backward compatible, but context manager usage is recommended
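The cleanup pattern these entries describe can be sketched generically (a stand-in wrapper, not `ParallelEvaluator` itself, assuming the wrapped pool exposes `close()`/`join()` as `multiprocessing.Pool` does):

```python
class PooledEvaluator:
    """Sketch of an evaluator that owns a worker pool and releases it
    deterministically via the context manager protocol."""

    def __init__(self, pool):
        self.pool = pool

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # never swallow exceptions

    def close(self):
        # Graceful shutdown: stop accepting work, then wait for workers,
        # avoiding an unnecessary terminate().
        if self.pool is not None:
            self.pool.close()
            self.pool.join()
            self.pool = None
```

Usage then mirrors the recommended pattern, e.g. `with PooledEvaluator(pool) as pe: ...`, so the pool is released even if evaluation raises.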
Note: This changelog starts at version 1.0. For changes prior to 1.0, please see the git history.
For the complete migration guide, see `MIGRATION.md`.