
Commit 59f2a4c

Update documentation with migration instructions.
1 parent b6f10e8 commit 59f2a4c

3 files changed (+269, -4 lines)

docs/conf.py

Lines changed: 4 additions & 4 deletions

@@ -43,13 +43,13 @@
 ]
 
 extlinks = {'pytypes': ('https://docs.python.org/3.5/library/stdtypes.html#%s',
-                        ''),
+                        None),
             'pygloss': ('https://docs.python.org/3.5/glossary.html#term-%s',
-                        ''),
+                        None),
             'datamodel': ('https://docs.python.org/3.5/reference/datamodel.html#%s',
-                          ''),
+                          None),
             'pylib': ('https://docs.python.org/3.5/library/%s',
-                      '')}
+                      None)}
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
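For context, each ``extlinks`` value is a ``(url_template, caption)`` pair; recent Sphinx releases warn unless the caption is ``None`` or a ``%s``-style format string, which is presumably why the empty-string captions were replaced. A role defined this way is used in the docs like the following (the target ``iterable`` here is only an illustration):

```rst
See :pygloss:`iterable` for the glossary entry; the ``%s`` in the URL
template is filled in with the role target, so the link points at
https://docs.python.org/3.5/glossary.html#term-iterable
```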

docs/index.rst

Lines changed: 1 addition & 0 deletions

@@ -29,6 +29,7 @@ Contents:
 
    neat_overview
    installation
+   migration
    config_file
    xor_example
    customization

docs/migration.rst

Lines changed: 264 additions & 0 deletions

Migration Guide for neat-python 1.0
====================================

This guide helps you migrate from neat-python 0.93 to 1.0, which includes breaking changes to the parallel evaluation APIs.

Overview of Changes
-------------------

Removed Components
~~~~~~~~~~~~~~~~~~

- **ThreadedEvaluator** - Removed due to minimal utility (Python GIL) and implementation issues
- **DistributedEvaluator** - Removed due to instability and complexity

Improved Components
~~~~~~~~~~~~~~~~~~~

- **ParallelEvaluator** - Now supports the context manager protocol for proper resource cleanup

ThreadedEvaluator (Removed)
----------------------------

Why Was It Removed?
~~~~~~~~~~~~~~~~~~~

The ``ThreadedEvaluator`` provided minimal benefit for most use cases:

- Python's Global Interpreter Lock (GIL) prevents true parallel execution of CPU-bound code
- It only helped I/O-bound fitness functions, which are rare in neural network evolution
- It had implementation issues, including unreliable cleanup and potential deadlocks
- Its output queue operations had no timeout, so a failed worker could hang the evaluator indefinitely

Migration Path
~~~~~~~~~~~~~~

**For CPU-bound fitness evaluation (most common):**

Use ``ParallelEvaluator`` instead, which uses process-based parallelism to bypass the GIL:

.. code-block:: python

    # Old code (ThreadedEvaluator)
    import neat

    evaluator = neat.ThreadedEvaluator(4, eval_genome)
    winner = population.run(evaluator.evaluate, 300)
    evaluator.stop()  # Manual cleanup

    # New code (ParallelEvaluator with context manager)
    import neat
    import multiprocessing

    with neat.ParallelEvaluator(multiprocessing.cpu_count(), eval_genome) as evaluator:
        winner = population.run(evaluator.evaluate, 300)
    # Automatic cleanup on context exit
**For I/O-bound fitness evaluation (uncommon):**

For truly I/O-bound operations, consider Python's ``asyncio``; alternatively, ``ParallelEvaluator`` also works well for both CPU-bound and I/O-bound tasks.
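If your fitness evaluation really is I/O-bound, a coroutine-based evaluator is one possible shape. This is only a sketch: the function names and the placeholder fitness value are illustrative, not part of the neat-python API.

```python
import asyncio

async def eval_genome_async(genome_id, genome):
    """Hypothetical async fitness function; real code would await a
    network or disk operation here instead of sleeping."""
    await asyncio.sleep(0.01)  # stand-in for I/O latency
    return genome_id, 1.0      # stand-in fitness value

async def evaluate_all(genomes):
    """Evaluate all genomes concurrently on a single thread."""
    tasks = [eval_genome_async(gid, g) for gid, g in genomes]
    return dict(await asyncio.gather(*tasks))

# genomes has the (genome_id, genome) shape NEAT passes to fitness functions.
genomes = [(1, object()), (2, object()), (3, object())]
fitnesses = asyncio.run(evaluate_all(genomes))
```

Because all coroutines share one thread, this only pays off when the evaluation spends its time waiting on I/O rather than computing.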

DistributedEvaluator (Removed)
-------------------------------

Why Was It Removed?
~~~~~~~~~~~~~~~~~~~

The ``DistributedEvaluator`` had several fundamental problems:

- It was marked **beta/unstable** in the documentation since its introduction
- It used ``multiprocessing.managers``, which is notoriously unreliable across networks
- Its integration tests were skipped due to pickling and reliability issues
- It comprised 574 lines of complex, fragile code with extensive error handling
- Better alternatives exist for distributed computing
Migration Path
~~~~~~~~~~~~~~

Option 1: Single-machine parallelism (simplest)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you were using ``DistributedEvaluator`` on a single machine, migrate to ``ParallelEvaluator``:

.. code-block:: python

    # Old code (DistributedEvaluator - single machine)
    import neat

    de = neat.DistributedEvaluator(
        ('localhost', 8022),
        authkey=b'password',
        eval_function=eval_genome,
        mode=neat.distributed.MODE_PRIMARY
    )
    de.start()
    winner = population.run(de.evaluate, 300)
    de.stop()

    # New code (ParallelEvaluator)
    import neat
    import multiprocessing

    with neat.ParallelEvaluator(multiprocessing.cpu_count(), eval_genome) as evaluator:
        winner = population.run(evaluator.evaluate, 300)
Option 2: Multi-machine distributed computing (recommended for large-scale)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use established distributed computing frameworks such as **Ray** or **Dask**.

Using Ray (recommended)
"""""""""""""""""""""""

.. code-block:: python

    import neat
    import ray

    # Initialize Ray
    ray.init(address='auto')  # or ray.init() for a local cluster

    @ray.remote
    def eval_genome_remote(genome, config):
        """Fitness evaluation function wrapped for Ray."""
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        # Your fitness evaluation logic here
        return fitness_value

    def eval_genomes_distributed(genomes, config):
        """Fitness function that distributes work via Ray."""
        # Submit all evaluation tasks
        futures = [eval_genome_remote.remote(genome, config)
                   for genome_id, genome in genomes]

        # Gather results
        results = ray.get(futures)

        # Assign fitness values
        for (genome_id, genome), fitness in zip(genomes, results):
            genome.fitness = fitness

    # Use with NEAT
    population = neat.Population(config)
    winner = population.run(eval_genomes_distributed, 300)
Using Dask
""""""""""

.. code-block:: python

    import neat
    from dask.distributed import Client

    # Connect to a Dask cluster
    client = Client('scheduler-address:8786')

    def eval_genome_dask(genome, config):
        """Fitness evaluation function."""
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        # Your fitness evaluation logic here
        return fitness_value

    def eval_genomes_distributed(genomes, config):
        """Fitness function that distributes work via Dask."""
        # Submit all evaluation tasks
        futures = [client.submit(eval_genome_dask, genome, config)
                   for genome_id, genome in genomes]

        # Gather results
        results = client.gather(futures)

        # Assign fitness values
        for (genome_id, genome), fitness in zip(genomes, results):
            genome.fitness = fitness

    # Use with NEAT
    population = neat.Population(config)
    winner = population.run(eval_genomes_distributed, 300)

Option 3: Custom solution
^^^^^^^^^^^^^^^^^^^^^^^^^^

You can implement your own distributed evaluation using:

- Message queues (RabbitMQ, Redis, AWS SQS)
- Task queues (Celery)
- Cloud functions (AWS Lambda, Google Cloud Functions)
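Whatever the transport, a custom solution has the same shape: push genome payloads onto a work queue, have workers evaluate them, and collect fitnesses back. A minimal single-process sketch of that pattern, using the standard library's ``queue`` and ``threading`` as a stand-in for a real broker (every name below is illustrative, not neat-python API):

```python
import queue
import threading

def evaluate(payload):
    """Stand-in for real fitness evaluation of a serialized genome."""
    genome_id, genome_size = payload
    return genome_id, float(genome_size) * 0.5  # toy fitness

def worker(work_q, result_q):
    """Pull payloads until a None sentinel arrives, pushing results back."""
    while True:
        payload = work_q.get()
        if payload is None:
            break
        result_q.put(evaluate(payload))

# Producer side: enqueue genome payloads, then collect fitnesses.
work_q, result_q = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(work_q, result_q))
           for _ in range(4)]
for t in threads:
    t.start()

payloads = [(1, 10), (2, 20), (3, 30)]
for p in payloads:
    work_q.put(p)
for _ in threads:
    work_q.put(None)  # one sentinel per worker
for t in threads:
    t.join()

fitnesses = dict(result_q.get() for _ in payloads)
```

In a real deployment the two queues would live in a broker (Redis, RabbitMQ, SQS) and the workers would run on other machines; the enqueue/sentinel/gather structure stays the same.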
ParallelEvaluator Improvements
-------------------------------

The ``ParallelEvaluator`` has been improved with proper resource management and context manager support.

Context Manager Pattern (Recommended)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**New recommended usage:**

.. code-block:: python

    import neat
    import multiprocessing

    with neat.ParallelEvaluator(multiprocessing.cpu_count(), eval_genome) as evaluator:
        winner = population.run(evaluator.evaluate, 300)
    # Pool is automatically cleaned up when exiting the context

**Benefits:**

- Guaranteed cleanup of the multiprocessing pool
- No risk of zombie processes
- Cleaner, more Pythonic code
- Exception-safe resource management
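The exception-safety point can be demonstrated with a toy context manager. This is not neat-python code, just an illustration that ``with`` runs cleanup even when the body raises:

```python
class PoolLike:
    """Toy stand-in for an evaluator that owns a worker pool."""
    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.closed = True   # cleanup runs on normal exit AND on exceptions
        return False         # do not suppress the exception

p = PoolLike()
try:
    with p:
        raise RuntimeError("evaluation failed")
except RuntimeError:
    pass

# p.closed is True: cleanup ran despite the exception.
```

The same guarantee is what the ``with neat.ParallelEvaluator(...)`` form relies on: worker processes are released whether the run finishes or raises.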
Backward Compatibility
~~~~~~~~~~~~~~~~~~~~~~

**Old usage still works:**

.. code-block:: python

    import neat
    import multiprocessing

    evaluator = neat.ParallelEvaluator(multiprocessing.cpu_count(), eval_genome)
    winner = population.run(evaluator.evaluate, 300)
    # Pool will be cleaned up by __del__, but the context manager is preferred

While the old pattern still functions, we **strongly recommend** migrating to the context manager pattern for better resource management.

Explicit Cleanup
~~~~~~~~~~~~~~~~

If you need explicit control over cleanup:

.. code-block:: python

    evaluator = neat.ParallelEvaluator(multiprocessing.cpu_count(), eval_genome)
    try:
        winner = population.run(evaluator.evaluate, 300)
    finally:
        evaluator.close()  # Explicit cleanup
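Assuming the evaluator exposes the ``close()`` method used above, the standard library's ``contextlib.closing`` wraps that try/finally for you. A sketch with a stand-in object (the ``FakeEvaluator`` class is purely illustrative):

```python
from contextlib import closing

class FakeEvaluator:
    """Stand-in exposing the close() method the pattern relies on."""
    def __init__(self):
        self.closed = False

    def evaluate(self):
        return "winner"

    def close(self):
        self.closed = True

ev = FakeEvaluator()
with closing(ev):  # calls ev.close() on exit, even if the body raises
    result = ev.evaluate()
```

This gives any object with a ``close()`` method the same exception-safe cleanup as a native context manager.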

Additional Resources
--------------------

- **Ray Documentation**: https://docs.ray.io/
- **Dask Documentation**: https://docs.dask.org/
- **neat-python Documentation**: http://neat-python.readthedocs.io/
- **GitHub Repository**: https://github.com/CodeReclaimers/neat-python

Getting Help
------------

If you encounter issues during migration:

1. Check the `GitHub Issues <https://github.com/CodeReclaimers/neat-python/issues>`_ for similar problems
2. Review the updated `documentation <http://neat-python.readthedocs.io/>`_
3. Open a new issue with details about your migration challenge

Version Information
-------------------

- This guide applies to migration from neat-python 0.93 → 1.0
- Last updated: 2025-11-09
