
Commit 7344a5e

Donglai Wei and claude committed

Add decoding features to README: waterz decoder, dust merge, Optuna tuning, auto-logging

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

1 parent 482a8a4

1 file changed: README.md (38 additions, 0 deletions)
@@ -159,6 +159,12 @@ just test lucchi++ outputs/lucchi++/$EXPERIMENT_DATE/checkpoints/best.ckpt
 - **Efficient Data Loading:** Pre-loaded caching, MONAI transforms
 - **Gradient Accumulation:** Train with large effective batch sizes
 
+### 🧩 Decoding & Post-Processing
+- **Waterz Decoder:** Watershed + hierarchical agglomeration on affinity graphs with tunable scoring functions (`aff50_his256`, `aff85_his256`, etc.)
+- **Dust Merge:** Zwatershed-style size + affinity cleanup that merges small fragments into their best-affinity neighbor instead of dropping them to background (C++/Cython)
+- **Optuna Tuning:** Batch threshold sweep in a single waterz call (watershed computed once) for efficient hyperparameter search
+- **Experiment Auto-Logging:** Every decode+eval run auto-appends its parameters and metrics to `decode_experiments.tsv`
+
 ### 📊 Monitoring & Logging
 - **TensorBoard:** Training curves, images, metrics
 - **Weights & Biases:** Experiment tracking (optional)
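The dust-merge behavior described in the hunk above (small fragments absorbed by their best-affinity neighbor instead of being dropped to background) can be sketched in plain Python. This is an illustrative 1D sketch, not the project's C++/Cython implementation; the function name `dust_merge_1d` and its signature are assumptions for demonstration only.

```python
import numpy as np

def dust_merge_1d(labels, affinity, min_size, min_affinity=0.0):
    """Hypothetical 1D sketch of dust merge (not the real C++/Cython code).

    labels: int array of segment ids (0 = background).
    affinity[i]: affinity of the edge between voxel i and voxel i + 1.
    Segments smaller than min_size are merged into the neighboring segment
    with the highest boundary affinity, if that affinity exceeds min_affinity.
    """
    labels = labels.copy()
    sizes = np.bincount(labels)
    small = [s for s in range(1, sizes.size) if 0 < sizes[s] < min_size]
    for seg in small:
        idx = np.flatnonzero(labels == seg)
        best_aff, best_neighbor = min_affinity, 0
        for i in idx:
            for j in (i - 1, i + 1):            # left/right neighbors in 1D
                if 0 <= j < labels.size and labels[j] not in (0, seg):
                    a = affinity[min(i, j)]     # edge between voxels i and j
                    if a > best_aff:
                        best_aff, best_neighbor = a, labels[j]
        if best_neighbor:                       # strong-enough neighbor found
            labels[idx] = best_neighbor         # merge instead of dropping
    return labels
```

The key design point the README bullet highlights: with a plain size filter the small fragment would become background (a hole in the segmentation), whereas here it joins whichever neighbor the affinity graph supports most strongly.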
@@ -221,6 +227,38 @@ python scripts/main.py --config my_config.yaml data.dataloader.batch_size=4 opti
 
 ---
 
+## Example: Decode Predictions
+
+Configure instance segmentation decoding in your YAML config:
+
+```yaml
+inference:
+  decoding:
+    - name: decode_waterz
+      kwargs:
+        thresholds: 0.4               # agglomeration threshold
+        merge_function: aff85_his256  # 85th percentile affinity scoring
+        aff_threshold: [0.1, 0.99]    # watershed low/high thresholds
+        channel_order: xyz            # affinity channel order (auto-transpose to zyx)
+        dust_merge_size: 100          # merge dust < 100 voxels into best neighbor
+        dust_merge_affinity: 0.0      # min affinity for dust merge
+        dust_remove_size: 50          # remove remaining orphans < 50 voxels
+```
+
+Run inference with decoding:
+```bash
+python scripts/main.py --config tutorials/neuron_snemi.yaml --mode test --checkpoint path/to/best.ckpt
+```
+
+Tune decode parameters with Optuna:
+```bash
+python scripts/main.py --config tutorials/neuron_snemi.yaml --mode tune --checkpoint path/to/best.ckpt
+```
+
+Results are auto-logged to `outputs/<experiment>/results/decode_experiments.tsv` with all parameters and metrics.
+
+---
+
 ## Supported Models
 
 ### MONAI Models
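The auto-logging described in the hunk above (each decode+eval run appending its parameters and metrics to `decode_experiments.tsv`) amounts to a header-aware TSV append. A minimal sketch, assuming a flat dict of parameters and a flat dict of metrics; the function name `log_decode_run` and the field layout are illustrative, not the project's actual API.

```python
import csv
import os

def log_decode_run(tsv_path, params, metrics):
    """Append one decode+eval run as a TSV row (hypothetical sketch).

    Writes a header row only when the file does not exist yet, so repeated
    runs accumulate as rows in one decode_experiments.tsv.
    """
    row = {**params, **metrics}                 # one flat row: params then metrics
    write_header = not os.path.exists(tsv_path)
    with open(tsv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row), delimiter="\t")
        if write_header:
            writer.writeheader()
        writer.writerow(row)
```

A tab-separated file with one header row keeps the log trivially loadable into a spreadsheet or `pandas.read_csv(..., sep="\t")` for comparing decode sweeps.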
