# AngioEye

AngioEye is the cohort-analysis engine for retinal Doppler holography. It browses EyeFlow `.h5` outputs, reads per-segment metrics, applies QC, compares models, and aggregates results at eye/cohort level (including artery–vein summaries) to help design biomarkers. It exports clean CSV reports for stats, figures, and clinical models.

---

## Setup

### Prerequisites

- Python 3.10 or higher.
- A virtual environment is strongly recommended.

All dependencies are declared in `pyproject.toml`. Before installing, create and activate a Python virtual environment (venv):

```sh
# Create the venv
python -m venv .venv

# Activate it (Windows; on PowerShell you may first need to adjust the execution policy)
./.venv/Scripts/activate

# Activate it (Linux/macOS)
source .venv/bin/activate
```

You can exit it at any time with:

```sh
deactivate
```

### 1. Basic Installation (User)

```sh
pip install -e .

# Install pipeline-specific dependencies (optional)
pip install -e ".[pipelines]"

# Install postprocess-specific dependencies, such as the graphics dashboard (optional)
pip install -e ".[postprocess]"
```

### 2. Development Setup (Contributor)

```sh
# Install all dependencies, including dev tools (ruff, pre-commit, pyinstaller)
pip install -e ".[dev,pipelines,postprocess]"

# Initialize pre-commit hooks (optional)
pre-commit install
```

> [!NOTE]
> pre-commit runs automatic checks before each commit, reducing the chance of poorly formatted code being pushed.
>
> If a pre-commit hook fails, it will try to fix the offending files, **so you will need to stage them again before recreating the commit**.

> [!TIP]
> Once the `dev` dependencies are installed, you can run the linter with:
>
> ```sh
> # Only run the checks
> lint-tool
>
> # Let the linter fix as much as possible
> lint-tool --fix
> ```

---

## Usage

Launch the main application to process files interactively.

### GUI

The GUI handles batch processing for folders, single `.h5`/`.hdf5` files, or `.zip` archives, and lets you run multiple pipelines at once. Batch outputs preserve the input subfolder layout under the chosen output directory (one combined `.h5` per input file).

You can also select batch-level postprocess steps. These run after the selected pipelines finish and before optional zipping, so any generated dashboards, PNGs, or summaries are included in the final output folder or archive.

Use the Pipeline Library tab to select which pipelines run; selection preferences are saved per user between app launches, including in installed builds. Use the Postprocess Library tab the same way for postprocess steps.

```sh
# Via the entry point
angioeye

# Or via the script
python src/angio_eye.py
```

When you run `angioeye` from inside the repository checkout, the launcher prefers the local `src/` tree, so newly added or edited pipelines are picked up without needing a full reinstall.

Installed builds expose editable `pipelines/` and `postprocess/` folders next to `AngioEye.exe`; use the Library tabs' Open folder and Reload buttons to edit and refresh them.

### CLI

The CLI is designed for batch processing in headless environments or on clusters.

```sh
# Via the entry point
angioeye-cli

# Or via the script
python src/cli.py
```

---

## Pipeline System

Pipelines are the heart of AngioEye. To add a new analysis, create a file in `src/pipelines/` with a class inheriting from `ProcessPipeline`.

To register it with the app, add the `@registerPipeline` decorator. It declares metadata such as the pipeline name, description, and optional dependencies; imports that are only needed at run time can live inside the `run` method.

For more complete examples, see `src/pipelines/basic_stats.py` and `src/pipelines/dummy_heavy.py`.

### Simple Pipeline Structure

```python
from pipelines import ProcessPipeline, ProcessResult, registerPipeline

@registerPipeline(
    name="My Analysis",
    description="Calculates a custom clinical metric.",
    required_deps=["torch>=2.2"],
)
class MyAnalysis(ProcessPipeline):
    def run(self, h5file):
        import torch
        # 1. Read data using h5py
        # 2. Perform calculations
        # 3. Return metrics

        metrics = {"peak_flow": 12.5}

        # Optional attributes applied to the pipeline group.
        attrs = {
            "pipeline_version": "1.0",
            "author": "StaticExample",
        }

        return ProcessResult(
            metrics=metrics,
            attrs=attrs,
        )
```
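To make step 2 of the sketch above concrete, here is a minimal, self-contained illustration of the kind of metric computation a pipeline might perform. The helper name and the hard-coded trace are hypothetical; a real pipeline would read the velocity trace from `h5file` instead.

```python
# Hypothetical helper: compute simple waveform metrics from a
# per-frame velocity trace (a real pipeline would read this from h5file).
def waveform_metrics(trace):
    peak = max(trace)
    trough = min(trace)
    mean = sum(trace) / len(trace)
    return {
        "peak_flow": peak,
        "mean_flow": mean,
        # Classic pulsatility index: (peak - trough) / mean
        "pulsatility_index": (peak - trough) / mean,
    }

metrics = waveform_metrics([8.0, 10.0, 12.5, 11.0, 9.5])
```

A dict like this is exactly what `run` would hand to `ProcessResult(metrics=...)`.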

## Postprocess System

Postprocess steps are discovered from `src/postprocess/` in the same spirit as pipelines, but they run once per batch over the generated pipeline output folder.

Use `@registerPostprocess(...)` to declare:

- optional Python package dependencies with `required_deps`
- required pipeline outputs with `required_pipelines`

### Simple Postprocess Structure

```python
from postprocess.core.base import (
    BatchPostprocess,
    PostprocessContext,
    PostprocessResult,
    registerPostprocess,
)


@registerPostprocess(
    name="My Batch Summary",
    description="Aggregate metrics across the generated batch outputs.",
    required_pipelines=["Basic Stats"],
)
class MyBatchSummary(BatchPostprocess):
    def run(self, context: PostprocessContext) -> PostprocessResult:
        report_path = context.output_dir / "my_batch_summary.json"
        report_path.write_text("{}", encoding="utf-8")

        return PostprocessResult(
            summary="Generated my_batch_summary.json.",
            generated_paths=[str(report_path)],
            metadata={"file_count": len(context.processed_files)},
        )
```

Inside a postprocess, you can:

- read `context.output_dir`
- read `context.processed_files`
- read `context.selected_pipelines`
- read `context.input_path`
- read `context.zip_outputs`
- write extra artifacts into `context.output_dir` before optional zipping
- return a short `summary`, explicit `generated_paths`, and structured `metadata`

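As a rough illustration of the write-artifacts step, the sketch below aggregates per-file values into a CSV under an output directory. It uses a plain stand-in for the context object (the real `PostprocessContext` comes from `postprocess.core.base`), so the field names mirror the list above but the helper, metric values, and file names are all hypothetical.

```python
import csv
import tempfile
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class FakeContext:
    # Stand-in mirroring the two PostprocessContext fields used here.
    output_dir: Path
    processed_files: list = field(default_factory=list)

def write_summary_csv(context):
    # Write one CSV row per processed file (hypothetical metric values).
    report = context.output_dir / "batch_summary.csv"
    with report.open("w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        writer.writerow(["file", "peak_flow"])
        for name, peak in context.processed_files:
            writer.writerow([name, peak])
    return report

ctx = FakeContext(
    output_dir=Path(tempfile.mkdtemp()),
    processed_files=[("eye_01.h5", 12.5), ("eye_02.h5", 11.8)],
)
report_path = write_summary_csv(ctx)
```

In a real postprocess you would return the written path in `generated_paths` so it is included before optional zipping.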
The included `Graphics Dashboard` postprocess shows the intended pattern: it consumes the `arterial_waveform_shape_metrics` output and generates a cohort dashboard plus PNG exports after the batch finishes.

`Pipeline Metrics Manifest` is a lighter built-in example that writes a JSON inventory of the generated pipeline metric datasets for the batch.

`Postprocess Tutorial` is the minimal reference example: it writes a single JSON file showing every `PostprocessContext` field and the `PostprocessResult` output format.