Commit 79c4966: fork angioeye

1 parent dc13a1c commit 79c4966

45 files changed
Lines changed: 15144 additions & 1 deletion

.gitignore

Lines changed: 65 additions & 0 deletions

```gitignore
# Byte-compiled / cache
__pycache__/
*.py[cod]
*$py.class

# Virtual environments
.venv/
venv/
env/
ENV/
.env
.env.*
pip-wheel-metadata/

# Distribution / packaging
build/
dist/
.eggs/
*.egg-info/
*.egg
wheelhouse/

# Test / coverage
.pytest_cache/
.tox/
.nox/
.cache/
.coverage
.coverage.*
htmlcov/
nosetests.xml
coverage.xml
*.cover
*.py,cover

# Type checking / linting
.mypy_cache/
.pytype/
.pyre/
.ruff_cache/

# Jupyter
.ipynb_checkpoints/

# IDE / editor
.vscode/
.idea/
*.code-workspace

# OS files
.DS_Store
Thumbs.db
ehthumbs.db
Desktop.ini

# Local data outputs
*.h5
*.hdf5
*_result.h5
*_result.csv
*_metrics.csv
*.zip
process_result.csv
pipelines.txt
requirements-optional.txt
```

.pre-commit-config.yaml

Lines changed: 16 additions & 0 deletions

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v6.0.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-toml
      - id: check-added-large-files

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.14.14
    hooks:
      - id: ruff-check
        args: [--fix]
      - id: ruff-format
```

AngioEye.ico

51.1 KB
Binary file not shown.

AngioEye.spec

Lines changed: 56 additions & 0 deletions

```python
# -*- mode: python ; coding: utf-8 -*-
from PyInstaller.utils.hooks import collect_data_files
from PyInstaller.utils.hooks import collect_submodules

datas = []
hiddenimports = []
datas += collect_data_files('pipelines')
datas += collect_data_files('postprocess')
datas += collect_data_files('sv_ttk')
datas += collect_data_files('tkinterdnd2')
datas += [('Angioeye_logo.png', '.')]
datas += [('AngioEye.ico', '.')]
datas += [('default_settings.json', '.')]
datas += [('pyproject.toml', '.')]
hiddenimports += collect_submodules('pipelines')
hiddenimports += collect_submodules('postprocess')
hiddenimports += collect_submodules('tkinterdnd2')
hiddenimports += ['matplotlib.backends.backend_ps']


a = Analysis(
    ['src\\angio_eye.py'],
    pathex=['src'],
    binaries=[],
    datas=datas,
    hiddenimports=hiddenimports,
    hookspath=['hooks'],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
    optimize=0,
)
pyz = PYZ(a.pure)

exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    [],
    name='AngioEye',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=False,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
    icon='AngioEye.ico',
)
```

Angioeye_logo.png

64.7 KB

README.md

Lines changed: 198 additions & 1 deletion

Removed:

> EyeFlow is a quantitative analysis platform for retinal hemodynamics using Doppler holography. It combines HoloDoppler outputs with DopplerView-derived image enhancement, vascular segmentation, and topology inference to generate structured, reproducible, and analysis-ready measurements of retinal blood flow.

Added:

# AngioEye

AngioEye is the cohort-analysis engine for retinal Doppler holography. It browses EyeFlow .h5 outputs, reads per-segment metrics, applies QC, compares models, and aggregates results at eye/cohort level (including artery–vein summaries) to help design biomarkers. It exports clean CSV reports for stats, figures, and clinical models.

---

## Setup

### Prerequisites

- Python 3.10 or higher.
- A virtual environment is highly recommended.

This project uses a `pyproject.toml` to declare all of its requirements. Install it inside a Python virtual environment (venv):

```sh
# Create the venv
python -m venv .venv

# Activate the venv
# On Windows PowerShell, you may first need to adjust the execution policy
./.venv/Scripts/activate
```

Exit the environment with:

```sh
deactivate
```

### 1. Basic Installation (User)

```sh
pip install -e .

# Installs pipeline-specific dependencies (optional)
pip install -e ".[pipelines]"

# Installs postprocess-specific dependencies such as the graphics dashboard (optional)
pip install -e ".[postprocess]"
```

### 2. Development Setup (Contributor)

```sh
# Install all dependencies including dev tools (ruff, pre-commit, pyinstaller)
pip install -e ".[dev,pipelines,postprocess]"

# Initialize pre-commit hooks (optional)
pre-commit install
```

> [!NOTE]
> Pre-commit is very useful for running automatic checks before code is committed, reducing the chance of badly formatted code being pushed.
>
> If a pre-commit hook fails, it will try to fix the affected files, **so you will need to stage them again before re-creating the commit**.

> [!TIP]
> Once the `dev` dependencies are installed, you can run the linter with:
>
> ```sh
> # Only run the checks
> lint-tool
>
> # Let the linter fix as much as possible
> lint-tool --fix
> ```

---

## Usage

Launch the main application to process files interactively:

### GUI

The GUI handles batch processing for folders, single .h5/.hdf5 files, or .zip archives and lets you run multiple pipelines at once. Batch outputs preserve the input subfolder layout under the chosen output directory (one combined `.h5` per input file).

You can also select batch-level postprocess steps. These run after the selected pipelines finish and before optional zipping, so any generated dashboards, PNGs, or summaries are included in the final output folder or archive.

Use the Pipeline Library tab to select which pipelines run. Selection preferences are saved per user between app launches, including in installed builds. Use the Postprocess Library tab the same way for postprocess steps.

```sh
# Via the entry point
angioeye

# Or via the script
python src/angio_eye.py
```

When you run `angioeye` from inside the repository checkout, the launcher prefers the local `src/` tree, so newly added or edited pipelines are picked up without a full reinstall.

Installed builds expose editable `pipelines/` and `postprocess/` folders next to `AngioEye.exe`; use the Library tabs' Open folder and Reload buttons to edit and refresh them.

### CLI

The CLI is designed for batch processing in headless environments or on clusters.

```sh
# Via the entry point
angioeye-cli

# Or via the script
python src/cli.py
```

---

## Pipeline System

Pipelines are the heart of AngioEye. To add a new analysis, create a file in `src/pipelines/` with a class inheriting from `ProcessPipeline`.

To register it with the app, add the `@registerPipeline` decorator. Heavy imports can be deferred inside the pipeline's `run` method, and the decorator also carries metadata such as the pipeline's name, description, and optional dependencies.

For more complete examples, see `src/pipelines/basic_stats.py` and `src/pipelines/dummy_heavy.py`.

### Simple Pipeline Structure

```python
from pipelines import ProcessPipeline, ProcessResult, registerPipeline


@registerPipeline(
    name="My Analysis",
    description="Calculates a custom clinical metric.",
    required_deps=["torch>=2.2"],
)
class MyAnalysis(ProcessPipeline):
    def run(self, h5file):
        import torch

        # 1. Read data using h5py
        # 2. Perform calculations
        # 3. Return metrics

        metrics = {"peak_flow": 12.5}

        # Optional attributes applied to the pipeline group.
        attrs = {
            "pipeline_version": "1.0",
            "author": "StaticExample",
        }

        return ProcessResult(metrics=metrics, attrs=attrs)
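
The implementation of `registerPipeline` itself is not shown in this commit; as a rough sketch under that caveat, a registry decorator of this kind typically just records the decorated class in a module-level table that the app can later enumerate. All names below are illustrative, not AngioEye's actual API:

```python
# Minimal sketch of a plugin-registry decorator, in the spirit of
# @registerPipeline. Illustrative only, not AngioEye's real implementation.
from dataclasses import dataclass, field


@dataclass
class PipelineInfo:
    cls: type
    name: str
    description: str = ""
    required_deps: list = field(default_factory=list)


PIPELINE_REGISTRY: dict[str, PipelineInfo] = {}


def register_pipeline(name, description="", required_deps=None):
    """Class decorator that records the pipeline in a global registry."""
    def decorator(cls):
        PIPELINE_REGISTRY[name] = PipelineInfo(
            cls=cls,
            name=name,
            description=description,
            required_deps=required_deps or [],
        )
        return cls  # the class itself is returned unchanged
    return decorator


@register_pipeline(name="My Analysis", description="Demo", required_deps=["torch>=2.2"])
class MyAnalysis:
    def run(self, h5file):
        return {"peak_flow": 12.5}


# The app can enumerate registered pipelines and instantiate them on demand.
info = PIPELINE_REGISTRY["My Analysis"]
result = info.cls().run(None)  # {'peak_flow': 12.5}
```

Because the decorator returns the class unchanged, registration has no runtime cost for the pipeline itself, and a GUI library tab can be built purely from the registry's metadata.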

## Postprocess System

Postprocess steps are discovered from `src/postprocess/` in the same spirit as pipelines, but they run once per batch, over the generated pipeline output folder.

Use `@registerPostprocess(...)` to declare:

- optional Python package dependencies with `required_deps`
- required pipeline outputs with `required_pipelines`

### Simple Postprocess Structure

```python
from postprocess.core.base import (
    BatchPostprocess,
    PostprocessContext,
    PostprocessResult,
    registerPostprocess,
)


@registerPostprocess(
    name="My Batch Summary",
    description="Aggregate metrics across the generated batch outputs.",
    required_pipelines=["Basic Stats"],
)
class MyBatchSummary(BatchPostprocess):
    def run(self, context: PostprocessContext) -> PostprocessResult:
        report_path = context.output_dir / "my_batch_summary.json"
        report_path.write_text("{}", encoding="utf-8")

        return PostprocessResult(
            summary="Generated my_batch_summary.json.",
            generated_paths=[str(report_path)],
            metadata={"file_count": len(context.processed_files)},
        )
```

Inside a postprocess, you can:

- read `context.output_dir`
- read `context.processed_files`
- read `context.selected_pipelines`
- read `context.input_path`
- read `context.zip_outputs`
- write extra artifacts into `context.output_dir` before optional zipping
- return a short `summary`, explicit `generated_paths`, and structured `metadata`
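
The typical body of such a step is batch-level aggregation. A minimal, self-contained sketch of that pattern, using a made-up sidecar layout and metric name rather than AngioEye's actual `.h5` schema:

```python
# Illustrative only: aggregate a per-file metric into one batch-level CSV,
# the kind of work a batch postprocess performs. The file layout and the
# "peak_flow" metric name are assumptions for this sketch.
import csv
import json
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    output_dir = Path(tmp)

    # Pretend two processed files each left a small metrics sidecar.
    for name, peak in [("eye_od", 12.5), ("eye_os", 11.0)]:
        (output_dir / f"{name}_metrics.json").write_text(
            json.dumps({"peak_flow": peak}), encoding="utf-8"
        )

    # Batch step: collect every sidecar and write one cohort summary CSV.
    rows = []
    for path in sorted(output_dir.glob("*_metrics.json")):
        metrics = json.loads(path.read_text(encoding="utf-8"))
        rows.append({"file": path.stem, "peak_flow": metrics["peak_flow"]})

    summary = output_dir / "batch_summary.csv"
    with summary.open("w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=["file", "peak_flow"])
        writer.writeheader()
        writer.writerows(rows)

    mean_peak = sum(r["peak_flow"] for r in rows) / len(rows)
    print(f"{len(rows)} files, mean peak_flow = {mean_peak}")
    # prints: 2 files, mean peak_flow = 11.75
```

Writing the summary into the batch output directory before zipping mirrors the contract above: anything placed in `context.output_dir` ends up in the final folder or archive.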

The included `Graphics Dashboard` postprocess shows the intended pattern: it consumes the `arterial_waveform_shape_metrics` output and generates a cohort dashboard plus PNG exports after the batch finishes.

`Pipeline Metrics Manifest` is a lighter built-in example that writes a JSON inventory of the generated pipeline metric datasets for the batch.

`Postprocess Tutorial` is the minimal reference example: it writes a single JSON file showing every `PostprocessContext` field and the `PostprocessResult` output format.

THIRD_PARTY_NOTICES

Lines changed: 30 additions & 0 deletions

# THIRD-PARTY NOTICES

This software includes third-party components. The following notice is provided
to comply with their license terms.

---

## Sun Valley ttk theme (sv-ttk / Sun-Valley-ttk-theme)

Project: Sun-Valley-ttk-theme
Upstream: https://github.com/rdbende/Sun-Valley-ttk-theme
License: MIT License

MIT License

Copyright (c) rdbende <rdbende@proton.me>

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
