Add coronary artery segmentation bundle (CoronSegmentator)#761
Foxconn-bgroup-aidd wants to merge 27 commits into Project-MONAI:dev.
Conversation
@@ -0,0 +1,7 @@
Input/
Hi @Foxconn-bgroup-aidd , can we remove this file?
Some of the files are already ignored (set in https://github.com/Project-MONAI/model-zoo/blob/dev/.gitignore), and for customized folders like Input/ and Output/, I think we can remove them directly. What do you think?
There is no problem removing them. Thanks.
@@ -0,0 +1,17 @@
{
    "bundle_root": "./",
Could you prepare a unit test script?
It's helpful to check if (1) large files are downloadable (2) inference pipeline works (3) dependencies mentioned in metadata are correct and enough to run the script.
You can refer to this example to prepare the script: https://github.com/Project-MONAI/model-zoo/blob/dev/ci/unit_tests/test_spleen_ct_segmentation.py
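For illustration, a rough skeleton of such a test might look like the following (a sketch only; the bundle path, synthetic data shape, and workflow overrides are assumptions, not the final script):

import os
import tempfile
import unittest

import nibabel as nib
import numpy as np
from monai.bundle import ConfigWorkflow


class TestCoronSegmentator(unittest.TestCase):
    def setUp(self):
        # create one synthetic CT volume so the inference config has something to glob
        self.dataset_dir = tempfile.mkdtemp()
        image = np.random.randint(-1024, 1000, size=(64, 64, 32)).astype(np.int16)
        nib.save(nib.Nifti1Image(image, np.eye(4)), os.path.join(self.dataset_dir, "image.nii.gz"))

    def test_infer_config(self):
        bundle_root = "models/CoronSegmentator"  # assumed bundle location
        workflow = ConfigWorkflow(
            workflow_type="infer",
            config_file=os.path.join(bundle_root, "configs/inference.json"),
            meta_file=os.path.join(bundle_root, "configs/metadata.json"),
            logging_file=os.path.join(bundle_root, "configs/logging.conf"),
            dataset_dir=self.dataset_dir,
            bundle_root=bundle_root,
        )
        workflow.initialize()
        workflow.run()
        workflow.finalize()


if __name__ == "__main__":
    unittest.main()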
@yiheng-wang-nv, I prepared the script referring to the above URL, but when I try to run it, I get the error: ModuleNotFoundError: Cannot locate class or function path: 'scripts.heart_digital_twin.CoroSegmentator_Pipeline'. It seems the script can't access the custom class defined in inference.json. I put the script in the ci/unit_tests/ folder and run it from the model-zoo root directory. How can I solve this problem? Thanks for answering. The revised inference.json follows:
{
"imports": ["$import glob"],
"bundle_root": "./",
"ckpt_dir": "$@bundle_root + 'models'",
"dataset_dir": "",
"data": "$list(sorted(glob.glob(@dataset_dir + '/*.nii.gz')))",
"output_ext": ".usd",
"output_dir": "$@bundle_root + 'Output'",
"device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
"inferer": {
"_target_": "scripts.heart_digital_twin.CoroSegmentator_Pipeline",
"input_params": {
"inputFile": "$@data[0]",
"outputDir": "$@output_dir"
}
},
"dataset": {},
"evaluator": {},
"network_def": {},
"run": ["$@inferer.run()"]
}
Hi @Foxconn-bgroup-aidd, please try creating an __init__.py file in the scripts directory to make it a Python package, and also put the scripts folder directly under your bundle instead of under the unittest folder. Thanks.
Thanks for the comment, @KumoLiu. The following is my project's directory tree:
model-zoo/
|--ci/
|--unit_tests/
|--test_coronaryArtery_ct_segmentation.py
|--models/
|--myBundle/
|--large_file.yml
|--LICENSE
|--configs/
|--inference.json
|--metadata.json
|--docs/
|--models/
|--scripts/
|--__init__.py
|--heartDitgitalTwin.py
I ran python ci/unit_tests/test_coronaryArtery_ct_segmentation.py and got ModuleNotFoundError: Cannot locate class or function path: 'scripts.heart_digital_twin.CoroSegmentator_Pipeline'. I am still trying to solve the problem that pydoc.locate('scripts.heart_digital_twin.CoroSegmentator_Pipeline') can't access the custom class. Thanks for the help.
@KumoLiu I added the bundle path to sys.path and that solved it. Thanks!
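For reference, that fix amounts to something like this sketch (the bundle path below is an assumption):

import sys
from pathlib import Path

# make the bundle's `scripts` package importable regardless of the CWD
bundle_root = Path("models/CoronSegmentator").resolve()
if str(bundle_root) not in sys.path:
    sys.path.insert(0, str(bundle_root))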
KumoLiu left a comment
Hi @Foxconn-bgroup-aidd, thanks for the contributions! I've left several comments inline.
@@ -0,0 +1,88 @@
{
    "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
Hi @Foxconn-bgroup-aidd, could you please utilize the latest schema available at the following link:
"https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20240725.json"
Based on the content, it appears that you are already using the most recent version, as it includes "required_packages_version." Thanks.
| "monai": " 1.4.0", | ||
| "nnunetv2": "2.6.0", | ||
| "nibabel": "5.3.2", | ||
| "numpy": "2.3.1", |
Could you please double check the monai version you are using, the one mentioned above is "1.4.0"? Thanks!
| "monai": " 1.4.0", | |
| "nnunetv2": "2.6.0", | |
| "nibabel": "5.3.2", | |
| "numpy": "2.3.1", | |
| "nnunetv2": "2.6.0", | |
| "nibabel": "5.3.2", |
| "0": "Background", | ||
| "1": "Coronary Artery" |
It seems the "num_channels" is only one for the outputs.
| "optional_packages_version": { | ||
| "monai": "1.3.2", | ||
| "torch": "2.7.0+cu126", | ||
| "numpy": "2.3.1" | ||
| } |
I guess you can remove this part; "optional_packages_version" is no longer required in the latest schema. Thanks.
The version of monai used in our project is "1.4.0". "optional_packages_version" will be removed.
The ImageCAS dataset is publicly available at:
https://github.com/XiaoweiXu/ImageCAS-A-Large-Scale-Dataset-and-Benchmark-for-Coronary-Artery-Segmentation-based-on-CT.git''
Would be better to mention the dataset link in the "Data" section, what do you think?
# try:
# Step 1: Convert input to NRRD
input_nrrd = self._convert_to_nrrd(self.input_path, input_tmp_dir)
# Step 2: Run segmentation
seg_nrrd = self._run_segmentation(input_nrrd, output_tmp_dir)
# Step 3: Process results
self._process_segmentation_results(seg_nrrd, output_tmp_dir)
# except Exception as e:
#     logger.error(f"Processing failed: {str(e)}")
#     raise
Could you please remove the commented code here? Alternatively, including the try-catch logic would also make sense.
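For example, the restored try/except could look like the following sketch (method names are taken from the commented code above):

try:
    input_nrrd = self._convert_to_nrrd(self.input_path, input_tmp_dir)  # Step 1: Convert input to NRRD
    seg_nrrd = self._run_segmentation(input_nrrd, output_tmp_dir)  # Step 2: Run segmentation
    self._process_segmentation_results(seg_nrrd, output_tmp_dir)  # Step 3: Process results
except Exception as e:
    logger.error(f"Processing failed: {e}")
    raise  # re-raise so the caller sees the failure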
base_center = Gf.Vec3d(
    (bbox_min[0] + bbox_max[0]) / 2, bbox_min[1], (bbox_min[2] + bbox_max[2]) / 2
)
# 计算平移量，使得基底中心移动到原点 (compute the translation so the base center moves to the origin)
Please check and remove all Chinese comments inline. Thanks.
| "output_ext": ".usd", | ||
| "input_file": "Input/E_CT_0000.nii.gz", | ||
| "infer": { | ||
| "_target_": "scripts.heart_digital_twin.CoroSegmentator_Pipeline", |
As @yiheng-wang-nv mentioned above, please include the unit test to ensure the bundle works as expected.
I didn't see this file under the scripts folder.
Did you accidentally delete this test file?
We have fixed the issue of incorrectly deleting test files.
Hi, I’ve pushed the updated code. Thank you for your feedback so far! Please let me know if there’s anything else I should work on.
Hi @Foxconn-bgroup-aidd, based on the pre-commit CI, it seems the following JSON files have format issues:
Hi @Foxconn-bgroup-aidd, we still want to accept this model, so please have a look at the outstanding conversations and resolve those that have been addressed. @KumoLiu and @yiheng-wang-nv can continue their review from there. Thanks!
/build
| "monai": " 1.5.0", | ||
| "nnunetv2": "2.6.2", | ||
| "nibabel": "5.3.2", | ||
| "numpy": "2.3.3", |
Hi @Foxconn-bgroup-aidd, can you confirm the numpy version here? I saw a CI error: https://github.com/Project-MONAI/model-zoo/actions/runs/18928765961/job/54041291172?pr=761#step:7:488 It may be due to a wrong numpy version, or the Python version used in CI may not support this version. May I know which Python version you use?
Hi, we are using Python 3.12.9, and we have already pushed a new version. Please let us know if you need any additional information.
Hi @ericspod, can you help check this PR? We may need to enable tests on new Python versions. I'm very busy with other projects and may not have enough time for the model zoo recently.
@@ -0,0 +1,11 @@
large_files:
  - path: "models/cardiacModel.pt"
    # url: "https://mega.nz/file/L8oz3QhA#illHJWSCmsmV5xqbXktwiu-1jUebbpJ0HSKv0hbMFCc"
Please remove these commented links.
    return md5.hexdigest()


def _download(url, destination_path):
MONAI has its own download routines which you can use instead, and which also do hash checking.
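For example, a minimal sketch using MONAI's downloader (the URL and checksum here are placeholders, not the bundle's real values):

from monai.apps import download_url

download_url(
    url="https://example.com/cardiacModel.pt",  # placeholder URL
    filepath="models/cardiacModel.pt",
    hash_val="0e783497e94293c86995392e5fbbc790",  # expected checksum
    hash_type="md5",  # MONAI verifies the file hash after download
)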
@KumoLiu we seem to be having an issue with space for the tests; did we have a solution for this in this repo? Thanks!
Walkthrough
This PR introduces a complete coronary artery CT segmentation bundle integrated with MONAI. It includes test infrastructure, configuration files for inference and metadata, documentation, model weight management, and implementation scripts for nnUNet-based segmentation with STL component processing and digital twin pipeline orchestration.
Changes
Sequence Diagram
sequenceDiagram
actor User
participant CoroSegmentatorPipeline as CoroSegmentatorPipeline
participant ModelDownloader as Model Downloader
participant NNUnetPredictor as NNUnetPredictor
participant STLSplitter as STLSplitter
participant OutputDir as Output Directory
User->>CoroSegmentatorPipeline: run(input_params)
CoroSegmentatorPipeline->>ModelDownloader: Download model weights from large_file.yml
ModelDownloader-->>CoroSegmentatorPipeline: Models ready (cardiacModel.pt, CAseg066.zip)
CoroSegmentatorPipeline->>NNUnetPredictor: run(nii_file, output_folder)
NNUnetPredictor->>NNUnetPredictor: Setup nnUNet environment & copy input
NNUnetPredictor->>NNUnetPredictor: nnUNetv2_predict (CPU/GPU)
NNUnetPredictor->>STLSplitter: Process segmentation output STL
STLSplitter->>STLSplitter: Build connectivity via DSU
STLSplitter->>STLSplitter: Merge components (KD-tree proximity)
STLSplitter->>OutputDir: Write merged STL components
STLSplitter-->>NNUnetPredictor: STL processing complete
NNUnetPredictor-->>CoroSegmentatorPipeline: Segmentation complete
CoroSegmentatorPipeline-->>User: Success (True) or Failure (False)
Estimated Code Review Effort
🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
models/CoronSegmentator/docs/data_license.txt (1)
25-29: ⚠️ Potential issue | 🟡 Minor: Dependency versions don't match configs/metadata.json.
This table lists monai 1.4.0 and nnunetv2 2.5.1, but models/CoronSegmentator/configs/metadata.json declares monai_version: 1.5.0 and required_packages_version.nnunetv2: 2.6.2. Please reconcile so that the documented third-party versions match the runtime requirements actually installed by the bundle.
📝 Suggested alignment
-monai 1.4.0 Apache-2.0 MONAI Consortium / Project MONAI
+monai 1.5.0 Apache-2.0 MONAI Consortium / Project MONAI
 usd-core 25.5 Apache-2.0 Pixar Animation Studios
-nnunetv2 (editable) 2.5.1 Apache-2.0 Fabian Isensee / Helmholtz Munich
+nnunetv2 (editable) 2.6.2 Apache-2.0 Fabian Isensee / Helmholtz Munich
🧹 Nitpick comments (3)
models/CoronSegmentator/configs/inference.json (2)
5-6: bundle_root: "." is fragile; resolve relative to the config file or require an override.
When the bundle is invoked from a different CWD (CI, the unit test, or monai.bundle run from the repo root), "." makes ckpt_dir and output_dir resolve under the wrong directory, and scripts.heart_digital_twin will not be importable unless the caller has manually inserted the bundle into sys.path (which is exactly the workaround the author had to apply in the unit test, per the prior review thread).
Most bundles in this repo expect bundle_root to be passed in via CLI override (e.g., --bundle_root models/CoronSegmentator). It would also be worth adding a small imports entry that inserts @bundle_root into sys.path so scripts.heart_digital_twin resolves regardless of CWD, e.g.:
- "imports": [
-     "$import glob"
- ],
+ "imports": [
+     "$import glob",
+     "$import sys",
+     "$sys.path.append(@bundle_root)"
+ ],
Also applies to: 10-10
9-11: Several declared keys appear to be dead.
output_ext: ".usd" (Line 9) and device (Line 11) are never referenced by inferer, run, or any other entry; CoroSegmentatorPipeline.__init__ only consumes inputFile/outputDir. dataset: {}, evaluator: {}, network_def: {} (Lines 19-21) are empty objects. If they're only there to satisfy ConfigWorkflow.check_properties, please add a brief comment in the README; otherwise drop them.
If the pipeline really should obey device and write with output_ext, plumb them through to input_params so the workflow is self-consistent.
Also applies to: 19-21
models/CoronSegmentator/configs/metadata.json (1)
57-76: outputs.pred.type: "usd" does not match MONAI bundle conventions.
The type field across the model-zoo describes the tensor kind (image, probability, tuples, object, etc.), not the file format. USD is a file format produced downstream from the network; the network itself outputs a segmentation tensor. Set type: "image" (with format: "segmentation") to accurately describe the tensor, and document USD export separately in the README.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ci/unit_tests/test_coronaryArtery_ct_segmentation.py`:
- Around line 55-56: The synthetic test data is invalid: fix creation of
test_image and test_label by sampling CT-intensity-range into a dtype that can
hold Hounsfield units and producing true binary labels; specifically change the
array dtype for test_image (variable name test_image) from np.int8 to np.int16
(so values sampled from low=-1024, high=1000 are represented correctly) and
change the randint call for test_label (variable name test_label) to use high=2
so it can generate 0 and 1 labels rather than all zeros.
- Around line 65-77: The test calls check_workflow(inferrer,
check_properties=True) which triggers ConfigWorkflow ->
CoroSegmentatorPipeline.run and performs heavy external work
(monai.apps.utils.download_url and NNUnetPredictor.run spawning
nnUNetv2_predict); to fix, patch/mock those external operations in the test so
only wiring is validated: either stub CoroSegmentatorPipeline.run to a no-op or
mock monai.apps.utils.download_url and subprocess.run (or the method
NNUnetPredictor.run) to return successfully without downloading or invoking
nnUNetv2_predict; ensure your mocks target the exact symbols ConfigWorkflow
(test instantiation), CoroSegmentatorPipeline.run, download_url,
NNUnetPredictor.run, and the subprocess invocation name nnUNetv2_predict so the
test runs in CI without performing network or subprocess work.
In `@models/CoronSegmentator/configs/inference.json`:
- Line 8: The config currently builds "data" eagerly via "data":
"$list(sorted(glob.glob(`@dataset_dir` + '/*.nii.gz')))" and then uses `@data`[0] in
inferer.input_params.inputFile which raises IndexError when dataset_dir is
empty; change this by deferring or guarding the lookup: remove the eager
`@data`[0] dependency and instead resolve the input file at runtime inside
CoronSegmentatorPipeline.run() (or add a safe helper that returns a
sentinel/default or raises a clear, documented error if no .nii.gz files are
found), update inferer.input_params.inputFile to reference that runtime-resolved
value (or a safe placeholder), and add prominent documentation in the config
comment that dataset_dir must be overridden or contain NIfTI files so
ConfigWorkflow.check_workflow and monai.bundle run do not crash at construction
time.
In `@models/CoronSegmentator/configs/metadata.json`:
- Around line 38-50: The metadata.json currently declares format: "hounsfield"
but sets value_range: [0, 1], which is inconsistent; either set "value_range" to
a representative HU window (e.g., the clipping window your pipeline uses such as
[-1024, 3071] or a narrower window like [-125, 275] depending on preprocessing)
while keeping format: "hounsfield", or change format to reflect pre-normalized
data (e.g., "normalized" or "float32_normalized") and update README to document
the min-max or z-score normalization; update the "value_range" field and README
accordingly and ensure any references to network_data_format elsewhere in the
repo are consistent with the chosen option.
In `@models/CoronSegmentator/docs/README.md`:
- Around line 13-15: The README's opening paragraph, Tags, and "processed by"
steps still describe dual whole-heart + coronary segmentation, but the bundle is
coronary-only (see metadata.json task: "Coronary artery ct segmentation");
update models/CoronSegmentator/docs/README.md to remove references to MONAI
Auto3DSeg and whole-heart segmentation, revise the opening paragraph to state
coronary artery–only scope, adjust the Tags list, and change the "processed by"
steps (lines referencing MONAI/Auto3DSeg) to only list the custom nnU-Net
coronary segmentation and subsequent STL→USD conversion for 3D visualization.
- Around line 45-46: The ImageCAS dataset URL in
models/CoronSegmentator/docs/README.md contains two stray single quotes at the
end ("...-on-CT.git''") which break the link; remove the extra quote characters
so the URL ends with ".git" (update the ImageCAS dataset URL line in the README
to
"https://github.com/XiaoweiXu/ImageCAS-A-Large-Scale-Dataset-and-Benchmark-for-Coronary-Artery-Segmentation-based-on-CT.git").
- Around line 41-43: The README's example `monai.bundle run` invocation is
missing required parameters, causing the `data` list to be empty and `@data[0]`
to IndexError; update the example to pass `--meta_file`, `--logging_file`, and a
non-empty `--dataset_dir` that matches the glob used in `configs/inference.json`
(ensure `dataset_dir` points to a folder containing `.nii.gz` files), e.g.
include flags for `--meta_file configs/metadata.json --logging_file
configs/logging.conf --dataset_dir /path/to/dataset`; confirm
`configs/inference.json`'s `"dataset_dir"` is not left as an empty string or
document overriding it via the CLI.
In `@models/CoronSegmentator/large_file.yml`:
- Line 6: The YAML field hash_type is defined but never passed into the
downloader; update CoroSegmentatorPipeline.run to read hash_type from each file
entry (e.g. file_info.get("hash_type", "md5")) and pass it into the download
call as download_url(url, filepath=..., hash_val=expected_hash,
hash_type=hash_type) so the declared algorithm is honored (alternatively remove
the unused YAML field if you prefer); locate this change around the run method
in CoroSegmentatorPipeline and the call site that currently supplies only
path/url/hash_val.
- Line 4: The model bundle in models/CoronSegmentator/large_file.yml uses a
Google Drive URL under the url key which is fragile for CI; upload the weights
to a stable direct-download host (e.g., MONAI-extra-test-data GitHub release or
another CDN), replace the url value with the new direct-download link, recompute
and update the corresponding hash_val in the same YAML entry to match the
uploaded file, and apply the same replacement for the other affected bundle
referenced as "9-9" so all references use stable hosts instead of Google Drive.
In
`@models/CoronSegmentator/scripts/coronaryArtery_segmentation/coronaryArtery_seg.py`:
- Around line 437-441: The log messages use backslash-continued f-strings which
preserve the indentation and produce unwanted spaces; locate the except blocks
catching subprocess.CalledProcessError (the logger.error calls that reference
self.input_path and e.returncode) and replace the backslash-continued f-strings
with either a single-line f-string or implicit string concatenation (e.g., f"...
segmentation inference failed with code {e.returncode}") so the message has no
embedded indentation; apply the same fix to the other two occurrences that use
logger.error with self.input_path and e.returncode.
- Around line 358-366: preprocessed_path is set inside the installed bundle
(script_path.parent) which can pollute/read-only install dirs; change it to live
under the temporary workspace like raw_path (use self.temp_dir.name, e.g.
self.preprocessed_path = os.path.join(self.temp_dir.name,
"nnUNet/nnUNet_preprocessed")) so nnUNet writes to a transient location; also
harden the model_weight construction (the list comprehension that calls
os.listdir on script_path.parent.parent.parent/"models") by first checking that
the models directory exists and using an empty list if not (or use
os.path.exists/isdir) before listing to avoid FileNotFoundError when the models/
dir is absent.
In `@models/CoronSegmentator/scripts/heart_digital_twin.py`:
- Line 9: Replace the leftover template placeholder in the module docstring at
the top of heart_digital_twin.py with a concise, accurate description of what
the module does (e.g., "Convert NIfTI (.nii/.nii.gz) medical image files into
USD scene files and generate associated heart digital twin metadata"), ensuring
the docstring reflects the script's purpose, inputs, outputs and any important
side effects; update the module-level docstring string literal (the first
triple-quoted string in the file) accordingly.
- Around line 92-104: The run() error handling silently returns False and
performs fragile cleanup; update the except block in heart_digital_twin.run to
both surface failures to the workflow (either re-raise the caught exception
after logging or raise a new exception so ConfigWorkflow sees the failure) and
replace the manual os.remove/os.rmdir loop with shutil.rmtree(seg_folder,
ignore_errors=False) (or ignore_errors=True if you prefer best-effort cleanup)
to correctly remove nested files/directories; ensure you import shutil and keep
the gc.collect() in the finally block.
---
Outside diff comments:
In `@models/CoronSegmentator/docs/data_license.txt`:
- Around line 25-29: The versions in
models/CoronSegmentator/docs/data_license.txt (monai 1.4.0, nnunetv2 2.5.1) are
inconsistent with the runtime metadata in configs/metadata.json (monai_version
and required_packages_version.nnunetv2); update the table in data_license.txt so
the listed package versions match the values in configs/metadata.json (adjust
the monai and nnunetv2 entries to the monai_version and
required_packages_version.nnunetv2 declared there), or if the metadata is wrong,
update configs/metadata.json instead—ensure both files reflect the same
canonical versions.
---
Nitpick comments:
In `@models/CoronSegmentator/configs/inference.json`:
- Around line 5-6: The config uses bundle_root: "." which breaks resolution when
run from other CWDs; change the config so bundle_root is not a hardcoded "." by
requiring a CLI override or resolving it relative to the config file (update the
bundle_root value and any derived fields like ckpt_dir and output_dir that
reference `@bundle_root`), and add an imports entry to insert `@bundle_root` into
sys.path so scripts.heart_digital_twin can be imported regardless of CWD; update
references in the config (bundle_root, ckpt_dir, output_dir, imports) to use the
resolved `@bundle_root` token rather than a literal ".".
- Around line 9-11: The JSON contains dead/unused keys (output_ext, device, and
empty objects dataset/evaluator/network_def) — either remove them or wire them
through: if you want the pipeline to honor device and output_ext, add them into
the same input_params object that is passed to the inferer/run so
CoronSegmentatorPipeline.__init__ (and whoever builds input_params) can consume
them (e.g., include "device" and "output_ext" in the input_params section and
ensure inferer/run forwards input_params into
CoronSegmentatorPipeline.__init__); otherwise delete output_ext and device from
the config and either remove or document the empty dataset/evaluator/network_def
entries in the README to satisfy ConfigWorkflow.check_properties. Ensure your
changes reference the config keys by name and that inferer/run and
CoronSegmentatorPipeline.__init__ are updated consistently.
In `@models/CoronSegmentator/configs/metadata.json`:
- Around line 57-76: The metadata currently mislabels the model output: change
outputs.pred.type from "usd" to "image" so the tensor kind correctly reflects a
segmentation output (leave "format": "segmentation", "num_channels",
"spatial_shape", etc. unchanged), and move any USD/export-file format details
into the README or an export step documentation rather than the metadata.json;
update the entry referenced as outputs.pred.type accordingly.
📒 Files selected for processing (12)
ci/unit_tests/test_coronaryArtery_ct_segmentation.py
models/CoronSegmentator/.gitignore
models/CoronSegmentator/LICENSE
models/CoronSegmentator/configs/inference.json
models/CoronSegmentator/configs/logging.conf
models/CoronSegmentator/configs/metadata.json
models/CoronSegmentator/docs/README.md
models/CoronSegmentator/docs/data_license.txt
models/CoronSegmentator/large_file.yml
models/CoronSegmentator/scripts/__init__.py
models/CoronSegmentator/scripts/coronaryArtery_segmentation/coronaryArtery_seg.py
models/CoronSegmentator/scripts/heart_digital_twin.py
test_image = np.random.randint(low=-1024, high=1000, size=input_shape).astype(np.int8)
test_label = np.random.randint(low=0, high=1, size=input_shape).astype(np.int8)
Both synthetic image dtype and label range are buggy.
- Line 55: np.int8 only holds values in [-128, 127], but you sample in [-1024, 1000). The cast silently wraps around, producing meaningless CT intensities (e.g., -1024 → 0, 999 → -25). Use np.int16 for HU-scale CT.
- Line 56: np.random.randint(low=0, high=1, ...) never produces a 1 because high is exclusive; every label volume is all zeros. Use high=2 for binary labels.
🐛 Proposed fix
- test_image = np.random.randint(low=-1024, high=1000, size=input_shape).astype(np.int8)
- test_label = np.random.randint(low=0, high=1, size=input_shape).astype(np.int8)
+ test_image = np.random.randint(low=-1024, high=1000, size=input_shape).astype(np.int16)
+ test_label = np.random.randint(low=0, high=2, size=input_shape).astype(np.int8)
@parameterized.expand([TEST_CASE_1])
def test_infer_config(self, override):
    override["dataset_dir"] = self.dataset_dir
    bundle_root = override["bundle_root"]

    inferrer = ConfigWorkflow(
        workflow_type="infer",
        config_file=Path(bundle_root) / "configs/inference.json",
        logging_file=Path(bundle_root) / "configs/logging.conf",
        meta_file=Path(bundle_root) / "configs/metadata.json",
        **override,
    )
    check_workflow(inferrer, check_properties=True)
Mock/patch heavy operations (download_url and nnUNetv2_predict subprocess) to prevent full-pipeline execution in this unit test.
check_workflow(inferrer, check_properties=True) invokes workflow.run(), which loads inference.json and executes the pipeline:
- ConfigWorkflow.run() instantiates and calls CoroSegmentatorPipeline.run()
- CoroSegmentatorPipeline.run() downloads ~hundreds of MB of model weights via download_url() from Google Drive view URLs
- run_coronary_artery_segmentation() → NNUnetPredictor.run() spawns an nnUNetv2_predict subprocess
In CI this will fail because:
- monai.apps.utils.download_url cannot follow through Google Drive's confirmation tokens for large files
- nnUNetv2_predict will not be available on PATH
- Full pipeline execution exhausts test resources
The test has no mocks or patches to prevent this. Recommend mocking download_url and subprocess.run (or stubbing CoroSegmentatorPipeline.run()) so the test validates only the workflow wiring and property check. If full-pipeline coverage is needed, gate it behind an integration-test marker.
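A sketch of the suggested mocking (the patch targets below are assumptions and must match where each name is looked up, not where it is defined):

from unittest import mock

@mock.patch("scripts.heart_digital_twin.download_url")  # skip the large weight download
@mock.patch("scripts.heart_digital_twin.run_coronary_artery_segmentation")  # skip the nnUNetv2_predict subprocess
def test_infer_config(self, mock_seg, mock_download):
    mock_seg.return_value = None
    mock_download.return_value = None
    # ... build ConfigWorkflow and run check_workflow as before ...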
| "bundle_root": ".", | ||
| "ckpt_dir": "$@bundle_root + '/models'", | ||
| "dataset_dir": "", | ||
| "data": "$list(sorted(glob.glob(@dataset_dir + '/*.nii.gz')))", |
@data[0] will raise IndexError when dataset_dir is empty or contains no .nii.gz.
dataset_dir defaults to "" and data is built eagerly from glob.glob(@dataset_dir + '/*.nii.gz'). Then inferer.input_params.inputFile references @data[0], which is evaluated at workflow construction time. Result: just running monai.bundle run --config_file configs/inference.json (without supplying a populated dataset_dir) crashes before run is ever called, and ConfigWorkflow.check_workflow(check_properties=True) from the new unit test will likewise fail unless dataset_dir is overridden with a directory containing NIfTI files.
Consider deferring the lookup, e.g., resolve the input file inside CoroSegmentatorPipeline.run() from dataset_dir, or guard with a default sentinel. At minimum, document the required dataset_dir override prominently.
Also applies to: 15-15
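One possible shape for the deferred lookup, as a sketch (the helper name _resolve_input_file is hypothetical):

import glob
import os

def _resolve_input_file(dataset_dir: str) -> str:
    # resolved at run time instead of at workflow construction time
    files = sorted(glob.glob(os.path.join(dataset_dir, "*.nii.gz")))
    if not files:
        raise FileNotFoundError(
            f"no .nii.gz files found in dataset_dir={dataset_dir!r}; override dataset_dir when running the bundle"
        )
    return files[0]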
| "format": "hounsfield", | ||
| "modality": "CT", | ||
| "num_channels": 1, | ||
| "spatial_shape": [ | ||
| 512, | ||
| 512, | ||
| 256 | ||
| ], | ||
| "dtype": "float32", | ||
| "value_range": [ | ||
| 0, | ||
| 1 | ||
| ], |
format: "hounsfield" with value_range: [0, 1] is contradictory.
Hounsfield units span roughly [-1024, 3071]. A [0, 1] range implies the inputs are min-max normalized (no longer raw HU). Please either:
- keep format: "hounsfield" and set value_range to a representative HU window (e.g., the clipping window the pipeline uses internally), or
- if the bundle expects pre-normalized inputs, change format accordingly and document the normalization in the README.
This also affects how downstream tooling that consumes network_data_format will preprocess data.
Coronsegmentator is an automated pipeline that performs dual-task segmentation on cardiac CT images, focusing on both whole-heart and coronary artery structures. It integrates MONAI’s Auto3DSeg for general cardiac segmentation and a custom nnU-Net model for detailed coronary artery segmentation. The pipeline further converts segmentation results (in STL format) into USD files for downstream 3D visualization and digital twin simulation using NVIDIA Omniverse.

The entire workflow is designed to enable precise, scalable, and automated generation of personalized coronary artery digital twins, supporting clinical planning, AI-based diagnosis, and medical visualization in research or pre-operative planning workflows.
Description still advertises whole-heart (Auto3DSeg) segmentation, but the bundle was reduced to coronary-only.
Per the latest commit history and models/CoronSegmentator/configs/metadata.json (task: "Coronary artery ct segmentation", description mentions only coronary artery), the bundle no longer performs the dual whole-heart + coronary task. The opening section here, the Tags, and the "processed by" steps (Lines 25–29) still mention MONAI Auto3DSeg whole-heart segmentation, which will mislead users.
Please update the README to reflect coronary-only scope.
| # url: "https://mega.nz/file/L8oz3QhA#illHJWSCmsmV5xqbXktwiu-1jUebbpJ0HSKv0hbMFCc" | ||
| url: "https://drive.google.com/file/d/11-pgULhmSZyAAfZG9n1vOusWHeH-hpwE/view?usp=drive_link" | ||
| hash_val: "0e783497e94293c86995392e5fbbc790" | ||
| hash_type: "md5" |
hash_type is declared here but never read by the consumer.
models/CoronSegmentator/scripts/heart_digital_twin.py::CoroSegmentatorPipeline.run() only extracts path, url, and hash_val from each entry and calls download_url(url, filepath=..., hash_val=expected_hash) without forwarding hash_type. MONAI's download_url defaults to MD5, so behavior happens to be correct, but the YAML field is silently ignored. Either pass it through (hash_type=file_info.get("hash_type", "md5")) or drop the field to avoid implying it has effect.
Also applies to: 11-11
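For illustration, the pass-through could look like this (a sketch; file_info is assumed to be one parsed entry from large_file.yml):

from monai.apps import download_url

download_url(
    url=file_info["url"],
    filepath=filepath,
    hash_val=file_info.get("hash_val"),
    hash_type=file_info.get("hash_type", "md5"),  # honor the declared algorithm
)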
script_path = Path(os.path.abspath(__file__))
self.raw_path = os.path.join(self.temp_dir.name, "nnUNet/nnUNet_raw")
self.preprocessed_path = os.path.join(script_path.parent, "nnUNet/nnUNet_preprocessed")
self.results_path = os.path.join(script_path.parent.parent.parent, "models/nnUNet_results")
self.model_weight = [
    os.path.join(script_path.parent.parent.parent, "models", f)
    for f in os.listdir(os.path.join(script_path.parent.parent.parent, "models"))
    if f.split(".")[-1] == "zip"
]  # check if any model weight exist
preprocessed_path should not live inside the installed bundle directory.
raw_path and seg_path correctly go under self.temp_dir, but preprocessed_path is set to <bundle>/scripts/coronaryArtery_segmentation/nnUNet/nnUNet_preprocessed. nnUNet may create files there, polluting the installed package directory and causing permission errors when the bundle is installed read-only (e.g., system site-packages, container layers). It also leaks state across pipeline invocations.
Move it under self.temp_dir for parity with the other paths, or to a user-cache directory.
♻️ Proposed fix
- self.raw_path = os.path.join(self.temp_dir.name, "nnUNet/nnUNet_raw")
- self.preprocessed_path = os.path.join(script_path.parent, "nnUNet/nnUNet_preprocessed")
- self.results_path = os.path.join(script_path.parent.parent.parent, "models/nnUNet_results")
+ self.raw_path = os.path.join(self.temp_dir.name, "nnUNet/nnUNet_raw")
+ self.preprocessed_path = os.path.join(self.temp_dir.name, "nnUNet/nnUNet_preprocessed")
+ self.results_path = os.path.join(script_path.parent.parent.parent, "models/nnUNet_results")
Additionally, line 364's os.listdir(... "models") will raise FileNotFoundError if the models/ directory doesn't yet exist (e.g., a unit test that constructs NNUnetPredictor without going through CoroSegmentatorPipeline.run()). Consider guarding with a default empty list when the directory is absent.
except subprocess.CalledProcessError as e:
    logger.error(
        f"Coronary artery <{os.path.basename(self.input_path)}> \
        segmentation inference failed with code {e.returncode}"
    )
Backslash-continued f-strings emit literal indentation in log messages.
Same pattern at lines 462-466 and 502-506. Inside an f"..."/"..." string, a \ line continuation does not strip the leading whitespace on the next line — the resulting log entry contains a long run of spaces (e.g., ... failed with code 1 segmentation inference failed). Use implicit string concatenation or textwrap.dedent/single-line strings instead.
♻️ Proposed fix (apply to all three sites)
- logger.error(
- f"Coronary artery <{os.path.basename(self.input_path)}> \
- segmentation inference failed with code {e.returncode}"
- )
+ logger.error(
+ "Coronary artery <%s> segmentation inference failed with code %s",
+ os.path.basename(self.input_path),
+ e.returncode,
License: Apache License 2.0

Description:
This script performs <brief description of converting NII Files to USD Files>.
Leftover template placeholder in the module docstring.
<brief description of converting NII Files to USD Files> looks like an unfilled docstring template. Please replace it with the actual description of what this module does.
try:
    run_coronary_artery_segmentation(nii_path, seg_folder)
    return True
except Exception as e:
    logger.error(f"Processing failed for {self.nii_file}: {str(e)}")
    # Remove the output folder if failed
    if os.path.exists(seg_folder):
        for f in os.listdir(seg_folder):
            os.remove(os.path.join(seg_folder, f))
        os.rmdir(seg_folder)
    return False
finally:
    gc.collect()
Silent failure + fragile cleanup in run().
Two related concerns:
- On exception, run() only logs and returns False. ConfigWorkflow treats that as a successful run, so a failed inference is invisible to the bundle/test harness. Either re-raise after logging, or have the workflow run step assert on the return value.
- Cleanup uses os.remove followed by os.rmdir. If seg_folder ever contains subdirectories (likely once STL/USD post-processing lands per the PR description), os.remove will raise IsADirectoryError and os.rmdir will fail on a non-empty dir. shutil.rmtree is safer and simpler.
♻️ Proposed fix
-import gc
+import gc
+import shutil
@@
- try:
- run_coronary_artery_segmentation(nii_path, seg_folder)
- return True
- except Exception as e:
- logger.error(f"Processing failed for {self.nii_file}: {str(e)}")
- # Remove the output folder if failed
- if os.path.exists(seg_folder):
- for f in os.listdir(seg_folder):
- os.remove(os.path.join(seg_folder, f))
- os.rmdir(seg_folder)
- return False
- finally:
- gc.collect()
+ try:
+ run_coronary_artery_segmentation(nii_path, seg_folder)
+ return True
+ except Exception:
+ logger.exception(f"Processing failed for {self.nii_file}")
+ # Remove the output folder if failed
+ shutil.rmtree(seg_folder, ignore_errors=True)
+ raise
+ finally:
+ gc.collect()
🧰 Tools
🪛 Ruff (0.15.11)
[warning] 95-95: Do not catch blind exception: Exception
(BLE001)
[warning] 96-96: Use explicit conversion flag
Replace with conversion flag
(RUF010)
Hi @Foxconn-bgroup-aidd, sorry for the long delay on this one. Please have a look at the comments made by us and CodeRabbit, and mark as resolved anything that's sorted. I think there are some valid concerns about the tests that should be addressed. Thanks!
Proposed model: CoronSegmentator
This PR adds a new coronary artery segmentation model named coronsegmentator under models/coronsegmentator/.
Coronsegmentator is an automated pipeline that performs dual-task segmentation on cardiac CT images, focusing on both whole-heart and coronary artery structures. It integrates MONAI’s Auto3DSeg for general cardiac segmentation and a custom nnU-Net model for detailed coronary artery segmentation. The pipeline further converts segmentation results (in STL format) into USD files for downstream 3D visualization and digital twin simulation using NVIDIA Omniverse.