Add coronary artery segmentation bundle (CoronSegmentator) #761

Open
Foxconn-bgroup-aidd wants to merge 27 commits into Project-MONAI:dev from Foxconn-bgroup-aidd:760-CoronSegmentator

Conversation

@Foxconn-bgroup-aidd Foxconn-bgroup-aidd commented Jul 16, 2025

Proposed model: CoronSegmentator

This PR adds a new coronary artery segmentation model named coronsegmentator under models/coronsegmentator/.

CoronSegmentator is an automated pipeline that performs dual-task segmentation on cardiac CT images, covering both whole-heart and coronary artery structures. It integrates MONAI's Auto3DSeg for general cardiac segmentation and a custom nnU-Net model for detailed coronary artery segmentation. The pipeline then converts the segmentation results (in STL format) into USD files for downstream 3D visualization and digital twin simulation in NVIDIA Omniverse.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added a coronary artery CT segmentation model enabling dual-task cardiac imaging analysis with whole-heart and vessel segmentation capabilities.
  • Documentation

    • Added comprehensive guides, dataset references, and licensing documentation.
  • Tests

    • Added unit tests validating the inference workflow configuration.
  • Chores

    • Added configuration files for inference, logging, and model metadata.
    • Added repository and license documentation files.

@Nic-Ma Nic-Ma requested review from KumoLiu and yiheng-wang-nv July 21, 2025 16:22
Comment thread models/CoronSegmentator/.gitignore Outdated
@@ -0,0 +1,7 @@
Input/
Collaborator

Hi @Foxconn-bgroup-aidd, can we remove this file?
Some of these files are already ignored (set in https://github.com/Project-MONAI/model-zoo/blob/dev/.gitignore), and customized folders like Input/ and Output/ can be removed directly. What do you think?

Author

There is no problem removing them. Thanks.

@@ -0,0 +1,17 @@
{
"bundle_root": "./",
Collaborator

Could you prepare a unit test script?
It's helpful to check that (1) large files are downloadable, (2) the inference pipeline works, and (3) the dependencies mentioned in the metadata are correct and sufficient to run the script.

You can refer to this example to prepare the script: https://github.com/Project-MONAI/model-zoo/blob/dev/ci/unit_tests/test_spleen_ct_segmentation.py
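The referenced spleen test goes much further (downloading weights and running real inference); a stripped-down sketch of just the config-sanity portion might look like the following. The key names are an illustrative subset, not the full schema the model-zoo CI checks.

```python
import json
import os
import tempfile
import unittest

# Keys a minimal bundle inference config is expected to define; this is an
# illustrative subset, not the full schema used by the model-zoo CI.
REQUIRED_KEYS = {"bundle_root", "inferer", "run"}


def check_bundle_config(config_path):
    """Parse a bundle inference config and return any missing required keys."""
    with open(config_path) as f:
        config = json.load(f)
    return sorted(REQUIRED_KEYS - config.keys())


class TestBundleConfig(unittest.TestCase):
    def test_required_keys_present(self):
        config = {
            "bundle_root": "./",
            "inferer": {"_target_": "scripts.heart_digital_twin.CoroSegmentator_Pipeline"},
            "run": ["$@inferer.run()"],
        }
        with tempfile.TemporaryDirectory() as tmp:
            path = os.path.join(tmp, "inference.json")
            with open(path, "w") as f:
                json.dump(config, f)
            self.assertEqual(check_bundle_config(path), [])
```

Run with `python -m unittest` from the repo root; a real test would additionally exercise the inference workflow as the spleen example does.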


@tylin7111095022 tylin7111095022 Jul 23, 2025


@yiheng-wang-nv, I prepared the script following the example above, but when I run it I get the error: ModuleNotFoundError: Cannot locate class or function path: 'scripts.heart_digital_twin.CoroSegmentator_Pipeline'. It seems the custom class defined in inference.json cannot be accessed. I put the script in the ci/unit_tests/ folder and ran it from the model-zoo root directory. How can I solve this? Thanks for your help. The revised inference.json follows:

{
  "imports": ["$import glob"],
  "bundle_root": "./",
  "ckpt_dir": "$@bundle_root + 'models'",
  "dataset_dir": "",
  "data": "$list(sorted(glob.glob(@dataset_dir + '/*.nii.gz')))",
  "output_ext": ".usd",
  "output_dir": "$@bundle_root + 'Output'",
  "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
  "inferer": {
    "_target_": "scripts.heart_digital_twin.CoroSegmentator_Pipeline",
    "input_params": {
      "inputFile": "$@data[0]",
      "outputDir": "$@output_dir"
    }
  },

  "dataset": {},
  "evaluator": {},
  "network_def": {},

  "run": ["$@inferer.run()"]
}

Collaborator

Hi @Foxconn-bgroup-aidd, please try creating an __init__.py file in the scripts directory to make it a Python package, and also put the scripts folder directly under your bundle instead of under the unittest folder. Thanks.


Thanks for the comment, @KumoLiu. Here is my project directory tree:

model-zoo/
|--ci/
    |--unit_tests/
        |--test_coronaryArtery_ct_segmentation.py
|--models/
    |--myBundle/
        |--large_file.yml
        |--LICENSE
        |--configs/
            |--inference.json
            |--metadata.json
        |--docs/
        |--models/
        |--scripts/
            |--__init__.py
            |--heartDitgitalTwin.py

I ran python ci/unit_tests/test_coronaryArtery_ct_segmentation.py and got ModuleNotFoundError: Cannot locate class or function path: 'scripts.heart_digital_twin.CoroSegmentator_Pipeline'. I am still trying to work out why pydoc.locate('scripts.heart_digital_twin.CoroSegmentator_Pipeline') cannot find the custom class. Thanks for the help.


@KumoLiu I added the bundle path to sys.path and that solved it. Thanks!
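The fix described here, inserting the bundle directory into sys.path before the workflow resolves scripts.heart_digital_twin, can be sketched as follows. The helper name and the example path are illustrative, not the actual test code from this PR.

```python
import os
import sys


def add_bundle_to_path(bundle_root):
    """Prepend the absolute bundle path to sys.path so config references like
    'scripts.heart_digital_twin.CoroSegmentator_Pipeline' resolve regardless
    of the current working directory."""
    bundle_root = os.path.abspath(bundle_root)
    if bundle_root not in sys.path:
        sys.path.insert(0, bundle_root)
    return bundle_root


# Hypothetical usage in the unit test, before building the ConfigWorkflow:
# add_bundle_to_path("models/CoronSegmentator")
```

Together with an __init__.py in scripts/, this makes the scripts package importable no matter where the test is launched from.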

Collaborator

@KumoLiu KumoLiu left a comment

Hi @Foxconn-bgroup-aidd, thanks for the contribution! I've left several comments inline.

@@ -0,0 +1,88 @@
{
"schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
Collaborator

Hi @Foxconn-bgroup-aidd, could you please utilize the latest schema available at the following link:
"https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20240725.json"

Based on the content, it appears that you are already using the most recent version, as it includes "required_packages_version." Thanks.

Comment on lines +10 to +13
"monai": " 1.4.0",
"nnunetv2": "2.6.0",
"nibabel": "5.3.2",
"numpy": "2.3.1",
Collaborator

Could you please double-check the monai version you are using? The one mentioned above is " 1.4.0". Thanks!

Suggested change
-"monai": " 1.4.0",
 "nnunetv2": "2.6.0",
 "nibabel": "5.3.2",
-"numpy": "2.3.1",

Comment on lines +77 to +78
"0": "Background",
"1": "Coronary Artery"
Collaborator

It seems the "num_channels" is only one for the outputs.

Comment on lines +83 to +87
"optional_packages_version": {
"monai": "1.3.2",
"torch": "2.7.0+cu126",
"numpy": "2.3.1"
}
Collaborator

I think you can remove this part; "optional_packages_version" is no longer required in the latest schema. Thanks.

Author

@Foxconn-bgroup-aidd Foxconn-bgroup-aidd Jul 23, 2025

The version of monai used in our project is "1.4.0". "optional_packages_version" will be removed.

Comment on lines +45 to +46
The ImageCAS dataset is publicly available at:
https://github.com/XiaoweiXu/ImageCAS-A-Large-Scale-Dataset-and-Benchmark-for-Coronary-Artery-Segmentation-based-on-CT.git''
Collaborator

Would be better to mention the dataset link in the "Data" section, what do you think?

Comment thread models/CoronSegmentator/requirements.txt Outdated
Comment on lines +174 to +183
# try:
# Step 1: Convert input to NRRD
input_nrrd = self._convert_to_nrrd(self.input_path, input_tmp_dir)
# Step 2: Run segmentation
seg_nrrd = self._run_segmentation(input_nrrd, output_tmp_dir)
# Step 3: Process results
self._process_segmentation_results(seg_nrrd, output_tmp_dir)
# except Exception as e:
# logger.error(f"Processing failed: {str(e)}")
# raise
Collaborator

Could you please remove the commented-out code here? Alternatively, restoring the try/except logic would also make sense.
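Restoring the try/except, as suggested, might look roughly like this. The method and attribute names come from the quoted snippet; the logger setup is an assumption.

```python
import logging

logger = logging.getLogger(__name__)


def process(self, input_tmp_dir, output_tmp_dir):
    """Run the three pipeline steps, logging and re-raising any failure."""
    try:
        # Step 1: Convert input to NRRD
        input_nrrd = self._convert_to_nrrd(self.input_path, input_tmp_dir)
        # Step 2: Run segmentation
        seg_nrrd = self._run_segmentation(input_nrrd, output_tmp_dir)
        # Step 3: Process results
        self._process_segmentation_results(seg_nrrd, output_tmp_dir)
    except Exception as e:
        logger.error(f"Processing failed: {e}")
        raise
```

Re-raising after logging keeps the caller's error handling intact while still recording the failure.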

base_center = Gf.Vec3d(
(bbox_min[0] + bbox_max[0]) / 2, bbox_min[1], (bbox_min[2] + bbox_max[2]) / 2
)
# 计算平移量,使得基底中心移动到原点  (compute the translation that moves the base center to the origin)
Collaborator

Please check and remove all Chinese comments inline. Thanks.

"output_ext": ".usd",
"input_file": "Input/E_CT_0000.nii.gz",
"infer": {
"_target_": "scripts.heart_digital_twin.CoroSegmentator_Pipeline",
Collaborator

As @yiheng-wang-nv mentioned above, please include the unit test to ensure the bundle works as expected.
I didn't see this file under the scripts folder.

Collaborator

Did you mis-delete this test file?

Author

We have fixed the issue of incorrectly deleting test files.

Author

Hi, I’ve pushed the updated code. Thank you for your feedback so far! Please let me know if there’s anything else I should work on.

@yiheng-wang-nv
Collaborator

Hi @Foxconn-bgroup-aidd, based on the pre-commit CI, it seems the following JSON files have format issues:

Input File models/CoronSegmentator/configs/metadata.json is not a valid JSON, consider using check-json
Input File models/CoronSegmentator/configs/inference.json is not a valid JSON, consider using check-json

Foxconn-bgroup-aidd and others added 8 commits September 8, 2025 15:54
@ericspod
Member

ericspod commented Oct 7, 2025

Hi @Foxconn-bgroup-aidd we still want to accept this model so please have a look at outstanding conversations and resolve those that have been addressed. @KumoLiu and @yiheng-wang-nv can continue their review from there. Thanks!

@yiheng-wang-nv
Collaborator

/build

"monai": " 1.5.0",
"nnunetv2": "2.6.2",
"nibabel": "5.3.2",
"numpy": "2.3.3",
Collaborator

Hi @Foxconn-bgroup-aidd, can you confirm the numpy version here? I saw a CI error: https://github.com/Project-MONAI/model-zoo/actions/runs/18928765961/job/54041291172?pr=761#step:7:488 It may be due to a wrong numpy version, or the Python version used in CI may not support this version. May I know which Python version you use?

Author

Hi, we are using Python 3.12.9, and we have already pushed a new version. Please let us know if you need any additional information.

Collaborator

Hi @ericspod, can you help to check this PR? We may need to enable tests on new Python versions. I'm very busy with other projects and may not have enough time for the model zoo recently.

Member

@ericspod ericspod left a comment

I had just a few very minor comments. @KumoLiu mentioned a few things previously; if these have been addressed we can mark those comments as resolved. Once the tests pass we can add this.

@@ -0,0 +1,11 @@
large_files:
- path: "models/cardiacModel.pt"
# url: "https://mega.nz/file/L8oz3QhA#illHJWSCmsmV5xqbXktwiu-1jUebbpJ0HSKv0hbMFCc"
Member

Please remove these commented links.

return md5.hexdigest()


def _download(url, destination_path):
Member

MONAI has its own download routines which you can use instead, and which also do hash checking.
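For reference, the MONAI helper is monai.apps.download_url, which accepts a hash value and hash type. The hash-checking part it performs can be mirrored with the standard library as below; treat the exact MONAI signature as something to confirm against the MONAI docs for your installed version.

```python
import hashlib


def file_hash_matches(path, expected_hex, algorithm="md5"):
    """Stream a file through hashlib and compare against an expected digest,
    mirroring the integrity check monai.apps.download_url performs."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex


# With MONAI installed, the download plus check collapses to one call
# (confirm the signature against the MONAI docs for your version):
# from monai.apps import download_url
# download_url(url, filepath=dest, hash_val=expected_md5, hash_type="md5")
```

Using the library routine also avoids re-downloading files whose hash already matches.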

@ericspod
Member

ericspod commented Dec 1, 2025

@KumoLiu we seem to be having an issue with space for the tests, did we have a solution for this in this repo? Thanks!

@coderabbitai

coderabbitai Bot commented Apr 27, 2026

Walkthrough

This PR introduces a complete coronary artery CT segmentation bundle integrated with MONAI. It includes test infrastructure, configuration files for inference and metadata, documentation, model weight management, and implementation scripts for nnUNet-based segmentation with STL component processing and digital twin pipeline orchestration.

Changes

Cohort / File(s) Summary
Test Infrastructure
ci/unit_tests/test_coronaryArtery_ct_segmentation.py
New unittest-based test file for the CoronSegmentator MONAI ConfigWorkflow. Generates temporary 3D NIfTI datasets, exercises the inference workflow, includes parameterized test overrides, and implements test method prioritization via helper function.
Configuration & Metadata
models/CoronSegmentator/configs/inference.json, models/CoronSegmentator/configs/logging.conf, models/CoronSegmentator/configs/metadata.json
Configuration files for MONAI workflow inference, logging setup, and model metadata schema. Includes runtime requirements (MONAI, PyTorch, NumPy), network data format specs for CT inputs and segmentation outputs, and inferer pipeline references.
Documentation & Licensing
models/CoronSegmentator/docs/README.md, models/CoronSegmentator/docs/data_license.txt, models/CoronSegmentator/LICENSE
User-facing documentation describing the dual-task cardiac CT segmentation pipeline, dataset/model references, inference workflow, and licensing information. Includes Apache 2.0 terms and third-party attribution for ImageCAS, MONAI, USD, and nnU-Net dependencies.
Project Configuration
models/CoronSegmentator/.gitignore, models/CoronSegmentator/large_file.yml
Version control excludes for Python cache and virtual environment. Large file manifest specifying model weights download URLs, hash verification (md5), and integrity metadata.
Segmentation & STL Processing
models/CoronSegmentator/scripts/coronaryArtery_segmentation/coronaryArtery_seg.py
Core segmentation script implementing STLSplitter class with disjoint-set union (DSU) for triangle connectivity and KD-tree-based component merging, and NNUnetPredictor class that manages nnUNet environment setup, model weight installation, prediction execution on CPU/GPU, and temporary directory cleanup with comprehensive error handling.
Pipeline Orchestration
models/CoronSegmentator/scripts/heart_digital_twin.py
High-level pipeline coordinator defining CoroSegmentatorPipeline class that orchestrates model weight downloads from large_file.yml, invokes coronary artery segmentation, manages output folder creation, handles exceptions with cleanup, and supports JSON-based configuration input.
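The disjoint-set union mentioned above for triangle connectivity is a standard structure; the actual STLSplitter internals are not shown in this excerpt, so the following is a generic sketch of the technique, not the bundle's code.

```python
class DisjointSet:
    """Union-find with path compression and union by size, the kind of
    structure used to group triangles into connected components."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path halving: point each visited node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Attach the smaller tree under the larger one.
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
```

Triangles sharing a vertex (or edge) get unioned; each distinct root of find() then identifies one connected mesh component, which can later be merged with nearby components via a KD-tree proximity pass.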

Sequence Diagram

sequenceDiagram
    actor User
    participant CoroSegmentatorPipeline as CoroSegmentatorPipeline
    participant ModelDownloader as Model Downloader
    participant NNUnetPredictor as NNUnetPredictor
    participant STLSplitter as STLSplitter
    participant OutputDir as Output Directory

    User->>CoroSegmentatorPipeline: run(input_params)
    CoroSegmentatorPipeline->>ModelDownloader: Download model weights from large_file.yml
    ModelDownloader-->>CoroSegmentatorPipeline: Models ready (cardiacModel.pt, CAseg066.zip)
    CoroSegmentatorPipeline->>NNUnetPredictor: run(nii_file, output_folder)
    NNUnetPredictor->>NNUnetPredictor: Setup nnUNet environment & copy input
    NNUnetPredictor->>NNUnetPredictor: nnUNetv2_predict (CPU/GPU)
    NNUnetPredictor->>STLSplitter: Process segmentation output STL
    STLSplitter->>STLSplitter: Build connectivity via DSU
    STLSplitter->>STLSplitter: Merge components (KD-tree proximity)
    STLSplitter->>OutputDir: Write merged STL components
    STLSplitter-->>NNUnetPredictor: STL processing complete
    NNUnetPredictor-->>CoroSegmentatorPipeline: Segmentation complete
    CoroSegmentatorPipeline-->>User: Success (True) or Failure (False)

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 A coronary maze for hearts so fine,
With nnUNet and geometry divine,
STL components split and merged with care,
From CT scans to digital visions rare,
The segmentation pipeline hops with glee! 🏥✨

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (2 warnings)

Check name | Status | Explanation | Resolution
Description check | ⚠️ Warning | The description provides context about the model but lacks required checklist items from the template, including codeformat tests, docstrings, metadata version updates, and verification of naming conventions and package versions. | Complete all checklist items from the template, particularly confirming codeformat tests passed, package versions are correct in metadata.json, large files use download links, and naming conventions are followed.
Docstring Coverage | ⚠️ Warning | Docstring coverage is 65.52%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (3 passed)
Check name | Status | Explanation
Title check | ✅ Passed | The title clearly and concisely summarizes the main change: adding a new coronary artery segmentation model bundle called CoronSegmentator to the repository.
Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request.


@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
models/CoronSegmentator/docs/data_license.txt (1)

25-29: ⚠️ Potential issue | 🟡 Minor

Dependency versions don't match configs/metadata.json.

This table lists monai 1.4.0 and nnunetv2 2.5.1, but models/CoronSegmentator/configs/metadata.json declares monai_version: 1.5.0 and required_packages_version.nnunetv2: 2.6.2. Please reconcile so that the documented third-party versions match the runtime requirements actually installed by the bundle.

📝 Suggested alignment
-monai	1.4.0	Apache-2.0	MONAI Consortium / Project MONAI
+monai	1.5.0	Apache-2.0	MONAI Consortium / Project MONAI
 usd-core	25.5	Apache-2.0	Pixar Animation Studios
-nnunetv2 (editable)	2.5.1	Apache-2.0	Fabian Isensee / Helmholtz Munich
+nnunetv2 (editable)	2.6.2	Apache-2.0	Fabian Isensee / Helmholtz Munich
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@models/CoronSegmentator/docs/data_license.txt` around lines 25 - 29, The
versions in models/CoronSegmentator/docs/data_license.txt (monai 1.4.0, nnunetv2
2.5.1) are inconsistent with the runtime metadata in configs/metadata.json
(monai_version and required_packages_version.nnunetv2); update the table in
data_license.txt so the listed package versions match the values in
configs/metadata.json (adjust the monai and nnunetv2 entries to the
monai_version and required_packages_version.nnunetv2 declared there), or if the
metadata is wrong, update configs/metadata.json instead—ensure both files
reflect the same canonical versions.
🧹 Nitpick comments (3)
models/CoronSegmentator/configs/inference.json (2)

5-6: bundle_root: "." is fragile; resolve relative to the config file or require an override.

When the bundle is invoked from a different CWD (CI, the unit test, or monai.bundle run from the repo root), "." makes ckpt_dir and output_dir resolve under the wrong directory, and scripts.heart_digital_twin will not be importable unless the caller has manually inserted the bundle into sys.path (which is exactly the workaround the author had to apply in the unit test, per the prior review thread).

Most bundles in this repo expect bundle_root to be passed in via CLI override (e.g., --bundle_root models/CoronSegmentator). It would also be worth adding a small imports entry that inserts @bundle_root into sys.path so scripts.heart_digital_twin resolves regardless of CWD, e.g.:

-    "imports": [
-        "$import glob"
-    ],
+    "imports": [
+        "$import glob",
+        "$import sys",
+        "$sys.path.append(`@bundle_root`)"
+    ],

Also applies to: 10-10

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@models/CoronSegmentator/configs/inference.json` around lines 5 - 6, The
config uses bundle_root: "." which breaks resolution when run from other CWDs;
change the config so bundle_root is not a hardcoded "." by requiring a CLI
override or resolving it relative to the config file (update the bundle_root
value and any derived fields like ckpt_dir and output_dir that reference
`@bundle_root`), and add an imports entry to insert `@bundle_root` into sys.path so
scripts.heart_digital_twin can be imported regardless of CWD; update references
in the config (bundle_root, ckpt_dir, output_dir, imports) to use the resolved
`@bundle_root` token rather than a literal ".".

9-11: Several declared keys appear to be dead.

  • output_ext: ".usd" (Line 9) and device (Line 11) are never referenced by inferer, run, or any other entry — CoroSegmentatorPipeline.__init__ only consumes inputFile/outputDir.
  • dataset: {}, evaluator: {}, network_def: {} (Lines 19–21) are empty objects. If they're only there to satisfy ConfigWorkflow.check_properties, please add a brief comment in the README; otherwise drop them.

If the pipeline really should obey device and write with output_ext, plumb them through to input_params so the workflow is self-consistent.

Also applies to: 19-21

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@models/CoronSegmentator/configs/inference.json` around lines 9 - 11, The JSON
contains dead/unused keys (output_ext, device, and empty objects
dataset/evaluator/network_def) — either remove them or wire them through: if you
want the pipeline to honor device and output_ext, add them into the same
input_params object that is passed to the inferer/run so
CoronSegmentatorPipeline.__init__ (and whoever builds input_params) can consume
them (e.g., include "device" and "output_ext" in the input_params section and
ensure inferer/run forwards input_params into
CoronSegmentatorPipeline.__init__); otherwise delete output_ext and device from
the config and either remove or document the empty dataset/evaluator/network_def
entries in the README to satisfy ConfigWorkflow.check_properties. Ensure your
changes reference the config keys by name and that inferer/run and
CoronSegmentatorPipeline.__init__ are updated consistently.
models/CoronSegmentator/configs/metadata.json (1)

57-76: outputs.pred.type: "usd" does not match MONAI bundle conventions.

The type field across the model-zoo describes the tensor kind—image, probability, tuples, object, etc.—not file format. USD is a file format produced downstream from the network; the network itself outputs a segmentation tensor. Set type: "image" (with format: "segmentation") to accurately describe the tensor, and document USD export separately in the README.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@models/CoronSegmentator/configs/metadata.json` around lines 57 - 76, The
metadata currently mislabels the model output: change outputs.pred.type from
"usd" to "image" so the tensor kind correctly reflects a segmentation output
(leave "format": "segmentation", "num_channels", "spatial_shape", etc.
unchanged), and move any USD/export-file format details into the README or an
export step documentation rather than the metadata.json; update the entry
referenced as outputs.pred.type accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ci/unit_tests/test_coronaryArtery_ct_segmentation.py`:
- Around line 55-56: The synthetic test data is invalid: fix creation of
test_image and test_label by sampling CT-intensity-range into a dtype that can
hold Hounsfield units and producing true binary labels; specifically change the
array dtype for test_image (variable name test_image) from np.int8 to np.int16
(so values sampled from low=-1024, high=1000 are represented correctly) and
change the randint call for test_label (variable name test_label) to use high=2
so it can generate 0 and 1 labels rather than all zeros.
- Around line 65-77: The test calls check_workflow(inferrer,
check_properties=True) which triggers ConfigWorkflow ->
CoroSegmentatorPipeline.run and performs heavy external work
(monai.apps.utils.download_url and NNUnetPredictor.run spawning
nnUNetv2_predict); to fix, patch/mock those external operations in the test so
only wiring is validated: either stub CoroSegmentatorPipeline.run to a no-op or
mock monai.apps.utils.download_url and subprocess.run (or the method
NNUnetPredictor.run) to return successfully without downloading or invoking
nnUNetv2_predict; ensure your mocks target the exact symbols ConfigWorkflow
(test instantiation), CoroSegmentatorPipeline.run, download_url,
NNUnetPredictor.run, and the subprocess invocation name nnUNetv2_predict so the
test runs in CI without performing network or subprocess work.
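The dtype and label fixes for the synthetic test data described above can be sketched as follows; the volume shape and random seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# CT intensities span roughly -1024..1000 HU, which overflows int8;
# int16 holds the full Hounsfield range without wrapping.
test_image = rng.integers(low=-1024, high=1000, size=(64, 64, 64)).astype(np.int16)

# high is exclusive in integers(), so high=2 yields binary labels {0, 1};
# high=1 would silently produce an all-zero mask.
test_label = rng.integers(low=0, high=2, size=(64, 64, 64)).astype(np.uint8)
```

The same reasoning applies to np.random.randint, which also treats high as exclusive.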

In `@models/CoronSegmentator/configs/inference.json`:
- Line 8: The config currently builds "data" eagerly via "data":
"$list(sorted(glob.glob(`@dataset_dir` + '/*.nii.gz')))" and then uses `@data`[0] in
inferer.input_params.inputFile which raises IndexError when dataset_dir is
empty; change this by deferring or guarding the lookup: remove the eager
`@data`[0] dependency and instead resolve the input file at runtime inside
CoronSegmentatorPipeline.run() (or add a safe helper that returns a
sentinel/default or raises a clear, documented error if no .nii.gz files are
found), update inferer.input_params.inputFile to reference that runtime-resolved
value (or a safe placeholder), and add prominent documentation in the config
comment that dataset_dir must be overridden or contain NIfTI files so
ConfigWorkflow.check_workflow and monai.bundle run do not crash at construction
time.

In `@models/CoronSegmentator/configs/metadata.json`:
- Around line 38-50: The metadata.json currently declares format: "hounsfield"
but sets value_range: [0, 1], which is inconsistent; either set "value_range" to
a representative HU window (e.g., the clipping window your pipeline uses such as
[-1024, 3071] or a narrower window like [-125, 275] depending on preprocessing)
while keeping format: "hounsfield", or change format to reflect pre-normalized
data (e.g., "normalized" or "float32_normalized") and update README to document
the min-max or z-score normalization; update the "value_range" field and README
accordingly and ensure any references to network_data_format elsewhere in the
repo are consistent with the chosen option.

In `@models/CoronSegmentator/docs/README.md`:
- Around line 13-15: The README's opening paragraph, Tags, and "processed by"
steps still describe dual whole-heart + coronary segmentation, but the bundle is
coronary-only (see metadata.json task: "Coronary artery ct segmentation");
update models/CoronSegmentator/docs/README.md to remove references to MONAI
Auto3DSeg and whole-heart segmentation, revise the opening paragraph to state
coronary artery–only scope, adjust the Tags list, and change the "processed by"
steps (lines referencing MONAI/Auto3DSeg) to only list the custom nnU-Net
coronary segmentation and subsequent STL→USD conversion for 3D visualization.
- Around line 45-46: The ImageCAS dataset URL in
models/CoronSegmentator/docs/README.md contains two stray single quotes at the
end ("...-on-CT.git''") which break the link; remove the extra quote characters
so the URL ends with ".git" (update the ImageCAS dataset URL line in the README
to
"https://github.com/XiaoweiXu/ImageCAS-A-Large-Scale-Dataset-and-Benchmark-for-Coronary-Artery-Segmentation-based-on-CT.git").
- Around line 41-43: The README's example `monai.bundle run` invocation is
missing required parameters, causing the `data` list to be empty and `@data[0]`
to IndexError; update the example to pass `--meta_file`, `--logging_file`, and a
non-empty `--dataset_dir` that matches the glob used in `configs/inference.json`
(ensure `dataset_dir` points to a folder containing `.nii.gz` files), e.g.
include flags for `--meta_file configs/metadata.json --logging_file
configs/logging.conf --dataset_dir /path/to/dataset`; confirm
`configs/inference.json`'s `"dataset_dir"` is not left as an empty string or
document overriding it via the CLI.

In `@models/CoronSegmentator/large_file.yml`:
- Line 6: The YAML field hash_type is defined but never passed into the
downloader; update CoroSegmentatorPipeline.run to read hash_type from each file
entry (e.g. file_info.get("hash_type", "md5")) and pass it into the download
call as download_url(url, filepath=..., hash_val=expected_hash,
hash_type=hash_type) so the declared algorithm is honored (alternatively remove
the unused YAML field if you prefer); locate this change around the run method
in CoroSegmentatorPipeline and the call site that currently supplies only
path/url/hash_val.
- Line 4: The model bundle in models/CoronSegmentator/large_file.yml uses a
Google Drive URL under the url key which is fragile for CI; upload the weights
to a stable direct-download host (e.g., MONAI-extra-test-data GitHub release or
another CDN), replace the url value with the new direct-download link, recompute
and update the corresponding hash_val in the same YAML entry to match the
uploaded file, and apply the same replacement for the other affected bundle
referenced as "9-9" so all references use stable hosts instead of Google Drive.

In
`@models/CoronSegmentator/scripts/coronaryArtery_segmentation/coronaryArtery_seg.py`:
- Around line 437-441: The log messages use backslash-continued f-strings which
preserve the indentation and produce unwanted spaces; locate the except blocks
catching subprocess.CalledProcessError (the logger.error calls that reference
self.input_path and e.returncode) and replace the backslash-continued f-strings
with either a single-line f-string or implicit string concatenation (e.g., f"...
segmentation inference failed with code {e.returncode}") so the message has no
embedded indentation; apply the same fix to the other two occurrences that use
logger.error with self.input_path and e.returncode.
- Around line 358-366: preprocessed_path is set inside the installed bundle
(script_path.parent) which can pollute/read-only install dirs; change it to live
under the temporary workspace like raw_path (use self.temp_dir.name, e.g.
self.preprocessed_path = os.path.join(self.temp_dir.name,
"nnUNet/nnUNet_preprocessed")) so nnUNet writes to a transient location; also
harden the model_weight construction (the list comprehension that calls
os.listdir on script_path.parent.parent.parent/"models") by first checking that
the models directory exists and using an empty list if not (or use
os.path.exists/isdir) before listing to avoid FileNotFoundError when the models/
dir is absent.

In `@models/CoronSegmentator/scripts/heart_digital_twin.py`:
- Line 9: Replace the leftover template placeholder in the module docstring at
the top of heart_digital_twin.py with a concise, accurate description of what
the module does (e.g., "Convert NIfTI (.nii/.nii.gz) medical image files into
USD scene files and generate associated heart digital twin metadata"), ensuring
the docstring reflects the script's purpose, inputs, outputs and any important
side effects; update the module-level docstring string literal (the first
triple-quoted string in the file) accordingly.
- Around line 92-104: The run() error handling silently returns False and
performs fragile cleanup; update the except block in heart_digital_twin.run to
both surface failures to the workflow (either re-raise the caught exception
after logging or raise a new exception so ConfigWorkflow sees the failure) and
replace the manual os.remove/os.rmdir loop with shutil.rmtree(seg_folder,
ignore_errors=False) (or ignore_errors=True if you prefer best-effort cleanup)
to correctly remove nested files/directories; ensure you import shutil and keep
the gc.collect() in the finally block.

---

Outside diff comments:
In `@models/CoronSegmentator/docs/data_license.txt`:
- Around line 25-29: The versions in
models/CoronSegmentator/docs/data_license.txt (monai 1.4.0, nnunetv2 2.5.1) are
inconsistent with the runtime metadata in configs/metadata.json (monai_version
and required_packages_version.nnunetv2); update the table in data_license.txt so
the listed package versions match the values in configs/metadata.json (adjust
the monai and nnunetv2 entries to the monai_version and
required_packages_version.nnunetv2 declared there), or if the metadata is wrong,
update configs/metadata.json instead—ensure both files reflect the same
canonical versions.

---

Nitpick comments:
In `@models/CoronSegmentator/configs/inference.json`:
- Around line 5-6: The config uses bundle_root: "." which breaks resolution when
run from other CWDs; change the config so bundle_root is not a hardcoded "." by
requiring a CLI override or resolving it relative to the config file (update the
bundle_root value and any derived fields like ckpt_dir and output_dir that
reference `@bundle_root`), and add an imports entry to insert `@bundle_root` into
sys.path so scripts.heart_digital_twin can be imported regardless of CWD; update
references in the config (bundle_root, ckpt_dir, output_dir, imports) to use the
resolved `@bundle_root` token rather than a literal ".".
- Around line 9-11: The JSON contains dead/unused keys (output_ext, device, and
empty objects dataset/evaluator/network_def) — either remove them or wire them
through: if you want the pipeline to honor device and output_ext, add them into
the same input_params object that is passed to the inferer/run so
CoronSegmentatorPipeline.__init__ (and whoever builds input_params) can consume
them (e.g., include "device" and "output_ext" in the input_params section and
ensure inferer/run forwards input_params into
CoronSegmentatorPipeline.__init__); otherwise delete output_ext and device from
the config and either remove or document the empty dataset/evaluator/network_def
entries in the README to satisfy ConfigWorkflow.check_properties. Ensure your
changes reference the config keys by name and that inferer/run and
CoronSegmentatorPipeline.__init__ are updated consistently.

In `@models/CoronSegmentator/configs/metadata.json`:
- Around line 57-76: The metadata currently mislabels the model output: change
outputs.pred.type from "usd" to "image" so the tensor kind correctly reflects a
segmentation output (leave "format": "segmentation", "num_channels",
"spatial_shape", etc. unchanged), and move any USD/export-file format details
into the README or an export step documentation rather than the metadata.json;
update the entry referenced as outputs.pred.type accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ea410f65-0bda-4362-98e8-c5f512b64932

📥 Commits

Reviewing files that changed from the base of the PR and between 22761f0 and f807c07.

📒 Files selected for processing (12)
  • ci/unit_tests/test_coronaryArtery_ct_segmentation.py
  • models/CoronSegmentator/.gitignore
  • models/CoronSegmentator/LICENSE
  • models/CoronSegmentator/configs/inference.json
  • models/CoronSegmentator/configs/logging.conf
  • models/CoronSegmentator/configs/metadata.json
  • models/CoronSegmentator/docs/README.md
  • models/CoronSegmentator/docs/data_license.txt
  • models/CoronSegmentator/large_file.yml
  • models/CoronSegmentator/scripts/__init__.py
  • models/CoronSegmentator/scripts/coronaryArtery_segmentation/coronaryArtery_seg.py
  • models/CoronSegmentator/scripts/heart_digital_twin.py

Comment on lines +55 to +56
test_image = np.random.randint(low=-1024, high=1000, size=input_shape).astype(np.int8)
test_label = np.random.randint(low=0, high=1, size=input_shape).astype(np.int8)

⚠️ Potential issue | 🔴 Critical

Both the synthetic image dtype and the label range are buggy.

  • Line 55: np.int8 only holds values in [-128, 127], but you sample in [-1024, 1000). The cast silently wraps around, producing meaningless CT intensities (e.g., -1024 → 0, 999 → -25). Use np.int16 for HU-scale CT.
  • Line 56: np.random.randint(low=0, high=1, ...) never produces a 1 because high is exclusive; every label volume is all zeros. Use high=2 for binary labels.
🐛 Proposed fix
-            test_image = np.random.randint(low=-1024, high=1000, size=input_shape).astype(np.int8)
-            test_label = np.random.randint(low=0, high=1, size=input_shape).astype(np.int8)
+            test_image = np.random.randint(low=-1024, high=1000, size=input_shape).astype(np.int16)
+            test_label = np.random.randint(low=0, high=2, size=input_shape).astype(np.int8)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ci/unit_tests/test_coronaryArtery_ct_segmentation.py` around lines 55 - 56,
The synthetic test data is invalid: fix creation of test_image and test_label by
sampling CT-intensity-range into a dtype that can hold Hounsfield units and
producing true binary labels; specifically change the array dtype for test_image
(variable name test_image) from np.int8 to np.int16 (so values sampled from
low=-1024, high=1000 are represented correctly) and change the randint call for
test_label (variable name test_label) to use high=2 so it can generate 0 and 1
labels rather than all zeros.
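Both failure modes are easy to confirm directly in NumPy; the wrapped values below match the ones cited in the comment:

```python
import numpy as np

hu = np.array([-1024, 999])
# int8 wraps modulo 256 instead of clipping, corrupting CT intensities.
assert list(hu.astype(np.int8)) == [0, -25]
# int16 covers the full Hounsfield range.
assert list(hu.astype(np.int16)) == [-1024, 999]

# randint's `high` is exclusive, so high=1 can only ever yield 0.
assert np.random.randint(low=0, high=1, size=1000).max() == 0
# high=2 produces genuine binary labels.
assert set(np.unique(np.random.randint(low=0, high=2, size=1000))) == {0, 1}
```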

Comment on lines +65 to +77
@parameterized.expand([TEST_CASE_1])
def test_infer_config(self, override):
override["dataset_dir"] = self.dataset_dir
bundle_root = override["bundle_root"]

inferrer = ConfigWorkflow(
workflow_type="infer",
config_file=Path(bundle_root) / "configs/inference.json",
logging_file=Path(bundle_root) / "configs/logging.conf",
meta_file=Path(bundle_root) / "configs/metadata.json",
**override,
)
check_workflow(inferrer, check_properties=True)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Confirm what check_workflow does and whether ConfigWorkflow.run is being short-circuited anywhere.
rg -nP -C2 'check_workflow\b' ci/unit_tests/utils.py
rg -nP -C3 'class CoroSegmentatorPipeline\b|def run\(self\)' models/CoronSegmentator/scripts/heart_digital_twin.py
# Look for similar bundles' tests that mock heavy steps to see the established pattern in this repo.
rg -nP -C2 'mock\.|patch\(|MagicMock' ci/unit_tests | head -n 60

Repository: Project-MONAI/model-zoo

Length of output: 669


🏁 Script executed:

# Get the full check_workflow function
rg -nP -A 20 'def check_workflow\(workflow: ConfigWorkflow' ci/unit_tests/utils.py

Repository: Project-MONAI/model-zoo

Length of output: 479


🏁 Script executed:

# Get the full run method of CoroSegmentatorPipeline
rg -nP -A 50 'def run\(self\):' models/CoronSegmentator/scripts/heart_digital_twin.py

Repository: Project-MONAI/model-zoo

Length of output: 1651


🏁 Script executed:

# Check if the test file itself has any mocks or patches
cat -n ci/unit_tests/test_coronaryArtery_ct_segmentation.py

Repository: Project-MONAI/model-zoo

Length of output: 3556


🏁 Script executed:

# Find run_coronary_artery_segmentation and check if it spawns nnUNetv2_predict
rg -nP -A 30 'def run_coronary_artery_segmentation' models/CoronSegmentator/

Repository: Project-MONAI/model-zoo

Length of output: 3029


🏁 Script executed:

# Check for nnUNetv2_predict subprocess calls in the codebase
rg -nP 'nnUNetv2_predict|subprocess\.' models/CoronSegmentator/

Repository: Project-MONAI/model-zoo

Length of output: 1216


🏁 Script executed:

# Look for integration test patterns in the codebase
rg -nP 'integration|@pytest.mark|@unittest' ci/unit_tests/ | grep -i integration | head -20

Repository: Project-MONAI/model-zoo

Length of output: 49


🏁 Script executed:

# Check the inference.json config to see what triggers the pipeline
find models/CoronSegmentator -name "inference.json" -type f -exec head -100 {} \;

Repository: Project-MONAI/model-zoo

Length of output: 752


🏁 Script executed:

# Check if ConfigWorkflow has any special handling for this bundle
rg -nP 'class ConfigWorkflow|def run\(' ci/unit_tests/utils.py

Repository: Project-MONAI/model-zoo

Length of output: 49


🏁 Script executed:

# Look at the scripts module to understand how pipeline is invoked from config
rg -nP 'CoroSegmentatorPipeline' models/CoronSegmentator/

Repository: Project-MONAI/model-zoo

Length of output: 279


Mock/patch heavy operations (download_url and nnUNetv2_predict subprocess) to prevent full-pipeline execution in this unit test.

check_workflow(inferrer, check_properties=True) invokes workflow.run(), which loads inference.json and executes the pipeline:

  1. ConfigWorkflow.run() → instantiates and calls CoroSegmentatorPipeline.run()
  2. CoroSegmentatorPipeline.run() downloads ~hundreds of MB of model weights via download_url() from Google Drive view URLs
  3. run_coronary_artery_segmentation() → NNUnetPredictor.run() spawns the nnUNetv2_predict subprocess

In CI this will fail because:

  • monai.apps.utils.download_url cannot follow through Google Drive's confirmation tokens for large files
  • nnUNetv2_predict will not be available on PATH
  • Full pipeline execution exhausts test resources

The test has no mocks or patches to prevent this. Recommend mocking download_url and subprocess.run (or stubbing CoroSegmentatorPipeline.run()) so the test validates only the workflow wiring and property check. If full-pipeline coverage is needed, gate it behind an integration-test marker.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ci/unit_tests/test_coronaryArtery_ct_segmentation.py` around lines 65 - 77,
The test calls check_workflow(inferrer, check_properties=True) which triggers
ConfigWorkflow -> CoroSegmentatorPipeline.run and performs heavy external work
(monai.apps.utils.download_url and NNUnetPredictor.run spawning
nnUNetv2_predict); to fix, patch/mock those external operations in the test so
only wiring is validated: either stub CoroSegmentatorPipeline.run to a no-op or
mock monai.apps.utils.download_url and subprocess.run (or the method
NNUnetPredictor.run) to return successfully without downloading or invoking
nnUNetv2_predict; ensure your mocks target the exact symbols ConfigWorkflow
(test instantiation), CoroSegmentatorPipeline.run, download_url,
NNUnetPredictor.run, and the subprocess invocation name nnUNetv2_predict so the
test runs in CI without performing network or subprocess work.
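A minimal, self-contained sketch of the suggested pattern (the function names here are illustrative stand-ins, not the bundle's actual symbols): patch the subprocess boundary so the test exercises only the wiring.

```python
import subprocess
from unittest import mock

def run_segmentation(cmd):
    # Stand-in for the heavy step that shells out to nnUNetv2_predict.
    subprocess.run(cmd, check=True)

def workflow():
    # Stand-in for the workflow wiring under test.
    run_segmentation(["nnUNetv2_predict", "-i", "in", "-o", "out"])
    return True

called = []
with mock.patch("subprocess.run", side_effect=lambda cmd, check=True: called.append(cmd)):
    ok = workflow()

assert ok is True        # wiring ran to completion
assert len(called) == 1  # ...without spawning a real subprocess
```

In the real test the same idea applies to `monai.apps.utils.download_url` and the pipeline's `run()`, using `mock.patch` targets resolved in the bundle's module namespace.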

"bundle_root": ".",
"ckpt_dir": "$@bundle_root + '/models'",
"dataset_dir": "",
"data": "$list(sorted(glob.glob(@dataset_dir + '/*.nii.gz')))",

⚠️ Potential issue | 🟠 Major

@data[0] will raise IndexError when dataset_dir is empty or contains no .nii.gz.

dataset_dir defaults to "" and data is built eagerly from glob.glob(@dataset_dir + '/*.nii.gz'). Then inferer.input_params.inputFile references @data[0], which is evaluated at workflow construction time. Result: just running monai.bundle run --config_file configs/inference.json (without supplying a populated dataset_dir) crashes before run is ever called, and ConfigWorkflow.check_workflow(check_properties=True) from the new unit test will likewise fail unless dataset_dir is overridden with a directory containing NIfTI files.

Consider deferring the lookup, e.g., resolve the input file inside CoroSegmentatorPipeline.run() from dataset_dir, or guard with a default sentinel. At minimum, document the required dataset_dir override prominently.

Also applies to: 15-15

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@models/CoronSegmentator/configs/inference.json` at line 8, The config
currently builds "data" eagerly via `"data": "$list(sorted(glob.glob(@dataset_dir
+ '/*.nii.gz')))"` and then uses `@data[0]` in inferer.input_params.inputFile, which
raises IndexError when dataset_dir is empty; change this by deferring or
guarding the lookup: remove the eager `@data[0]` dependency and instead resolve
the input file at runtime inside CoronSegmentatorPipeline.run() (or add a safe
helper that returns a sentinel/default or raises a clear, documented error if no
.nii.gz files are found), update inferer.input_params.inputFile to reference
that runtime-resolved value (or a safe placeholder), and add prominent
documentation in the config comment that dataset_dir must be overridden or
contain NIfTI files so ConfigWorkflow.check_workflow and monai.bundle run do not
crash at construction time.
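One possible shape for the runtime guard (an illustrative helper, not code from the bundle):

```python
import glob
import os

def resolve_input_file(dataset_dir: str) -> str:
    """Return the first NIfTI volume under dataset_dir, raising a clear error
    instead of an IndexError at workflow-construction time."""
    if not dataset_dir:
        raise ValueError("dataset_dir must be set, e.g. --dataset_dir /path/to/data")
    files = sorted(glob.glob(os.path.join(dataset_dir, "*.nii.gz")))
    if not files:
        raise FileNotFoundError(f"no .nii.gz files found under {dataset_dir!r}")
    return files[0]
```

Called from inside the pipeline's run(), this defers the lookup until execution and turns the empty-directory case into an actionable message.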

Comment on lines +38 to +50
"format": "hounsfield",
"modality": "CT",
"num_channels": 1,
"spatial_shape": [
512,
512,
256
],
"dtype": "float32",
"value_range": [
0,
1
],

⚠️ Potential issue | 🟡 Minor

format: "hounsfield" with value_range: [0, 1] is contradictory.

Hounsfield units span roughly [-1024, 3071]. A [0, 1] range implies the inputs are min-max normalized (no longer raw HU). Please either:

  • keep format: "hounsfield" and set value_range to a representative HU window (e.g., the clipping window the pipeline uses internally), or
  • if the bundle expects pre-normalized inputs, change format accordingly and document the normalization in the README.

This also affects how downstream tooling that consumes network_data_format will preprocess data.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@models/CoronSegmentator/configs/metadata.json` around lines 38 - 50, The
metadata.json currently declares format: "hounsfield" but sets value_range: [0,
1], which is inconsistent; either set "value_range" to a representative HU
window (e.g., the clipping window your pipeline uses such as [-1024, 3071] or a
narrower window like [-125, 275] depending on preprocessing) while keeping
format: "hounsfield", or change format to reflect pre-normalized data (e.g.,
"normalized" or "float32_normalized") and update README to document the min-max
or z-score normalization; update the "value_range" field and README accordingly
and ensure any references to network_data_format elsewhere in the repo are
consistent with the chosen option.

Comment on lines +13 to +15
Coronsegmentator is an automated pipeline that performs dual-task segmentation on cardiac CT images, focusing on both whole-heart and coronary artery structures. It integrates MONAI’s Auto3DSeg for general cardiac segmentation and a custom nnU-Net model for detailed coronary artery segmentation. The pipeline further converts segmentation results (in STL format) into USD files for downstream 3D visualization and digital twin simulation using NVIDIA Omniverse.

The entire workflow is designed to enable precise, scalable, and automated generation of personalized coronary artery digital twins, supporting clinical planning, AI-based diagnosis, and medical visualization in research or pre-operative planning workflows.

⚠️ Potential issue | 🟡 Minor

Description still advertises whole-heart (Auto3DSeg) segmentation, but the bundle was reduced to coronary-only.

Per the latest commit history and models/CoronSegmentator/configs/metadata.json (task: "Coronary artery ct segmentation", description mentions only coronary artery), the bundle no longer performs the dual whole-heart + coronary task. The opening section here, the Tags, and the "processed by" steps (Lines 25–29) still mention MONAI Auto3DSeg whole-heart segmentation, which will mislead users.

Please update the README to reflect coronary-only scope.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@models/CoronSegmentator/docs/README.md` around lines 13 - 15, The README's
opening paragraph, Tags, and "processed by" steps still describe dual
whole-heart + coronary segmentation, but the bundle is coronary-only (see
metadata.json task: "Coronary artery ct segmentation"); update
models/CoronSegmentator/docs/README.md to remove references to MONAI Auto3DSeg
and whole-heart segmentation, revise the opening paragraph to state coronary
artery–only scope, adjust the Tags list, and change the "processed by" steps
(lines referencing MONAI/Auto3DSeg) to only list the custom nnU-Net coronary
segmentation and subsequent STL→USD conversion for 3D visualization.

# url: "https://mega.nz/file/L8oz3QhA#illHJWSCmsmV5xqbXktwiu-1jUebbpJ0HSKv0hbMFCc"
url: "https://drive.google.com/file/d/11-pgULhmSZyAAfZG9n1vOusWHeH-hpwE/view?usp=drive_link"
hash_val: "0e783497e94293c86995392e5fbbc790"
hash_type: "md5"

⚠️ Potential issue | 🟡 Minor

hash_type is declared here but never read by the consumer.

models/CoronSegmentator/scripts/heart_digital_twin.py::CoroSegmentatorPipeline.run() only extracts path, url, and hash_val from each entry and calls download_url(url, filepath=..., hash_val=expected_hash) without forwarding hash_type. MONAI's download_url defaults to MD5, so behavior happens to be correct, but the YAML field is silently ignored. Either pass it through (hash_type=file_info.get("hash_type", "md5")) or drop the field to avoid implying it has effect.

Also applies to: 11-11

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@models/CoronSegmentator/large_file.yml` at line 6, The YAML field hash_type
is defined but never passed into the downloader; update
CoroSegmentatorPipeline.run to read hash_type from each file entry (e.g.
file_info.get("hash_type", "md5")) and pass it into the download call as
download_url(url, filepath=..., hash_val=expected_hash, hash_type=hash_type) so
the declared algorithm is honored (alternatively remove the unused YAML field if
you prefer); locate this change around the run method in CoroSegmentatorPipeline
and the call site that currently supplies only path/url/hash_val.

Comment on lines +358 to +366
script_path = Path(os.path.abspath(__file__))
self.raw_path = os.path.join(self.temp_dir.name, "nnUNet/nnUNet_raw")
self.preprocessed_path = os.path.join(script_path.parent, "nnUNet/nnUNet_preprocessed")
self.results_path = os.path.join(script_path.parent.parent.parent, "models/nnUNet_results")
self.model_weight = [
os.path.join(script_path.parent.parent.parent, "models", f)
for f in os.listdir(os.path.join(script_path.parent.parent.parent, "models"))
if f.split(".")[-1] == "zip"
] # check if any model weight exist

⚠️ Potential issue | 🟠 Major

preprocessed_path should not live inside the installed bundle directory.

raw_path and seg_path correctly go under self.temp_dir, but preprocessed_path is set to <bundle>/scripts/coronaryArtery_segmentation/nnUNet/nnUNet_preprocessed. nnUNet may create files there, polluting the installed package directory and causing permission errors when the bundle is installed read-only (e.g., system site-packages, container layers). It also leaks state across pipeline invocations.

Move it under self.temp_dir for parity with the other paths, or to a user-cache directory.

♻️ Proposed fix
-            self.raw_path = os.path.join(self.temp_dir.name, "nnUNet/nnUNet_raw")
-            self.preprocessed_path = os.path.join(script_path.parent, "nnUNet/nnUNet_preprocessed")
-            self.results_path = os.path.join(script_path.parent.parent.parent, "models/nnUNet_results")
+            self.raw_path = os.path.join(self.temp_dir.name, "nnUNet/nnUNet_raw")
+            self.preprocessed_path = os.path.join(self.temp_dir.name, "nnUNet/nnUNet_preprocessed")
+            self.results_path = os.path.join(script_path.parent.parent.parent, "models/nnUNet_results")

Additionally, line 364's os.listdir(... "models") will raise FileNotFoundError if the models/ directory doesn’t yet exist (e.g., a unit test that constructs NNUnetPredictor without going through CoroSegmentatorPipeline.run()). Consider guarding with a default empty list when the directory is absent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@models/CoronSegmentator/scripts/coronaryArtery_segmentation/coronaryArtery_seg.py`
around lines 358 - 366, preprocessed_path is set inside the installed bundle
(script_path.parent) which can pollute/read-only install dirs; change it to live
under the temporary workspace like raw_path (use self.temp_dir.name, e.g.
self.preprocessed_path = os.path.join(self.temp_dir.name,
"nnUNet/nnUNet_preprocessed")) so nnUNet writes to a transient location; also
harden the model_weight construction (the list comprehension that calls
os.listdir on script_path.parent.parent.parent/"models") by first checking that
the models directory exists and using an empty list if not (or use
os.path.exists/isdir) before listing to avoid FileNotFoundError when the models/
dir is absent.

Comment on lines +437 to +441
except subprocess.CalledProcessError as e:
logger.error(
f"Coronary artery <{os.path.basename(self.input_path)}> \
segmentation inference failed with code {e.returncode}"
)

⚠️ Potential issue | 🟡 Minor

Backslash-continued f-strings emit literal indentation in log messages.

Same pattern at lines 462-466 and 502-506. Inside an f-string (or any string literal), a `\` line continuation does not strip the leading whitespace of the next line, so the resulting log entry contains a long run of spaces between the filename and "segmentation inference failed with code 1". Use implicit string concatenation, textwrap.dedent, or single-line strings instead.

♻️ Proposed fix (apply to all three sites)
-            logger.error(
-                f"Coronary artery <{os.path.basename(self.input_path)}> \
-                segmentation inference failed with code {e.returncode}"
-            )
+            logger.error(
+                "Coronary artery <%s> segmentation inference failed with code %s",
+                os.path.basename(self.input_path),
+                e.returncode,
+            )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@models/CoronSegmentator/scripts/coronaryArtery_segmentation/coronaryArtery_seg.py`
around lines 437 - 441, The log messages use backslash-continued f-strings which
preserve the indentation and produce unwanted spaces; locate the except blocks
catching subprocess.CalledProcessError (the logger.error calls that reference
self.input_path and e.returncode) and replace the backslash-continued f-strings
with either a single-line f-string or implicit string concatenation (e.g., f"...
segmentation inference failed with code {e.returncode}") so the message has no
embedded indentation; apply the same fix to the other two occurrences that use
logger.error with self.input_path and e.returncode.
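The effect is easy to reproduce; the continuation line's indentation ends up inside the message:

```python
name = "img001.nii.gz"
broken = f"Coronary artery <{name}> \
                segmentation inference failed with code 1"
# The backslash joins the lines but keeps the second line's leading spaces.
assert "                segmentation" in broken

# Implicit concatenation (or a single-line f-string) avoids the embedded gap.
fixed = f"Coronary artery <{name}> " "segmentation inference failed with code 1"
assert "  " not in fixed
```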

License: Apache License 2.0

Description:
This script performs <brief description of converting NII Files to USD Files>.

⚠️ Potential issue | 🟡 Minor

Leftover template placeholder in the module docstring.

<brief description of converting NII Files to USD Files> looks like an unfilled docstring template. Please replace it with the actual description of what this module does.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@models/CoronSegmentator/scripts/heart_digital_twin.py` at line 9, Replace the
leftover template placeholder in the module docstring at the top of
heart_digital_twin.py with a concise, accurate description of what the module
does (e.g., "Convert NIfTI (.nii/.nii.gz) medical image files into USD scene
files and generate associated heart digital twin metadata"), ensuring the
docstring reflects the script's purpose, inputs, outputs and any important side
effects; update the module-level docstring string literal (the first
triple-quoted string in the file) accordingly.

Comment on lines +92 to +104
try:
run_coronary_artery_segmentation(nii_path, seg_folder)
return True
except Exception as e:
logger.error(f"Processing failed for {self.nii_file}: {str(e)}")
# Remove the output folder if failed
if os.path.exists(seg_folder):
for f in os.listdir(seg_folder):
os.remove(os.path.join(seg_folder, f))
os.rmdir(seg_folder)
return False
finally:
gc.collect()

⚠️ Potential issue | 🟠 Major

Silent failure + fragile cleanup in run().

Two related concerns:

  1. On exception, run() only logs and returns False. ConfigWorkflow treats that as a successful run, so a failed inference is invisible to the bundle/test harness. Either re-raise after logging, or have the workflow run step assert on the return value.
  2. Cleanup uses os.remove followed by os.rmdir. If seg_folder ever contains subdirectories (likely once STL/USD post-processing lands per the PR description), os.remove will raise IsADirectoryError and os.rmdir will fail on a non-empty dir. shutil.rmtree is safer and simpler.
♻️ Proposed fix
-import gc
+import gc
+import shutil
@@
-        try:
-            run_coronary_artery_segmentation(nii_path, seg_folder)
-            return True
-        except Exception as e:
-            logger.error(f"Processing failed for {self.nii_file}: {str(e)}")
-            # Remove the output folder if failed
-            if os.path.exists(seg_folder):
-                for f in os.listdir(seg_folder):
-                    os.remove(os.path.join(seg_folder, f))
-                os.rmdir(seg_folder)
-            return False
-        finally:
-            gc.collect()
+        try:
+            run_coronary_artery_segmentation(nii_path, seg_folder)
+            return True
+        except Exception:
+            logger.exception(f"Processing failed for {self.nii_file}")
+            # Remove the output folder if failed
+            shutil.rmtree(seg_folder, ignore_errors=True)
+            raise
+        finally:
+            gc.collect()
🧰 Tools
🪛 Ruff (0.15.11)

[warning] 95-95: Do not catch blind exception: Exception

(BLE001)


[warning] 96-96: Use explicit conversion flag

Replace with conversion flag

(RUF010)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@models/CoronSegmentator/scripts/heart_digital_twin.py` around lines 92 - 104,
The run() error handling silently returns False and performs fragile cleanup;
update the except block in heart_digital_twin.run to both surface failures to
the workflow (either re-raise the caught exception after logging or raise a new
exception so ConfigWorkflow sees the failure) and replace the manual
os.remove/os.rmdir loop with shutil.rmtree(seg_folder, ignore_errors=False) (or
ignore_errors=True if you prefer best-effort cleanup) to correctly remove nested
files/directories; ensure you import shutil and keep the gc.collect() in the
finally block.

@ericspod
Member

Hi @Foxconn-bgroup-aidd sorry for the long delay on this one, please have a look at the comments made by us and Coderabbit and mark resolved anything that's sorted. I think there's some valid concerns about the tests that should be addressed. Thanks!
