MONet Bundle Integration into MONAI Deploy #574
SimoneBendazzoli93 wants to merge 16 commits into Project-MONAI:main from
Conversation
Force-pushed dfea4ff to f9aaedc
This breaks the existing project structure by introducing a new folder, 'devel'. Can this not be part of the files under the 'operators' folder? Also, can you add some links in the docstrings on how to generate the MONet bundle? Does this currently support all versions of nnUNet?
The devel folder was accidentally included in the PR. I have now removed it and added some references to the MONet Bundle in the docstrings.
- Included MONetBundleInferenceOperator in the __init__.py file for operator registration.
- Updated import statements to reflect the addition of the new operator.
Signed-off-by: Simone Bendazzoli <simben@kth.se>

- Corrected the bundle suffixes tuple to include a period before 'yml'.
- Fixed a method call to ensure casefold() is invoked correctly.
- Initialized meta_data to an empty dictionary if not provided.
These changes enhance code clarity and prevent potential runtime errors.
Signed-off-by: Simone Bendazzoli <simben@kth.se>

- Introduced a new operator, MONetBundleInferenceOperator, for performing inference using the MONet bundle.
- Extended functionality from MonaiBundleInferenceOperator to support nnUNet-specific configurations.
- Implemented methods for initializing configurations and performing predictions with multimodal data handling.
This addition enhances the inference capabilities within the MONAI framework.
Signed-off-by: Simone Bendazzoli <simben@kth.se>

- Introduced a new file containing the implementation of the MONetBundleInferenceOperator.
- This operator extends the MonaiBundleInferenceOperator to facilitate inference with nnUNet-specific configurations.
- Implemented methods for configuration initialization and multimodal data prediction, enhancing the MONAI framework's inference capabilities.
Signed-off-by: Simone Bendazzoli <simben@kth.se>

- Registered MONetBundleInferenceOperator in the __init__.py file to ensure it is included in the module's public API.
- This change facilitates easier access to the operator for users of the MONAI framework.
Signed-off-by: Simone Bendazzoli <simben@kth.se>
… tested alone (Project-MONAI#573)
* Added saving decoded pixels for in-depth review if needed
* Fixed linting complaints
* Fixed the code and improved the tests, with failing tests to be addressed
* Force YBR for JPEG baseline, and test nvimgcodec without any default decoders
* Critical changes to make uncompressed images match pydicom default decoders
* Removed support for 12-bit "JPEG Extended, Process 2+4"
* Addressed review comments, including from the AI agent
* Added reason for ignoring dcm files known to fail to decompress
* Updated the notes on perf test results
* Explicitly minimized lazy-loading impact and added comments on it
* Updated doc sentences
* Editorial changes made to comments
---------
Signed-off-by: M Q <mingmelvinq@nvidia.com>
Signed-off-by: Simone Bendazzoli <simben@kth.se>
* Release v3.5.0
* Bump version: 3.4.0 → 3.5.0
---------
Signed-off-by: M Q <mingmelvinq@nvidia.com>
Signed-off-by: Simone Bendazzoli <simben@kth.se>
…mplementation of the MONetBundleInferenceOperator. This deletion simplifies the codebase by eliminating unused or redundant components. Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Enhanced the docstring for MONetBundleInferenceOperator to include a reference to the MONet bundle repository and provide additional context on its functionality. This update improves clarity for users regarding the operator's purpose and usage.
Signed-off-by: Simone Bendazzoli <simben@kth.se>

- Improved the type checking for the model_network parameter to enhance readability and maintainability.
- Adjusted formatting in the predict method for better clarity and consistency in multimodal data handling.
These changes contribute to cleaner code and improved functionality within the MONAI framework.
Signed-off-by: Simone Bendazzoli <simben@kth.se>
Force-pushed dc5bdd3 to 09b569a
- Integrated TritonModel type checking into the MONetBundleInferenceOperator to enhance model compatibility.
- Updated the predict method to retain metadata from input data, improving the output structure for predictions.
These changes improve the operator's functionality and usability within the MONAI framework.
Walkthrough: Adds a new MONetBundleInferenceOperator for MONet/nnUNet-style multimodal inference, exposes it in the public API, and fixes YAML bundle suffix handling plus metadata initialization in MonaiBundleInferenceOperator.
Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant MONetOp as MONetBundleInferenceOperator
    participant Transform as ResampleToMatch / ConcatItemsd
    participant Predictor as nnUNet Predictor
    Client->>MONetOp: predict(data, **kwargs)
    MONetOp->>MONetOp: _init_config / _set_model_network
    alt multimodal kwargs present
        MONetOp->>Transform: resample extra modalities to match image
        Transform-->>MONetOp: resampled modalities
        MONetOp->>Transform: concat modalities into "image" tensor
        Transform-->>MONetOp: multimodal input tensor
    end
    MONetOp->>MONetOp: ensure batch dimension
    MONetOp->>Predictor: run predictor(input)
    Predictor-->>MONetOp: prediction
    MONetOp->>MONetOp: copy input meta to prediction
    MONetOp-->>Client: return prediction
```
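The flow in the diagram can be sketched in plain Python. Everything below is illustrative: the dict-based "tensors", `resample_to_match`, and the channel-list "concat" are stand-ins for MONAI's MetaTensor, ResampleToMatch, and ConcatItemsd, not the operator's actual API.

```python
def resample_to_match(modality, reference):
    # Stand-in "resampling": pad or trim pixels to the reference length.
    n = len(reference["pixels"])
    px = (modality["pixels"] + [0.0] * n)[:n]
    return {"pixels": px, "meta": dict(reference["meta"])}

def predict(data, predictor, **extra_modalities):
    # Multimodal path: resample extras to match the image, then "concat"
    # them as a channel list under the image key.
    if extra_modalities:
        channels = [data["pixels"]]
        for modality in extra_modalities.values():
            channels.append(resample_to_match(modality, data)["pixels"])
        data = {"pixels": channels, "meta": data["meta"]}
    else:
        data = {"pixels": [data["pixels"]], "meta": data["meta"]}
    batch = [data["pixels"]]  # ensure a leading batch dimension
    prediction = {"pixels": predictor(batch), "meta": data["meta"]}  # copy input meta
    return prediction
```

A call with one extra modality walks every arrow in the diagram: resample, concat, batch, predict, and metadata propagation.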
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monai/deploy/operators/monai_bundle_inference_operator.py (1)
149-149: ⚠️ Potential issue | 🔴 Critical

Bug: Missing leading dot on the "yml" suffix in `_read_directory_bundle_config`.

`bundle_suffixes` here has "yml" without a leading dot, so constructing `f"{config_name_base}{suffix}"` at Line 170 would produce e.g. "inferenceyml" instead of "inference.yml". The archive-based reader at Line 189 was correctly fixed to ".yml", but this directory-based reader was missed.

🐛 Proposed fix

```diff
-    bundle_suffixes = (".json", ".yaml", "yml")  # The only supported file ext(s)
+    bundle_suffixes = (".json", ".yaml", ".yml")  # The only supported file ext(s)
```
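The impact of the missing dot is easy to reproduce. The helper name below is illustrative (the real code inlines the f-string), but the concatenation behaviour is exactly what the comment describes:

```python
# Config file names are built by plain string concatenation, so a suffix
# without the leading dot yields "inferenceyml" instead of "inference.yml".
def candidate_config_names(config_name_base, suffixes):
    return [f"{config_name_base}{suffix}" for suffix in suffixes]

buggy = candidate_config_names("inference", (".json", ".yaml", "yml"))
fixed = candidate_config_names("inference", (".json", ".yaml", ".yml"))
```

With the buggy tuple, the YAML config file on disk is never matched, so `.yml` bundles silently fail to load from a directory.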
🧹 Nitpick comments (2)
monai/deploy/operators/monet_bundle_inference_operator.py (2)
90-95: Non-MetaTensor kwargs (e.g. from the base class) are silently dropped from multimodal data.

The base class `compute` passes `**other_inputs` to `predict`, which may include non-tensor entries. The `if len(kwargs) > 0` guard enters the multimodal path for any kwargs, but only MetaTensor values are added to `multimodal_data`; non-MetaTensor kwargs are silently ignored. Consider filtering kwargs more explicitly, e.g. only enter the multimodal path if there are actually MetaTensor values:

Proposed fix

```diff
-        if len(kwargs) > 0:
-            multimodal_data = {"image": data}
-            for key in kwargs.keys():
-                if isinstance(kwargs[key], MetaTensor):
-                    multimodal_data[key] = ResampleToMatch(mode="bilinear")(kwargs[key], img_dst=data)
-            data = ConcatItemsd(keys=list(multimodal_data.keys()), name="image")(multimodal_data)["image"]
+        meta_tensor_kwargs = {k: v for k, v in kwargs.items() if isinstance(v, MetaTensor)}
+        if meta_tensor_kwargs:
+            multimodal_data = {"image": data}
+            for key, value in meta_tensor_kwargs.items():
+                multimodal_data[key] = ResampleToMatch(mode="bilinear")(value, img_dst=data)
+            data = ConcatItemsd(keys=list(multimodal_data.keys()), name="image")(multimodal_data)["image"]
```
17-17: Hard import of `monai.transforms` breaks the `optional_import` pattern used elsewhere.

The base operator and this file use `optional_import` for `torch` and `MetaTensor`, but `ConcatItemsd` and `ResampleToMatch` are imported directly. If `monai` is not installed (or partially installed), this will raise `ImportError` at module load time rather than deferring it to usage.

Proposed fix

```diff
-from monai.transforms import ConcatItemsd, ResampleToMatch
+ConcatItemsd, _ = optional_import("monai.transforms", name="ConcatItemsd")
+ResampleToMatch, _ = optional_import("monai.transforms", name="ResampleToMatch")
```
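For readers unfamiliar with the pattern, here is a simplified re-implementation of the deferred-import idea behind MONAI's `optional_import` (the real function has a richer signature and return behaviour): resolve the symbol if possible, otherwise return a placeholder that only fails when actually used.

```python
import importlib

def optional_import(module_name, name=""):
    """Return (object, True) on success, or (failing placeholder, False)."""
    try:
        module = importlib.import_module(module_name)
        return (getattr(module, name) if name else module), True
    except (ImportError, AttributeError) as exc:
        message = f"{module_name}{'.' + name if name else ''} is not available: {exc}"

        class _MissingOptionalImport:
            # Defer the failure from module load time to first use.
            def __call__(self, *args, **kwargs):
                raise ImportError(message)

        return _MissingOptionalImport(), False
```

This is why the review asks for `optional_import` over a hard `from monai.transforms import ...`: the module still imports cleanly on systems without MONAI.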
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@monai/deploy/operators/monet_bundle_inference_operator.py`:
- Line 1: Update the file header in monet_bundle_inference_operator.py to
correct the copyright year: replace the incorrect "2002" with "2025" in the
top-of-file comment so the copyright line accurately reflects the current year.
- Around line 26-46: Remove the duplicated sentence in the module/class
docstring that repeats "A specialized operator for performing inference using
the MONet bundle"; edit the docstring (above MonetBundleInferenceOperator / the
class definition containing _init_config and predict) to keep a single, coherent
opening sentence, preserve the rest of the docstring content and formatting
(attributes/methods sections), and ensure the triple-quoted string remains
properly closed and PEP257-style spacing is preserved.
- Around line 58-64: The _init_config implementation is re-parsing the bundle
and overwriting self._parser after calling super()._init_config, which causes
double I/O and a mismatch with objects the parent initialized (e.g.,
self._device, self._inferer, self._preproc, self._postproc); remove the extra
get_bundle_config call and instead reuse the parser the parent already created
(use self._parser) to obtain network_def via
self._parser.get_parsed_content("network_def") and assign that to
self._nnunet_predictor without reassigning self._parser.
- Around line 75-81: The runtime type-check block for model_network is using
torch.jit.isinstance (meant for TorchScript refinement) which is incorrect for
eager Python; replace torch.jit.isinstance(model_network,
torch.jit.ScriptModule) with the standard isinstance(model_network,
torch.jit.ScriptModule) in the validation that checks model_network in the
MonetBundleInferenceOperator (the block referencing model_network,
torch.jit.ScriptModule, TorchScriptModel, TritonModel) so the condition uses
only Python isinstance checks and the TypeError remains unchanged.
```python
def _init_config(self, config_names):

    super()._init_config(config_names)
    parser = get_bundle_config(str(self._bundle_path), config_names)
    self._parser = parser

    self._nnunet_predictor = parser.get_parsed_content("network_def")
```
Double parsing: `_init_config` parses the bundle config twice.

`super()._init_config(config_names)` already calls `get_bundle_config` and assigns `self._parser`. Lines 61–62 immediately re-parse the same bundle and overwrite `self._parser`, discarding the one the parent just set up. This doubles the I/O and parsing work. More importantly, the parent's `_init_config` configures `self._device`, `self._inferer`, `self._preproc`, `self._postproc`, etc., all using the first parser. Overwriting `self._parser` afterward creates a divergence between those cached objects and the active parser.

If the intent is just to get `network_def`, you can use the parser that `super()` already set:

Proposed fix

```diff
 def _init_config(self, config_names):
     super()._init_config(config_names)
-    parser = get_bundle_config(str(self._bundle_path), config_names)
-    self._parser = parser
-
-    self._nnunet_predictor = parser.get_parsed_content("network_def")
+    self._nnunet_predictor = self._parser.get_parsed_content("network_def")
```
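The inheritance pattern behind this fix can be shown with a toy example. The class names and the dict-based "parser" below are illustrative stand-ins, not the real operators:

```python
class BaseBundleOperator:
    def _init_config(self, config_names):
        # Stand-in for get_bundle_config(...): one (expensive) parse.
        self._parser = {"network_def": f"network-for-{config_names}"}

class MONetOperator(BaseBundleOperator):
    def _init_config(self, config_names):
        super()._init_config(config_names)  # parses once and sets self._parser
        # Reuse the parent's parser; no second get_bundle_config call.
        self._nnunet_predictor = self._parser["network_def"]
```

The subclass reads `network_def` from the exact parser instance the parent configured, so the cached parser-dependent objects cannot drift out of sync.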
+ self._nnunet_predictor = self._parser.get_parsed_content("network_def")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| def _init_config(self, config_names): | |
| super()._init_config(config_names) | |
| parser = get_bundle_config(str(self._bundle_path), config_names) | |
| self._parser = parser | |
| self._nnunet_predictor = parser.get_parsed_content("network_def") | |
| def _init_config(self, config_names): | |
| super()._init_config(config_names) | |
| self._nnunet_predictor = self._parser.get_parsed_content("network_def") |
This suggestion from copilot needs review - If the parent class is initializing the model properly, we can accept the suggestion, but I am not sure if get_bundle_config is needed to apply any config patch.
@SimoneBendazzoli93 - please review this suggestion above
Minor typos Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: Elanchezhian <chezhipower@gmail.com>
Applying minor patch to docs Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: Elanchezhian <chezhipower@gmail.com>
minor change for stability Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: Elanchezhian <chezhipower@gmail.com>
Actionable comments posted: 2
♻️ Duplicate comments (2)
monai/deploy/operators/monet_bundle_inference_operator.py (2)
57-63: ⚠️ Potential issue | 🟠 Major

Avoid reparsing and overwriting `self._parser` after `super()._init_config`.

Lines 60 and 61 reinitialize parser state already built by the base class. This duplicates parsing work and can desync parser-dependent fields initialized in `MonaiBundleInferenceOperator._init_config`.

Proposed fix

```diff
 def _init_config(self, config_names):
     super()._init_config(config_names)
-    parser = get_bundle_config(str(self._bundle_path), config_names)
-    self._parser = parser
-
-    self._nnunet_predictor = parser.get_parsed_content("network_def")
+    self._nnunet_predictor = self._parser.get_parsed_content("network_def")
```
74-80: ⚠️ Potential issue | 🟠 Major

Use `isinstance` for eager runtime checks and align the error message with the accepted types.

Line 76 uses `torch.jit.isinstance`, which is intended for TorchScript type refinement, not regular Python runtime validation. Also, Line 80's message omits the accepted `TorchScriptModel` and `TritonModel`.

Proposed fix

```diff
 if (
     not isinstance(model_network, torch.nn.Module)
-    and not torch.jit.isinstance(model_network, torch.jit.ScriptModule)
+    and not isinstance(model_network, torch.jit.ScriptModule)
     and not isinstance(model_network, TorchScriptModel)
     and not isinstance(model_network, TritonModel)
 ):
-    raise TypeError("model_network must be an instance of torch.nn.Module or torch.jit.ScriptModule")
+    raise TypeError(
+        "model_network must be an instance of torch.nn.Module, "
+        "torch.jit.ScriptModule, TorchScriptModel, or TritonModel"
+    )
```
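The shape of the corrected check can be sketched without torch installed. `TorchScriptModel` and `TritonModel` below are placeholder classes standing in for the real ones, and the real code additionally accepts the torch module types:

```python
class TorchScriptModel: ...
class TritonModel: ...

ACCEPTED_TYPES = (TorchScriptModel, TritonModel)  # plus torch.nn.Module etc. in the real code

def validate_model_network(model_network):
    # Plain eager-mode isinstance check, with an error message that lists
    # every accepted type (the point of the review comment).
    if not isinstance(model_network, ACCEPTED_TYPES):
        names = ", ".join(t.__name__ for t in ACCEPTED_TYPES)
        raise TypeError(f"model_network must be an instance of: {names}")
    return model_network
```

Since `isinstance` accepts a tuple of types, a single call covers all accepted classes and the message can be generated from the same tuple, so the two can never disagree.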
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@monai/deploy/operators/monet_bundle_inference_operator.py`:
- Around line 89-94: The current loop in the inference operator only adds kwargs
entries when isinstance(..., MetaTensor), silently dropping others; update the
handling in the method (where multimodal_data, ResampleToMatch, and ConcatItemsd
are used) to validate kwargs: iterate items in kwargs and for each key either
resample and add it to multimodal_data if it's a MetaTensor, or raise a clear
TypeError/ValueError that includes the offending key name and its actual type so
callers know they passed an unsupported modality type (do not silently ignore
non-MetaTensor values).
- Line 98: The assignment prediction.meta = data.meta can raise if either
prediction or data lack a .meta attribute; update the
MonetBundleInferenceOperator where this line occurs to guard the propagation by
checking attributes (e.g., using hasattr(prediction, "meta") and hasattr(data,
"meta") or isinstance checks) and only copy data.meta when both objects expose
.meta, otherwise skip or attach a safe metadata container; ensure you reference
the variables prediction and data in the conditional so behavior remains
unchanged for tensor-like outputs.
ℹ️ Review info: Configuration used: Organization UI | Review profile: CHILL | Plan: Pro | Run ID: 32044423-f81a-4a86-a66f-992e420065a8
📒 Files selected for processing (1): monai/deploy/operators/monet_bundle_inference_operator.py
```python
if len(kwargs) > 0:
    multimodal_data = {"image": data}
    for key in kwargs.keys():
        if isinstance(kwargs[key], MetaTensor):
            multimodal_data[key] = ResampleToMatch(mode="bilinear")(kwargs[key], img_dst=data)
    data = ConcatItemsd(keys=list(multimodal_data.keys()), name="image")(multimodal_data)["image"]
```
Do not silently drop non-MetaTensor modalities from kwargs.

At Lines 92–94, only MetaTensor values are added to `multimodal_data`; all other provided modalities are ignored without error. That can produce incomplete inference inputs.

Proposed fix

```diff
 if len(kwargs) > 0:
     multimodal_data = {"image": data}
     for key in kwargs.keys():
         if isinstance(kwargs[key], MetaTensor):
             multimodal_data[key] = ResampleToMatch(mode="bilinear")(kwargs[key], img_dst=data)
+        else:
+            multimodal_data[key] = kwargs[key]
     data = ConcatItemsd(keys=list(multimodal_data.keys()), name="image")(multimodal_data)["image"]
```
I think this can be ignored
@chezhia, understood — I'll defer to your judgment on this one. Acknowledged and moving on!
@SimoneBendazzoli93 It looks like the DCO (Developer Certificate of Origin) check is failing. To fix this, please ensure all your commits are signed off. You can do this by amending your previous commits, or, if you have multiple commits, by performing an interactive rebase with sign-off, then force-pushing the changes to the branch. This is required for the PR to be merged.
protection for meta attribute - added safety Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: Elanchezhian <chezhipower@gmail.com>
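The "safety protection for meta attribute" change can be illustrated in isolation. The function name below is hypothetical; it shows only the guard pattern the commit describes, applied to arbitrary objects rather than MetaTensors:

```python
# Only copy input metadata onto the prediction when both objects actually
# expose a .meta attribute, instead of assuming MetaTensor-like outputs.
def propagate_meta(prediction, data):
    if hasattr(prediction, "meta") and hasattr(data, "meta"):
        prediction.meta = data.meta
    return prediction
```

With the guard, plain tensors or other outputs without `.meta` pass through untouched instead of raising `AttributeError`.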
chezhia left a comment
A couple of items identified by copilot need review from the author. Accepted a few minor suggestions.
```python
def _init_config(self, config_names):

    super()._init_config(config_names)
    parser = get_bundle_config(str(self._bundle_path), config_names)
    self._parser = parser

    self._nnunet_predictor = parser.get_parsed_content("network_def")
```
This suggestion from copilot needs review - If the parent class is initializing the model properly, we can accept the suggestion, but I am not sure if get_bundle_config is needed to apply any config patch.
```python
def _init_config(self, config_names):

    super()._init_config(config_names)
    parser = get_bundle_config(str(self._bundle_path), config_names)
    self._parser = parser

    self._nnunet_predictor = parser.get_parsed_content("network_def")
```
@SimoneBendazzoli93 - please review this suggestion above
```python
if len(kwargs) > 0:
    multimodal_data = {"image": data}
    for key in kwargs.keys():
        if isinstance(kwargs[key], MetaTensor):
            multimodal_data[key] = ResampleToMatch(mode="bilinear")(kwargs[key], img_dst=data)
    data = ConcatItemsd(keys=list(multimodal_data.keys()), name="image")(multimodal_data)["image"]
```
I think this can be ignored
This PR introduces support for the MONet Bundle (an nnUNet wrapper for the MONAI Bundle) into MONAI Deploy.
Key Features:
- Added a new operator: MONetBundleInferenceOperator, extending MonaiBundleInferenceOperator
- Included an example application demonstrating spleen segmentation using the MONetBundleInferenceOperator