Commit b2db04d

eitag-uni, ericspod, and KumoLiu authored
Fix incorrect docstring defaults for DiffusionModelUNet (#8571)
Fixes #

Fix incorrect defaults for `use_combined_linear` and add default for `use_flash_attention` in docstring.

### Description

The DiffusionModelUNet docstring listed `use_combined_linear` as default True, while the class signature sets it to False. It also omitted the default for `use_flash_attention`, which is False in the signature. This PR updates the Args section to match the actual defaults.

### Types of changes

- [x] Non-breaking change (fix or new feature that would not break existing functionality).
- [x] Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- [x] Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- [x] In-line docstrings updated.
- [x] Documentation updated, tested `make html` command in the `docs/` folder.

Co-authored-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
Co-authored-by: YunLiu <55491388+KumoLiu@users.noreply.github.com>
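As an aside, mismatches like this can be caught mechanically: the actual defaults live in the `__init__` signature and can be read with the standard-library `inspect` module, then compared against whatever the docstring claims. The sketch below uses a hypothetical stand-in class mirroring the two parameters this PR documents; in practice you would pass `monai.networks.nets.DiffusionModelUNet` itself.

```python
import inspect


def signature_defaults(cls):
    """Return {parameter name: default} for a class's __init__, skipping required params."""
    params = inspect.signature(cls.__init__).parameters
    return {
        name: p.default
        for name, p in params.items()
        if p.default is not inspect.Parameter.empty
    }


# Hypothetical stand-in mirroring the two defaults corrected by this PR;
# substitute the real DiffusionModelUNet class when MONAI is installed.
class Example:
    def __init__(self, use_combined_linear: bool = False, use_flash_attention: bool = False):
        """
        Args:
            use_combined_linear: whether to use a single linear layer for qkv projection, default to False.
            use_flash_attention: if True, use Pytorch's inbuilt flash attention, default to False.
        """


defaults = signature_defaults(Example)
assert defaults["use_combined_linear"] is False
assert defaults["use_flash_attention"] is False
```

A check like this could run in CI to keep the Args section from drifting away from the signature again.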
1 parent 8536538 commit b2db04d

File tree

1 file changed (+2 −2 lines)


monai/networks/nets/diffusion_model_unet.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -1529,9 +1529,9 @@ class DiffusionModelUNet(nn.Module):
         upcast_attention: if True, upcast attention operations to full precision.
         dropout_cattn: if different from zero, this will be the dropout value for the cross-attention layers.
         include_fc: whether to include the final linear layer. Default to True.
-        use_combined_linear: whether to use a single linear layer for qkv projection, default to True.
+        use_combined_linear: whether to use a single linear layer for qkv projection, default to False.
         use_flash_attention: if True, use Pytorch's inbuilt flash attention for a memory efficient attention mechanism
-            (see https://pytorch.org/docs/2.2/generated/torch.nn.functional.scaled_dot_product_attention.html).
+            (see https://pytorch.org/docs/2.2/generated/torch.nn.functional.scaled_dot_product_attention.html), default to False.
     """

     def __init__(
```
0 commit comments
