Commit c9ff9db (1 parent: 275eeb1)

Simplify the note about GradVac grouping and make it explicitly use the names used in GradVac's paper and in LibMTL

1 file changed (+2, -6 lines)

src/torchjd/aggregation/_gradvac.py (2 additions, 6 deletions)
@@ -42,12 +42,8 @@ class GradVac(GramianWeightedAggregator, Stateful):
     you need reproducibility.

     .. note::
-        To apply GradVac with per-layer or per-parameter-group granularity, create a separate
-        :class:`GradVac` instance for each group and call
-        :func:`~torchjd.autojac.jac_to_grad` once per group after
-        :func:`~torchjd.autojac.mtl_backward`. Each instance maintains its own EMA state,
-        matching the per-block targets :math:`\hat{\phi}_{ijk}` from the original paper. See
-        the :doc:`Grouping </examples/grouping>` example for details.
+        To apply GradVac with the `whole_model`, `enc_dec`, `all_layer` or `all_matrix` grouping
+        strategy, please refer to the :doc:`Grouping </examples/grouping>` examples.
     """

     def __init__(self, beta: float = 0.5, eps: float = 1e-8) -> None:
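For context on what the docstring above describes: the core pairwise step of GradVac (Wang et al., 2021) nudges a task gradient toward another so that their cosine similarity reaches an EMA target phi_hat, which is itself updated with the smoothing factor beta. The following is a minimal pure-Python sketch of that update, not torchjd's actual implementation; the function names `gradvac_adjust` and `update_target` are hypothetical, and only `beta` and `eps` are taken from the signature shown in the diff.

```python
import math


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def norm(u):
    return math.sqrt(dot(u, u))


def gradvac_adjust(g_i, g_j, phi_hat, eps=1e-8):
    """Adjust g_i so that cos(g_i', g_j) reaches the EMA target phi_hat
    whenever the current similarity falls below it (sketch of the GradVac
    pairwise update; hypothetical helper, not torchjd's API)."""
    phi = dot(g_i, g_j) / (norm(g_i) * norm(g_j) + eps)
    if phi >= phi_hat:
        return list(g_i)  # already aligned at least as well as the target
    coef = (norm(g_i) * (phi_hat * math.sqrt(1 - phi ** 2)
                         - phi * math.sqrt(1 - phi_hat ** 2))
            / (norm(g_j) * math.sqrt(1 - phi_hat ** 2) + eps))
    return [a + coef * b for a, b in zip(g_i, g_j)]


def update_target(phi_hat, phi, beta=0.5):
    """Exponential moving average of the per-pair similarity target."""
    return (1 - beta) * phi_hat + beta * phi


# Orthogonal gradients (cosine 0) pulled up to a target of 0.5:
g_i, g_j = [1.0, 0.0], [0.0, 1.0]
adjusted = gradvac_adjust(g_i, g_j, phi_hat=0.5)
# cos(adjusted, g_j) is now ~0.5
```

With per-group (e.g. per-layer) application, each group keeps its own `phi_hat` state, which is what the per-block targets mentioned in the removed docstring refer to.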

0 commit comments
