```diff
@@ -29,9 +29,6 @@ This section contains some usage examples for TorchJD.
 - :doc:`PyTorch Lightning Integration <lightning_integration>` showcases how to combine
   TorchJD with PyTorch Lightning, by providing an example implementation of a multi-task
   ``LightningModule`` optimized by Jacobian descent.
-- :doc:`Grouping <grouping>` shows how to apply an aggregator independently per parameter group
-  (e.g. per layer), so that conflict resolution happens at a finer granularity than the full
-  shared parameter vector.
 - :doc:`Automatic Mixed Precision <amp>` shows how to combine mixed precision training with TorchJD.

 .. toctree::
@@ -46,4 +43,3 @@ This section contains some usage examples for TorchJD.
    monitoring.rst
    lightning_integration.rst
    amp.rst
-   grouping.rst
```
```diff
@@ -41,14 +41,6 @@ class GradVac(GramianWeightedAggregator):
         For each task :math:`i`, the order of other tasks :math:`j` is shuffled independently
         using the global PyTorch RNG (``torch.randperm``). Seed it with ``torch.manual_seed`` if
         you need reproducibility.
-
-    .. note::
-        To apply GradVac with per-layer or per-parameter-group granularity, create a separate
-        :class:`GradVac` instance for each group and call
-        :func:`~torchjd.autojac.jac_to_grad` once per group after
-        :func:`~torchjd.autojac.mtl_backward`. Each instance maintains its own EMA state,
-        matching the per-block targets :math:`\hat{\phi}_{ijk}` from the original paper. See
-        the :doc:`Grouping </examples/grouping>` example for details.
     """

     def __init__(self, beta: float = 0.5, eps: float = 1e-8) -> None:
```
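For context, here is a minimal sketch of the per-group workflow that the removed note (and the removed Grouping example) described: one `GradVac` instance per parameter group, with `jac_to_grad` called once per group after `mtl_backward`. The `mtl_backward`/`jac_to_grad` signatures below are assumptions inferred from the docstring's cross-references, not verified against the current torchjd API; the model and the per-layer group split are purely illustrative.

```python
import torch
from torch import nn

from torchjd.aggregation import GradVac
from torchjd.autojac import jac_to_grad, mtl_backward  # paths referenced in the docstring above

# Illustrative two-task setup: a shared trunk and two task-specific heads.
shared = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 16))
head1, head2 = nn.Linear(16, 1), nn.Linear(16, 1)

# One GradVac instance per parameter group (here: per shared layer), so each
# group keeps its own EMA state, as the removed note prescribed.
groups = [list(shared[0].parameters()), list(shared[2].parameters())]
aggregators = [GradVac(beta=0.5) for _ in groups]

x = torch.randn(8, 10)
features = shared(x)
losses = [head1(features).mean(), head2(features).mean()]

# Compute the per-task Jacobians w.r.t. the shared parameters once
# (assumed behavior of mtl_backward in the jac_to_grad-based API).
mtl_backward(losses, features)

# Then aggregate each group's Jacobian into .grad with that group's own
# aggregator (assumed jac_to_grad(params, aggregator) signature).
for params, aggregator in zip(groups, aggregators):
    jac_to_grad(params, aggregator)
```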