
Commit 7680fb0

Fix links in README.md
1 parent 9631517 commit 7680fb0

1 file changed: README.md (5 additions & 5 deletions)
@@ -63,13 +63,13 @@ There are two main ways to use TorchJD. The first one is to replace the usual ca
 [`torchjd.autojac.mtl_backward`](https://torchjd.org/stable/docs/autojac/mtl_backward/), depending
 on the use-case. This will compute the Jacobian of the vector of losses with respect to the model
 parameters, and aggregate it with the specified
-[`Aggregator`](https://torchjd.org/stable/docs/aggregation/index.html#torchjd.aggregation.Aggregator).
+[`Aggregator`](https://torchjd.org/stable/docs/aggregation#torchjd.aggregation.Aggregator).
 Whenever you want to optimize the vector of per-sample losses, you should rather use the
-[`torchjd.autogram.Engine`](https://torchjd.org/stable/docs/autogram/engine.html). Instead of
+[`torchjd.autogram.Engine`](https://torchjd.org/stable/docs/autogram/engine/). Instead of
 computing the full Jacobian at once, it computes the Gramian of this Jacobian, layer by layer, in a
 memory-efficient way. A vector of weights (one per element of the batch) can then be extracted from
 this Gramian, using a
-[`Weighting`](https://torchjd.org/stable/docs/aggregation/index.html#torchjd.aggregation.Weighting),
+[`Weighting`](https://torchjd.org/stable/docs/aggregation#torchjd.aggregation.Weighting),
 and used to combine the losses of the batch. Assuming each element of the batch is
 processed independently from the others, this approach is equivalent to
 [`torchjd.autojac.backward`](https://torchjd.org/stable/docs/autojac/backward/) while being
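The Gramian trick this hunk describes can be sketched in plain NumPy. This is an illustrative stand-in only, not the TorchJD API: `torchjd.autogram.Engine` builds the Gramian layer by layer without ever materializing the full Jacobian, and a real `Weighting` (e.g. `UPGradWeighting`) derives the weights from the Gramian rather than using the uniform placeholder below.

```python
import numpy as np

# Toy setup: m = 3 per-sample losses, n = 5 model parameters.
# J[i, j] = d(loss_i) / d(param_j), the Jacobian of the loss vector.
rng = np.random.default_rng(0)
m, n = 3, 5
J = rng.standard_normal((m, n))

# Gramian of the Jacobian: G = J @ J.T. It is only m x m, independent of n,
# which is the source of the memory savings for large models.
G = J @ J.T

# Placeholder "Weighting": uniform weights, one per batch element.
# (Real Weightings compute w as a function of G.)
w = np.full(m, 1.0 / m)

# The combined gradient is J.T @ w; in practice it is obtained by one
# backward pass through the weighted sum of the losses, so J is never stored.
g = J.T @ w

assert np.allclose(G, G.T)             # the Gramian is symmetric
assert np.allclose(g, J.mean(axis=0))  # uniform weights == averaging rows
```

With uniform weights this reduces to ordinary mean-loss backpropagation; the point of the Gramian is that smarter weights can be chosen from an m x m matrix instead of an m x n Jacobian.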
@@ -210,7 +210,7 @@ for input, target1, target2 in zip(inputs, task1_targets, task2_targets):
 > [!NOTE]
 > Here, because the losses are a matrix instead of a simple vector, we compute a *generalized
 > Gramian* and we extract weights from it using a
-> [GeneralizedWeighting](https://torchjd.org/docs/aggregation/index.html#torchjd.aggregation.GeneralizedWeighting).
+> [GeneralizedWeighting](https://torchjd.org/stable/docs/aggregation/#torchjd.aggregation.GeneralizedWeighting).
 
 More usage examples can be found [here](https://torchjd.org/stable/examples/).

@@ -220,7 +220,7 @@ TorchJD provides many existing aggregators from the literature, listed in the fo
 <!-- recommended aggregators first, then alphabetical order -->
 | Aggregator | Weighting | Publication |
 |------------|-----------|-------------|
-| [UPGrad](https://torchjd.org/stable/docs/aggregation/upgrad.html#torchjd.aggregation.UPGrad) (recommended) | [UPGradWeighting](https://torchjd.org/stable/docs/aggregation/upgrad#torchjd.aggregation.UPGradWeighting) | [Jacobian Descent For Multi-Objective Optimization](https://arxiv.org/pdf/2406.16232) |
+| [UPGrad](https://torchjd.org/stable/docs/aggregation/upgrad/#torchjd.aggregation.UPGrad) (recommended) | [UPGradWeighting](https://torchjd.org/stable/docs/aggregation/upgrad/#torchjd.aggregation.UPGradWeighting) | [Jacobian Descent For Multi-Objective Optimization](https://arxiv.org/pdf/2406.16232) |
 | [AlignedMTL](https://torchjd.org/stable/docs/aggregation/aligned_mtl#torchjd.aggregation.AlignedMTL) | [AlignedMTLWeighting](https://torchjd.org/stable/docs/aggregation/aligned_mtl#torchjd.aggregation.AlignedMTLWeighting) | [Independent Component Alignment for Multi-Task Learning](https://arxiv.org/pdf/2305.19000) |
 | [CAGrad](https://torchjd.org/stable/docs/aggregation/cagrad#torchjd.aggregation.CAGrad) | [CAGradWeighting](https://torchjd.org/stable/docs/aggregation/cagrad#torchjd.aggregation.CAGradWeighting) | [Conflict-Averse Gradient Descent for Multi-task Learning](https://arxiv.org/pdf/2110.14048) |
 | [ConFIG](https://torchjd.org/stable/docs/aggregation/config#torchjd.aggregation.ConFIG) | - | [ConFIG: Towards Conflict-free Training of Physics Informed Neural Networks](https://arxiv.org/pdf/2408.11104) |
