changelog does not include internal changes that do not affect the user.

### Changed

- **BREAKING**: Removed from `backward` and `mtl_backward` the responsibility of aggregating the
  Jacobian. These functions now compute and populate the `.jac` fields of the parameters, and a
  new function, `torchjd.utils.jac_to_grad`, should then be called to aggregate those `.jac`
  fields into `.grad` fields.
  This gives users more control over what they do with the Jacobians (they can easily aggregate
  them group by group, or even parameter by parameter, as sketched after this entry), but it now
  requires an extra line of code to perform the Jacobian descent step. To update, please change:
  ```python
  backward(losses, aggregator)
  ```
  to
  ```python
  backward(losses)
  jac_to_grad(model.parameters(), aggregator)
  ```
  and
  ```python
  mtl_backward(losses, features, aggregator)
  ```
  to
  ```python
  mtl_backward(losses, features)
  jac_to_grad(shared_module.parameters(), aggregator)
  ```

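  As a sketch of the finer-grained control this enables (a hypothetical example assuming a small
  `Sequential` model and the `UPGrad` aggregator from `torchjd.aggregation`; adapt the names to
  your own setup), the `.jac` fields can be aggregated one parameter group at a time:
  ```python
  # Illustrative sketch of the new two-step API described above; the model and
  # aggregator choices here are assumptions, not part of this entry.
  import torch
  from torch.nn import Linear, ReLU, Sequential
  from torchjd import backward
  from torchjd.aggregation import UPGrad
  from torchjd.utils import jac_to_grad

  model = Sequential(Linear(10, 5), ReLU(), Linear(5, 2))
  aggregator = UPGrad()

  x = torch.randn(8, 10)
  losses = model(x).mean(dim=0)  # one loss per output dimension, shape [2]

  backward(losses)  # populates the .jac field of each parameter

  # Aggregate group by group: each group could even use a different aggregator.
  jac_to_grad(model[0].parameters(), aggregator)
  jac_to_grad(model[2].parameters(), aggregator)
  ```
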
- Removed an unnecessary internal cloning of gradients. This should slightly improve the memory
  efficiency of `autojac`.