All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). This
changelog does not include internal changes that do not affect the user.

## [Unreleased]

### Added

- Added the function `torchjd.autojac.jac` to compute the Jacobian of some outputs with respect to
  some inputs, without doing any aggregation. Its interface is very similar to
  `torch.autograd.grad`.
- Added a `scale_mode` parameter to `AlignedMTL` and `AlignedMTLWeighting`, allowing users to
  choose between `"min"`, `"median"`, and `"rmse"` scaling.
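
As a conceptual illustration (pure Python with illustrative names, not torchjd's actual API), the difference between a full Jacobian and the aggregated vector–Jacobian product that `torch.autograd.grad` returns can be sketched as:

```python
def f(x0, x1):
    # Two outputs; their Jacobian w.r.t. (x0, x1) is a 2x2 matrix.
    return (x0 * x1, x0 + x1)

def jacobian(x0, x1):
    # Analytic per-output gradient rows, stacked instead of aggregated.
    return [
        [x1, x0],    # d(x0 * x1) / d(x0, x1)
        [1.0, 1.0],  # d(x0 + x1) / d(x0, x1)
    ]

def vjp(v, jac):
    # What torch.autograd.grad(outputs, inputs, grad_outputs=v) yields:
    # the single aggregated row v^T J rather than the full Jacobian.
    n_cols = len(jac[0])
    return [sum(v[i] * jac[i][j] for i in range(len(jac))) for j in range(n_cols)]

J = jacobian(2.0, 3.0)
print(J)                   # [[3.0, 2.0], [1.0, 1.0]]
print(vjp([1.0, 1.0], J))  # [4.0, 3.0]
```

Keeping the full matrix, as `torchjd.autojac.jac` does, preserves the per-output gradient rows that aggregation would otherwise collapse.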

### Changed

- **BREAKING**: Removed from `backward` and `mtl_backward` the responsibility to aggregate the
  Jacobian. Now, these functions compute and populate the `.jac` fields of the parameters, and a new
  function `torchjd.autojac.jac_to_grad` should then be called to aggregate those `.jac` fields into
  `.grad` fields.
  This means that users now have more control over what they do with the Jacobians (they can easily
  aggregate them group by group or even param by param if they want), but an extra line of code is
  now required to perform the Jacobian descent step. To update, please change:
  ```python
  backward(losses, aggregator)
  ```
  to
  ```python
  backward(losses)
  jac_to_grad(model.parameters(), aggregator)
  ```
  and
  ```python
  mtl_backward(losses, features, aggregator)
  ```
  to
  ```python
  mtl_backward(losses, features)
  jac_to_grad(shared_module.parameters(), aggregator)
  ```
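
The idea behind the new two-step flow can be sketched in pure Python (illustrative names and a simple mean aggregator standing in for torchjd's actual implementation):

```python
class Param:
    """Stand-in for a parameter holding a per-loss Jacobian."""
    def __init__(self, jac):
        self.jac = jac   # list of gradient rows, one per loss
        self.grad = None

def mean_aggregator(jac):
    # Average the per-loss gradient rows (a placeholder for a real aggregator).
    n = len(jac)
    return [sum(col) / n for col in zip(*jac)]

def jac_to_grad(params, aggregator):
    # Aggregate each parameter's .jac into its .grad. Because this runs per
    # parameter, callers can also aggregate group by group or param by param.
    for p in params:
        p.grad = aggregator(p.jac)

params = [Param([[1.0, 2.0], [3.0, 4.0]])]
jac_to_grad(params, mean_aggregator)
print(params[0].grad)  # [2.0, 3.0]
```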

- Removed an unnecessary memory duplication. This should significantly improve the memory efficiency
  of `autojac`.
- Removed an unnecessary internal cloning of gradients. This should slightly improve the memory
  efficiency of `autojac`.

## [0.8.1] - 2026-01-07

### Added

- Added `__all__` in the `__init__.py` of packages. This should prevent PyLance from triggering
  warnings when importing from `torchjd`.
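
As a self-contained illustration of the mechanism (a fake in-memory module, not torchjd's actual packages), `__all__` makes a package's public names explicit:

```python
import sys
import types

# Build a hypothetical package module in memory for demonstration purposes.
# It imports two names but declares only one of them as public via __all__.
pkg = types.ModuleType("fake_pkg")
exec("from math import sqrt, floor\n__all__ = ['sqrt']", pkg.__dict__)
sys.modules["fake_pkg"] = pkg

# Star-import honors __all__: only the declared name is re-exported, which is
# also what static analyzers use to decide which imports are legitimate.
ns = {}
exec("from fake_pkg import *", ns)
print("sqrt" in ns, "floor" in ns)  # True False
```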

## [0.8.0] - 2025-11-13
