@@ -8,11 +8,21 @@ changelog does not include internal changes that do not affect the user.
 
 ## [Unreleased]
 
+## [0.9.0] - 2026-02-24
+
 ### Added
 
 - Added the function `torchjd.autojac.jac`. It is the same as `torchjd.autojac.backward` except that
   it returns the Jacobians as a tuple instead of storing them in the `.jac` fields of the inputs.
   Its interface is analogous to that of `torch.autograd.grad`.
+- Added a `jac_tensors` parameter to `backward`, making it possible to pre-multiply the Jacobian
+  computation by initial Jacobians. This enables multi-step chain rule computations and is analogous
+  to the `grad_tensors` parameter in `torch.autograd.backward`.
+- Added a `grad_tensors` parameter to `mtl_backward`, making it possible to use non-scalar `losses`
+  (now renamed to `tensors`). This is analogous to the `grad_tensors` parameter of
+  `torch.autograd.backward`. When using scalar losses, the usage does not change.
+- Added a `jac_outputs` parameter to `jac`, making it possible to pre-multiply the Jacobian computation
+  by initial Jacobians. This is analogous to the `grad_outputs` parameter in `torch.autograd.grad`.
 - Added a `scale_mode` parameter to `AlignedMTL` and `AlignedMTLWeighting`, making it possible to
   choose between `"min"`, `"median"`, and `"rmse"` scaling.
 - Added an attribute `gramian_weighting` to all aggregators that use a gramian-based `Weighting`.
@@ -45,11 +55,23 @@ changelog does not include internal changes that do not affect the user.
 mtl_backward(losses, features)
 jac_to_grad(shared_module.parameters(), aggregator)
 ```
-
-- Removed an unnecessary memory duplication. This should significantly improve the memory efficiency
-  of `autojac`.
-- Removed an unnecessary internal cloning of gradient. This should slightly improve the memory
-  efficiency of `autojac`.
+- **BREAKING**: Made some parameters of the public interface of `torchjd` positional-only or
+  keyword-only:
+  - `backward`: The `tensors` parameter is now positional-only. Suggested change:
+    `backward(tensors=losses)` => `backward(losses)`. All other parameters are now keyword-only.
+  - `mtl_backward`: The `tensors` parameter (previously named `losses`) is now positional-only.
+    Suggested change: `mtl_backward(losses=losses, features=features)` =>
+    `mtl_backward(losses, features=features)`. The `features` parameter remains usable as positional
+    or keyword. All other parameters are now keyword-only.
+  - `Aggregator.__call__`: The `matrix` parameter is now positional-only. Suggested change:
+    `aggregator(matrix=matrix)` => `aggregator(matrix)`.
+  - `Weighting.__call__`: The `stat` parameter is now positional-only. Suggested change:
+    `weighting(stat=gramian)` => `weighting(gramian)`.
+  - `GeneralizedWeighting.__call__`: The `generalized_gramian` parameter is now positional-only.
+    Suggested change: `generalized_weighting(generalized_gramian=generalized_gramian)` =>
+    `generalized_weighting(generalized_gramian)`.
+- Removed several unnecessary memory duplications. This should significantly improve the memory
+  efficiency and speed of `autojac`.
 - Increased the lower bounds of the torch (from 2.0.0 to 2.3.0) and numpy (from 1.21.0
   to 1.21.2) dependencies to reflect what really works with torchjd. We now also run torchjd's tests
   with the dependency lower-bounds specified in `pyproject.toml`, so we should now always accurately
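The positional-only and keyword-only changes in the **BREAKING** entry above correspond to Python's `/` and `*` signature markers. The sketch below uses a hypothetical stand-in signature (not torchjd's actual implementation) to show how call sites behave after such a change:

```python
# Hypothetical signature mirroring the change described for `backward`:
# `tensors` is positional-only (before `/`), the rest keyword-only (after `*`).
# This is an illustrative stand-in, not torchjd's real function.
def backward(tensors, /, *, aggregator=None, inputs=None):
    return {"tensors": tensors, "aggregator": aggregator, "inputs": inputs}

# Positional call works, as in the suggested `backward(losses)` form.
result = backward([1.0, 2.0], aggregator="mean")

# Passing `tensors` by keyword now raises TypeError.
try:
    backward(tensors=[1.0, 2.0])
    rejected = False
except TypeError:
    rejected = True
```

Making `tensors` positional-only lets the library rename the parameter later (as happened with `losses` => `tensors` in `mtl_backward`) without breaking callers.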
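The `jac_tensors` / `jac_outputs` entries describe pre-multiplying a computed Jacobian by initial Jacobians, which is exactly the matrix composition in the chain rule: if z = g(y) and y = f(x), then dz/dx = (dz/dy)(dy/dx). A toy pure-Python sketch of that composition (no torchjd involved; the matrices are made up):

```python
# Toy illustration of pre-multiplying a Jacobian, as in the chain rule:
# if y = f(x) and z = g(y), then dz/dx = (dz/dy) @ (dy/dx).
# Supplying dz/dy as the "initial Jacobian" turns a Jacobian computation
# for f into one for the composition g ∘ f.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [
        [sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
        for i in range(len(a))
    ]

dy_dx = [[1.0, 2.0], [3.0, 4.0]]  # Jacobian of f at x (made-up values)
dz_dy = [[0.0, 1.0], [1.0, 0.0]]  # initial Jacobian, e.g. of g at f(x)

# Pre-multiplication yields the Jacobian of the composition.
dz_dx = matmul(dz_dy, dy_dx)
print(dz_dx)  # [[3.0, 4.0], [1.0, 2.0]]
```

Passing the initial Jacobian into the backward call avoids materializing the intermediate Jacobian of each step separately, which is what makes multi-step chain rule computations practical.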