<dd><p>Aggregates the Jacobians stored in the <code class="docutils literal notranslate"><span class="pre">.jac</span></code> fields of <code class="docutils literal notranslate"><span class="pre">tensors</span></code> and accumulates the result
into their <code class="docutils literal notranslate"><span class="pre">.grad</span></code> fields.</p>
<li><p><strong>tensors</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Iterable</span></code></a>[<a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a>]</span>) – The tensors whose <code class="docutils literal notranslate"><span class="pre">.jac</span></code> fields should be aggregated. All Jacobians must
have the same first dimension (e.g. number of losses).</p></li>
<li><p><strong>aggregator</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference internal" href="../../aggregation/#torchjd.aggregation.Aggregator" title="torchjd.aggregation._aggregator_bases.Aggregator"><code class="xref py py-class docutils literal notranslate"><span class="pre">Aggregator</span></code></a></span>) – The aggregator used to reduce the Jacobians into gradients. If it uses a
<a class="reference internal" href="../../aggregation/#torchjd.aggregation.Weighting" title="torchjd.aggregation._weighting_bases.Weighting"><code class="xref py py-class docutils literal notranslate"><span class="pre">Weighting</span></code></a> to combine the rows of
the Jacobians, <code class="docutils literal notranslate"><span class="pre">jac_to_grad</span></code> will also return the computed weights.</p></li>
<li><p><strong>retain_jac</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#bool" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">bool</span></code></a></span>) – Whether to preserve the <code class="docutils literal notranslate"><span class="pre">.jac</span></code> fields of the tensors after they have been
used. Defaults to <code class="docutils literal notranslate"><span class="pre">False</span></code>.</p></li>
<span class="gp">>>> </span><span class="n">backward</span><span class="p">([</span><span class="n">y1</span><span class="p">,</span> <span class="n">y2</span><span class="p">])</span>  <span class="c1"># param now has a .jac field</span>
<span class="gp">>>> </span><span class="n">weights</span> <span class="o">=</span> <span class="n">jac_to_grad</span><span class="p">([</span><span class="n">param</span><span class="p">],</span> <span class="n">UPGrad</span><span class="p">())</span>  <span class="c1"># param now has a .grad field</span>
<p>The <code class="docutils literal notranslate"><span class="pre">.grad</span></code> field of <code class="docutils literal notranslate"><span class="pre">param</span></code> now contains the aggregation (by UPGrad) of the Jacobian of
<span class="math notranslate nohighlight">\(\begin{bmatrix}y_1 \\ y_2\end{bmatrix}\)</span> with respect to <code class="docutils literal notranslate"><span class="pre">param</span></code>. In this case, the
weights used to combine the rows of the Jacobian are equal because there was no conflict.</p>
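<p>To illustrate what the weighting step computes, here is a minimal, dependency-free sketch of combining the rows of a Jacobian into a single gradient with per-row weights. This is a conceptual illustration only, not torchjd's implementation; the function name <code class="docutils literal notranslate"><span class="pre">combine_rows</span></code> and the example values are hypothetical, and a real aggregator such as UPGrad would compute the weights itself from the Jacobian.</p>

```python
# Conceptual sketch (NOT torchjd's implementation): a gradient obtained by a
# weighted sum of Jacobian rows, as an Aggregator built on a Weighting would do.
# All names and values here are illustrative.

def combine_rows(jacobian, weights):
    """Return grad with grad[j] = sum_i weights[i] * jacobian[i][j]."""
    n_cols = len(jacobian[0])
    return [
        sum(w * row[j] for w, row in zip(weights, jacobian))
        for j in range(n_cols)
    ]

# Two non-conflicting rows (their dot product is positive), so equal weights
# are a reasonable combination, mirroring the "no conflict" case above.
jac = [
    [1.0, 0.0],  # d y1 / d param
    [0.5, 0.5],  # d y2 / d param
]
weights = [0.5, 0.5]  # equal weights
grad = combine_rows(jac, weights)
print(grad)  # [0.75, 0.25]
```

<p>With equal weights this reduces to the mean of the rows; conflict-aware aggregators deviate from equal weights precisely when rows point in opposing directions.</p>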