latest/docs/aggregation/index.html: 1 addition, 2 deletions
@@ -249,8 +249,7 @@
 <span id="aggregation"></span><h1>aggregation<a class="headerlink" href="#module-torchjd.aggregation" title="Link to this heading">¶</a></h1>
 <p>When doing Jacobian descent, the Jacobian matrix has to be aggregated into a vector to store in the
 <code class="docutils literal notranslate"><span class="pre">.grad</span></code> fields of the model parameters. The
-The <a class="reference internal" href="#torchjd.aggregation.Aggregator" title="torchjd.aggregation._aggregator_bases.Aggregator"><code class="xref py py-class docutils literal notranslate"><span class="pre">Aggregator</span></code></a> is responsible for these
-aggregations.</p>
+<a class="reference internal" href="#torchjd.aggregation.Aggregator" title="torchjd.aggregation._aggregator_bases.Aggregator"><code class="xref py py-class docutils literal notranslate"><span class="pre">Aggregator</span></code></a> is responsible for these aggregations.</p>
 <p>When using the <a class="reference internal" href="../autogram/"><span class="doc">autogram</span></a> engine, we rather need to extract a vector
 of weights from the Gramian of the Jacobian. The
 <a class="reference internal" href="#torchjd.aggregation.Weighting" title="torchjd.aggregation._weighting_bases.Weighting"><code class="xref py py-class docutils literal notranslate"><span class="pre">Weighting</span></code></a> is responsible for this.</p>
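The docs text in this hunk describes two related operations: an Aggregator maps the Jacobian matrix (one row per objective, one column per parameter) to a single vector for the `.grad` fields, while a Weighting maps the Gramian of the Jacobian to a vector of row weights. The following is a minimal numpy sketch of that relationship, not the torchjd API itself; the mean aggregation and the uniform weighting are deliberately trivial stand-ins for torchjd's actual aggregators.

```python
import numpy as np

# Hypothetical example Jacobian (not from torchjd): one row per objective,
# one column per model parameter.
jacobian = np.array([[1.0, 2.0, 3.0],
                     [3.0, 0.0, 1.0]])

# Aggregator role: collapse the Jacobian into one vector, the thing that
# would be stored in the parameters' .grad fields. Here: a plain mean.
aggregated = jacobian.mean(axis=0)  # -> [2.0, 1.0, 2.0]

# Weighting role (autogram path): work from the Gramian J @ J.T, which
# holds the pairwise inner products of the objectives' gradients, and
# produce one weight per objective. Here: trivial uniform weights.
gramian = jacobian @ jacobian.T
weights = np.full(gramian.shape[0], 1.0 / gramian.shape[0])

# Applying the weights to the Jacobian's rows recovers the aggregation,
# which is the link between the two abstractions described in the docs.
assert np.allclose(weights @ jacobian, aggregated)
```

The point of the Gramian-based path is that the weights can be computed from the much smaller (objectives × objectives) Gramian without materializing the full Jacobian aggregation directly.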