class torchjd.aggregation.GradDrop(f=<function _identity>, leak=None)
    [source: https://github.com/TorchJD/torchjd/blob/main/src/torchjd/aggregation/_graddrop.py#L14-L91]

    Aggregator that applies the gradient combination steps from GradDrop, as
    defined in lines 10 to 15 of Algorithm 1 of "Just Pick a Sign: Optimizing
    Deep Multitask Models with Gradient Sign Dropout"
    (https://arxiv.org/pdf/2010.06808.pdf).
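The combination step referenced above (lines 10 to 15 of Algorithm 1 in the paper) can be sketched in pure Python. This is an illustrative reimplementation, not torchjd's actual code: for each coordinate it computes the positive-sign purity P from the per-task gradients, draws a uniform sample, keeps only the gradients whose sign matches the sampled side, and (optionally) leaks a fraction of the dropped ones. The function name `graddrop_combine`, the `leak=0.0` default, and the `rng` hook are assumptions for this sketch.

```python
import random

def graddrop_combine(gradients, leak=0.0, rng=None):
    """Sketch of GradDrop's sign-dropout combination (not torchjd's code).

    `gradients` is a list of per-task gradient vectors (lists of floats);
    returns one combined vector of the same dimension.
    """
    rng = rng or random.random
    dim = len(gradients[0])
    combined = []
    for j in range(dim):
        col = [g[j] for g in gradients]
        abs_sum = sum(abs(v) for v in col)
        # Positive-sign purity P in [0, 1]; 0.5 means fully conflicting signs.
        p = 0.5 * (1.0 + (sum(col) / abs_sum if abs_sum > 0 else 0.0))
        keep_positive = p > rng()  # sample which sign survives this step
        total = 0.0
        for v in col:
            if v == 0.0 or (v > 0) == keep_positive:
                total += v  # gradient agrees with the kept sign
            else:
                total += leak * v  # leaked fraction of a dropped gradient
        combined.append(total)
    return combined
```

With a fixed `rng` returning 0.5, two tasks with gradients [1, -1] and [1, 2] combine to [2.0, 2.0]: both coordinates have positive purity above 0.5, so the conflicting -1 in the second coordinate is dropped.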
class torchjd.aggregation.TrimmedMean(trim_number)
    [source: https://github.com/TorchJD/torchjd/blob/main/src/torchjd/aggregation/_trimmed_mean.py#L7-L72]
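A coordinate-wise trimmed mean, which this aggregator's name suggests, can be sketched in pure Python: for each coordinate, sort the values across rows, discard the `trim_number` largest and `trim_number` smallest, and average what remains. This is a hedged illustration of the general technique, not torchjd's implementation; the function name `trimmed_mean` is an assumption for this sketch.

```python
def trimmed_mean(rows, trim_number):
    """Sketch of a coordinate-wise trimmed mean (not torchjd's code).

    `rows` is a list of equal-length vectors (lists of floats); for each
    coordinate, the trim_number smallest and trim_number largest entries
    are discarded before averaging.
    """
    dim = len(rows[0])
    out = []
    for j in range(dim):
        col = sorted(r[j] for r in rows)
        kept = col[trim_number:len(col) - trim_number]  # drop extremes
        out.append(sum(kept) / len(kept))
    return out
```

For example, with four rows and `trim_number=1`, outliers such as 100.0 or -50.0 in a single row are discarded per coordinate before averaging, which is what makes this kind of aggregator robust to extreme values.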