-<dd class="field-odd"><p><strong>weights</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.8)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a></span>) – The weights associated to the rows of the input matrices.</p>
+<dd class="field-odd"><p><strong>weights</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a></span>) – The weights associated to the rows of the input matrices.</p>
 </dd>
 </dl>
 </dd></dl>
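The `weights` parameter described above combines the rows of an input matrix into a single vector. A minimal NumPy sketch of that combination (hypothetical shapes and values for illustration; this is not the library's actual implementation):

```python
import numpy as np

# Hypothetical matrix: m rows (one per objective), n columns (parameters).
matrix = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])  # shape (2, 3)

# Fixed weights associated to the rows, as described for `weights`.
weights = np.array([0.25, 0.75])      # shape (2,)

# The aggregated vector is the weighted combination of the rows: w^T A.
aggregated = weights @ matrix         # shape (3,)
print(aggregated)                     # [3.25 4.25 5.25]
```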
@@ -268,7 +268,7 @@ <h1>Constant<a class="headerlink" href="#constant" title="Link to this heading">
-<dd class="field-odd"><p><strong>weights</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.8)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a></span>) – The weights to return at each call.</p>
+<dd class="field-odd"><p><strong>weights</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a></span>) – The weights to return at each call.</p>
-<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.8)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector used to combine the rows. If not provided, defaults to
+<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector used to combine the rows. If not provided, defaults to
 <span class="math notranslate nohighlight">\(\begin{bmatrix} \frac{1}{m} & \dots & \frac{1}{m} \end{bmatrix}^T \in \mathbb{R}^m\)</span>.</p></li>
 <li><p><strong>norm_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to avoid division by zero when normalizing.</p></li>
 <li><p><strong>reg_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to add to the diagonal of the gramian of the matrix. Due to
@@ -279,7 +279,7 @@ <h1>DualProj<a class="headerlink" href="#dualproj" title="Link to this heading">
-<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.8)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector to use. If not provided, defaults to
+<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector to use. If not provided, defaults to
 <span class="math notranslate nohighlight">\(\begin{bmatrix} \frac{1}{m} & \dots & \frac{1}{m} \end{bmatrix}^T \in \mathbb{R}^m\)</span>.</p></li>
 <li><p><strong>norm_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to avoid division by zero when normalizing.</p></li>
 <li><p><strong>reg_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to add to the diagonal of the gramian of the matrix. Due to
latest/docs/aggregation/graddrop/index.html (+1 -1)
@@ -260,7 +260,7 @@ <h1>GradDrop<a class="headerlink" href="#graddrop" title="Link to this heading">
 <dd class="field-odd"><ul class="simple">
 <li><p><strong>f</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Callable" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Callable</span></code></a></span>) – The function to apply to the Gradient Positive Sign Purity. It should be monotonically
 increasing. Defaults to identity.</p></li>
-<li><p><strong>leak</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.8)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The tensor of leak values, determining how much each row is allowed to leak
+<li><p><strong>leak</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The tensor of leak values, determining how much each row is allowed to leak
 through. Defaults to None, which means no leak.</p></li>
-<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.8)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector used to combine the projected rows. If not provided,
+<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector used to combine the projected rows. If not provided,
 defaults to <span class="math notranslate nohighlight">\(\begin{bmatrix} \frac{1}{m} & \dots & \frac{1}{m} \end{bmatrix}^T \in
 \mathbb{R}^m\)</span>.</p></li>
 <li><p><strong>norm_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to avoid division by zero when normalizing.</p></li>
@@ -280,7 +280,7 @@ <h1>UPGrad<a class="headerlink" href="#upgrad" title="Link to this heading">¶</
-<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.8)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector to use. If not provided, defaults to
+<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector to use. If not provided, defaults to
 <span class="math notranslate nohighlight">\(\begin{bmatrix} \frac{1}{m} & \dots & \frac{1}{m} \end{bmatrix}^T \in \mathbb{R}^m\)</span>.</p></li>
 <li><p><strong>norm_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to avoid division by zero when normalizing.</p></li>
 <li><p><strong>reg_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to add to the diagonal of the gramian of the matrix. Due to
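Several of these aggregators share the same defaults: a uniform preference vector \([1/m \dots 1/m]^T\) and a `reg_eps` added to the diagonal of the Gramian for numerical stability. A hedged NumPy illustration of those two pieces (assumed shapes and a generic matrix; this is not the library's own code):

```python
import numpy as np

m, n = 3, 4
rng = np.random.default_rng(0)
J = rng.standard_normal((m, n))  # hypothetical m x n matrix of rows to combine

# Default preference vector: uniform over the m rows, summing to 1.
pref_vector = np.full(m, 1.0 / m)
assert np.isclose(pref_vector.sum(), 1.0)

# Gramian of the matrix, with `reg_eps` added to its diagonal as described.
reg_eps = 1e-4
gramian = J @ J.T + reg_eps * np.eye(m)

# The regularization keeps the (positive semi-definite) Gramian
# safely positive definite, so downstream solves are well-posed.
eigvals = np.linalg.eigvalsh(gramian)
print(eigvals.min() > 0)  # True
```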