-<dd class="field-odd"><p><strong>weights</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a></span>) – The weights associated to the rows of the input matrices.</p>
+<dd class="field-odd"><p><strong>weights</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a></span>) – The weights associated to the rows of the input matrices.</p>
 </dd>
 </dl>
 </dd></dl>
@@ -311,7 +311,7 @@ <h1>Constant<a class="headerlink" href="#constant" title="Link to this heading">
-<dd class="field-odd"><p><strong>weights</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a></span>) – The weights to return at each call.</p>
+<dd class="field-odd"><p><strong>weights</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a></span>) – The weights to return at each call.</p>
-<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector used to combine the rows. If not provided, defaults to
+<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector used to combine the rows. If not provided, defaults to
 <span class="math notranslate nohighlight">\(\begin{bmatrix} \frac{1}{m} & \dots & \frac{1}{m} \end{bmatrix}^T \in \mathbb{R}^m\)</span>.</p></li>
 <li><p><strong>norm_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to avoid division by zero when normalizing.</p></li>
 <li><p><strong>reg_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to add to the diagonal of the gramian of the matrix. Due to
@@ -322,7 +322,7 @@ <h1>DualProj<a class="headerlink" href="#dualproj" title="Link to this heading">
-<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector to use. If not provided, defaults to
+<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector to use. If not provided, defaults to
 <span class="math notranslate nohighlight">\(\begin{bmatrix} \frac{1}{m} & \dots & \frac{1}{m} \end{bmatrix}^T \in \mathbb{R}^m\)</span>.</p></li>
 <li><p><strong>norm_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to avoid division by zero when normalizing.</p></li>
 <li><p><strong>reg_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to add to the diagonal of the gramian of the matrix. Due to
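The `pref_vector` default and the `reg_eps` regularization documented above can be sketched in plain Python. This is an illustrative sketch only, not the library's code: `default_pref_vector` and `regularized_gramian` are hypothetical names, shown to make the documented defaults concrete (a uniform vector of 1/m, and the gramian J·Jᵀ with a small value added to its diagonal).

```python
# Illustrative sketch of the documented defaults (hypothetical helper names,
# not the library's actual API).

def default_pref_vector(m):
    """The documented default preference vector [1/m, ..., 1/m]^T in R^m."""
    return [1.0 / m for _ in range(m)]

def regularized_gramian(matrix, reg_eps=1e-4):
    """Gramian J @ J^T of a list-of-rows matrix, with reg_eps added to the
    diagonal to keep it safely positive definite."""
    m = len(matrix)
    gram = [
        [sum(a * b for a, b in zip(matrix[i], matrix[j])) for j in range(m)]
        for i in range(m)
    ]
    for i in range(m):
        gram[i][i] += reg_eps
    return gram

print(default_pref_vector(4))  # [0.25, 0.25, 0.25, 0.25]
```

The uniform default weights every row of the input matrix equally; a non-uniform `pref_vector` biases the combination toward the corresponding rows.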
latest/docs/aggregation/graddrop/index.html (1 addition, 1 deletion)
@@ -303,7 +303,7 @@ <h1>GradDrop<a class="headerlink" href="#graddrop" title="Link to this heading">
 <dd class="field-odd"><ul class="simple">
 <li><p><strong>f</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Callable" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Callable</span></code></a></span>) – The function to apply to the Gradient Positive Sign Purity. It should be monotonically
 increasing. Defaults to identity.</p></li>
-<li><p><strong>leak</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The tensor of leak values, determining how much each row is allowed to leak
+<li><p><strong>leak</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The tensor of leak values, determining how much each row is allowed to leak
 through. Defaults to None, which means no leak.</p></li>
-<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector used to combine the projected rows. If not provided,
+<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector used to combine the projected rows. If not provided,
 defaults to <span class="math notranslate nohighlight">\(\begin{bmatrix} \frac{1}{m} & \dots & \frac{1}{m} \end{bmatrix}^T \in
 \mathbb{R}^m\)</span>.</p></li>
 <li><p><strong>norm_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to avoid division by zero when normalizing.</p></li>
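The `f` parameter of GradDrop above is applied to the Gradient Positive Sign Purity. Assuming the definition from the GradDrop paper (Chen et al., 2020), the purity of a set of per-task gradient values is P = ½(1 + Σᵢ gᵢ / Σᵢ |gᵢ|); a minimal sketch (hypothetical function name, not the library's API):

```python
# Illustrative sketch of the Gradient Positive Sign Purity, assuming the
# definition from the GradDrop paper; `sign_purity` is a hypothetical name.

def sign_purity(grads):
    """P = 0.5 * (1 + sum(g) / sum(|g|)) for one parameter's per-task
    gradient values. P = 1 when all gradients are positive, P = 0 when all
    are negative, and P = 0.5 when they conflict perfectly."""
    denom = sum(abs(g) for g in grads)
    if denom == 0.0:
        return 0.5  # no gradient signal at all: treat as neutral
    return 0.5 * (1.0 + sum(grads) / denom)

print(sign_purity([1.0, 2.0]))    # 1.0  (all tasks agree, positive)
print(sign_purity([-1.0, -3.0]))  # 0.0  (all tasks agree, negative)
print(sign_purity([2.0, -2.0]))   # 0.5  (fully conflicting)
```

A monotonically increasing `f` reshapes this purity before it is compared against a random threshold to decide which gradient signs are kept, and `leak` lets a chosen fraction of the masked values pass through anyway.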
@@ -323,7 +323,7 @@ <h1>UPGrad<a class="headerlink" href="#upgrad" title="Link to this heading">¶</
-<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.9)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector to use. If not provided, defaults to
+<li><p><strong>pref_vector</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a> | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The preference vector to use. If not provided, defaults to
 <span class="math notranslate nohighlight">\(\begin{bmatrix} \frac{1}{m} & \dots & \frac{1}{m} \end{bmatrix}^T \in \mathbb{R}^m\)</span>.</p></li>
 <li><p><strong>norm_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to avoid division by zero when normalizing.</p></li>
 <li><p><strong>reg_eps</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a></span>) – A small value to add to the diagonal of the gramian of the matrix. Due to