
Commit 76941a1

test: Use sphinx.ext.doctest for doctesting (#625)
* Fix output errors in documentation examples
* Add sphinx.ext.doctest extension
* Add emphasize-lines to testcode. This also adds the docutils doc dependency.
* Remove pytest doc tests
* Add testcode directives for doctests
* Update instructions on doc testing
* Move lightning from test to doc deps
* Update checks so that the build-doc job runs doctest
1 parent 4f15a2e commit 76941a1

24 files changed (+124, -661 lines)

.github/workflows/checks.yml

Lines changed: 7 additions & 3 deletions

```diff
@@ -54,8 +54,8 @@ jobs:
           options: ${{ matrix.options || 'full' }}
           groups: test ${{ matrix.extra_groups }}

-      - name: Run tests
-        run: uv run pytest -W error tests/unit tests/doc --cov=src --cov-report=xml
+      - name: Run unit tests
+        run: uv run pytest -W error tests/unit --cov=src --cov-report=xml
         env:
           PYTEST_TORCH_DTYPE: ${{ matrix.dtype || 'float32' }}

@@ -65,7 +65,7 @@ jobs:
           token: ${{ secrets.CODECOV_TOKEN }}

   build-doc:
-    name: Build documentation
+    name: Build and test documentation
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repository
@@ -84,6 +84,10 @@ jobs:
         working-directory: docs
         run: uv run make dirhtml

+      - name: Test Documentation
+        working-directory: docs
+        run: uv run make doctest
+
   check-links:
     name: Link correctness
     runs-on: ubuntu-latest
```

AGENTS.md

Lines changed: 0 additions & 3 deletions

```diff
@@ -6,9 +6,6 @@
 - We use uv for everything (e.g. we do `uv run python ...` to run some python code, and
   `uv run pytest tests/unit` to run unit tests). Please prefer `uv run python -c ...` over
   `python3 -c ...`
-- When you create or modify a code example in a public docstring, always update the corresponding
-  doc test in the appropriate file of `tests/doc`. This also applies to any change in an example of
-  a `.rst` file, that must be updated in the corresponding test in `tests/doc/test_rst.py`.
 - After generating code, please run `uv run ty check`, `uv run ruff check` and `uv run ruff format`.
   Fix any error.
 - After changing anything in `src` or in `tests/unit` or `tests/doc`, please identify the affected
```

CONTRIBUTING.md

Lines changed: 2 additions & 3 deletions

````diff
@@ -99,10 +99,9 @@ uv run pre-commit install
   CUBLAS_WORKSPACE_CONFIG=:4096:8 PYTEST_TORCH_DEVICE=cuda:0 uv run pytest tests/unit
   ```

-- To check that the usage examples from docstrings and `.rst` files are correct, we test their
-  behavior in `tests/doc`. To run these tests, do:
+- To check that the usage examples from docstrings and `.rst` files are correct, run:
   ```bash
-  uv run pytest tests/doc
+  uv run make doctest -C docs
   ```

 - To compute the code coverage locally, you should run the unit tests and the doc tests together,
````

docs/source/conf.py

Lines changed: 41 additions & 0 deletions

```diff
@@ -6,8 +6,14 @@
 import inspect
 import os
 import sys
+from typing import ClassVar

 import tomli
+from docutils.parsers.rst import directives
+from sphinx.application import Sphinx
+from sphinx.directives.code import parse_line_num_spec
+from sphinx.ext.doctest import TestcodeDirective
+from sphinx.util.typing import OptionSpec

 # -- Project information -----------------------------------------------------
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
@@ -40,6 +46,7 @@
     "sphinx.ext.intersphinx",
     "myst_parser",  # Enables markdown support
     "sphinx_design",  # Enables side to side cards
+    "sphinx.ext.doctest",
 ]

 # -- Options for HTML output -------------------------------------------------
@@ -134,3 +141,37 @@ def _get_version_str() -> str:
     except KeyError:
         version_str = "main"
     return version_str
+
+
+class _TestcodeWithEmphasisDirective(TestcodeDirective):
+    """
+    Extension of ``.. testcode::`` that additionally supports ``:emphasize-lines:``.
+
+    Sphinx's built-in ``.. testcode::`` directive does not support ``:emphasize-lines:``. This
+    subclass adds that option and forwards it as ``highlight_args['hl_lines']`` on the resulting
+    node, which is the same mechanism used by ``.. code-block::``.
+
+    Ideally, this should be integrated into sphinx.ext.doctest as part of a solution to
+    https://github.com/sphinx-doc/sphinx/issues/6915 and
+    https://github.com/sphinx-doc/sphinx/issues/6858.
+    """
+
+    option_spec: ClassVar[OptionSpec] = {
+        **TestcodeDirective.option_spec,
+        "emphasize-lines": directives.unchanged_required,
+    }
+
+    def run(self) -> list:
+        result = super().run()
+        linespec = self.options.get("emphasize-lines")
+        if linespec and result:
+            node = result[0]
+            nlines = len(self.content)
+            hl_lines = parse_line_num_spec(linespec, nlines)
+            hl_lines = [x + 1 for x in hl_lines if x < nlines]
+            node["highlight_args"] = {"hl_lines": hl_lines}
+        return result
+
+
+def setup(app: Sphinx) -> None:
+    app.add_directive("testcode", _TestcodeWithEmphasisDirective, override=True)
```
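The directive's ``emphasize-lines`` handling converts between two conventions: ``parse_line_num_spec`` yields zero-based line indices, while the highlighter's ``hl_lines`` expects one-based line numbers, hence the ``x + 1`` shift with an out-of-range filter. A minimal standalone sketch of that conversion (the parser here is a simplified illustrative re-implementation, not the actual Sphinx helper):

```python
def parse_line_num_spec(spec: str, total: int) -> list[int]:
    """Parse a spec like '2, 5-7' into zero-based indices (simplified sketch)."""
    indices: list[int] = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-")
            # '5-7' covers one-based lines 5..7, i.e. zero-based 4..6.
            indices.extend(range(int(start) - 1, int(end)))
        else:
            indices.append(int(part) - 1)
    return indices


nlines = 10  # number of lines in the directive body
zero_based = parse_line_num_spec("2, 5-7", nlines)
# Shift back to one-based numbers and drop anything past the last line,
# mirroring the `[x + 1 for x in hl_lines if x < nlines]` step above.
hl_lines = [x + 1 for x in zero_based if x < nlines]
print(hl_lines)  # [2, 5, 6, 7]
```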

docs/source/examples/amp.rst

Lines changed: 1 addition & 1 deletion

```diff
@@ -11,7 +11,7 @@ case, the losses) should preferably be scaled with a `GradScaler
 <https://pytorch.org/docs/stable/amp.html#gradient-scaling>`_ to avoid gradient underflow. The
 following example shows the resulting code for a multi-task learning use-case.

-.. code-block:: python
+.. testcode::
     :emphasize-lines: 2, 17, 27, 34-35, 37-38

     import torch
```

docs/source/examples/basic_usage.rst

Lines changed: 8 additions & 8 deletions

```diff
@@ -12,7 +12,7 @@ the parameters are updated using the resulting aggregation.

 Import several classes from ``torch`` and ``torchjd``:

-.. code-block:: python
+.. testcode::

     import torch
     from torch.nn import Linear, MSELoss, ReLU, Sequential
@@ -24,14 +24,14 @@ Import several classes from ``torch`` and ``torchjd``:

 Define the model and the optimizer, as usual:

-.. code-block:: python
+.. testcode::

     model = Sequential(Linear(10, 5), ReLU(), Linear(5, 2))
     optimizer = SGD(model.parameters(), lr=0.1)

 Define the aggregator that will be used to combine the Jacobian matrix:

-.. code-block:: python
+.. testcode::

     aggregator = UPGrad()

@@ -41,7 +41,7 @@ negatively affected by the update.

 Now that everything is defined, we can train the model. Define the input and the associated target:

-.. code-block:: python
+.. testcode::

     input = torch.randn(16, 10)  # Batch of 16 random input vectors of length 10
     target1 = torch.randn(16)  # First batch of 16 targets
@@ -51,7 +51,7 @@ Here, we generate fake inputs and labels for the sake of the example.

 We can now compute the losses associated to each element of the batch.

-.. code-block:: python
+.. testcode::

     loss_fn = MSELoss()
     output = model(input)
@@ -62,7 +62,7 @@ The last steps are similar to gradient descent-based optimization, but using the

 Perform the Jacobian descent backward pass:

-.. code-block:: python
+.. testcode::

     autojac.backward([loss1, loss2])
     jac_to_grad(model.parameters(), aggregator)
@@ -73,14 +73,14 @@ field of the parameters. It also deletes the ``.jac`` fields save some memory.

 Update each parameter based on its ``.grad`` field, using the ``optimizer``:

-.. code-block:: python
+.. testcode::

     optimizer.step()

 The model's parameters have been updated!

 As usual, you should now reset the ``.grad`` field of each model parameter:

-.. code-block:: python
+.. testcode::

     optimizer.zero_grad()
```

docs/source/examples/iwmtl.rst

Lines changed: 1 addition & 1 deletion

```diff
@@ -9,7 +9,7 @@ this Gramian to reweight the gradients and resolve conflict entirely.

 The following example shows how to do that.

-.. code-block:: python
+.. testcode::
     :emphasize-lines: 5-6, 18-20, 31-32, 34-35, 37-38, 40-41

     import torch
```

docs/source/examples/iwrm.rst

Lines changed: 3 additions & 3 deletions

```diff
@@ -41,7 +41,7 @@ batch of data. When minimizing per-instance losses (IWRM), we use either autojac
 .. tab-set::
     .. tab-item:: autograd (baseline)

-        .. code-block:: python
+        .. testcode::

             import torch
             from torch.nn import Linear, MSELoss, ReLU, Sequential
@@ -75,7 +75,7 @@ batch of data. When minimizing per-instance losses (IWRM), we use either autojac

     .. tab-item:: autojac

-        .. code-block:: python
+        .. testcode::
             :emphasize-lines: 5-6, 12, 16, 21-23

             import torch
@@ -110,7 +110,7 @@ batch of data. When minimizing per-instance losses (IWRM), we use either autojac

     .. tab-item:: autogram (recommended)

-        .. code-block:: python
+        .. testcode::
             :emphasize-lines: 5-6, 12, 16-17, 21-24

             import torch
```

docs/source/examples/lightning_integration.rst

Lines changed: 12 additions & 1 deletion

```diff
@@ -10,7 +10,18 @@ The following code example demonstrates a basic multi-task learning setup using
 :class:`~lightning.pytorch.core.LightningModule` that will call :doc:`mtl_backward
 <../docs/autojac/mtl_backward>` at each training iteration.

-.. code-block:: python
+.. testsetup::
+
+    import warnings
+    import logging
+    from lightning.fabric.utilities.warnings import PossibleUserWarning
+
+    logging.disable(logging.INFO)
+    warnings.filterwarnings("ignore", category=DeprecationWarning)
+    warnings.filterwarnings("ignore", category=FutureWarning)
+    warnings.filterwarnings("ignore", category=PossibleUserWarning)
+
+.. testcode::
     :emphasize-lines: 9-10, 18, 31-32

     import torch
```

docs/source/examples/monitoring.rst

Lines changed: 25 additions & 1 deletion

```diff
@@ -14,7 +14,12 @@ Jacobian descent is doing something different than gradient descent. With
 :doc:`UPGrad <../docs/aggregation/upgrad>`, this happens when the original gradients conflict (i.e.
 they have a negative inner product).

-.. code-block:: python
+.. testsetup::
+
+    import torch
+    torch.manual_seed(0)
+
+.. testcode::
     :emphasize-lines: 9-11, 13-18, 33-34

     import torch
@@ -67,3 +72,22 @@ they have a negative inner product.
     jac_to_grad(shared_module.parameters(), aggregator)
     optimizer.step()
     optimizer.zero_grad()
+
+.. testoutput::
+
+    Weights: tensor([0.5000, 0.5000])
+    Cosine similarity: 1.0000
+    Weights: tensor([0.5000, 0.5000])
+    Cosine similarity: 1.0000
+    Weights: tensor([0.5000, 0.5000])
+    Cosine similarity: 1.0000
+    Weights: tensor([0.6618, 1.0554])
+    Cosine similarity: 0.9249
+    Weights: tensor([0.6569, 1.2146])
+    Cosine similarity: 0.8661
+    Weights: tensor([0.5004, 0.5060])
+    Cosine similarity: 1.0000
+    Weights: tensor([0.5000, 0.5000])
+    Cosine similarity: 1.0000
+    Weights: tensor([0.5746, 1.1607])
+    Cosine similarity: 0.9301
```
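The ``.. testsetup::`` block added here pins the RNG seed so that the literal values in ``.. testoutput::`` stay stable across doctest runs. The same principle in a minimal standalone sketch, using Python's ``random`` module in place of torch for illustration (``noisy_weights`` is a hypothetical stand-in for the example's training loop):

```python
import random


def noisy_weights(seed: int) -> list[float]:
    """Return two pseudo-random 'weights'; identical seeds give identical values."""
    rng = random.Random(seed)
    return [round(rng.random(), 4) for _ in range(2)]


# With a fixed seed, repeated runs reproduce the same values, so a doctest's
# expected-output block remains valid run after run.
assert noisy_weights(0) == noisy_weights(0)
print("reproducible:", noisy_weights(0) == noisy_weights(0))  # reproducible: True
```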
