
Commit bf8abb6

XAheli and nil-is-all authored
[DOC] Fix outdated version-pinned doc URLs (#19325)
### Summary

Replace version-pinned GitHub blob/tree URLs (specific commit hashes and the release/0.4, release/0.6, release/1.0, and release/1.2 branches) with `/main/` references across 10 doc files. Update the `docs.pytorch.org/executorch/0.4/` URL to use `/stable/`. Update all line number anchors to match the current `main` branch.

Fixes #19257

### Test plan

- All 29 updated URLs verified to return HTTP 200 via `curl`
- All file paths confirmed to exist on `main` locally
- All 21 line number references verified to point to the correct code (class/function definitions)
- `lintrunner -a` passes with no lint issues

cc @mergennachin @AlannaBurke @digantdesai @freddan80 @per @zingo @oscarandersson8218 @mansnils @Sebastian-Larsson @robell

Co-authored-by: Nikhil Viswanath Sivakumar <68182521+nil-is-all@users.noreply.github.com>
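The audit this commit performs by hand can also be sketched mechanically. The following is a minimal, hypothetical helper (not part of the ExecuTorch tooling) that scans markdown for GitHub blob/tree URLs and flags the version-pinned refs this commit targets, i.e. 40-character commit hashes and `release/X.Y` branches:

```python
import re

# Captures the git ref segment of a GitHub blob/tree URL:
# a release/X.Y branch, a 40-char commit hash, or anything else (e.g. "main").
GITHUB_REF = re.compile(
    r"https://github\.com/[^/\s]+/[^/\s]+/(?:blob|tree)/"
    r"(release/\d+\.\d+|[0-9a-f]{40}|[^/\s)#]+)"
)

def find_pinned_refs(markdown: str) -> list[str]:
    """Return refs that pin docs to a commit hash or a release/X.Y branch."""
    refs = GITHUB_REF.findall(markdown)
    return [r for r in refs
            if r.startswith("release/") or re.fullmatch(r"[0-9a-f]{40}", r)]
```

URLs already pointing at `/main/` pass through the filter untouched, so running this over a docs tree surfaces only the links that still need updating.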
1 parent 226c1c5 commit bf8abb6

10 files changed

Lines changed: 30 additions & 30 deletions

docs/source/api-life-cycle.md

Lines changed: 3 additions & 3 deletions
@@ -104,7 +104,7 @@ decorator.

 Use <code>.. warning::</code> in the docstrings of deprecated and experimental
 APIs. See
-<a href="https://github.com/pytorch/pytorch/blob/cd8bbdc71a0258292381a7d54c8b353988d02ff4/torch/nn/utils/stateless.py#L170">example
+<a href="https://github.com/pytorch/pytorch/blob/main/torch/nn/utils/stateless.py#L176">example
 usage</a>.

 </ul>
@@ -115,7 +115,7 @@ usage</a>.
 </td>
 <td>

-Use the <code>ET_DEPRECATED</code> annotation macro. See <a href="https://github.com/pytorch/executorch/blob/8e0f856ee269b319ac4195509cf31e3f548aa0e8/runtime/executor/program.h#L81">example usage</a>.
+Use the <code>ET_DEPRECATED</code> annotation macro. See <a href="https://github.com/pytorch/executorch/blob/main/runtime/executor/program.h#L92">example usage</a>.

 <p>
 <p>
@@ -125,7 +125,7 @@ Use the <code>ET_EXPERIMENTAL</code> annotation macro.
 <td>

 Start Doxygen comments with <code>DEPRECATED:</code> See
-<a href="https://github.com/pytorch/executorch/blob/9d859653ae916d0a72f6b2b5c5925bed38832140/runtime/executor/program.h#L139">example
+<a href="https://github.com/pytorch/executorch/blob/main/runtime/executor/program.h#L164">example
 usage</a>.

 <p>

docs/source/backends-qualcomm.md

Lines changed: 2 additions & 2 deletions
@@ -608,7 +608,7 @@ Supports:
 For details, see: backends/qualcomm/quantizer/quantizer.py

 ### Operator Support
-[The full operator support matrix](https://github.com/pytorch/executorch/tree/f32cdc3de6f7176d70a80228f1a60bcd45d93437/backends/qualcomm/builders#operator-support-status) is tracked and frequently updated in the ExecuTorch repository.
+[The full operator support matrix](https://github.com/pytorch/executorch/tree/main/backends/qualcomm/builders#operator-support-status) is tracked and frequently updated in the ExecuTorch repository.

 It lists:
 - Supported PyTorch ops (aten.*, custom ops)
@@ -633,4 +633,4 @@ If you encounter any issues while reproducing the tutorial, please file a github
 [issue](https://github.com/pytorch/executorch/issues) on ExecuTorch repo and tag use `#qcom_aisw` tag

 ### Debugging tips
-- Before trying any complicated models, try out [a simple model example](https://github.com/pytorch/executorch/tree/f32cdc3de6f7176d70a80228f1a60bcd45d93437/examples/qualcomm#simple-examples-to-verify-the-backend-is-working) and see it if works one device.
+- Before trying any complicated models, try out [a simple model example](https://github.com/pytorch/executorch/tree/main/examples/qualcomm#simple-examples-to-verify-the-backend-is-working) and see if it works on your device.

docs/source/backends/arm-ethos-u/arm-ethos-u-troubleshooting.md

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ You can see how this coupling between the memory mode and runtime application i

 ## Using Bundled.io and ETdump

-The arm_executor_runner supports [bundled-io](https://docs.pytorch.org/executorch/0.4/bundled-io.html) and [ETdump](https://docs.pytorch.org/executorch/stable/etdump.html) debugging tools.
+The arm_executor_runner supports [bundled-io](https://docs.pytorch.org/executorch/stable/bundled-io.html) and [ETdump](https://docs.pytorch.org/executorch/stable/etdump.html) debugging tools.

 To enable bundled-io, set `-DEXECUTORCH_BUILD_DEVTOOLS=ON` when building Executorch and `-DET_BUNDLE_IO=ON` when building the executor_runner. To enable ETdump, set `-DEXECUTORCH_BUILD_ARM_ETDUMP=ON` when building Executorch and `-DEXECUTORCH_ENABLE_EVENT_TRACER=ON` when building the executor_runner.

docs/source/backends/nxp/nxp-overview.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ $ ./examples/nxp/setup.sh

 To test the eIQ Neutron Backend, both AoT flow for model preparation and Runtime for execution, refer to the [Getting started with eIQ Neutron NPU ExecuTorch backend](tutorials/nxp-basic-tutorial.md)

-For a quick overview how to convert a custom PyTorch model, take a look at our [example python script](https://github.com/pytorch/executorch/tree/release/1.0/examples/nxp/aot_neutron_compile.py).
+For a quick overview how to convert a custom PyTorch model, take a look at our [example python script](https://github.com/pytorch/executorch/tree/main/examples/nxp/aot_neutron_compile.py).


 ## Runtime Integration

docs/source/backends/nxp/nxp-partitioner.rst

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@ Following fields can be set:
 Custom Delegation Options
 -------------------------
 By default the Neutron backend is defensive, what means it does not delegate operators which cannot be decided statically during partitioning. But as the model author you typically have insight into the model and so you can allow opportunistic delegation for some cases. For list of options, see
-`CustomDelegationOptions <https://github.com/pytorch/executorch/blob/release/1.2/backends/nxp/backend/custom_delegation_options.py#L11>`_
+`CustomDelegationOptions <https://github.com/pytorch/executorch/blob/main/backends/nxp/backend/custom_delegation_options.py#L11>`_

 ================
 Operator Support
@@ -37,7 +37,7 @@ Operator Support
 Operators are the building blocks of the ML model. See `IRs <https://docs.pytorch.org/docs/stable/torch.compiler_ir.html>`_ for more information on the PyTorch operator set.

 This section lists the Edge operators supported by the Neutron backend.
-For detailed constraints of the operators see the conditions in the ``is_supported_*`` functions in the `Node converters <https://github.com/pytorch/executorch/blob/release/1.2/backends/nxp/neutron_partitioner.py#L202>`_
+For detailed constraints of the operators see the ``is_supported`` / ``_is_supported_in_IR`` / ``_is_supported_on_target`` checks in the `Node converters <https://github.com/pytorch/executorch/blob/main/backends/nxp/backend/ir/converter/node_converter.py#L118>`_


 .. csv-table:: Operator Support

docs/source/backends/xnnpack/xnnpack-partitioner.rst

Lines changed: 3 additions & 3 deletions
@@ -2,10 +2,10 @@
 Partitioner API
 ===============

-The XNNPACK partitioner API allows for configuration of the model delegation to XNNPACK. Passing an ``XnnpackPartitioner`` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use-case. For advanced use cases, the partitioner exposes the following options via the `constructor <https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/xnnpack_partitioner.py#L31>`_:
+The XNNPACK partitioner API allows for configuration of the model delegation to XNNPACK. Passing an ``XnnpackPartitioner`` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use-case. For advanced use cases, the partitioner exposes the following options via the `constructor <https://github.com/pytorch/executorch/blob/main/backends/xnnpack/partition/xnnpack_partitioner.py#L31>`_:

-- ``configs``: Control which operators are delegated to XNNPACK. By default, all available operators all delegated. See `../config/__init__.py <https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/__init__.py#L66>`_ for an exhaustive list of available operator configs.
-- ``config_precisions``: Filter operators by data type. By default, delegate all precisions. One or more of ``ConfigPrecisionType.FP32``, ``ConfigPrecisionType.STATIC_QUANT``, or ``ConfigPrecisionType.DYNAMIC_QUANT``. See `ConfigPrecisionType <https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/xnnpack_config.py#L24>`_.
+- ``configs``: Control which operators are delegated to XNNPACK. By default, all available operators are delegated. See `../config/__init__.py <https://github.com/pytorch/executorch/blob/main/backends/xnnpack/partition/config/__init__.py#L76>`_ for an exhaustive list of available operator configs.
+- ``config_precisions``: Filter operators by data type. By default, delegate all precisions. One or more of ``ConfigPrecisionType.FP32``, ``ConfigPrecisionType.STATIC_QUANT``, or ``ConfigPrecisionType.DYNAMIC_QUANT``. See `ConfigPrecisionType <https://github.com/pytorch/executorch/blob/main/backends/xnnpack/partition/config/xnnpack_config.py#L30>`_.
 - ``per_op_mode``: If true, emit individual delegate calls for every operator. This is an advanced option intended to reduce memory overhead in some contexts at the cost of a small amount of runtime overhead. Defaults to false.
 - ``verbose``: If true, print additional information during lowering.

docs/source/bundled-io.md

Lines changed: 5 additions & 5 deletions
@@ -199,17 +199,17 @@ This stage mainly focuses on executing the model with the bundled inputs and com

 ### Get ExecuTorch Program Pointer from `BundledProgram` Buffer
 We need the pointer to ExecuTorch program to do the execution. To unify the process of loading and executing `BundledProgram` and Program flatbuffer, we create an API for this
-`executorch::bundled_program::get_program_data`. Check out an [example usage](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp#L128-L137) of this API.
+`executorch::bundled_program::get_program_data`. Check out an [example usage](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp#L128-L137) of this API.

 ### Load Bundled Input to Method
-To execute the program on the bundled input, we need to load the bundled input into the method. Here we provided an API called `executorch::bundled_program::load_bundled_input`. Check out an [example usage](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp#L253-L259) of this API.
+To execute the program on the bundled input, we need to load the bundled input into the method. Here we provided an API called `executorch::bundled_program::load_bundled_input`. Check out an [example usage](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp#L253-L259) of this API.

 ### Verify the Method's Output.
-We call `executorch::bundled_program::verify_method_outputs` to verify the method's output with bundled expected outputs. Check out an [example usage](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp#L301-L307) of this API.
+We call `executorch::bundled_program::verify_method_outputs` to verify the method's output with bundled expected outputs. Check out an [example usage](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp#L301-L307) of this API.

 ### Runtime Example

-Please checkout our [example runner](https://github.com/pytorch/executorch/blob/release/0.6/examples/devtools/README.md#bundledprogram) for a bundled program. You could run these commands to test with the BundledProgram binary (`.bpte`) file you generated in the previous step:
+Please check out our [example runner](https://github.com/pytorch/executorch/blob/main/examples/devtools/README.md#bundledprogram) for a bundled program. You could run these commands to test with the BundledProgram binary (`.bpte`) file you generated in the previous step:

 ```bash
 cd executorch
@@ -218,7 +218,7 @@ cd executorch
 ```

 It is expected to see no output from running the above mentioned snippet.
-For a detailed example of how the runner should be like, please refer to our [example runner](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp).
+For a detailed example of how the runner should be like, please refer to our [example runner](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp).

 ### Try the Complete Workflow

docs/source/compiler-custom-compiler-passes.md

Lines changed: 11 additions & 11 deletions
@@ -25,7 +25,7 @@ Our projection on the frequency of these use cases are:

 For level 1 uses cases (creating one-to-X mappings, performing forwards iterations,
 and looking at local node information), we can utilize a helper class called
-[`ExportPass`](https://github.com/pytorch/executorch/blob/d9eef24bb720804aa7b400b05241487510ae0dc2/exir/pass_base.py#L44).
+[`ExportPass`](https://github.com/pytorch/executorch/blob/main/exir/pass_base.py#L655).
 This is an
 [interpreter-based](https://pytorch.org/docs/stable/fx.html#the-interpreter-pattern)
 way where we execute each node and recreate the graph except with
@@ -35,7 +35,7 @@ metadata such as stack trace, FakeTensor values, and torch.nn.Module hierarchy
 are preserved and updated depending on the transformations made.

 To implement this pass, we can create a subclass of
-[`ExportPass`](https://github.com/pytorch/executorch/blob/d9eef24bb720804aa7b400b05241487510ae0dc2/exir/pass_base.py#L44)
+[`ExportPass`](https://github.com/pytorch/executorch/blob/main/exir/pass_base.py#L655)
 and implement the exposed functions. When called with a graph module, it will
 run the graph module and create a new graph containing the changes specified by
 the pass. This means that the graph module passed in must be runnable on CPU,
@@ -171,7 +171,7 @@ class ScalarToTensorPass(ExportPass):
 ### Level 2

 For creating many-to-one mappings, we can utilize FX's [subgraph
-rewriter](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/subgraph_rewriter.py#L77).
+rewriter](https://github.com/pytorch/pytorch/blob/main/torch/fx/subgraph_rewriter.py#L96).
 Given a `pattern`, it creates a subgraph of operators matching to the pattern,
 and then replaces each matched subgraph with the `replacement`.

@@ -229,7 +229,7 @@ class ReplacedPatterns:
 ### Level 3

 For the third way of creating a pass, we can utilize the most basic
-[`PassBase`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/infra/pass_base.py#L22).
+[`PassBase`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/infra/pass_base.py#L28).
 To create a pass, we can subclass this and implement the function `call` with
 the pass contents. Additionally, we can implement the functions `requires` and
 `ensures` which will be called before and after the function `call`. Note that
@@ -315,7 +315,7 @@ with IR Spec, so be careful when using them.

 For finding subgraphs within a graph that match a specific pattern, we can
 utilize FX's
-[`SubgraphMatcher`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/utils/matcher_utils.py#L51).
+[`SubgraphMatcher`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/utils/matcher_utils.py#L63).

 Class Attributes:

@@ -382,7 +382,7 @@ class InternalMatch():

 To find the largest subgraphs of nodes that support a specific invariant, we can
 utilize FX's
-[`CapabilityBasedPartitioner`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/infra/partitioner.py#L34C1-L34C1).
+[`CapabilityBasedPartitioner`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/infra/partitioner.py#L65).

 Class Attributes

@@ -399,14 +399,14 @@ Class Attributes
 that are allowed to be in a single node partition.

 The
-[`OperatorSupportBase`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/operator_support.py#L28)
+[`OperatorSupportBase`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/operator_support.py#L37)
 class is used by
 the partitioner to determine if a specific node in the graph belongs in the
 partition. This is done by overriding the `is_node_supported` function. You can
-chain multiple `OperatorSuppportBase` by using
-[`chain`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/operator_support.py#L150)(which
+chain multiple `OperatorSupportBase` by using
+[`chain`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/operator_support.py#L159)(which
 returns False if any of the OperatorSupportBase return False) and
-[`any_chain`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/operator_support.py#L164)
+[`any_chain`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/operator_support.py#L172)
 (which returns True if any of the OperatorSupportBase returns True).

 Consider the following example:
@@ -440,7 +440,7 @@ not allow `call_module` nodes.
 ### Combined

 We also provide a combined helper function:
-[`generate_pattern_op_partitions`](https://github.com/pytorch/executorch/blob/d9eef24bb720804aa7b400b05241487510ae0dc2/exir/backend/canonical_partitioners/pattern_op_partitioner.py#L59)
+[`generate_pattern_op_partitions`](https://github.com/pytorch/executorch/blob/main/exir/backend/canonical_partitioners/pattern_op_partitioner.py#L107)

 Args:
 * `graph_module (fx.GraphModule)`: Module that we want to partition

docs/source/compiler-memory-planning.md

Lines changed: 1 addition & 1 deletion
@@ -82,7 +82,7 @@ program = edge_program.to_executorch(
 )
 ```

-Users attempting to write a custom memory planning algorithm should start by looking at [the greedy algorithm's implementation](https://github.com/pytorch/executorch/blob/d62c41ca86435e5316e7ed292b6d68aff27a2fb7/exir/memory_planning.py#L459C1-L459C12).
+Users attempting to write a custom memory planning algorithm should start by looking at [the greedy algorithm's implementation](https://github.com/pytorch/executorch/blob/main/exir/memory_planning.py#L801).

 ## Debugging Tool

docs/source/using-executorch-android.md

Lines changed: 1 addition & 1 deletion
@@ -82,7 +82,7 @@ Starting from 2025-04-12, you can download nightly `main` branch snapshots:
 * `executorch.aar`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar`
 * `executorch.aar.sha256sums`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar.sha256sums`
 * Replace `YYYYMMDD` with the actual date you want to use.
-* AAR file is generated by [this workflow](https://github.com/pytorch/executorch/blob/c66b37d010c88a113560693b14dc6bd112593c11/.github/workflows/android-release-artifacts.yml#L14-L15).
+* AAR file is generated by [this workflow](https://github.com/pytorch/executorch/blob/main/.github/workflows/android-release-artifacts.yml).

 For example:
