### Summary
Replace version-pinned GitHub blob/tree URLs (specific commit hashes and
release/0.4, release/0.6, release/1.0, release/1.2 branches) with
`/main/` references across 10 doc files. Update
`docs.pytorch.org/executorch/0.4/` URL to use `/stable/`. Update all
line number anchors to match the current `main` branch.
Fixes #19257
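Since most of these edits are mechanical, the substitution can be sketched as a small regex pass. The pattern and helper below are illustrative, not the script actually used for this PR, and they cover only the GitHub blob/tree URLs (the `docs.pytorch.org/executorch/0.4/` → `/stable/` fix and the line-anchor updates were handled separately):

```python
import re

# Matches github.com/pytorch/executorch/{blob,tree}/<ref>/ where <ref> is a
# 40-character commit hash or a release/X.Y branch, and rewrites <ref> to main.
PINNED_REF = re.compile(
    r"(github\.com/pytorch/executorch/(?:blob|tree)/)"
    r"(?:[0-9a-f]{40}|release/\d+\.\d+)/"
)

def unpin(text: str) -> str:
    """Rewrite version-pinned ExecuTorch GitHub URLs to point at main."""
    return PINNED_REF.sub(r"\1main/", text)

url = ("https://github.com/pytorch/executorch/tree/"
       "f32cdc3de6f7176d70a80228f1a60bcd45d93437/backends/qualcomm/builders")
print(unpin(url))
# https://github.com/pytorch/executorch/tree/main/backends/qualcomm/builders
```

Note that line anchors (`#L31` and the like) pass through unchanged, which is why each anchor still had to be re-verified against `main`.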
### Test plan
- All 29 updated URLs verified to return HTTP 200 via `curl`
- All file paths confirmed to exist on `main` locally
- All 21 line number references verified to point to the correct code
(class/function definitions)
- `lintrunner -a` passes with no lint issues
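The line-anchor verification above can be sketched as a small offline check: given a file's text, confirm that the anchored line still starts with the expected definition. The helper name and the synthetic sample are illustrative, not the actual verification script:

```python
def anchor_points_at(file_text: str, lineno: int, expected_prefix: str) -> bool:
    """Return True if 1-indexed line `lineno` starts with `expected_prefix`."""
    lines = file_text.splitlines()
    if lineno < 1 or lineno > len(lines):
        return False
    return lines[lineno - 1].lstrip().startswith(expected_prefix)

# Synthetic stand-in for a fetched source file (not the real program.h):
sample = "\n".join(f"// line {i}" for i in range(1, 92)) + "\nclass Program {"
assert anchor_points_at(sample, 92, "class Program")
```

In practice each of the 21 anchors was checked this way against the file contents on `main`, so a `#L92`-style link keeps pointing at the class or function it names.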
cc @mergennachin @AlannaBurke @digantdesai @freddan80 @per @zingo @oscarandersson8218 @mansnils @Sebastian-Larsson @robell
---------
Co-authored-by: Nikhil Viswanath Sivakumar <68182521+nil-is-all@users.noreply.github.com>
```diff
-Use the <code>ET_DEPRECATED</code> annotation macro. See <a href="https://github.com/pytorch/executorch/blob/8e0f856ee269b319ac4195509cf31e3f548aa0e8/runtime/executor/program.h#L81">example usage</a>.
+Use the <code>ET_DEPRECATED</code> annotation macro. See <a href="https://github.com/pytorch/executorch/blob/main/runtime/executor/program.h#L92">example usage</a>.
 <p>
 <p>
@@ -125,7 +125,7 @@ Use the <code>ET_EXPERIMENTAL</code> annotation macro.
 <td>
 Start Doxygen comments with <code>DEPRECATED:</code> See
```
#### `docs/source/backends-qualcomm.md` (+2 −2)
```diff
@@ -608,7 +608,7 @@ Supports:
 For details, see: backends/qualcomm/quantizer/quantizer.py
 ### Operator Support
-[The full operator support matrix](https://github.com/pytorch/executorch/tree/f32cdc3de6f7176d70a80228f1a60bcd45d93437/backends/qualcomm/builders#operator-support-status) is tracked and frequently updated in the ExecuTorch repository.
+[The full operator support matrix](https://github.com/pytorch/executorch/tree/main/backends/qualcomm/builders#operator-support-status) is tracked and frequently updated in the ExecuTorch repository.
 It lists:
 - Supported PyTorch ops (aten.*, custom ops)
@@ -633,4 +633,4 @@ If you encounter any issues while reproducing the tutorial, please file a github
 [issue](https://github.com/pytorch/executorch/issues) on ExecuTorch repo and tag use `#qcom_aisw` tag
 ### Debugging tips
-- Before trying any complicated models, try out [a simple model example](https://github.com/pytorch/executorch/tree/f32cdc3de6f7176d70a80228f1a60bcd45d93437/examples/qualcomm#simple-examples-to-verify-the-backend-is-working) and see it if works one device.
+- Before trying any complicated models, try out [a simple model example](https://github.com/pytorch/executorch/tree/main/examples/qualcomm#simple-examples-to-verify-the-backend-is-working) and see if it works on your device.
```
#### `docs/source/backends/arm-ethos-u/arm-ethos-u-troubleshooting.md` (+1 −1)
```diff
@@ -24,7 +24,7 @@ You can see how this coupling between the memory mode and runtime application i
 ## Using Bundled.io and ETdump
-The arm_executor_runner supports [bundled-io](https://docs.pytorch.org/executorch/0.4/bundled-io.html) and [ETdump](https://docs.pytorch.org/executorch/stable/etdump.html) debugging tools.
+The arm_executor_runner supports [bundled-io](https://docs.pytorch.org/executorch/stable/bundled-io.html) and [ETdump](https://docs.pytorch.org/executorch/stable/etdump.html) debugging tools.
 To enable bundled-io, set `-DEXECUTORCH_BUILD_DEVTOOLS=ON` when building Executorch and `-DET_BUNDLE_IO=ON` when building the executor_runner. To enable ETdump, set `-DEXECUTORCH_BUILD_ARM_ETDUMP=ON` when building Executorch and `-DEXECUTORCH_ENABLE_EVENT_TRACER=ON` when building the executor_runner.
```
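As a rough sketch, the two-stage build this doc describes might look as follows. The flags come from the text above; the build directories and the runner source path are assumptions, so consult the Arm backend docs for the real invocation:

```shell
# 1) Build ExecuTorch itself with devtools (bundled-io) and Arm ETdump support
cmake -B cmake-out \
  -DEXECUTORCH_BUILD_DEVTOOLS=ON \
  -DEXECUTORCH_BUILD_ARM_ETDUMP=ON \
  .
cmake --build cmake-out

# 2) Build the arm_executor_runner with bundled-io and event tracing enabled
#    (runner source path is an assumption for illustration)
cmake -B cmake-out-runner \
  -DET_BUNDLE_IO=ON \
  -DEXECUTORCH_ENABLE_EVENT_TRACER=ON \
  examples/arm/executor_runner
cmake --build cmake-out-runner
```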
#### `docs/source/backends/nxp/nxp-overview.md` (+1 −1)
```diff
@@ -39,7 +39,7 @@ $ ./examples/nxp/setup.sh
 To test the eIQ Neutron Backend, both AoT flow for model preparation and Runtime for execution, refer to the [Getting started with eIQ Neutron NPU ExecuTorch backend](tutorials/nxp-basic-tutorial.md)
-For a quick overview how to convert a custom PyTorch model, take a look at our [example python script](https://github.com/pytorch/executorch/tree/release/1.0/examples/nxp/aot_neutron_compile.py).
+For a quick overview how to convert a custom PyTorch model, take a look at our [example python script](https://github.com/pytorch/executorch/tree/main/examples/nxp/aot_neutron_compile.py).
```
#### `docs/source/backends/nxp/nxp-partitioner.rst` (+2 −2)
```diff
@@ -28,7 +28,7 @@ Following fields can be set:
 Custom Delegation Options
 -------------------------
 By default the Neutron backend is defensive, what means it does not delegate operators which cannot be decided statically during partitioning. But as the model author you typically have insight into the model and so you can allow opportunistic delegation for some cases. For list of options, see
 Operators are the building blocks of the ML model. See `IRs <https://docs.pytorch.org/docs/stable/torch.compiler_ir.html>`_ for more information on the PyTorch operator set.
 This section lists the Edge operators supported by the Neutron backend.
-For detailed constraints of the operators see the conditions in the ``is_supported_*`` functions in the `Node converters <https://github.com/pytorch/executorch/blob/release/1.2/backends/nxp/neutron_partitioner.py#L202>`_
+For detailed constraints of the operators see the ``is_supported`` / ``_is_supported_in_IR`` / ``_is_supported_on_target`` checks in the `Node converters <https://github.com/pytorch/executorch/blob/main/backends/nxp/backend/ir/converter/node_converter.py#L118>`_
```
#### `docs/source/backends/xnnpack/xnnpack-partitioner.rst` (+3 −3)
```diff
@@ -2,10 +2,10 @@
 Partitioner API
 ===============
-The XNNPACK partitioner API allows for configuration of the model delegation to XNNPACK. Passing an ``XnnpackPartitioner`` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use-case. For advanced use cases, the partitioner exposes the following options via the `constructor <https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/xnnpack_partitioner.py#L31>`_:
+The XNNPACK partitioner API allows for configuration of the model delegation to XNNPACK. Passing an ``XnnpackPartitioner`` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use-case. For advanced use cases, the partitioner exposes the following options via the `constructor <https://github.com/pytorch/executorch/blob/main/backends/xnnpack/partition/xnnpack_partitioner.py#L31>`_:
-- ``configs``: Control which operators are delegated to XNNPACK. By default, all available operators all delegated. See `../config/__init__.py <https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/__init__.py#L66>`_ for an exhaustive list of available operator configs.
-- ``config_precisions``: Filter operators by data type. By default, delegate all precisions. One or more of ``ConfigPrecisionType.FP32``, ``ConfigPrecisionType.STATIC_QUANT``, or ``ConfigPrecisionType.DYNAMIC_QUANT``. See `ConfigPrecisionType <https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/xnnpack_config.py#L24>`_.
+- ``configs``: Control which operators are delegated to XNNPACK. By default, all available operators are delegated. See `../config/__init__.py <https://github.com/pytorch/executorch/blob/main/backends/xnnpack/partition/config/__init__.py#L76>`_ for an exhaustive list of available operator configs.
+- ``config_precisions``: Filter operators by data type. By default, delegate all precisions. One or more of ``ConfigPrecisionType.FP32``, ``ConfigPrecisionType.STATIC_QUANT``, or ``ConfigPrecisionType.DYNAMIC_QUANT``. See `ConfigPrecisionType <https://github.com/pytorch/executorch/blob/main/backends/xnnpack/partition/config/xnnpack_config.py#L30>`_.
 - ``per_op_mode``: If true, emit individual delegate calls for every operator. This is an advanced option intended to reduce memory overhead in some contexts at the cost of a small amount of runtime overhead. Defaults to false.
 - ``verbose``: If true, print additional information during lowering.
```
#### `docs/source/bundled-io.md` (+5 −5)
````diff
@@ -199,17 +199,17 @@ This stage mainly focuses on executing the model with the bundled inputs and com
 ### Get ExecuTorch Program Pointer from `BundledProgram` Buffer
 We need the pointer to ExecuTorch program to do the execution. To unify the process of loading and executing `BundledProgram` and Program flatbuffer, we create an API for this
-`executorch::bundled_program::get_program_data`. Check out an [example usage](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp#L128-L137) of this API.
+`executorch::bundled_program::get_program_data`. Check out an [example usage](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp#L128-L137) of this API.
 ### Load Bundled Input to Method
-To execute the program on the bundled input, we need to load the bundled input into the method. Here we provided an API called `executorch::bundled_program::load_bundled_input`. Check out an [example usage](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp#L253-L259) of this API.
+To execute the program on the bundled input, we need to load the bundled input into the method. Here we provided an API called `executorch::bundled_program::load_bundled_input`. Check out an [example usage](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp#L253-L259) of this API.
 ### Verify the Method's Output.
-We call `executorch::bundled_program::verify_method_outputs` to verify the method's output with bundled expected outputs. Check out an [example usage](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp#L301-L307) of this API.
+We call `executorch::bundled_program::verify_method_outputs` to verify the method's output with bundled expected outputs. Check out an [example usage](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp#L301-L307) of this API.
 ### Runtime Example
-Please checkout our [example runner](https://github.com/pytorch/executorch/blob/release/0.6/examples/devtools/README.md#bundledprogram) for a bundled program. You could run these commands to test with the BundledProgram binary (`.bpte`) file you generated in the previous step:
+Please check out our [example runner](https://github.com/pytorch/executorch/blob/main/examples/devtools/README.md#bundledprogram) for a bundled program. You could run these commands to test with the BundledProgram binary (`.bpte`) file you generated in the previous step:
 ```bash
 cd executorch
@@ -218,7 +218,7 @@ cd executorch
 ```
 It is expected to see no output from running the above mentioned snippet.
-For a detailed example of how the runner should be like, please refer to our [example runner](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp).
+For a detailed example of how the runner should be like, please refer to our [example runner](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp).
````
#### `docs/source/compiler-memory-planning.md` (+1 −1)
````diff
@@ -82,7 +82,7 @@ program = edge_program.to_executorch(
 )
 ```
-Users attempting to write a custom memory planning algorithm should start by looking at [the greedy algorithm's implementation](https://github.com/pytorch/executorch/blob/d62c41ca86435e5316e7ed292b6d68aff27a2fb7/exir/memory_planning.py#L459C1-L459C12).
+Users attempting to write a custom memory planning algorithm should start by looking at [the greedy algorithm's implementation](https://github.com/pytorch/executorch/blob/main/exir/memory_planning.py#L801).
````

```diff
 * Replace `YYYYMMDD` with the actual date you want to use.
-* AAR file is generated by [this workflow](https://github.com/pytorch/executorch/blob/c66b37d010c88a113560693b14dc6bd112593c11/.github/workflows/android-release-artifacts.yml#L14-L15).
+* AAR file is generated by [this workflow](https://github.com/pytorch/executorch/blob/main/.github/workflows/android-release-artifacts.yml).
```