
Commit bffcd9f

docs: align ExecutorchModule docs after the C++ port (#490)
## Description

This PR changes the ExecutorchModule and the useExecutorchModule docs after the C++ port and the API refactor.

### Introduces a breaking change?

- [ ] Yes
- [ ] No

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [x] Documentation update (improves or adds clarity to existing documentation)
- [ ] Other (chores, tests, code style improvements etc.)

### Tested on

- [ ] iOS
- [ ] Android

### Checklist

- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [ ] My changes generate no new warnings
1 parent e948670 commit bffcd9f

4 files changed

Lines changed: 196 additions & 63 deletions

File tree

docs/docs/02-hooks/03-executorch-bindings/useExecutorchModule.md

Lines changed: 74 additions & 28 deletions
````diff
@@ -2,7 +2,7 @@
 title: useExecutorchModule
 ---

-ExecuTorch bindings provide streamlined interface to access the [Module API](https://pytorch.org/executorch/stable/extension-module.html) directly from Javascript.
+useExecutorchModule provides React Native bindings to the ExecuTorch [Module API](https://pytorch.org/executorch/stable/extension-module.html) directly from JavaScript.

 :::caution
 These bindings are primarily intended for custom model integration where no dedicated hook exists. If you are considering using a provided model, first verify whether a dedicated hook is available. Dedicated hooks simplify the implementation process by managing necessary pre and post-processing automatically. Utilizing these can save you effort and reduce complexity, ensuring you do not implement additional handling that is already covered.
````
````diff
@@ -32,33 +32,74 @@ For more information on loading resources, take a look at [loading models](../..

 ### Returns

-| Field | Type | Description |
-| :---: | :---: | :---: |
-| `error` | <code>string &#124; null</code> | Contains the error message if the model failed to load. |
-| `isGenerating` | `boolean` | Indicates whether the model is currently processing an inference. |
-| `isReady` | `boolean` | Indicates whether the model has successfully loaded and is ready for inference. |
-| `loadMethod` | `(methodName: string) => Promise<void>` | Loads resources specific to `methodName` into memory before execution. |
-| `loadForward` | `() => Promise<void>` | Loads resources specific to `forward` method into memory before execution. Uses `loadMethod` under the hood. |
-| `forward` | `(input: ETInput, shape: number[]) => Promise<number[][]>` | Executes the model's forward pass, where `input` is a Javascript typed array and `shape` is an array of integers representing input Tensor shape. The output is a Tensor - raw result of inference. |
-| `downloadProgress` | `number` | Represents the download progress as a value between 0 and 1. |
+| Field | Type | Description |
+| :---: | :---: | :---: |
+| `error` | <code>string &#124; null</code> | Contains the error message if the model failed to load. |
+| `isGenerating` | `boolean` | Indicates whether the model is currently processing an inference. |
+| `isReady` | `boolean` | Indicates whether the model has successfully loaded and is ready for inference. |
+| `forward` | `(input: TensorPtr[]) => Promise<TensorPtr[]>` | Executes the model's forward pass, where `input` is an array of TensorPtr objects. If the inference is successful, an array of tensor pointers is returned. |
+| `downloadProgress` | `number` | Represents the download progress as a value between 0 and 1. |

-## ETInput
+## TensorPtr

-The `ETInput` type defines the typed arrays that can be used as inputs in the `forward` method:
+TensorPtr is a JS representation of the underlying tensor, which is then passed to the model. You can read more about creating tensors [here](https://docs.pytorch.org/executorch/stable/extension-tensor.html). On the JS side, the TensorPtr holds the following information:

-- Int8Array
-- Int32Array
-- BigInt64Array
-- Float32Array
-- Float64Array
+<details>
+<summary>Type definitions</summary>

-## Errors
+```typescript
+interface TensorPtr {
+  dataPtr: TensorBuffer;
+  sizes: number[];
+  scalarType: ScalarType;
+}

-All functions provided by the `useExecutorchModule` hook are asynchronous and may throw an error. The `ETError` enum includes errors [defined by the ExecuTorch team](https://github.com/pytorch/executorch/blob/release/0.7/runtime/core/error.h) and additional errors specified by our library.
+type TensorBuffer =
+  | ArrayBuffer
+  | Float32Array
+  | Float64Array
+  | Int8Array
+  | Int16Array
+  | Int32Array
+  | Uint8Array
+  | Uint16Array
+  | Uint32Array
+  | BigInt64Array
+  | BigUint64Array;
+
+enum ScalarType {
+  BYTE = 0,
+  CHAR = 1,
+  SHORT = 2,
+  INT = 3,
+  LONG = 4,
+  HALF = 5,
+  FLOAT = 6,
+  DOUBLE = 7,
+  BOOL = 11,
+  QINT8 = 12,
+  QUINT8 = 13,
+  QINT32 = 14,
+  QUINT4X2 = 16,
+  QUINT2X4 = 17,
+  BITS16 = 22,
+  FLOAT8E5M2 = 23,
+  FLOAT8E4M3FN = 24,
+  FLOAT8E5M2FNUZ = 25,
+  FLOAT8E4M3FNUZ = 26,
+  UINT16 = 27,
+  UINT32 = 28,
+  UINT64 = 29,
+}
+```

-## Performing inference
+</details>

-To run model with ExecuTorch Bindings it's essential to specify the shape of the input tensor. However, there's no need to explicitly define the input type, as it will automatically be inferred from the array you pass to `forward` method. However you will still need to explicitly provide shape for the tensor. Outputs from the model, such as classification probabilities, are returned in raw format.
+`dataPtr` - Represents a data buffer that will be used to create a tensor on the native side. This can be either an `ArrayBuffer` or a `TypedArray`. If your model takes in a datatype which is not covered by any of the `TypedArray` types, just pass an `ArrayBuffer` here.
+
+`sizes` - Represents the shape of a given tensor, i.e. for a 640x640 RGB image with a batch size of 1, you would need to pass `[1, 3, 640, 640]` here.
+
+`scalarType` - An enum resembling ExecuTorch's `ScalarType`. For example, if your model was exported with float32 as an input, you will need to pass `ScalarType.FLOAT` here.

 ## End to end example

````
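As a side note, the `TensorPtr` shape documented in this hunk can be exercised with a small, self-contained sketch. The `ScalarType` and `TensorPtr` declarations below are local copies of the type definitions shown above, and the `numel` helper is purely illustrative; in an app these types would come from `react-native-executorch`.

```typescript
// Illustrative local copies of the types from the docs above; in an app
// these would be imported from 'react-native-executorch'.
enum ScalarType {
  INT = 3,
  FLOAT = 6,
}

interface TensorPtr {
  dataPtr: ArrayBuffer | Float32Array | Int32Array;
  sizes: number[];
  scalarType: ScalarType;
}

// Hypothetical helper: number of elements implied by a shape.
const numel = (sizes: number[]): number => sizes.reduce((a, b) => a * b, 1);

// A float32 input for a 640x640 RGB image with batch size 1,
// matching the style-transfer example below.
const sizes = [1, 3, 640, 640];
const inputTensor: TensorPtr = {
  dataPtr: new Float32Array(numel(sizes)), // fill with preprocessed pixel data
  sizes,
  scalarType: ScalarType.FLOAT,
};

console.log(numel(inputTensor.sizes)); // 1228800
```

The buffer length and the product of `sizes` must agree, which is worth checking before calling `forward`.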
````diff
@@ -72,6 +113,7 @@ First, import the necessary functions from the `react-native-executorch` package
 import {
   useExecutorchModule,
   STYLE_TRANSFER_CANDY,
+  ScalarType,
 } from 'react-native-executorch';

 // Initialize the executorch module with the predefined style transfer model.
````
````diff
@@ -82,22 +124,26 @@ const executorchModule = useExecutorchModule({

 ### Setting up input parameters

-To prepare the input for the model, define the shape of the input tensor. This shape depends on the model's requirements. For the `STYLE_TRANSFER_CANDY` model, we need a tensor of shape `[1, 3, 640, 640]`, corresponding to a batch size of 1, 3 color channels (RGB), and dimensions of 640x640 pixels.
+To prepare the model input, define the tensor shape according to your model's requirements (defined by the model export process). For example, the STYLE_TRANSFER_CANDY model expects a tensor with shape `[1, 3, 640, 640]` — representing a batch size of 1, 3 color channels (RGB), and 640×640 pixel dimensions.

 ```typescript
-const shape = [1, 3, 640, 640];
-// Create a Float32Array to hold the pixel data of the image,
-// which should be preprocessed according to the model's specific needs.
-const input = new Float32Array(1 * 3 * 640 * 640); // fill this array with your image data
+const inputTensor = {
+  dataPtr: new Float32Array(1 * 3 * 640 * 640), // or other TypedArray / ArrayBuffer
+  sizes: [1, 3, 640, 640],
+  scalarType: ScalarType.FLOAT,
+};
 ```

 ### Performing inference

+After passing input to the forward function, you'll receive an array of TensorPtr objects. Each TensorPtr contains its `dataPtr` as an ArrayBuffer. Since ArrayBuffer represents raw binary data, you'll need to interpret it according to the tensor's underlying data type (e.g., creating a Float32Array view for float32 tensors, Int32Array for int32 tensors, etc.).
+
 ```typescript
 try {
   // Perform the forward operation and receive the stylized image output.
-  const output = await executorchModule.forward(input, shape);
-  console.log('Stylization successful. Output Shape:', output.length);
+  const output = await executorchModule.forward([inputTensor]);
+  // Interpret the output ArrayBuffer
+  // foo(output[0].dataPtr);
 } catch (error) {
   // Log any errors that occur during the forward pass.
   console.error('Error during model execution:', error);
````
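The output-interpretation step the hunk above documents can be sketched with plain typed arrays. Everything here is illustrative and self-contained: `fakeOutput` merely stands in for the array a real `forward` call would return, with the `dataPtr` arriving as a raw `ArrayBuffer`.

```typescript
// Stand-in for a forward() result: one float32 tensor of shape [4],
// delivered as raw bytes in an ArrayBuffer (as the docs describe).
const raw = new ArrayBuffer(4 * Float32Array.BYTES_PER_ELEMENT);
new Float32Array(raw).set([0.1, 0.2, 0.3, 0.4]);

const fakeOutput = [{ dataPtr: raw, sizes: [4], scalarType: 6 /* ScalarType.FLOAT */ }];

// The buffer holds float32 data, so create a Float32Array view to read it;
// an int32 tensor would need an Int32Array view instead.
const values = new Float32Array(fakeOutput[0].dataPtr);
console.log(values.length); // 4
```

Picking the view type from the tensor's `scalarType` (rather than hard-coding it) is the safer pattern when a model has mixed output types.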
0 commit comments
