
Commit 1aed059 (parent fc4b73a)

docs: apply same changes to 0.8 version

4 files changed: 19 additions & 17 deletions

docs/versioned_docs/version-0.8.x/01-fundamentals/01-getting-started.md

Lines changed: 4 additions & 4 deletions
@@ -26,7 +26,7 @@ import TabItem from '@theme/TabItem';
 ## React Native ExecuTorch

-React Native ExecuTorch is our way of bringing ExecuTorch into the React Native world. Our API is built to be simple, declarative, and efficient. Plus, we’ll provide a set of pre-exported models for common use cases, so you won’t have to worry about handling exports yourself. With just a few lines of JavaScript, you’ll be able to run AI models (even LLMs 👀) right on your device—keeping user data private and saving on cloud costs.
+React Native ExecuTorch is our way of bringing ExecuTorch into the React Native world. Our API is built to be simple, declarative, and efficient. Additionally, we provide a set of pre-exported models for common use cases, so you don’t have to worry about handling exports yourself. With just a few lines of JavaScript, you can run AI models (even LLMs 👀) right on your device—keeping user data private and saving on cloud costs.

 ## Compatibility

@@ -122,11 +122,11 @@ yarn <ios | android> -d
 Adding new functionality to the library follows a consistent three-step integration pipeline:

-1. **Model Serialization:** We export PyTorch models for specific tasks (e.g., object detection) into the \*.pte format, which is optimized for the ExecuTorch runtime.
+1. **Model Serialization:** Export the PyTorch model for a specific task (e.g. object detection) into the `*.pte` format, which is optimized for the ExecuTorch runtime.

-2. **Native Implementation:** We develop a C++ execution layer that interfaces with the ExecuTorch runtime to handle inference. This layer also manages model-dependent logic, such as data pre-processing and post-processing.
+2. **Native Implementation:** Develop a C++ execution layer that interfaces with the ExecuTorch runtime to handle inference. This layer also manages model-dependent logic, such as data pre-processing and post-processing.

-3. **TS Bindings:** Finally, we implement a TypeScript API that bridges the JavaScript environment to the native C++ logic, providing a clean, typed interface for the end user."
+3. **TS Bindings:** Finally, implement a TypeScript API that bridges the JavaScript environment to the native C++ logic, providing a clean, typed interface for the end user.
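The TS-bindings step of this pipeline can be sketched as a thin, typed facade over a native inference function. The interface and class names below are illustrative assumptions, not the library's actual bridge API:

```typescript
// Hypothetical sketch of the TS-bindings step. The native module shape
// is an assumption for illustration; in the real library the C++ layer
// (step 2) sits behind a React Native bridge.
interface NativeExecutorchModule {
  loadModel(modelPath: string): Promise<void>;
  forward(input: Float32Array): Promise<Float32Array>;
}

// A clean, typed interface that JS code consumes. Model-dependent
// pre/post-processing would live behind `forward` in the native layer.
class ExecutorchModel {
  constructor(private readonly native: NativeExecutorchModule) {}

  async load(modelPath: string): Promise<void> {
    await this.native.loadModel(modelPath);
  }

  async run(input: Float32Array): Promise<Float32Array> {
    return this.native.forward(input);
  }
}
```

The value of the facade is that end users see only typed Promises, while the bridge details stay hidden behind one small interface.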
 ## Good reads

docs/versioned_docs/version-0.8.x/01-fundamentals/02-loading-models.md

Lines changed: 3 additions & 3 deletions
@@ -46,15 +46,15 @@ initExecutorch({
 ## Loading

-**1. Load from React Native assets folder (For Files < 512MB)**
+### Load from React Native assets folder (for files < 512MB)

 ```typescript
 useExecutorchModule({
   modelSource: require('../assets/llama3_2.pte'),
 });
 ```

-**2. Load from remote URL:**
+### Load from remote URL

 For files larger than 512MB, or when you want to keep the size of the app smaller, you can load the model from a remote URL (e.g. HuggingFace).

@@ -64,7 +64,7 @@ useExecutorchModule({
 });
 ```

-**3. Load from local file system:**
+### Load from local file system

 If you prefer to delegate the process of obtaining and loading the model and tokenizer files to the user, you can use the following method:
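The three loading options above differ only in where the `.pte` file lives. The decision logic can be sketched as follows; the helper and type below are hypothetical illustrations, not part of react-native-executorch:

```typescript
// Hypothetical helper illustrating the loading guidance above.
// The 512 MB threshold comes from the React Native assets
// constraint described in these docs.
const ASSET_SIZE_LIMIT_MB = 512;

type ModelSource =
  | { kind: 'asset'; path: string }
  | { kind: 'remote'; url: string }
  | { kind: 'local'; path: string };

function chooseModelSource(opts: {
  sizeMB: number;
  assetPath?: string;
  remoteUrl?: string;
  userProvidedPath?: string;
}): ModelSource {
  if (opts.userProvidedPath) {
    // Option 3: the user obtained the file themselves.
    return { kind: 'local', path: opts.userProvidedPath };
  }
  if (opts.sizeMB < ASSET_SIZE_LIMIT_MB && opts.assetPath) {
    // Option 1: small enough to bundle with the app.
    return { kind: 'asset', path: opts.assetPath };
  }
  if (opts.remoteUrl) {
    // Option 2: large models, or keeping the app binary small.
    return { kind: 'remote', url: opts.remoteUrl };
  }
  throw new Error('no viable model source');
}
```

For example, a 100 MB model bundled as an asset resolves to the asset source, while a multi-gigabyte LLM falls through to the remote URL.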

docs/versioned_docs/version-0.8.x/02-benchmarks/inference-time.md

Lines changed: 5 additions & 3 deletions
@@ -91,7 +91,9 @@ The values below represent the averages across all runs for the benchmark image.
 ## Vertical OCR

-Notice that the recognizer models, as well as detector's `forward_320` method, were executed between 4 and 21 times during a single recognition.
+:::note
+Recognizer models, as well as the detector's `forward_320` method, were executed between 4 and 21 times during a single recognition.
+:::

 The values below represent the averages across all runs for the benchmark image.
 | Model | iPhone 17 Pro <br /> [ms] | iPhone 16 Pro <br /> [ms] | iPhone SE 3 <br /> [ms] | Samsung Galaxy S24 <br /> [ms] | OnePlus 12 <br /> [ms] |
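Because the recognizer models and the detector's `forward_320` method run multiple times per recognition, per-call averages understate end-to-end latency. A rough, illustrative estimate (the helper is hypothetical, not library code):

```typescript
// Illustrative only: rough contribution of a repeatedly-executed model
// to one Vertical OCR recognition, given the note above that these
// models ran between 4 and 21 times per recognition.
function estimateTotalTimeMs(perPassMs: number, passes: number): number {
  if (passes < 4 || passes > 21) {
    throw new RangeError('observed pass counts were between 4 and 21');
  }
  return perPassMs * passes;
}
```

For example, a recognizer averaging 10 ms per pass contributes roughly 40 to 210 ms to a single recognition.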
@@ -145,7 +147,7 @@ Average time to synthesize speech from an input text of approximately 60 tokens,
 ## Text Embeddings

 :::note
-Benchmark times for text embeddings are highly dependent on the sentence length. The numbers above are based on a sentence of around 80 tokens. For shorter or longer sentences, inference time may vary accordingly.
+Benchmark times for text embeddings are highly dependent on the sentence length. The numbers below are based on a sentence of around 80 tokens. For shorter or longer sentences, inference time may vary accordingly.
 :::

 | Model | iPhone 17 Pro (XNNPACK) [ms] | OnePlus 12 (XNNPACK) [ms] |
@@ -194,7 +196,7 @@ slower for very large images, which can increase total time.
 ## Instance Segmentation

-:::info
+:::note
 Times presented in the tables are measured for YOLO models with input size equal to 512. Other input sizes may yield slower or faster inference times. RF-DETR Nano Seg uses a fixed resolution of 312×312.
 :::
docs/versioned_docs/version-0.8.x/02-benchmarks/memory-usage.md

Lines changed: 7 additions & 7 deletions
@@ -20,7 +20,7 @@ before model initialization.
 ## Object Detection

 :::note
-Data presented for YOLO models is based on inference with forward_640 method.
+Data presented for YOLO models is based on inference with the `forward_640` method.
 :::

 | Model / Device | iPhone 17 Pro [MB] | Google Pixel 10 [MB] |
@@ -114,20 +114,20 @@ The reported memory usage values include the memory footprint of the Phonemis pa
 ## Semantic Segmentation

-:::info
+:::note
 Data presented in the following sections is based on inference with non-resized
 output. When resize is enabled, expect higher memory usage and inference time
 with higher resolutions.
 :::

-| Model / Device              | iPhone 17 Pro [MB] | OnePlus 12 [MB] |
-| --------------------------- | :----------------: | :-------------: |
-| DEELABV3_RESNET50 (XNNPACK) | 660                | 930             |
+| Model / Device               | iPhone 17 Pro [MB] | OnePlus 12 [MB] |
+| ---------------------------- | :----------------: | :-------------: |
+| DEEPLABV3_RESNET50 (XNNPACK) | 660                | 930             |

 ## Instance Segmentation

-:::info
-Data presented in the following sections is based on inference with forward_640 method.
+:::note
+Data presented in the following sections is based on inference with the `forward_640` method.
 :::

 | Model / Device | iPhone 17 Pro [MB] | OnePlus 12 [MB] |
