Commit b74968d
docs: fix outdated requirements & pipe characters (#453)
## Description

The pipes weren't rendered correctly when placed outside a code block, see https://docs.swmansion.com/react-native-executorch/docs/benchmarks/inference-time#streaming-mode. This PR fixes that and additionally removes outdated info about React Native Audio API being a peer dependency.

### Type of change

- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update (improves or adds clarity to existing documentation)

### Tested on

- [ ] iOS
- [ ] Android

### Checklist

- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [ ] My changes generate no new warnings
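For context, the root cause is visible in the old table rows below: `&#124` is the pipe's HTML entity with its closing semicolon missing, so Markdown renderers emit it as literal text instead of a `|` character. In GitHub-flavored Markdown, a literal pipe inside a table cell can be written either as a backslash escape (what this commit switches to) or as the complete entity. A minimal sketch, using a hypothetical `Device` column:

```markdown
<!-- Broken: entity is missing its semicolon and shows up verbatim -->
| Device [latency &#124 tokens/s] |

<!-- Fixed (what this commit uses): backslash-escaped pipe -->
| Device [latency \| tokens/s] |

<!-- Also valid: the complete HTML entity -->
| Device [latency &#124; tokens/s] |
```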
1 parent 9a704ae commit b74968d

4 files changed: 26 additions and 26 deletions

docs/docs/benchmarks/inference-time.md (8 additions, 8 deletions)
@@ -68,14 +68,14 @@ Times presented in the tables are measured as consecutive runs of the model. Ini
 
 Notice than for `Whisper` model which has to take as an input 30 seconds audio chunks (for shorter audio it is automatically padded with silence to 30 seconds) `fast` mode has the lowest latency (time from starting transcription to first token returned, caused by streaming algorithm), but the slowest speed. That's why for the lowest latency and the fastest transcription we suggest using `Moonshine` model, if you still want to proceed with `Whisper` use preferably the `balanced` mode.
 
-| Model (mode) | iPhone 16 Pro (XNNPACK) [latency &#124 tokens/s] | iPhone 14 Pro (XNNPACK) [latency &#124 tokens/s] | iPhone SE 3 (XNNPACK) [latency &#124 tokens/s] | Samsung Galaxy S24 (XNNPACK) [latency &#124 tokens/s] | OnePlus 12 (XNNPACK) [latency &#124 tokens/s] |
-| ------------------------- | :----------------------------------------------: | :----------------------------------------------: | :--------------------------------------------: | :---------------------------------------------------: | :-------------------------------------------: |
-| Moonshine-tiny (fast) | 0.8s &#124 19.0t/s | 1.5s &#124 11.3t/s | 1.5s &#124 10.4t/s | 2.0s &#124 8.8t/s | 1.6s &#124 12.5t/s |
-| Moonshine-tiny (balanced) | 2.0s &#124 20.0t/s | 3.2s &#124 12.4t/s | 3.7s &#124 10.4t/s | 4.6s &#124 11.2t/s | 3.4s &#124 14.6t/s |
-| Moonshine-tiny (quality) | 4.3s &#124 16.8t/s | 6.6s &#124 10.8t/s | 8.0s &#124 8.9t/s | 7.7s &#124 11.1t/s | 6.8s &#124 13.1t/s |
-| Whisper-tiny (fast) | 2.8s &#124 5.5t/s | 3.7s &#124 4.4t/s | 4.4s &#124 3.4t/s | 5.5s &#124 3.1t/s | 5.3s &#124 3.8t/s |
-| Whisper-tiny (balanced) | 5.6s &#124 7.9t/s | 7.0s &#124 6.3t/s | 8.3s &#124 5.0t/s | 8.4s &#124 6.7t/s | 7.7s &#124 7.2t/s |
-| Whisper-tiny (quality) | 10.3s &#124 8.3t/s | 12.6s &#124 6.8t/s | 7.8s &#124 8.9t/s | 13.5s &#124 7.1t/s | 12.9s &#124 7.5t/s |
+| Model (mode) | iPhone 16 Pro (XNNPACK) [latency \| tokens/s] | iPhone 14 Pro (XNNPACK) [latency \| tokens/s] | iPhone SE 3 (XNNPACK) [latency \| tokens/s] | Samsung Galaxy S24 (XNNPACK) [latency \| tokens/s] | OnePlus 12 (XNNPACK) [latency \| tokens/s] |
+| ------------------------- | :-------------------------------------------: | :-------------------------------------------: | :-----------------------------------------: | :------------------------------------------------: | :----------------------------------------: |
+| Moonshine-tiny (fast) | 0.8s \| 19.0t/s | 1.5s \| 11.3t/s | 1.5s \| 10.4t/s | 2.0s \| 8.8t/s | 1.6s \| 12.5t/s |
+| Moonshine-tiny (balanced) | 2.0s \| 20.0t/s | 3.2s \| 12.4t/s | 3.7s \| 10.4t/s | 4.6s \| 11.2t/s | 3.4s \| 14.6t/s |
+| Moonshine-tiny (quality) | 4.3s \| 16.8t/s | 6.6s \| 10.8t/s | 8.0s \| 8.9t/s | 7.7s \| 11.1t/s | 6.8s \| 13.1t/s |
+| Whisper-tiny (fast) | 2.8s \| 5.5t/s | 3.7s \| 4.4t/s | 4.4s \| 3.4t/s | 5.5s \| 3.1t/s | 5.3s \| 3.8t/s |
+| Whisper-tiny (balanced) | 5.6s \| 7.9t/s | 7.0s \| 6.3t/s | 8.3s \| 5.0t/s | 8.4s \| 6.7t/s | 7.7s \| 7.2t/s |
+| Whisper-tiny (quality) | 10.3s \| 8.3t/s | 12.6s \| 6.8t/s | 7.8s \| 8.9t/s | 13.5s \| 7.1t/s | 12.9s \| 7.5t/s |
 
 ### Encoding
 
docs/docs/fundamentals/getting-started.md (2 additions, 2 deletions)
@@ -35,7 +35,7 @@ If your app still runs on the old architecture, please consider upgrading to the
 
 ## Installation
 
-Installation is pretty straightforward, just use your favorite package manager. We use React Native Audio API as a peer dependency to make it possible to load audio for Speech To Text.
+Installation is pretty straightforward, just use your favorite package manager.
 
 <Tabs>
 <TabItem value="npm" label="NPM">
@@ -46,7 +46,7 @@ Installation is pretty straightforward, just use your favorite package manager.
 
 </TabItem>
 <TabItem value="pnpm" label="PNPM">
-
+
 ```
 pnpm install react-native-executorch
 ```

docs/versioned_docs/version-0.3.x/benchmarks/inference-time.md (8 additions, 8 deletions)
@@ -69,14 +69,14 @@ Times presented in the tables are measured as consecutive runs of the model. Ini
 
 Notice than for `Whisper` model which has to take as an input 30 seconds audio chunks (for shorter audio it is automatically padded with silence to 30 seconds) `fast` mode has the lowest latency (time from starting transcription to first token returned, caused by streaming algorithm), but the slowest speed. That's why for the lowest latency and the fastest transcription we suggest using `Moonshine` model, if you still want to proceed with `Whisper` use preferably the `balanced` mode.
 
-| Model (mode) | iPhone 16 Pro (XNNPACK) [latency &#124 tokens/s] | iPhone 14 Pro (XNNPACK) [latency &#124 tokens/s] | iPhone SE 3 (XNNPACK) [latency &#124 tokens/s] | Samsung Galaxy S24 (XNNPACK) [latency &#124 tokens/s] | OnePlus 12 (XNNPACK) [latency &#124 tokens/s] |
-| ------------------------- | :----------------------------------------------: | :----------------------------------------------: | :--------------------------------------------: | :---------------------------------------------------: | :-------------------------------------------: |
-| Moonshine-tiny (fast) | 0.8s &#124 19.0t/s | 1.5s &#124 11.3t/s | 1.5s &#124 10.4t/s | 2.0s &#124 8.8t/s | 1.6s &#124 12.5t/s |
-| Moonshine-tiny (balanced) | 2.0s &#124 20.0t/s | 3.2s &#124 12.4t/s | 3.7s &#124 10.4t/s | 4.6s &#124 11.2t/s | 3.4s &#124 14.6t/s |
-| Moonshine-tiny (quality) | 4.3s &#124 16.8t/s | 6.6s &#124 10.8t/s | 8.0s &#124 8.9t/s | 7.7s &#124 11.1t/s | 6.8s &#124 13.1t/s |
-| Whisper-tiny (fast) | 2.8s &#124 5.5t/s | 3.7s &#124 4.4t/s | 4.4s &#124 3.4t/s | 5.5s &#124 3.1t/s | 5.3s &#124 3.8t/s |
-| Whisper-tiny (balanced) | 5.6s &#124 7.9t/s | 7.0s &#124 6.3t/s | 8.3s &#124 5.0t/s | 8.4s &#124 6.7t/s | 7.7s &#124 7.2t/s |
-| Whisper-tiny (quality) | 10.3s &#124 8.3t/s | 12.6s &#124 6.8t/s | 7.8s &#124 8.9t/s | 13.5s &#124 7.1t/s | 12.9s &#124 7.5t/s |
+| Model (mode) | iPhone 16 Pro (XNNPACK) [latency \| tokens/s] | iPhone 14 Pro (XNNPACK) [latency \| tokens/s] | iPhone SE 3 (XNNPACK) [latency \| tokens/s] | Samsung Galaxy S24 (XNNPACK) [latency \| tokens/s] | OnePlus 12 (XNNPACK) [latency \| tokens/s] |
+| ------------------------- | :-------------------------------------------: | :-------------------------------------------: | :-----------------------------------------: | :------------------------------------------------: | :----------------------------------------: |
+| Moonshine-tiny (fast) | 0.8s \| 19.0t/s | 1.5s \| 11.3t/s | 1.5s \| 10.4t/s | 2.0s \| 8.8t/s | 1.6s \| 12.5t/s |
+| Moonshine-tiny (balanced) | 2.0s \| 20.0t/s | 3.2s \| 12.4t/s | 3.7s \| 10.4t/s | 4.6s \| 11.2t/s | 3.4s \| 14.6t/s |
+| Moonshine-tiny (quality) | 4.3s \| 16.8t/s | 6.6s \| 10.8t/s | 8.0s \| 8.9t/s | 7.7s \| 11.1t/s | 6.8s \| 13.1t/s |
+| Whisper-tiny (fast) | 2.8s \| 5.5t/s | 3.7s \| 4.4t/s | 4.4s \| 3.4t/s | 5.5s \| 3.1t/s | 5.3s \| 3.8t/s |
+| Whisper-tiny (balanced) | 5.6s \| 7.9t/s | 7.0s \| 6.3t/s | 8.3s \| 5.0t/s | 8.4s \| 6.7t/s | 7.7s \| 7.2t/s |
+| Whisper-tiny (quality) | 10.3s \| 8.3t/s | 12.6s \| 6.8t/s | 7.8s \| 8.9t/s | 13.5s \| 7.1t/s | 12.9s \| 7.5t/s |
 
 ### Encoding
 
docs/versioned_docs/version-0.4.x/benchmarks/inference-time.md (8 additions, 8 deletions)
@@ -68,14 +68,14 @@ Times presented in the tables are measured as consecutive runs of the model. Ini
 
 Notice than for `Whisper` model which has to take as an input 30 seconds audio chunks (for shorter audio it is automatically padded with silence to 30 seconds) `fast` mode has the lowest latency (time from starting transcription to first token returned, caused by streaming algorithm), but the slowest speed. That's why for the lowest latency and the fastest transcription we suggest using `Moonshine` model, if you still want to proceed with `Whisper` use preferably the `balanced` mode.
 
-| Model (mode) | iPhone 16 Pro (XNNPACK) [latency &#124 tokens/s] | iPhone 14 Pro (XNNPACK) [latency &#124 tokens/s] | iPhone SE 3 (XNNPACK) [latency &#124 tokens/s] | Samsung Galaxy S24 (XNNPACK) [latency &#124 tokens/s] | OnePlus 12 (XNNPACK) [latency &#124 tokens/s] |
-| ------------------------- | :----------------------------------------------: | :----------------------------------------------: | :--------------------------------------------: | :---------------------------------------------------: | :-------------------------------------------: |
-| Moonshine-tiny (fast) | 0.8s &#124 19.0t/s | 1.5s &#124 11.3t/s | 1.5s &#124 10.4t/s | 2.0s &#124 8.8t/s | 1.6s &#124 12.5t/s |
-| Moonshine-tiny (balanced) | 2.0s &#124 20.0t/s | 3.2s &#124 12.4t/s | 3.7s &#124 10.4t/s | 4.6s &#124 11.2t/s | 3.4s &#124 14.6t/s |
-| Moonshine-tiny (quality) | 4.3s &#124 16.8t/s | 6.6s &#124 10.8t/s | 8.0s &#124 8.9t/s | 7.7s &#124 11.1t/s | 6.8s &#124 13.1t/s |
-| Whisper-tiny (fast) | 2.8s &#124 5.5t/s | 3.7s &#124 4.4t/s | 4.4s &#124 3.4t/s | 5.5s &#124 3.1t/s | 5.3s &#124 3.8t/s |
-| Whisper-tiny (balanced) | 5.6s &#124 7.9t/s | 7.0s &#124 6.3t/s | 8.3s &#124 5.0t/s | 8.4s &#124 6.7t/s | 7.7s &#124 7.2t/s |
-| Whisper-tiny (quality) | 10.3s &#124 8.3t/s | 12.6s &#124 6.8t/s | 7.8s &#124 8.9t/s | 13.5s &#124 7.1t/s | 12.9s &#124 7.5t/s |
+| Model (mode) | iPhone 16 Pro (XNNPACK) [latency \| tokens/s] | iPhone 14 Pro (XNNPACK) [latency \| tokens/s] | iPhone SE 3 (XNNPACK) [latency \| tokens/s] | Samsung Galaxy S24 (XNNPACK) [latency \| tokens/s] | OnePlus 12 (XNNPACK) [latency \| tokens/s] |
+| ------------------------- | :-------------------------------------------: | :-------------------------------------------: | :-----------------------------------------: | :------------------------------------------------: | :----------------------------------------: |
+| Moonshine-tiny (fast) | 0.8s \| 19.0t/s | 1.5s \| 11.3t/s | 1.5s \| 10.4t/s | 2.0s \| 8.8t/s | 1.6s \| 12.5t/s |
+| Moonshine-tiny (balanced) | 2.0s \| 20.0t/s | 3.2s \| 12.4t/s | 3.7s \| 10.4t/s | 4.6s \| 11.2t/s | 3.4s \| 14.6t/s |
+| Moonshine-tiny (quality) | 4.3s \| 16.8t/s | 6.6s \| 10.8t/s | 8.0s \| 8.9t/s | 7.7s \| 11.1t/s | 6.8s \| 13.1t/s |
+| Whisper-tiny (fast) | 2.8s \| 5.5t/s | 3.7s \| 4.4t/s | 4.4s \| 3.4t/s | 5.5s \| 3.1t/s | 5.3s \| 3.8t/s |
+| Whisper-tiny (balanced) | 5.6s \| 7.9t/s | 7.0s \| 6.3t/s | 8.3s \| 5.0t/s | 8.4s \| 6.7t/s | 7.7s \| 7.2t/s |
+| Whisper-tiny (quality) | 10.3s \| 8.3t/s | 12.6s \| 6.8t/s | 7.8s \| 8.9t/s | 13.5s \| 7.1t/s | 12.9s \| 7.5t/s |
 
 ### Encoding
 