Commit e8b05c1

Add docs versioning (#368)

## Description

Add docs versioning

### Type of change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Documentation update (improves or adds clarity to existing documentation)

### Checklist

- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings

1 parent b3ae98a commit e8b05c1

84 files changed: 11268 additions & 3971 deletions


docs/docusaurus.config.js

Lines changed: 5 additions & 0 deletions

```diff
@@ -71,6 +71,11 @@ const config = {
         label: 'Docs',
         position: 'right',
       },
+      {
+        type: 'docsVersionDropdown',
+        position: 'right',
+        dropdownActiveClassDisabled: true,
+      },
       {
         'href': 'https://discord.gg/ZGqqY55qkP',
         'position': 'right',
```
Lines changed: 7 additions & 0 deletions

```json
{
  "label": "Fundamentals",
  "position": 1,
  "link": {
    "type": "generated-index"
  }
}
```
Lines changed: 66 additions & 0 deletions

---
title: Getting Started
sidebar_position: 1
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## What is ExecuTorch?

ExecuTorch is an AI framework developed by Meta, designed to streamline deploying PyTorch models on a variety of devices, including mobile phones and microcontrollers. The framework lets you export models into standalone binaries that run locally, without requiring API calls. ExecuTorch achieves state-of-the-art performance through optimizations and delegates such as Core ML and XNNPACK. It provides a seamless export process with robust debugging options, making it easier to resolve issues if they arise.

## React Native ExecuTorch

React Native ExecuTorch is our way of bringing ExecuTorch into the React Native world. Our API is built to be simple, declarative, and efficient. Plus, we provide a set of pre-exported models for common use cases, so you won't have to worry about handling exports yourself. With just a few lines of JavaScript, you'll be able to run AI models (even LLMs 👀) right on your device, keeping user data private and saving on cloud costs.

## Installation

Installation is straightforward; just use your favorite package manager.

<Tabs>
<TabItem value="npm" label="NPM">

npm install react-native-executorch

</TabItem>
<TabItem value="pnpm" label="PNPM">

pnpm install react-native-executorch

</TabItem>
<TabItem value="yarn" label="YARN">

yarn add react-native-executorch

</TabItem>
</Tabs>

If you plan on adding your models to the assets instead of fetching them from a URL, you also need to add the following lines to your `metro.config.js`:

```js
// metro.config.js
...
defaultConfig.resolver.assetExts.push('pte');
defaultConfig.resolver.assetExts.push('bin');
...
```

This allows us to use binaries, such as exported models or tokenizers for LLMs.
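For reference, here is what a complete `metro.config.js` might look like in an Expo project. This is a sketch that assumes the standard `expo/metro-config` setup; adapt it to however your project obtains its default Metro config:

```javascript
// metro.config.js: hypothetical Expo setup, assuming expo/metro-config
const { getDefaultConfig } = require('expo/metro-config');

const defaultConfig = getDefaultConfig(__dirname);

// Register the binary extensions used by react-native-executorch so Metro
// treats exported models (.pte) and tokenizers (.bin) as bundleable assets.
defaultConfig.resolver.assetExts.push('pte');
defaultConfig.resolver.assetExts.push('bin');

module.exports = defaultConfig;
```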
:::caution[Caution]
When using Expo, please note that you need to use a custom development build of your app, not the standard Expo Go app. This is because we rely on native modules, which Expo Go doesn't support.
:::

:::info[Info]
Because we are using ExecuTorch under the hood, you won't be able to build an iOS app for release with a simulator selected as the target device. Make sure to test release builds on real devices.
:::

Running the app with the library:

```bash
yarn run expo:<ios | android> -d
```

## Good reads

If you want to dive deeper into ExecuTorch or our previous work with the framework, we highly encourage you to check out the following resources:

- [ExecuTorch docs](https://pytorch.org/executorch/stable/index.html)
- [Native code for iOS](https://medium.com/swmansion/bringing-native-ai-to-your-mobile-apps-with-executorch-part-i-ios-f1562a4556e8?source=user_profile_page---------0-------------250189c98ccf---------------)
- [Native code for Android](https://medium.com/swmansion/bringing-native-ai-to-your-mobile-apps-with-executorch-part-ii-android-29431b6b9f7f?source=user_profile_page---------2-------------b8e3a5cb1c63---------------)
- [Exporting to Android with XNNPACK](https://medium.com/swmansion/exporting-ai-models-on-android-with-xnnpack-and-executorch-3e70cff51c59?source=user_profile_page---------1-------------b8e3a5cb1c63---------------)
Lines changed: 7 additions & 0 deletions

```json
{
  "label": "Guides",
  "position": 2,
  "link": {
    "type": "generated-index"
  }
}
```
Lines changed: 40 additions & 0 deletions

---
title: Exporting Llama
sidebar_position: 2
---

To make the export process as simple as possible, we created a script that runs a Docker container and exports the model.

## Steps to export Llama

### 1. Create an Account
Create a [Hugging Face](https://huggingface.co/) account. This will allow you to download the needed files. You can also use the [official Llama website](https://www.llama.com/llama-downloads/).

### 2. Select a Model
Pick the model that suits your needs. Before you can download it, you'll need to accept a license. For best performance, we recommend using the Spin-Quant or QLoRA versions of the model:

- [Llama 3.2 3B](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/tree/main/original)
- [Llama 3.2 1B](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct/tree/main/original)
- [Llama 3.2 3B Spin-Quant](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct-SpinQuant_INT4_EO8/tree/main)
- [Llama 3.2 1B Spin-Quant](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct-SpinQuant_INT4_EO8/tree/main)
- [Llama 3.2 3B QLoRA](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct-QLORA_INT4_EO8/tree/main)
- [Llama 3.2 1B QLoRA](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct-QLORA_INT4_EO8/tree/main)

### 3. Download Files
Download the `consolidated.00.pth`, `params.json` and `tokenizer.model` files. If you can't see them, make sure to check the `original` directory.

### 4. Rename the Tokenizer File
Rename the `tokenizer.model` file to `tokenizer.bin`, as required by the library:

```bash
mv tokenizer.model tokenizer.bin
```

### 5. Run the Export Script
Navigate to the `llama_export` directory and run the following command:

```bash
./build_llama_binary.sh --model-path /path/to/consolidated.00.pth --params-path /path/to/params.json
```

The script pulls a Docker image from Docker Hub and then runs it to export the model. By default, the output (the `llama3_2.pte` file) is saved in the `llama_export/outputs` directory. You can override that behavior with the `--output-path [path]` flag.

:::note[Note]
This Docker image was tested on macOS with an ARM chip. It might not work in other environments.
:::
Lines changed: 120 additions & 0 deletions

---
title: Running LLMs
sidebar_position: 1
---

React Native ExecuTorch supports Llama 3.2 models, including quantized versions. Before getting started, you'll need to obtain the `.pte` binary (a serialized model) and the tokenizer. There are various ways to accomplish this:

- For your convenience, it's best to use the models exported by us. You can get them from our Hugging Face repository, or use the [constants](https://github.com/software-mansion/react-native-executorch/tree/main/src/constants/modelUrls.ts) shipped with our library.
- If you want to export the model yourself, you can use a Docker image that we've prepared. To see how it works, check out [exporting Llama](./exporting-llama.mdx).
- Follow the official [tutorial](https://github.com/pytorch/executorch/blob/fe20be98c/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md) made by the ExecuTorch team to build the model and tokenizer yourself.

## Initializing

To load a model into the app, run the following code:

```typescript
import { useLLM, LLAMA3_2_1B_URL } from 'react-native-executorch';

const llama = useLLM({
  modelSource: LLAMA3_2_1B_URL,
  tokenizer: require('../assets/tokenizer.bin'),
  contextWindowLength: 3,
});
```

The code snippet above fetches the model from the specified URL, loads it into memory, and returns an object with various methods and properties for controlling the model. You can monitor the loading progress by checking the `llama.downloadProgress` and `llama.isModelReady` properties, and if anything goes wrong, the `llama.error` property will contain the error message.

:::danger[Danger]
Lower-end devices might not be able to fit LLMs into memory. We recommend using quantized models to reduce the memory footprint.
:::

:::caution[Caution]
Given computational constraints, our architecture is designed to support only one instance of the model runner at a time. Consequently, you can have only one active component leveraging `useLLM` concurrently.
:::
### Arguments

**`modelSource`** - A string that specifies the location of the model binary. For more information, take a look at the [loading models](#loading-models) section.

**`tokenizer`** - URL to the binary file that contains the tokenizer.

**`contextWindowLength`** - The number of messages from the current conversation that the model will use to generate a response. The higher the number, the more context the model will have. Keep in mind that larger context windows result in longer inference times and higher memory usage.

**`systemPrompt`** - Often used to tell the model what its purpose is, for example: "Be a helpful translator".
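The effect of `contextWindowLength` can be illustrated with a small sketch. This is only an illustration of the trimming idea, not the library's internal implementation:

```typescript
interface Message {
  role: 'user' | 'assistant';
  content: string;
}

// Keep only the most recent `contextWindowLength` messages when building the
// prompt; older messages fall out of the window. Illustrative sketch only.
function trimToContextWindow(
  history: Message[],
  contextWindowLength: number
): Message[] {
  if (contextWindowLength <= 0) return [];
  return history.slice(-contextWindowLength);
}
```

With `contextWindowLength: 3`, only the last three messages of the conversation are fed back to the model, which keeps inference time and memory usage bounded.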
### Returns

| Field | Type | Description |
| ------------------- | ---------------------------------- | --------------------------------------------------------------------------------------------------------------- |
| `generate` | `(input: string) => Promise<void>` | Function to start generating a response with the given input string. |
| `response` | `string` | State of the generated response. This field is updated with each token generated by the model. |
| `error` | <code>string &#124; null</code> | Contains the error message if the model failed to load. |
| `isModelGenerating` | `boolean` | Indicates whether the model is currently generating a response. |
| `interrupt` | `() => void` | Function to interrupt the current inference. |
| `isModelReady` | `boolean` | Indicates whether the model is ready. |
| `downloadProgress` | `number` | Represents the download progress as a value between 0 and 1, indicating the extent of the model file retrieval. |
### Loading models

There are three methods available for loading the model and tokenizer files, depending on their size and location.

**1. Load from the React Native assets folder (for files < 512MB):**

```typescript
modelSource: require('../assets/llama3_2.pte');
```

**2. Load from a remote URL:**

For files larger than 512MB, or when you want to keep the size of the app smaller, you can load the model from a remote URL (e.g. Hugging Face).

```typescript
modelSource: 'https://.../llama3_2.pte';
```

**3. Load from the local file system:**

If you prefer to delegate the process of obtaining and loading the model and tokenizer files to the user, you can use the following method:

```typescript
modelSource: 'file:///var/mobile/.../llama3_2.pte';
```

:::info[Info]
The downloaded files are stored in the documents directory of your application.
:::
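The three options above can be wrapped in a small helper. This is a hypothetical convenience function, not part of the library's API; the preference order and placeholder paths are assumptions for illustration:

```typescript
// A React Native require() call resolves bundled assets to a numeric id,
// so a model source is either a string URI/URL or that number.
type ModelSource = string | number;

// Prefer a bundled asset (files < 512MB), then a local file, then a remote
// URL. Hypothetical helper; adapt the priority order to your app's needs.
function pickModelSource(opts: {
  bundled?: number;
  localPath?: string;
  remoteUrl?: string;
}): ModelSource {
  if (opts.bundled !== undefined) return opts.bundled;
  if (opts.localPath) return `file://${opts.localPath}`;
  if (opts.remoteUrl) return opts.remoteUrl;
  throw new Error('No model source provided');
}
```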
### Sending a message

To send a message to the model, use the following code:

```typescript
const llama = useLLM({
  modelSource: LLAMA3_2_1B_URL,
  tokenizer: require('../assets/tokenizer.bin'),
});

...
const message = 'Hi, who are you?';
await llama.generate(message);
...
```
### Listening for the response

As you might've noticed, the `generate` function has no return value. Instead, the `.response` field of the model is updated with each token.
This is how you can render the model's response:

```typescript
...
return (
  <Text>{llama.response}</Text>
)
```

Behind the scenes, tokens are generated one by one, and the `response` property is updated with each token as it's created. This means that the text component re-renders whenever `llama.response` gets updated.

Sometimes you might want to stop the model while it's generating. To do this, you can use `interrupt()`, which halts the model and appends the current response to its internal conversation state.

There are also cases when you need to check whether tokens are being generated, for example to conditionally render a stop button. We've made this easy with the `isModelGenerating` property.
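The token-by-token update pattern described above can be sketched in plain TypeScript. Here, `onUpdate` stands in for a React state setter; this is an illustration of the pattern, not the library's actual code:

```typescript
// Simulates how `response` grows as tokens arrive. In the real hook, each
// update would trigger a re-render of <Text>{llama.response}</Text>.
function streamTokens(
  tokens: string[],
  onUpdate: (partial: string) => void
): string {
  let response = '';
  for (const token of tokens) {
    response += token;
    onUpdate(response); // each call represents one UI update
  }
  return response;
}
```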
Lines changed: 7 additions & 0 deletions

```json
{
  "label": "Computer Vision",
  "position": 3,
  "link": {
    "type": "generated-index"
  }
}
```
Lines changed: 102 additions & 0 deletions

---
title: useClassification
sidebar_position: 1
---

Image classification is the process of assigning a label to an image that best describes its contents. For example, when given an image of a puppy, the image classifier should assign the puppy class to that image.

:::info
Usually, the class with the highest probability is the one assigned to the image. However, if there are multiple classes with comparatively high probabilities, this may indicate that the model is not confident in its prediction.
:::

:::caution
It is recommended to use the models provided by us, which are available in our [Hugging Face repository](https://huggingface.co/software-mansion/react-native-executorch-efficientnet-v2-s). You can also use the [constants](https://github.com/software-mansion/react-native-executorch/tree/main/src/constants/modelUrls.ts) shipped with our library.
:::
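The confidence caveat from the info box above can be made concrete with a small helper that compares the top class against the runner-up. The `margin` threshold is an arbitrary illustrative value, not something the library defines:

```typescript
// Returns true when the top class clearly dominates the runner-up.
// The default margin of 0.2 is an arbitrary example threshold.
function isConfident(
  probabilities: { [category: string]: number },
  margin: number = 0.2
): boolean {
  const sorted = Object.values(probabilities).sort((a, b) => b - a);
  if (sorted.length < 2) return sorted.length === 1;
  return sorted[0] - sorted[1] >= margin;
}
```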
## Reference

```typescript
import { useClassification, EFFICIENTNET_V2_S } from 'react-native-executorch';

const model = useClassification({
  modelSource: EFFICIENTNET_V2_S,
});

const imageUri = 'file:///Users/.../cute_puppy.png';

try {
  const classesWithProbabilities = await model.forward(imageUri);
} catch (error) {
  console.error(error);
}
```

<details>
<summary>Type definitions</summary>

```typescript
interface ClassificationModule {
  error: string | null;
  isReady: boolean;
  isGenerating: boolean;
  forward: (input: string) => Promise<{ [category: string]: number }>;
}
```

</details>

### Arguments

**`modelSource`**
A string that specifies the location of the model binary. For more information, take a look at the [loading models](../fundamentals/loading-models.md) page.

### Returns

| Field          | Type                                                         | Description                                                                                               |
| -------------- | ------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------- |
| `forward`      | `(input: string) => Promise<{ [category: string]: number }>` | Executes the model's forward pass, where `input` can be a fetchable resource or a Base64-encoded string.   |
| `error`        | <code>string &#124; null</code>                              | Contains the error message if the model failed to load.                                                    |
| `isGenerating` | `boolean`                                                    | Indicates whether the model is currently processing an inference.                                          |
| `isReady`      | `boolean`                                                    | Indicates whether the model has successfully loaded and is ready for inference.                            |

## Running the model

To run the model, use the `forward` method. It accepts one argument: the image, which can be a remote URL, a local file URI, or a Base64-encoded image. The returned promise resolves to an object mapping categories to their probabilities, or rejects with an error.

:::info[Info]
Images from external sources are stored in your application's temporary directory.
:::

## Example

```typescript
import { useClassification, EFFICIENTNET_V2_S } from 'react-native-executorch';

function App() {
  const model = useClassification({
    modelSource: EFFICIENTNET_V2_S,
  });

  ...
  const imageUri = 'file:///Users/.../cute_puppy.png';

  try {
    const classesWithProbabilities = await model.forward(imageUri);

    // Extract the three classes with the highest probabilities
    const topThreeClasses = Object.entries(classesWithProbabilities)
      .sort(([, a], [, b]) => b - a)
      .slice(0, 3)
      .map(([label, score]) => ({ label, score }));
  } catch (error) {
    console.error(error);
  }
  ...
}
```

## Supported models

| Model                                                                                                           | Number of classes | Class list                                                                                                                                                                 |
| --------------------------------------------------------------------------------------------------------------- | ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [efficientnet_v2_s](https://pytorch.org/vision/0.20/models/generated/torchvision.models.efficientnet_v2_s.html) | 1000              | [ImageNet1k_v1](https://github.com/software-mansion/react-native-executorch/blob/main/android/src/main/java/com/swmansion/rnexecutorch/models/classification/Constants.kt) |
