Commit 63cdb16

chore: make docs folder structure resemble the one that appears on the website (#454)
## Description

Changes:

- Updated the folder structure; it now reflects a 1:1 match between the order of files in the project directory and their appearance on the website, as described in [Docusaurus sidebar - using number prefixes](https://docusaurus.io/docs/next/sidebar/autogenerated#using-number-prefixes).
- Adjusted links to point to the correct locations in our repository.

### Type of change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Documentation update (improves or adds clarity to existing documentation)

### Checklist

- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings
1 parent b74968d commit 63cdb16
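The renames in this commit rely on the Docusaurus convention mentioned in the description: a leading `NN-` prefix on a folder or file name orders the sidebar but is stripped from the generated doc IDs and URL paths. A minimal sketch of that stripping rule (illustrative only, not Docusaurus's actual implementation):

```python
import re


def strip_number_prefix(segment: str) -> str:
    """Drop a Docusaurus-style number prefix like '01-' from one path segment."""
    return re.sub(r"^\d+-", "", segment)


def doc_url(path: str) -> str:
    """Map a docs file path to the URL path it would get on the website (illustrative)."""
    segments = path.removesuffix(".md").split("/")
    return "/" + "/".join(strip_number_prefix(s) for s in segments)


# The renamed file keeps its old URL because the prefixes are stripped.
print(doc_url("01-fundamentals/02-loading-models.md"))  # /fundamentals/loading-models
```

This is why the commit can reorder the sidebar without breaking published URLs: only the on-disk names gain prefixes.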

44 files changed

Lines changed: 148 additions & 192 deletions

File renamed without changes.

docs/docs/fundamentals/loading-models.md renamed to docs/docs/01-fundamentals/02-loading-models.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: Loading models
+title: Loading Models
 ---

 There are three different methods available for loading model files, depending on their size and location.

docs/docs/faq/frequently-asked-questions.md renamed to docs/docs/01-fundamentals/03-frequently-asked-questions.md

Lines changed: 3 additions & 3 deletions
@@ -1,5 +1,5 @@
 ---
-title: Frequently asked questions
+title: Frequently Asked Questions
 ---

 This section is meant to answer some common community inquiries, especially regarding the ExecuTorch runtime or adding your own models. If you can't see an answer to your question, feel free to open up a [discussion](https://github.com/software-mansion/react-native-executorch/discussions/new/choose).

@@ -10,11 +10,11 @@ Each hook documentation subpage (useClassification, useLLM, etc.) contains a sup

 ### How can I run my own AI model?

-To run your own model, you need to directly access the underlying [ExecuTorch Module API](https://pytorch.org/executorch/stable/extension-module.html). We provide an experimental [React hook](../executorch-bindings/useExecutorchModule.md) along with a [TypeScript alternative](../typescript-api/ExecutorchModule.md), which serve as a way to use the aforementioned API without the need of diving into native code. In order to get a model in a format runnable by the runtime, you'll need to get your hands dirty with some ExecuTorch knowledge. For more guides on exporting models, please refer to the [ExecuTorch tutorials](https://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html). Once you obtain your model in a `.pte` format, you can run it with `useExecuTorchModule` and `ExecuTorchModule`.
+To run your own model, you need to directly access the underlying [ExecuTorch Module API](https://pytorch.org/executorch/stable/extension-module.html). We provide an experimental [React hook](../02-hooks/03-executorch-bindings/useExecutorchModule.md) along with a [TypeScript alternative](../03-typescript-api/03-executorch-bindings/ExecutorchModule.md), which serve as a way to use the aforementioned API without the need of diving into native code. In order to get a model in a format runnable by the runtime, you'll need to get your hands dirty with some ExecuTorch knowledge. For more guides on exporting models, please refer to the [ExecuTorch tutorials](https://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html). Once you obtain your model in a `.pte` format, you can run it with `useExecuTorchModule` and `ExecuTorchModule`.

 ### Can you do function calling with useLLM?

-If your model supports tool calling (i.e. its chat template can process tools) you can use the method explained in [useLLM page](../natural-language-processing/useLLM.md#tool-calling).
+If your model supports tool calling (i.e. its chat template can process tools) you can use the method explained on the [useLLM page](../02-hooks/01-natural-language-processing/useLLM.md).

 If your model doesn't support it, you can still work around it using context. For details, refer to [this comment](https://github.com/software-mansion/react-native-executorch/issues/173#issuecomment-2775082278).

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+{
+  "label": "Fundamentals",
+  "link": {
+    "type": "generated-index"
+  }
+}
Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+{
+  "label": "Natural Language Processing",
+  "link": {
+    "type": "generated-index"
+  }
+}

docs/docs/natural-language-processing/useLLM.md renamed to docs/docs/02-hooks/01-natural-language-processing/useLLM.md

Lines changed: 2 additions & 2 deletions
@@ -23,7 +23,7 @@ description: "Learn how to use LLMs in your React Native applications with React

 React Native ExecuTorch supports a variety of LLMs (checkout our [HuggingFace repository](https://huggingface.co/software-mansion) for model already converted to ExecuTorch format) including Llama 3.2. Before getting started, you’ll need to obtain the .pte binary—a serialized model, the tokenizer and tokenizer config JSON files. There are various ways to accomplish this:

-- For your convenience, it's best if you use models exported by us, you can get them from our [HuggingFace repository](https://huggingface.co/software-mansion). You can also use [constants](https://github.com/software-mansion/react-native-executorch/tree/main/src/constants/modelUrls.ts) shipped with our library.
+- For your convenience, it's best if you use models exported by us, you can get them from our [HuggingFace repository](https://huggingface.co/software-mansion). You can also use [constants](https://github.com/software-mansion/react-native-executorch/blob/main/packages/react-native-executorch/src/constants/modelUrls.ts) shipped with our library.
 - Follow the official [tutorial](https://github.com/pytorch/executorch/blob/fe20be98c/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md) made by ExecuTorch team to build the model and tokenizer yourself.

 :::danger

@@ -59,7 +59,7 @@ The code snippet above fetches the model from the specified URL, loads it into m

 ### Arguments

-**`modelSource`** - `ResourceSource` that specifies the location of the model binary. For more information, take a look at [loading models](../fundamentals/loading-models.md) section.
+**`modelSource`** - `ResourceSource` that specifies the location of the model binary. For more information, take a look at [loading models](../../01-fundamentals/02-loading-models.md) section.

 **`tokenizerSource`** - `ResourceSource` pointing to the JSON file which contains the tokenizer.

docs/docs/natural-language-processing/useSpeechToText.md renamed to docs/docs/02-hooks/01-natural-language-processing/useSpeechToText.md

Lines changed: 5 additions & 5 deletions
@@ -25,7 +25,7 @@ Currently, we do not support direct microphone input streaming to the model. Ins
 :::

 :::caution
-It is recommended to use models provided by us, which are available at our [Hugging Face repository](https://huggingface.co/software-mansion/react-native-executorch-moonshine-tiny). You can also use [constants](https://github.com/software-mansion/react-native-executorch/tree/main/src/constants/modelUrls.ts) shipped with our library.
+It is recommended to use models provided by us, which are available at our [Hugging Face repository](https://huggingface.co/software-mansion/react-native-executorch-moonshine-tiny). You can also use [constants](https://github.com/software-mansion/react-native-executorch/blob/main/packages/react-native-executorch/src/constants/modelUrls.ts) shipped with our library.
 :::

 ## Reference

@@ -72,13 +72,13 @@ Given that STT models can process audio no longer than 30 seconds, there is a ne
 A literal of `"moonshine" | "whisper" | "whisperMultilingual` which serves as an identifier for which model should be used.

 **`encoderSource?`**
-A string that specifies the location of a .pte file for the encoder. For further information on passing model sources, check out [Loading Models](https://docs.swmansion.com/react-native-executorch/docs/fundamentals/loading-models). Defaults to [constants](https://github.com/software-mansion/react-native-executorch/blob/main/src/constants/modelUrls.ts) for given model.
+A string that specifies the location of a .pte file for the encoder. For further information on passing model sources, check out [Loading Models](../../01-fundamentals/02-loading-models.md). Defaults to [constants](https://github.com/software-mansion/react-native-executorch/blob/main/packages/react-native-executorch/src/constants/modelUrls.ts) for given model.

 **`decoderSource?`**
-Analogous to the encoderSource, this takes in a string which is a source for the decoder part of the model. Defaults to [constants](https://github.com/software-mansion/react-native-executorch/blob/main/src/constants/modelUrls.ts) for given model.
+Analogous to the encoderSource, this takes in a string which is a source for the decoder part of the model. Defaults to [constants](https://github.com/software-mansion/react-native-executorch/blob/main/packages/react-native-executorch/src/constants/modelUrls.ts) for given model.

 **`tokenizerSource?`**
-A string that specifies the location to the tokenizer for the model. This works just as the encoder and decoder do. Defaults to [constants](https://github.com/software-mansion/react-native-executorch/blob/main/src/constants/modelUrls.ts) for given model.
+A string that specifies the location to the tokenizer for the model. This works just as the encoder and decoder do. Defaults to [constants](https://github.com/software-mansion/react-native-executorch/blob/main/packages/react-native-executorch/src/constants/modelUrls.ts) for given model.

 **`overlapSeconds?`**
 Specifies the length of overlap between consecutive audio chunks (expressed in seconds). Overrides `streamingConfig` argument.

@@ -197,7 +197,7 @@ enum SpeechToTextLanguage {

 ## Running the model

-Before running the model's `transcribe` method be sure to obtain waveform of the audio You wish to transcribe. You need to obtain the waveform from audio on your own (remember to use sampling rate of 16kHz!), in the snippet above we provide an example how you can do that. In the latter case just pass the obtained waveform as argument to the `transcribe` method which returns a promise resolving to the generated tokens when successful. If the model fails during inference the `error` property contains details of the error. If you want to obtain tokens in a streaming fashion, you can also use the sequence property, which is updated with each generated token, similar to the [useLLM](../natural-language-processing/useLLM.md) hook.
+Before running the model's `transcribe` method be sure to obtain waveform of the audio You wish to transcribe. You need to obtain the waveform from audio on your own (remember to use sampling rate of 16kHz!), in the snippet above we provide an example how you can do that. In the latter case just pass the obtained waveform as argument to the `transcribe` method which returns a promise resolving to the generated tokens when successful. If the model fails during inference the `error` property contains details of the error. If you want to obtain tokens in a streaming fashion, you can also use the sequence property, which is updated with each generated token, similar to the [useLLM](../../02-hooks/01-natural-language-processing/useLLM.md) hook.

 #### Multilingual transcription
docs/docs/natural-language-processing/useTextEmbeddings.md renamed to docs/docs/02-hooks/01-natural-language-processing/useTextEmbeddings.md

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@ description: "Learn how to use text embeddings models in your React Native appli
 Text Embedding is the process of converting text into a numerical representation. This representation can be used for various natural language processing tasks, such as semantic search, text classification, and clustering.

 :::caution
-It is recommended to use models provided by us, which are available at our [Hugging Face repository](https://huggingface.co/software-mansion/react-native-executorch-all-MiniLM-L6-v2). You can also use [constants](https://github.com/software-mansion/react-native-executorch/tree/main/src/constants/modelUrls.ts) shipped with our library.
+It is recommended to use models provided by us, which are available at our [Hugging Face repository](https://huggingface.co/software-mansion/react-native-executorch-all-MiniLM-L6-v2). You can also use [constants](https://github.com/software-mansion/react-native-executorch/blob/main/packages/react-native-executorch/src/constants/modelUrls.ts) shipped with our library.
 :::

 ## Reference

@@ -45,7 +45,7 @@ try {
 ### Arguments

 **`modelSource`**
-A string that specifies the location of the model binary. For more information, take a look at [loading models](../fundamentals/loading-models.md) page.
+A string that specifies the location of the model binary. For more information, take a look at [loading models](../../01-fundamentals/02-loading-models.md) page.

 **`tokenizerSource`**
 A string that specifies the location of the tokenizer JSON file.

docs/docs/natural-language-processing/useTokenizer.md renamed to docs/docs/02-hooks/01-natural-language-processing/useTokenizer.md

File renamed without changes.
Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+{
+  "label": "Computer Vision",
+  "link": {
+    "type": "generated-index"
+  }
+}
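The `_category_.json` files added in this commit use Docusaurus's `generated-index` link type, which renders an auto-generated index page listing the category's docs. For context, a category file can also pin its sidebar position and describe the generated index; a sketch (values illustrative, field names from the Docusaurus autogenerated-sidebar docs):

```json
{
  "label": "Computer Vision",
  "position": 3,
  "link": {
    "type": "generated-index",
    "description": "Hooks for running vision models on-device."
  }
}
```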
