feat: add bare React Native LLM chat example app (#763)
## Description
Adds a new bare React Native example application demonstrating LLM chat
functionality using ExecuTorch. This example provides a complete
reference implementation for running local LLM inference on mobile
devices through a simple chat interface.
**Key features:**
- Local LLM inference using ExecuTorch
- Simple chat UI with message history
- Model loading and inference pipeline
- Error handling and user feedback
- Compatible with both iOS and Android
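The message-history feature above boils down to a small immutable state model that the chat screen keeps in React state. A minimal sketch in TypeScript — the names (`ChatMessage`, `appendMessage`) are illustrative, not the example app's actual API:

```typescript
// Illustrative message-history model for a chat UI.
// Each turn is tagged with its sender so the UI can render
// user and assistant bubbles differently.

type Role = 'user' | 'assistant';

interface ChatMessage {
  role: Role;
  content: string;
}

// Append a message immutably, as you would before handing the
// updated history to a React state setter (setMessages).
function appendMessage(
  history: ChatMessage[],
  role: Role,
  content: string
): ChatMessage[] {
  return [...history, { role, content }];
}

let history: ChatMessage[] = [];
history = appendMessage(history, 'user', 'Hello!');
history = appendMessage(history, 'assistant', 'Hi, how can I help?');
console.log(history.length); // 2
```

Keeping the history immutable is what lets React detect the state change and re-render the message list after each inference response.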
### Introduces a breaking change?
- [ ] Yes
- [x] No
### Type of change
- [ ] Bug fix (change which fixes an issue)
- [x] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing
documentation)
- [ ] Other (chores, tests, code style improvements etc.)
### Tested on
- [x] iOS
- [x] Android
### Testing instructions
1. Navigate to the example directory:
`cd apps/bare_rn`
2. Install dependencies:
`yarn install`
3. Run on iOS:
`npx pod-install`
`yarn ios`
Or run on Android:
`yarn android`
4. Verify the app launches and displays the chat interface
5. Test message sending and model inference (requires model file setup)
### Screenshots
<img width="1040" height="1037" alt="Screenshot 2026-01-28 at 02 06 29" src="https://github.com/user-attachments/assets/3774f4d2-ccc0-414b-85e7-3e26b06249ad" />
### Related issues
This PR provides an example app for PR #759
### Checklist
- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings
### Additional notes
This example app was generated using `npx
@react-native-community/cli@latest init bare_rn --version 0.81.5 --pm
yarn` and follows the bare React Native project structure (not Expo). It
serves as a foundational example for developers to understand how to
integrate ExecuTorch for on-device LLM inference in their React Native
applications.
**Why this example is not included in the yarn workspace:**
The bare React Native example is maintained outside the yarn workspace
structure due to fundamental architectural differences and specific
technical issues:
1. Native Module Resolution Issues with Background Downloader:
Using the workspace monorepo breaks the Android app's integration with
`@kesha-antonov/react-native-background-downloader`. The monorepo's
package hoisting and workspace resolution interferes with the native
module's ability to properly register and resolve its native components.
This causes runtime failures when attempting to download AI models in
the background, which is a critical feature for this LLM chat example.
2. Dependency Isolation: Bare React Native projects have distinct native
dependency chains (iOS Pods, Android Gradle) that conflict with the
monorepo's package management. The monorepo uses workspaces and hoisting
strategies optimized for JavaScript/TypeScript packages, which can
interfere with native module resolution.
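For context on the hoisting behavior described above: Yarn Classic workspaces can exempt individual packages from hoisting via a `nohoist` entry in the root `package.json`. A hypothetical fragment showing what that would look like for the affected native module (this repo instead keeps the example outside the workspace entirely, so this is not the configuration actually used):

```json
{
  "private": true,
  "workspaces": {
    "packages": ["packages/*", "apps/*"],
    "nohoist": [
      "**/@kesha-antonov/react-native-background-downloader",
      "**/@kesha-antonov/react-native-background-downloader/**"
    ]
  }
}
```

`nohoist` keeps the package's files inside the app's own `node_modules`, which is where iOS Pod and Android Gradle autolinking expect to find native modules.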
---------
Co-authored-by: Mateusz Słuszniak <mateusz.sluszniak@swmansion.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Mateusz Sluszniak <56299341+msluszniak@users.noreply.github.com>
Co-authored-by: Norbert Klockiewicz <Nklockiewicz12@gmail.com>
- [React Native ExecuTorch is created by Software Mansion](#react-native-executorch-is-created-by-software-mansion)

</details>

## Supported Versions

The minimal supported versions are:

* iOS 17.0

> [!IMPORTANT]
> React Native ExecuTorch supports only the [New React Native architecture](https://reactnative.dev/architecture/landing-page).

## Real-world Example

React Native ExecuTorch is powering [Private Mind](https://privatemind.swmansion.com/), a privacy-first mobile AI app available on [App Store](https://apps.apple.com/gb/app/private-mind/id6746713439) and [Google Play](https://play.google.com/store/apps/details?id=com.swmansion.privatemind).

<img width="2720" height="1085" alt="Private Mind promo" src="https://github.com/user-attachments/assets/b12296fe-19ac-48fc-9726-da9242700346" />

## Quickstart - Running Llama

**Get started with AI-powered text generation in 3 easy steps!**

We currently host a few example [apps](https://github.com/software-mansion/react-native-executorch/tree/main/apps) demonstrating use cases of our library:

- `llm` - Chat application showcasing use of LLMs
- `speech` - Speech to Text & Text to Speech task implementations
- `computer-vision` - Computer vision related tasks
- `text-embeddings` - Computing text representations for semantic search
- `bare_rn` - LLM chat example for bare React Native (without Expo)

If you would like to run a demo app, navigate to its project directory. Then install dependencies and run the app with:

```bash
yarn && yarn < ios | android >
```

> [!WARNING]
> Running LLMs requires a significant amount of RAM. If you are encountering unexpected app crashes, try increasing the amount of RAM allocated to the emulator.

## Ready-made Models

Our library has a number of ready-to-use AI models; a complete list is available in the documentation. If you're interested in running your own AI model, you need to first export it to the `.pte` format. Instructions on how to do this are available in the [Python API](https://docs.pytorch.org/executorch/stable/using-executorch-export.html) and [optimum-executorch README](https://github.com/huggingface/optimum-executorch?tab=readme-ov-file#option-2-export-and-load-separately).

## Documentation

Check out how our library can help you build your React Native AI features by visiting our docs:

This library is licensed under [The MIT License](./LICENSE).

## What's Next?

To learn about our upcoming plans and developments, please visit our [milestones](https://github.com/software-mansion/react-native-executorch/milestones).