Commit 8de5b11

docs: Remove no longer valid warning in useLLM hook doc (#608)
## Description

Updates the warning in the versioned doc and removes the warning from the current version about using multiple instances of the model runner in the `useLLM` hook.

### Introduces a breaking change?

- [ ] Yes
- [x] No

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [x] Documentation update (improves or adds clarity to existing documentation)
- [ ] Other (chores, tests, code style improvements etc.)

### Tested on

- [ ] iOS
- [ ] Android

### Testing instructions

Build the documentation and check that the information is correct.

### Related issues

Closes #607

### Checklist

- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [ ] My changes generate no new warnings
1 parent 178a8d4 commit 8de5b11

2 files changed

Lines changed: 1 addition & 5 deletions

File tree

  • docs
    • docs/02-hooks/01-natural-language-processing
    • versioned_docs/version-0.5.x/02-hooks/01-natural-language-processing

docs/docs/02-hooks/01-natural-language-processing/useLLM.md

Lines changed: 0 additions & 4 deletions

@@ -30,10 +30,6 @@ React Native ExecuTorch supports a variety of LLMs (checkout our [HuggingFace repository])
 Lower-end devices might not be able to fit LLMs into memory. We recommend using quantized models to reduce the memory footprint.
 :::

-:::caution
-Given computational constraints, our architecture is designed to support only one instance of the model runner at the time. Consequently, this means you can have only one active component leveraging `useLLM` concurrently.
-:::
-
 ## Initializing

 In order to load a model into the app, you need to run the following code:

docs/versioned_docs/version-0.5.x/02-hooks/01-natural-language-processing/useLLM.md

Lines changed: 1 addition & 1 deletion

@@ -31,7 +31,7 @@ Lower-end devices might not be able to fit LLMs into memory. We recommend using quantized models to reduce the memory footprint.
 :::

 :::caution
-Given computational constraints, our architecture is designed to support only one instance of the model runner at the time. Consequently, this means you can have only one active component leveraging `useLLM` concurrently.
+Up to version 0.5.3, our architecture was designed to support only one instance of the model runner at a time. As a consequence, only one active component could leverage `useLLM` concurrently. Starting with version 0.5.3, this limitation has been removed.
 :::

 ## Initializing
