Commit c680266

docs: Update docs (#154)
## Description

- Changed "two examples" to "few examples" so the sentence stays accurate; "few" does not need updating as more example apps are added.
- Changed a method name in the `useLLM` docs: `runInference` appears to be an old name for `generate` that was never updated.

### Type of change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Documentation update (improves or adds clarity to existing documentation)
1 parent 87d0fec commit c680266

2 files changed: 2 additions & 2 deletions

README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -80,7 +80,7 @@ The minimal supported version is 17.0 for iOS and Android 13.
 
 https://github.com/user-attachments/assets/27ab3406-c7f1-4618-a981-6c86b53547ee
 
-We currently host two example apps demonstrating use cases of our library:
+We currently host a few example apps demonstrating use cases of our library:
 
 - examples/speech-to-text - Whisper and Moonshine models ready for transcription tasks
 - examples/computer-vision - computer vision related tasks
```

docs/docs/llms/useLLM.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -144,7 +144,7 @@ await llama.generate(message);
 
 ## Listening for the response
 
-As you might've noticed, there is no return value from the `runInference` function. Instead, the `.response` field of the model is updated with each token.
+As you might've noticed, there is no return value from the `generate` function. Instead, the `.response` field of the model is updated with each token.
 This is how you can render the response of the model:
 
 ```typescript
````
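The corrected sentence describes the library's streaming contract: `generate` resolves without returning the text, and the model object's `.response` field accumulates tokens as they arrive. A minimal sketch of that pattern, using a hypothetical `MockLLM` stand-in rather than the library's real model object:

```typescript
// Sketch of the contract described above: generate() resolves with no
// value, while a mutable `.response` field accumulates tokens.
// MockLLM is an illustrative stand-in, not the library's actual class.
class MockLLM {
  response = '';

  async generate(_message: string): Promise<void> {
    // Simulate token-by-token streaming updates to `.response`.
    for (const token of ['Hello', ', ', 'world', '!']) {
      this.response += token;
    }
  }
}

async function main(): Promise<void> {
  const llama = new MockLLM();
  await llama.generate('greet me');
  // The text is read from the field, not from the return value.
  console.log(llama.response); // Hello, world!
}

main();
```

In a UI this is why the docs render `llama.response` directly: each token update re-renders the component, so the reply streams in live.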
