Merged
Changes from 2 commits
2 changes: 1 addition & 1 deletion README.md
@@ -80,7 +80,7 @@ The minimal supported version is 17.0 for iOS and Android 13.

https://github.com/user-attachments/assets/27ab3406-c7f1-4618-a981-6c86b53547ee

- We currently host two example apps demonstrating use cases of our library:
+ We currently host a few example apps demonstrating use cases of our library:

- examples/speech-to-text - Whisper and Moonshine models ready for transcription tasks
- examples/computer-vision - computer vision related tasks
2 changes: 1 addition & 1 deletion docs/docs/llms/useLLM.md
@@ -144,7 +144,7 @@ await llama.generate(message);

## Listening for the response

- As you might've noticed, there is no return value from the `runInference` function. Instead, the `.response` field of the model is updated with each token.
+ As you might've noticed, there is no return value from the `generate` function. Instead, the `.response` field of the model is updated with each token.
This is how you can render the response of the model:

```typescript
// Each new token appended to `llama.response` triggers a re-render,
// so the text below streams in as the model generates.
<Text>{llama.response}</Text>
```
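The streaming contract described above (`generate` resolves with no useful return value; each token is appended to a public `response` field that the UI re-reads) can be sketched in plain TypeScript. `FakeLLM` below is an illustrative stand-in, not part of the library's API:

```typescript
// Illustrative model of the streaming API: generate() returns void;
// each produced token is appended to the public `response` field.
class FakeLLM {
  response = "";

  async generate(message: string): Promise<void> {
    // Pretend the model echoes the prompt back one word at a time;
    // a real model would append tokens as inference produces them.
    for (const token of message.split(" ")) {
      this.response += (this.response ? " " : "") + token;
    }
  }
}

// Usage: await generate(), then read the accumulated `response`.
async function demo(): Promise<string> {
  const llm = new FakeLLM();
  await llm.generate("hello streaming world");
  return llm.response;
}
```

A UI layer would subscribe to `response` (for example via component state) rather than awaiting a return value, which is why the hook exposes the field instead of returning the text from `generate`.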