
Commit 6b2d458

docs: LLM improvements 0.4.x (#408)
## Description

Merge to main after releasing LLM patch

### Type of change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Documentation update (improves or adds clarity to existing documentation)
1 parent 473f5b7 commit 6b2d458

2 files changed

Lines changed: 36 additions & 24 deletions

File tree

docs/versioned_docs/version-0.4.x/natural-language-processing/useLLM.md

Lines changed: 21 additions & 12 deletions
````diff
@@ -74,6 +74,7 @@ The code snippet above fetches the model from the specified URL, loads it into m
 | `generate()` | `(messages: Message[], tools?: LLMTool[]) => Promise<void>` | Runs model to complete chat passed in `messages` argument. It doesn't manage conversation context. |
 | `interrupt()` | `() => void` | Function to interrupt the current inference. |
 | `response` | `string` | State of the generated response. This field is updated with each token generated by the model. |
+| `token` | `string` | The most recently generated token. |
 | `isReady` | `boolean` | Indicates whether the model is ready. |
 | `isGenerating` | `boolean` | Indicates whether the model is currently generating a response. |
 | `downloadProgress` | `number` | Represents the download progress as a value between 0 and 1, indicating the extent of the model file retrieval. |
````
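Taken together, the fields in this table support a simple download/stream/stop UI. A minimal sketch of one way to wire them up; the component and prop names are ours, and only the fields documented above are assumed:

```tsx
import React from 'react';
import { Button, Text } from 'react-native';

// Structural type mirroring the documented fields used by this sketch
// (see the LLMType interface further down in this diff).
type LLMView = {
  response: string;
  isReady: boolean;
  isGenerating: boolean;
  downloadProgress: number;
  interrupt: () => void;
};

function ChatStatus({ llm }: { llm: LLMView }) {
  if (!llm.isReady) {
    // downloadProgress goes from 0 to 1 while the model file is fetched.
    return <Text>Downloading {Math.round(llm.downloadProgress * 100)}%</Text>;
  }
  return (
    <>
      {/* response grows token by token while isGenerating is true */}
      <Text>{llm.response}</Text>
      {llm.isGenerating ? (
        // interrupt() stops the current inference mid-stream.
        <Button onPress={llm.interrupt} title="Stop" />
      ) : null}
    </>
  );
}
```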
````diff
@@ -102,6 +103,7 @@ const useLLM: ({
 interface LLMType {
   messageHistory: Message[];
   response: string;
+  token: string;
   isReady: boolean;
   isGenerating: boolean;
   downloadProgress: number;
````
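One reason to expose `token` separately: per-token side effects (logging, streaming into text-to-speech, etc.) can key off just the newest token instead of diffing the growing `response` string. A minimal sketch, assuming only the `token` field; the hook name is ours:

```tsx
import { useEffect } from 'react';

// Hypothetical helper: run a side effect for each newly generated token.
function useTokenLog(llm: { token: string }) {
  useEffect(() => {
    if (llm.token) {
      console.log('token:', llm.token);
    }
    // Caveat: a value-keyed effect won't re-fire if the model emits
    // the same token twice in a row.
  }, [llm.token]);
}
```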
````diff
@@ -171,7 +173,7 @@ const llm = useLLM({
   tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
 });
 
-const handleGenerate = async () => {
+const handleGenerate = () => {
   const chat = [
     { role: 'system', content: 'You are a helpful assistant' },
     { role: 'user', content: 'Hi!' },
@@ -180,11 +182,11 @@ const handleGenerate = async () => {
   ];
 
   // Chat completion
-  await llm.generate(chat);
-  console.log('Llama says:', llm.response);
+  llm.generate(chat);
 };
 
 return (
+  <Button onPress={handleGenerate} title="Generate!" />
   <Text>{llm.response}</Text>
 )
 ```
````
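The removed `await llm.generate(chat)` followed by `console.log(llm.response)` was misleading: `response` is React state, so the closure logged a stale value rather than the finished answer, which is why the snippet now renders `llm.response` directly. If a completion hook is still wanted, one option (our assumption, not a documented callback API) is to watch `isGenerating`:

```tsx
import { useEffect } from 'react';

// Component-body fragment: `llm` is the value returned by useLLM above.
useEffect(() => {
  // Runs when generation flips from running to finished.
  if (!llm.isGenerating && llm.response) {
    console.log('Llama says:', llm.response);
  }
}, [llm.isGenerating, llm.response]);
```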
````diff
@@ -228,7 +230,7 @@ const llm = useLLM({
   tokenizerConfigSource: HAMMER2_1_1_5B_TOKENIZER_CONFIG,
 });
 
-const handleGenerate = async () => {
+const handleGenerate = () => {
   const chat = [
     {
       role: 'system',
@@ -241,11 +243,18 @@ const handleGenerate = async () => {
   ];
 
   // Chat completion
-  await llm.generate(chat, TOOL_DEFINITIONS);
-  console.log('Hammer says:', llm.response);
-
-  // Parse response and call functions accordingly
+  llm.generate(chat, TOOL_DEFINITIONS);
 };
+
+useEffect(() => {
+  // Parse response and call tools accordingly
+  // ...
+}, [llm.response])
+
+return (
+  <Button onPress={handleGenerate} title="Generate!" />
+  <Text>{llm.response}</Text>
+)
 ```
 
 ## Managed LLM Chat
````
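The body of the added `useEffect` is left elided above because the wire format of tool calls is model-specific. A hedged sketch only, assuming the model replies with a JSON array of calls; `get_weather` is a made-up tool name:

```tsx
import { useEffect } from 'react';

// Component-body fragment: `llm` is the value returned by useLLM above.
useEffect(() => {
  // Hypothetical parser: assumes the model replies with something like
  // [{"name": "get_weather", "arguments": {"city": "Cracow"}}].
  try {
    const calls: Array<{ name: string; arguments: unknown }> = JSON.parse(llm.response);
    for (const call of calls) {
      console.log('would call', call.name, 'with', call.arguments);
    }
  } catch {
    // Still streaming, or the reply is plain text - nothing to parse yet.
  }
}, [llm.response]);
```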
````diff
@@ -282,9 +291,9 @@ const llm = useLLM({
   tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
 });
 
-const send = async () => {
+const send = () => {
   const message = 'Hi, who are you?';
-  await llm.sendMessage(message);
+  llm.sendMessage(message);
 };
 ```
 
````
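Unlike `generate()`, the managed chat accumulates the exchange in `llm.messageHistory`, so rendering the conversation is a matter of mapping over it. A minimal sketch; the `role`/`content` shape follows the chat arrays used earlier in this file, and `llm` and `send` come from the snippet above:

```tsx
return (
  <>
    {llm.messageHistory.map((message, index) => (
      <Text key={index}>
        {message.role}: {message.content}
      </Text>
    ))}
    <Button onPress={send} title="Send" />
  </>
);
```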
````diff
@@ -351,9 +360,9 @@ useEffect(() => {
   });
 }, []);
 
-const send = async () => {
+const send = () => {
   const message = `Hi, what's the weather like in Cracow right now?`;
-  await llm.sendMessage(message);
+  llm.sendMessage(message);
 };
 ```
 
````