Commit 6f7dd18

chore: add llm config example to docs (#884)
## Description

This PR updates the docs with a full LLM configuration example.

### Introduces a breaking change?

- [ ] Yes
- [x] No

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [x] Documentation update (improves or adds clarity to existing documentation)
- [ ] Other (chores, tests, code style improvements etc.)

### Tested on

- [ ] iOS
- [ ] Android

### Testing instructions

Check out the docs and review the `useLLM` section (there are no examples for `LLMModule` at all, so I did not add one there).

### Checklist

- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [ ] My changes generate no new warnings
1 parent 2b6ea20 commit 6f7dd18

4 files changed

Lines changed: 152 additions & 3 deletions

docs/docs/03-hooks/01-natural-language-processing/useLLM.md

Lines changed: 75 additions & 0 deletions
````diff
@@ -210,6 +210,81 @@ To configure model (i.e. change system prompt, load initial conversation history
 
 - [`topp`](../../06-api-reference/interfaces/GenerationConfig.md#topp) - Only samples from the smallest set of tokens whose cumulative probability exceeds topp.
 
+### Model configuration example
+
+```tsx
+import { useEffect } from 'react';
+import {
+  MessageCountContextStrategy,
+  DEFAULT_SYSTEM_PROMPT,
+  LLMTool,
+  ToolCall,
+  useLLM,
+  LLAMA3_2_1B_SPINQUANT,
+} from 'react-native-executorch';
+
+const TOOL_DEFINITIONS: LLMTool[] = [
+  {
+    name: 'get_weather',
+    description: 'Get/check weather in given location.',
+    parameters: {
+      type: 'dict',
+      properties: {
+        location: {
+          type: 'string',
+          description: 'Location where user wants to check weather',
+        },
+      },
+      required: ['location'],
+    },
+  },
+];
+
+const getWeather = async (_call: ToolCall) => {
+  return 'The weather is great!';
+};
+
+const executeTool: (call: ToolCall) => Promise<string | null> = async (
+  call
+) => {
+  switch (call.toolName) {
+    case 'get_weather':
+      return await getWeather(call);
+    default:
+      console.error(`Wrong function! We don't handle it!`);
+      return null;
+  }
+};
+
+const llm = useLLM({ model: LLAMA3_2_1B_SPINQUANT });
+
+const { configure } = llm;
+useEffect(() => {
+  configure({
+    chatConfig: {
+      systemPrompt: `${DEFAULT_SYSTEM_PROMPT} Current time and date: ${new Date().toString()}`,
+      initialMessageHistory: [
+        {
+          role: 'user',
+          content: 'What is the current time and date?',
+        },
+      ],
+      contextStrategy: new MessageCountContextStrategy(6),
+    },
+    toolsConfig: {
+      tools: TOOL_DEFINITIONS,
+      executeToolCallback: executeTool,
+      displayToolCalls: true,
+    },
+    generationConfig: {
+      outputTokenBatchSize: 15,
+      batchTimeInterval: 100,
+      temperature: 0.7,
+      topp: 0.9,
+    },
+  });
+}, [configure]);
+```
+
 ### Sending a message
 
 In order to send a message to the model, one can use the following code:
````

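In the example above, `generationConfig` sets `outputTokenBatchSize: 15` and `batchTimeInterval: 100`, which suggests streamed tokens are delivered in batches of up to 15 tokens or after 100 ms, whichever threshold trips first. As a rough, self-contained sketch of that batching pattern (not the library's actual implementation; `TokenBatcher` and its semantics are assumptions for illustration only):

```typescript
// Illustrative sketch of count-or-time token batching; the real
// react-native-executorch internals are not shown in this commit.
type Flush = (chunk: string) => void;

class TokenBatcher {
  private buffer: string[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly batchSize: number, // cf. outputTokenBatchSize
    private readonly intervalMs: number, // cf. batchTimeInterval
    private readonly flush: Flush
  ) {}

  push(token: string): void {
    this.buffer.push(token);
    if (this.buffer.length >= this.batchSize) {
      // Size threshold reached: flush immediately.
      this.emit();
    } else if (this.timer === null) {
      // Otherwise flush whatever has accumulated after intervalMs.
      this.timer = setTimeout(() => this.emit(), this.intervalMs);
    }
  }

  private emit(): void {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.buffer.length > 0) {
      this.flush(this.buffer.join(''));
      this.buffer = [];
    }
  }
}

// With a batch size of 3, every third token triggers a flush.
const chunks: string[] = [];
const batcher = new TokenBatcher(3, 100, (c) => chunks.push(c));
['Hel', 'lo', ' wor', 'ld', '!', '!'].forEach((t) => batcher.push(t));
console.log(chunks); // → [ 'Hello wor', 'ld!!' ]
```

Flushing on whichever threshold is hit first keeps per-update UI work bounded while still capping latency for slow token streams.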
docs/docs/06-api-reference/interfaces/LLMConfig.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -18,7 +18,7 @@ Object configuring chat management, contains following properties:
 
 `initialMessageHistory` - An array of `Message` objects that represent the conversation history. This can be used to provide initial context to the model.
 
-`contextWindowLength` - The number of messages from the current conversation that the model will use to generate a response. The higher the number, the more context the model will have. Keep in mind that using larger context windows will result in longer inference time and higher memory usage.
+`contextStrategy` - Defines a strategy for managing the conversation context window and message history.
 
 ---
```
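The new `contextStrategy` field implies a pluggable strategy object; `new MessageCountContextStrategy(6)` in the example bounds the context by message count. A minimal sketch of what such a strategy could look like, assuming it simply trims history to the most recent N messages (the real interface exported by `react-native-executorch` may differ):

```typescript
// Hypothetical shapes, for illustration only — not the library's API.
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface ContextStrategy {
  // Returns the slice of history the model should actually see.
  apply(history: Message[]): Message[];
}

class MessageCountContextStrategy implements ContextStrategy {
  constructor(private readonly maxMessages: number) {}

  apply(history: Message[]): Message[] {
    // Keep only the most recent maxMessages entries.
    return history.slice(-this.maxMessages);
  }
}

const strategy = new MessageCountContextStrategy(2);
const trimmed = strategy.apply([
  { role: 'user', content: 'first' },
  { role: 'assistant', content: 'second' },
  { role: 'user', content: 'third' },
]);
console.log(trimmed.map((m) => m.content)); // → [ 'second', 'third' ]
```

Compared with a bare `contextWindowLength` number, a strategy object lets alternative policies (e.g. token-count or summarization based) plug in behind the same interface.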

packages/react-native-executorch/src/types/llm.ts

Lines changed: 1 addition & 1 deletion
```diff
@@ -138,7 +138,7 @@ export interface LLMConfig {
    *
    * `initialMessageHistory` - An array of `Message` objects that represent the conversation history. This can be used to provide initial context to the model.
    *
-   * `contextWindowLength` - The number of messages from the current conversation that the model will use to generate a response. The higher the number, the more context the model will have. Keep in mind that using larger context windows will result in longer inference time and higher memory usage.
+   * `contextStrategy` - Defines a strategy for managing the conversation context window and message history.
    */
   chatConfig?: Partial<ChatConfig>;
```
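Note that `chatConfig` is typed `Partial<ChatConfig>`, so callers override only the fields they pass while the rest fall back to defaults. A minimal sketch of that merge pattern (`resolveChatConfig` and `DEFAULT_CHAT_CONFIG` are hypothetical names, not part of the library):

```typescript
interface ChatConfig {
  systemPrompt: string;
  initialMessageHistory: { role: string; content: string }[];
}

const DEFAULT_CHAT_CONFIG: ChatConfig = {
  systemPrompt: 'You are a helpful assistant',
  initialMessageHistory: [],
};

// Spread merge: fields present in the partial win, the rest keep defaults.
function resolveChatConfig(partial: Partial<ChatConfig> = {}): ChatConfig {
  return { ...DEFAULT_CHAT_CONFIG, ...partial };
}

const cfg = resolveChatConfig({ systemPrompt: 'Be terse.' });
console.log(cfg.systemPrompt); // → Be terse.
console.log(cfg.initialMessageHistory); // → []
```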

skills/canary/react-native-executorch/references/reference-llms.md

Lines changed: 75 additions & 1 deletion
````diff
@@ -51,7 +51,6 @@ useEffect(() => {
   llm.configure({
     chatConfig: {
       systemPrompt: 'You are a helpful assistant',
-      contextWindowLength: 10,
     },
     generationConfig: {
       temperature: 0.7,
@@ -67,6 +66,81 @@ llm.sendMessage('Hello!');
 console.log(llm.messageHistory);
 ```
 
+## Model configuration
+
+```tsx
+import { useEffect } from 'react';
+import {
+  MessageCountContextStrategy,
+  DEFAULT_SYSTEM_PROMPT,
+  LLMTool,
+  ToolCall,
+  useLLM,
+  LLAMA3_2_1B_SPINQUANT,
+} from 'react-native-executorch';
+
+const TOOL_DEFINITIONS: LLMTool[] = [
+  {
+    name: 'get_weather',
+    description: 'Get/check weather in given location.',
+    parameters: {
+      type: 'dict',
+      properties: {
+        location: {
+          type: 'string',
+          description: 'Location where user wants to check weather',
+        },
+      },
+      required: ['location'],
+    },
+  },
+];
+
+const getWeather = async (_call: ToolCall) => {
+  return 'The weather is great!';
+};
+
+const executeTool: (call: ToolCall) => Promise<string | null> = async (
+  call
+) => {
+  switch (call.toolName) {
+    case 'get_weather':
+      return await getWeather(call);
+    default:
+      console.error(`Wrong function! We don't handle it!`);
+      return null;
+  }
+};
+
+const llm = useLLM({ model: LLAMA3_2_1B_SPINQUANT });
+
+const { configure } = llm;
+useEffect(() => {
+  configure({
+    chatConfig: {
+      systemPrompt: `${DEFAULT_SYSTEM_PROMPT} Current time and date: ${new Date().toString()}`,
+      initialMessageHistory: [
+        {
+          role: 'user',
+          content: 'What is the current time and date?',
+        },
+      ],
+      contextStrategy: new MessageCountContextStrategy(6),
+    },
+    toolsConfig: {
+      tools: TOOL_DEFINITIONS,
+      executeToolCallback: executeTool,
+      displayToolCalls: true,
+    },
+    generationConfig: {
+      outputTokenBatchSize: 15,
+      batchTimeInterval: 100,
+      temperature: 0.7,
+      topp: 0.9,
+    },
+  });
+}, [configure]);
+```
+
 ## Tool Calling
 
 ```tsx
````
