
Commit e827156

chmjkb, claude, and NorbertKlockiewicz authored
refactor!: add static factories for remaining models (#937)
## Description

This PR refactors the existing codebase to match what has been recently done with object detection & semantic segmentation. Namely, it changes the model initialization flow from `new Module() -> module.load()` to a single, static factory, for example: `fromModelName(modelName)`.

All affected modules now follow the same two-method factory API:

- **`Module.fromModelName(namedSources, onDownloadProgress?)`** — for built-in models. Requires a `modelName` field (typed literal union, e.g. `LLMModelName`, `SpeechToTextModelName`) that will be used for telemetry once that infrastructure is in place.
- **`Module.fromCustomModel(...sources, onDownloadProgress?)`** — for user-provided model binaries. Does **not** expose a `modelName` parameter — users with custom models shouldn't need to know about or satisfy a type constraint that exists purely for internal tracking.

Note: for now `fromCustomModel` internally delegates to `fromModelName` with a `'custom'` placeholder. The two methods will diverge meaningfully once telemetry is wired up.

### Modules migrated

| Module | Notes |
| --- | --- |
| `ClassificationModule` | factory + `fromCustomModel` |
| `StyleTransferModule` | factory + `fromCustomModel` |
| `ImageEmbeddingsModule` | factory + `fromCustomModel` |
| `VADModule` | factory + `fromCustomModel` |
| `TextEmbeddingsModule` | factory + `fromCustomModel` |
| `OCRModule` | factory + `fromCustomModel`, typed `OCRModelName` |
| `VerticalOCRModule` | factory + `fromCustomModel` |
| `LLMModule` | factory + `fromCustomModel`, typed `LLMModelName` |
| `SpeechToTextModule` | factory + `fromCustomModel`, typed `SpeechToTextModelName` |
| `TextToImageModule` | factory + `fromCustomModel`, typed `TextToImageModelName` |
| `ObjectDetectionModule` | `fromCustomConfig` → `fromCustomModel` rename |
| `SemanticSegmentationModule` | `fromCustomConfig` → `fromCustomModel` rename |

`TextToSpeechModule` is **not** included — handled separately.

### Hook updates

Hooks for modules that have a controller (`OCRModule`, `VerticalOCRModule`, `LLMModule`) already used the controller directly and remain unchanged in structure. Hooks for the remaining migrated modules (`useSpeechToText`, `useTextToImage`) were updated to use the factory instead of `new Module() + load()`. All hooks now include `modelName` in their `useEffect` dependency arrays, enabling correct reload behavior when switching between built-in models.

### Typed model name unions

Each module now has a corresponding typed union (e.g. `LLMModelName`, `SpeechToTextModelName`, `OCRModelName`) derived from the existing model constants. This replaces the previous use of untyped `string` and provides compile-time safety when passing model identifiers.

### Introduces a breaking change?

- [x] Yes
- [ ] No

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing documentation)
- [x] Other (chores, tests, code style improvements etc.)

### Tested on

- [x] iOS
- [x] Android

### Testing instructions

- [x] Run the demo apps and check that they work.
- [x] Pass an incorrect model name and check that an error is correctly raised.
- [x] Try using the `fromCustomModel` method.

### Screenshots

<!-- Add screenshots here, if applicable -->

### Related issues

<!-- Link related issues here using #issue-number -->

### Checklist

- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings

### Additional notes

<!-- Include any additional information, assumptions, or context that reviewers might need to understand this PR. -->

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Norbert Klockiewicz <Nklockiewicz12@gmail.com>
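The two-method factory shape described above can be sketched in isolation. Everything below (`DemoModule`, `DEMO_MODELS`, the fields on `NamedSources`) is a hypothetical illustration of the pattern, not the library's real internals; it also shows how a typed model-name union can be derived from a constants object, as the "Typed model name unions" section describes:

```typescript
// Typed model-name union derived from the model constants.
const DEMO_MODELS = {
  'demo-small': { modelSource: 'https://example.com/small.pte' },
  'demo-large': { modelSource: 'https://example.com/large.pte' },
} as const;

type DemoModelName = keyof typeof DEMO_MODELS | 'custom';

interface NamedSources {
  modelName: DemoModelName;
  modelSource: string;
}

class DemoModule {
  // Private constructor: instances are only created via the static factories.
  private constructor(readonly modelName: DemoModelName) {}

  // Built-in models: requires a typed modelName (reserved for telemetry).
  static async fromModelName(
    sources: NamedSources,
    onDownloadProgress?: (progress: number) => void
  ): Promise<DemoModule> {
    // A real implementation would download and load sources.modelSource here.
    onDownloadProgress?.(1);
    return new DemoModule(sources.modelName);
  }

  // Custom binaries: no modelName parameter; for now this delegates to
  // fromModelName with the 'custom' placeholder, as the PR notes.
  static async fromCustomModel(
    modelSource: string,
    onDownloadProgress?: (progress: number) => void
  ): Promise<DemoModule> {
    return DemoModule.fromModelName(
      { modelName: 'custom', modelSource },
      onDownloadProgress
    );
  }
}

// Usage: misspelling a built-in name fails at compile time,
// while custom sources need no name at all.
async function demo() {
  const builtIn = await DemoModule.fromModelName({
    modelName: 'demo-small',
    modelSource: DEMO_MODELS['demo-small'].modelSource,
  });
  const custom = await DemoModule.fromCustomModel('file:///model.pte');
  console.log(builtIn.modelName, custom.modelName);
}
```

The design choice this sketch captures: the type constraint lives only on the built-in path, so custom-model users never see it, and the placeholder delegation keeps both paths on one code path until telemetry forces them apart.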
1 parent a95a7c1 commit e827156


58 files changed: +1626 −813 lines changed

.cspell-wordlist.txt

Lines changed: 12 additions & 1 deletion
````diff
@@ -128,4 +128,15 @@ detr
 metaprogramming
 ktlint
 lefthook
-espeak
+espeak
+NCHW
+həlˈO
+wˈɜɹld
+mˈæn
+dˈʌzᵊnt
+tɹˈʌst
+hɪmsˈɛlf
+nˈɛvəɹ
+ɹˈiᵊli
+ˈɛniwˌʌn
+ˈɛls
````

docs/docs/04-typescript-api/01-natural-language-processing/LLMModule.md

Lines changed: 51 additions & 34 deletions
````diff
@@ -15,14 +15,13 @@ TypeScript API implementation of the [useLLM](../../03-hooks/01-natural-language
 ```typescript
 import { LLMModule, LLAMA3_2_1B_QLORA } from 'react-native-executorch';
 
-// Creating an instance
-const llm = new LLMModule({
-  tokenCallback: (token) => console.log(token),
-  messageHistoryCallback: (messages) => console.log(messages),
-});
-
-// Loading the model
-await llm.load(LLAMA3_2_1B_QLORA, (progress) => console.log(progress));
+// Creating an instance and loading the model
+const llm = await LLMModule.fromModelName(
+  LLAMA3_2_1B_QLORA,
+  (progress) => console.log(progress),
+  (token) => console.log(token),
+  (messages) => console.log(messages),
+);
 
 // Running the model - returns the generated response
 const response = await llm.sendMessage('Hello, World!');
@@ -41,30 +40,26 @@ All methods of `LLMModule` are explained in details here: [LLMModule API Referen
 
 ## Loading the model
 
-To create a new instance of `LLMModule`, use the [constructor](../../06-api-reference/classes/LLMModule.md#constructor) with optional callbacks:
-
-- [`tokenCallback`](../../06-api-reference/classes/LLMModule.md#tokencallback) - Function called on every generated token.
-
-- [`messageHistoryCallback`](../../06-api-reference/classes/LLMModule.md#messagehistorycallback) - Function called on every finished message.
-
-Then, to load the model, use the [`load`](../../06-api-reference/classes/LLMModule.md#load) method. It accepts an object with the following fields:
+Use the static [`fromModelName`](../../06-api-reference/classes/LLMModule.md#frommodelname) factory method:
 
-- [`model`](../../06-api-reference/classes/LLMModule.md#model) - Object containing:
-  - [`modelSource`](../../06-api-reference/classes/LLMModule.md#modelsource) - The location of the used model.
-
-  - [`tokenizerSource`](../../06-api-reference/classes/LLMModule.md#tokenizersource) - The location of the used tokenizer.
-
-  - [`tokenizerConfigSource`](../../06-api-reference/classes/LLMModule.md#tokenizerconfigsource) - The location of the used tokenizer config.
+```typescript
+const llm = await LLMModule.fromModelName(
+  LLAMA3_2_3B, // model config constant
+  onDownloadProgress, // optional, progress 0–1
+  tokenCallback, // optional, called on every token
+  messageHistoryCallback // optional, called when generation finishes
+);
+```
 
-- [`onDownloadProgressCallback`](../../06-api-reference/classes/LLMModule.md#ondownloadprogresscallback) - Callback to track download progress.
+The model config object contains `modelSource`, `tokenizerSource`, `tokenizerConfigSource`, and optional `capabilities`. Pass one of the built-in constants (e.g. `LLAMA3_2_3B`) or construct it manually.
 
-This method returns a promise, which can resolve to an error or void.
+This method returns a promise resolving to an `LLMModule` instance.
 
 For more information on loading resources, take a look at [loading models](../../01-fundamentals/02-loading-models.md) page.
 
 ## Listening for download progress
 
-To subscribe to the download progress event, you can pass the [`onDownloadProgressCallback`](../../06-api-reference/classes/LLMModule.md#ondownloadprogresscallback) function to the [`load`](../../06-api-reference/classes/LLMModule.md#load) method. This function is called whenever the download progress changes.
+To subscribe to the download progress event, you can pass the `onDownloadProgress` callback as the second argument to [`fromModelName`](../../06-api-reference/classes/LLMModule.md#frommodelname). This function is called whenever the download progress changes.
 
 ## Running the model
 
@@ -116,25 +111,26 @@ To configure model (i.e. change system prompt, load initial conversation history
 
 ## Vision-Language Models (VLM)
 
-Some models support multimodal input — text and images together. To use them, pass `capabilities` in the model object when calling [`load`](../../06-api-reference/classes/LLMModule.md#load):
+Some models support multimodal input — text and images together. To use them, pass `capabilities` in the model object when calling [`fromModelName`](../../06-api-reference/classes/LLMModule.md#frommodelname):
 
 ```typescript
 import { LLMModule, LFM2_VL_1_6B_QUANTIZED } from 'react-native-executorch';
 
-const llm = new LLMModule({
-  tokenCallback: (token) => console.log(token),
-});
-
-await llm.load(LFM2_VL_1_6B_QUANTIZED);
+const llm = await LLMModule.fromModelName(
+  LFM2_VL_1_6B_QUANTIZED,
+  undefined,
+  (token) => console.log(token)
+);
 ```
 
 The `capabilities` field is already set on the model constant. You can also construct the model object explicitly:
 
 ```typescript
-await llm.load({
-  modelSource: '...',
-  tokenizerSource: '...',
-  tokenizerConfigSource: '...',
+const llm = await LLMModule.fromModelName({
+  modelName: 'lfm2.5-vl-1.6b-quantized',
+  modelSource: require('./path/to/model.pte'),
+  tokenizerSource: require('./path/to/tokenizer.json'),
+  tokenizerConfigSource: require('./path/to/tokenizer_config.json'),
   capabilities: ['vision'],
 });
 ```
@@ -161,6 +157,27 @@ const chat: Message[] = [
 const response = await llm.generate(chat);
 ```
 
+## Using a custom model
+
+Use [`fromCustomModel`](../../06-api-reference/classes/LLMModule.md#fromcustommodel) to load your own exported LLM instead of a built-in preset:
+
+```typescript
+import { LLMModule } from 'react-native-executorch';
+
+const llm = await LLMModule.fromCustomModel(
+  'https://example.com/model.pte',
+  'https://example.com/tokenizer.json',
+  'https://example.com/tokenizer_config.json',
+  (progress) => console.log(progress),
+  (token) => console.log(token),
+  (messages) => console.log(messages)
+);
+```
+
+### Required model contract
+
+The `.pte` model binary must be exported following the [ExecuTorch LLM export process](https://docs.pytorch.org/executorch/1.1/llm/export-llm.html). The native runner expects the standard ExecuTorch text-generation interface — KV-cache management, prefill/decode phases, and logit sampling are all handled by the runtime.
+
 ## Deleting the model from memory
 
 To delete the model from memory, you can use the [`delete`](../../06-api-reference/classes/LLMModule.md#delete) method.
````

docs/docs/04-typescript-api/01-natural-language-processing/SpeechToTextModule.md

Lines changed: 31 additions & 24 deletions
````diff
@@ -14,10 +14,12 @@ TypeScript API implementation of the [useSpeechToText](../../03-hooks/01-natural
 ```typescript
 import { SpeechToTextModule, WHISPER_TINY_EN } from 'react-native-executorch';
 
-const model = new SpeechToTextModule();
-await model.load(WHISPER_TINY_EN, (progress) => {
-  console.log(progress);
-});
+const model = await SpeechToTextModule.fromModelName(
+  WHISPER_TINY_EN,
+  (progress) => {
+    console.log(progress);
+  }
+);
 
 // Standard transcription (returns string)
 const text = await model.transcribe(waveform);
@@ -40,18 +42,17 @@ All methods of `SpeechToTextModule` are explained in details here: [`SpeechToTex
 
 ## Loading the model
 
-Create an instance of [`SpeechToTextModule`](../../06-api-reference/classes/SpeechToTextModule.md) and use the [`load`](../../06-api-reference/classes/SpeechToTextModule.md#load) method. It accepts an object with the following fields:
-
-- [`model`](../../06-api-reference/classes/SpeechToTextModule.md#model) - Object containing:
-  - [`isMultilingual`](../../06-api-reference/interfaces/SpeechToTextModelConfig.md#ismultilingual) - Flag indicating if model is multilingual.
+Use the static [`fromModelName`](../../06-api-reference/classes/SpeechToTextModule.md#frommodelname) factory method. It accepts an object with the following fields:
 
-  - [`modelSource`](../../06-api-reference/interfaces/SpeechToTextModelConfig.md#modelsource) - The location of the used model (bundled encoder + decoder functionality).
+- [`isMultilingual`](../../06-api-reference/interfaces/SpeechToTextModelConfig.md#ismultilingual) - Flag indicating if model is multilingual.
+- [`modelSource`](../../06-api-reference/interfaces/SpeechToTextModelConfig.md#modelsource) - The location of the used model (bundled encoder + decoder functionality).
+- [`tokenizerSource`](../../06-api-reference/interfaces/SpeechToTextModelConfig.md#tokenizersource) - The location of the used tokenizer.
 
-  - [`tokenizerSource`](../../06-api-reference/interfaces/SpeechToTextModelConfig.md#tokenizersource) - The location of the used tokenizer.
+And an optional second argument:
 
-- [`onDownloadProgressCallback`](../../06-api-reference/classes/SpeechToTextModule.md#ondownloadprogresscallback) - Callback to track download progress.
+- `onDownloadProgress` - Callback to track download progress.
 
-This method returns a promise, which can resolve to an error or void.
+This method returns a promise resolving to a `SpeechToTextModule` instance.
 
 For more information on loading resources, take a look at [loading models](../../01-fundamentals/02-loading-models.md) page.
 
@@ -66,10 +67,12 @@ If you aim to obtain a transcription in other languages than English, use the mu
 ```typescript
 import { SpeechToTextModule, WHISPER_TINY } from 'react-native-executorch';
 
-const model = new SpeechToTextModule();
-await model.load(WHISPER_TINY, (progress) => {
-  console.log(progress);
-});
+const model = await SpeechToTextModule.fromModelName(
+  WHISPER_TINY,
+  (progress) => {
+    console.log(progress);
+  }
+);
 
 const transcription = await model.transcribe(spanishAudio, { language: 'es' });
 ```
@@ -121,10 +124,12 @@ import * as FileSystem from 'expo-file-system';
 
 const transcribeAudio = async () => {
   // Initialize with the model config
-  const model = new SpeechToTextModule();
-  await model.load(WHISPER_TINY_EN, (progress) => {
-    console.log(progress);
-  });
+  const model = await SpeechToTextModule.fromModelName(
+    WHISPER_TINY_EN,
+    (progress) => {
+      console.log(progress);
+    }
+  );
 
   // Download the audio file
   const { uri } = await FileSystem.downloadAsync(
@@ -163,10 +168,12 @@ import { SpeechToTextModule, WHISPER_TINY_EN } from 'react-native-executorch';
 import { AudioManager, AudioRecorder } from 'react-native-audio-api';
 
 // Load the model
-const model = new SpeechToTextModule();
-await model.load(WHISPER_TINY_EN, (progress) => {
-  console.log(progress);
-});
+const model = await SpeechToTextModule.fromModelName(
+  WHISPER_TINY_EN,
+  (progress) => {
+    console.log(progress);
+  }
+);
 
 // Configure audio session
 AudioManager.setAudioSessionOptions({
````

docs/docs/04-typescript-api/01-natural-language-processing/TextEmbeddingsModule.md

Lines changed: 7 additions & 12 deletions
````diff
@@ -17,11 +17,9 @@ import {
   ALL_MINILM_L6_V2,
 } from 'react-native-executorch';
 
-// Creating an instance
-const textEmbeddingsModule = new TextEmbeddingsModule();
-
-// Loading the model
-await textEmbeddingsModule.load(ALL_MINILM_L6_V2);
+// Creating an instance and loading the model
+const textEmbeddingsModule =
+  await TextEmbeddingsModule.fromModelName(ALL_MINILM_L6_V2);
 
 // Running the model
 const embedding = await textEmbeddingsModule.forward('Hello World!');
@@ -33,15 +31,12 @@ All methods of `TextEmbeddingsModule` are explained in details here: [`TextEmbed
 
 ## Loading the model
 
-To load the model, use the [`load`](../../06-api-reference/classes/TextEmbeddingsModule.md#load) method. It accepts an object:
-
-- [`model`](../../06-api-reference/classes/TextEmbeddingsModule.md#model) - Object containing:
-  - [`modelSource`](../../06-api-reference/classes/TextEmbeddingsModule.md#modelsource) - Location of the used model.
-  - [`tokenizerSource`](../../06-api-reference/classes/TextEmbeddingsModule.md#tokenizersource) - Location of the used tokenizer.
+Use the static [`fromModelName`](../../06-api-reference/classes/TextEmbeddingsModule.md#frommodelname) factory method. It accepts a model config object (e.g. `ALL_MINILM_L6_V2`) containing:
 
-- [`onDownloadProgressCallback`](../../06-api-reference/classes/TextEmbeddingsModule.md#ondownloadprogresscallback) - Callback to track download progress.
+- [`modelSource`](../../06-api-reference/classes/TextEmbeddingsModule.md#modelsource) - Location of the used model.
+- [`tokenizerSource`](../../06-api-reference/classes/TextEmbeddingsModule.md#tokenizersource) - Location of the used tokenizer.
 
-This method returns a promise, which can resolve to an error or void.
+And an optional `onDownloadProgress` callback. It returns a promise resolving to a `TextEmbeddingsModule` instance.
 
 For more information on loading resources, take a look at [loading models](../../01-fundamentals/02-loading-models.md) page.
 
````

docs/docs/04-typescript-api/01-natural-language-processing/TextToSpeechModule.md

Lines changed: 18 additions & 27 deletions
````diff
@@ -19,15 +19,9 @@ import {
   KOKORO_VOICE_AF_HEART,
 } from 'react-native-executorch';
 
-const model = new TextToSpeechModule();
-await model.load(
-  {
-    model: KOKORO_MEDIUM,
-    voice: KOKORO_VOICE_AF_HEART,
-  },
-  (progress) => {
-    console.log(progress);
-  }
+const model = await TextToSpeechModule.fromModelName(
+  { model: KOKORO_MEDIUM, voice: KOKORO_VOICE_AF_HEART },
+  (progress) => console.log(progress)
 );
 
 await model.forward(text, 1.0);
@@ -39,15 +33,15 @@ All methods of `TextToSpeechModule` are explained in details here: [`TextToSpeec
 
 ## Loading the model
 
-To initialize the module, create an instance and call the [`load`](../../06-api-reference/classes/TextToSpeechModule.md#load) method with the following parameters:
+Use the static [`fromModelName`](../../06-api-reference/classes/TextToSpeechModule.md#frommodelname) factory method with the following parameters:
 
-- [`config`](../../06-api-reference/classes/TextToSpeechModule.md#config) - Object containing:
-  - [`model`](../../06-api-reference/interfaces/TextToSpeechConfig.md#model) - Model configuration.
-  - [`voice`](../../06-api-reference/interfaces/TextToSpeechConfig.md#voice) - Voice configuration.
+- [`config`](../../06-api-reference/interfaces/TextToSpeechConfig.md) - Object containing:
+  - [`model`](../../06-api-reference/interfaces/TextToSpeechConfig.md#model) - Model configuration (e.g. `KOKORO_MEDIUM`).
+  - [`voice`](../../06-api-reference/interfaces/TextToSpeechConfig.md#voice) - Voice configuration (e.g. `KOKORO_VOICE_AF_HEART`).
 
-- [`onDownloadProgressCallback`](../../06-api-reference/classes/TextToSpeechModule.md#ondownloadprogresscallback) - Callback to track download progress.
+- [`onDownloadProgress`](../../06-api-reference/classes/TextToSpeechModule.md#frommodelname) - Optional callback to track download progress (value between 0 and 1).
 
-This method returns a promise that resolves once the assets are downloaded and loaded into memory.
+This method returns a promise that resolves to a `TextToSpeechModule` instance once the assets are downloaded and loaded into memory.
 
 For more information on resource sources, see [loading models](../../01-fundamentals/02-loading-models.md).
 
@@ -83,15 +77,13 @@ import {
 } from 'react-native-executorch';
 import { AudioContext } from 'react-native-audio-api';
 
-const tts = new TextToSpeechModule();
+const tts = await TextToSpeechModule.fromModelName({
+  model: KOKORO_MEDIUM,
+  voice: KOKORO_VOICE_AF_HEART,
+});
 const audioContext = new AudioContext({ sampleRate: 24000 });
 
 try {
-  await tts.load({
-    model: KOKORO_MEDIUM,
-    voice: KOKORO_VOICE_AF_HEART,
-  });
-
   const waveform = await tts.forward('Hello from ExecuTorch!', 1.0);
 
   // Create audio buffer and play
@@ -117,11 +109,12 @@ import {
 } from 'react-native-executorch';
 import { AudioContext } from 'react-native-audio-api';
 
-const tts = new TextToSpeechModule();
+const tts = await TextToSpeechModule.fromModelName({
+  model: KOKORO_MEDIUM,
+  voice: KOKORO_VOICE_AF_HEART,
+});
 const audioContext = new AudioContext({ sampleRate: 24000 });
 
-await tts.load({ model: KOKORO_MEDIUM, voice: KOKORO_VOICE_AF_HEART });
-
 try {
   for await (const chunk of tts.stream({
     text: 'This is a streaming test, with a sample input.',
@@ -155,9 +148,7 @@ import {
   KOKORO_VOICE_AF_HEART,
 } from 'react-native-executorch';
 
-const tts = new TextToSpeechModule();
-
-await tts.load({
+const tts = await TextToSpeechModule.fromModelName({
   model: KOKORO_MEDIUM,
   voice: KOKORO_VOICE_AF_HEART,
 });
````

docs/docs/04-typescript-api/01-natural-language-processing/VADModule.md

Lines changed: 9 additions & 9 deletions
````diff
@@ -14,10 +14,9 @@ TypeScript API implementation of the [useVAD](../../03-hooks/01-natural-language
 ```typescript
 import { VADModule, FSMN_VAD } from 'react-native-executorch';
 
-const model = new VADModule();
-await model.load(FSMN_VAD, (progress) => {
-  console.log(progress);
-});
+const model = await VADModule.fromModelName(FSMN_VAD, (progress) =>
+  console.log(progress)
+);
 
 await model.forward(waveform);
 ```
@@ -28,14 +27,15 @@ All methods of `VADModule` are explained in details here: [`VADModule` API Refer
 
 ## Loading the model
 
-To initialize the module, create an instance and call the [`load`](../../06-api-reference/classes/VADModule.md#load) method with the following parameters:
+To create a ready-to-use instance, call the static [`fromModelName`](../../06-api-reference/classes/VADModule.md#frommodelname) factory with the following parameters:
 
-- [`model`](../../06-api-reference/classes/VADModule.md#model) - Object containing:
-  - [`modelSource`](../../06-api-reference/classes/VADModule.md#modelsource) - Location of the used model.
+- `namedSources` - Object containing:
+  - `modelName` - Model name identifier.
+  - `modelSource` - Location of the model binary.
 
-- [`onDownloadProgressCallback`](../../06-api-reference/classes/VADModule.md#ondownloadprogresscallback) - Callback to track download progress.
+- `onDownloadProgress` - Optional callback to track download progress (value between 0 and 1).
 
-This method returns a promise, which can resolve to an error or void.
+The factory returns a promise that resolves to a loaded `VADModule` instance.
 
 For more information on loading resources, take a look at [loading models](../../01-fundamentals/02-loading-models.md) page.
 
````
