Defined in: modules/computer_vision/ImageEmbeddingsModule.ts:13
Module for generating image embeddings from input images.
Extends: `VisionModule<Float32Array>`
```typescript
generateFromFrame: (frameData, ...args) => any
```
Defined in: modules/BaseModule.ts:53
Process a camera frame directly for real-time inference.
This method is bound to a native JSI function after calling load(),
making it worklet-compatible and safe to call from VisionCamera's
frame processor thread.
Performance characteristics:
- Zero-copy path: When using `frame.getNativeBuffer()` from VisionCamera v5, frame data is accessed directly without copying (fastest, recommended).
- Copy path: When using `frame.toArrayBuffer()`, pixel data is copied from native to JS, then accessed from native code (slower, fallback).
Usage with VisionCamera:
```typescript
const frameOutput = useFrameOutput({
  pixelFormat: 'rgb',
  onFrame(frame) {
    'worklet';
    // Zero-copy approach (recommended)
    const nativeBuffer = frame.getNativeBuffer();
    const result = model.generateFromFrame(
      { nativeBuffer: nativeBuffer.pointer, width: frame.width, height: frame.height },
      ...args
    );
    nativeBuffer.release();
    frame.dispose();
  }
});
```

Parameters:
- frameData: Frame data object with either nativeBuffer (zero-copy) or data (ArrayBuffer)
- ...args (any[]): Additional model-specific arguments (e.g., threshold, options)

Returns: any. Model-specific output (e.g., detections, classifications, embeddings).

See: Frame for frame data format details

Inherited from: VisionModule.generateFromFrame
```typescript
nativeModule: any = null
```

Defined in: modules/BaseModule.ts:16

Internal

Native module instance (JSI Host Object)

Inherited from: VisionModule.nativeModule
```typescript
get runOnFrame(): (frame, ...args) => TOutput
```
Defined in: modules/computer_vision/VisionModule.ts:61
Synchronous worklet function for real-time VisionCamera frame processing.
Only available after the model is loaded.
Use this for VisionCamera frame processing in worklets.
For async processing, use forward() instead.
Example:

```typescript
const model = new ClassificationModule();
await model.load({ modelSource: MODEL });

// Use the functional form of setState to store the worklet — passing it
// directly would cause React to invoke it immediately as an updater fn.
const [runOnFrame, setRunOnFrame] = useState(null);
setRunOnFrame(() => model.runOnFrame);

const frameOutput = useFrameOutput({
  onFrame(frame) {
    'worklet';
    if (!runOnFrame) return;
    const result = runOnFrame(frame, isFrontCamera);
    frame.dispose();
  }
});
```

Throws: If the model is not loaded.

Returns: `(frame, ...args) => TOutput`. A worklet function for frame processing.

Inherited from: VisionModule.runOnFrame
```typescript
delete(): void
```

Defined in: modules/BaseModule.ts:81

Unloads the model from memory and releases native resources.
Always call this method when you're done with a model to prevent memory leaks.

Returns: void

Inherited from: VisionModule.delete
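Because delete() must run even when inference throws, a try/finally wrapper can make that guarantee explicit. The sketch below is our own helper, not part of the library; only the delete() call itself comes from the documented API:

```typescript
// Hypothetical helper (not part of the library): run work against a model
// and guarantee delete() is called afterwards, even if the callback throws.
interface Deletable {
  delete(): void;
}

async function withModel<M extends Deletable, R>(
  model: M,
  fn: (m: M) => Promise<R>
): Promise<R> {
  try {
    return await fn(model);
  } finally {
    // Always release native resources, success or failure.
    model.delete();
  }
}
```

Usage would look like `const embedding = await withModel(model, (m) => m.forward(uri));`, keeping the cleanup in one place.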
```typescript
forward(input): Promise<Float32Array<ArrayBufferLike>>
```
Defined in: modules/computer_vision/ImageEmbeddingsModule.ts:72
Executes the model's forward pass with automatic input type detection.
Supports two input types:
- String path/URI: File path, URL, or Base64-encoded string
- PixelData: Raw pixel data from image libraries (e.g., NitroImage)
Note: For VisionCamera frame processing, use runOnFrame instead.
This method is async and cannot be called in worklet context.
Parameters:
- input (string | PixelData): Image source (string path or PixelData object)

Returns: Promise<Float32Array<ArrayBufferLike>>. A Promise that resolves to the model output.
Example:

```typescript
// String path (async)
const result1 = await model.forward('file:///path/to/image.jpg');

// Pixel data (async)
const result2 = await model.forward({
  dataPtr: new Uint8Array(pixelBuffer),
  sizes: [480, 640, 3],
  scalarType: ScalarType.BYTE
});

// For VisionCamera frames, use runOnFrame in a worklet:
const frameOutput = useFrameOutput({
  onFrame(frame) {
    'worklet';
    if (!model.runOnFrame) return;
    const result = model.runOnFrame(frame);
  }
});
```

Overrides: VisionModule.forward
```typescript
protected forwardET(inputTensor): Promise<TensorPtr[]>
```

Defined in: modules/BaseModule.ts:62

Internal

Runs the model's forward method with the given input tensors and returns output tensors that mirror the structure of ExecuTorch's output.

Parameters:
- inputTensor: Array of input tensors.

Returns: Promise<TensorPtr[]>. Array of output tensors.

Inherited from: VisionModule.forwardET
```typescript
getInputShape(methodName, index): Promise<number[]>
```

Defined in: modules/BaseModule.ts:72

Gets the input shape for a given method and index.

Parameters:
- methodName (string): The method name.
- index (number): Index of the argument whose shape is requested.

Returns: Promise<number[]>. The input shape as an array of numbers.

Inherited from: VisionModule.getInputShape
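A common use of getInputShape is sizing an input buffer before inference. This is a sketch: the numElements helper is ours, and the 'forward' method name and the example shape in the comments are assumptions, not values taken from this documentation:

```typescript
// Hypothetical helper: total element count for a tensor shape.
function numElements(shape: number[]): number {
  return shape.reduce((acc, dim) => acc * dim, 1);
}

// Sketch usage after the model is loaded (method name assumed):
// const shape = await model.getInputShape('forward', 0); // e.g. [1, 3, 224, 224]
// const input = new Float32Array(numElements(shape));
```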
```typescript
static fromCustomModel(modelSource, onDownloadProgress?): Promise<ImageEmbeddingsModule>
```

Defined in: modules/computer_vision/ImageEmbeddingsModule.ts:62

Creates an image embeddings instance with a user-provided model binary. Use this when working with a custom-exported model that is not one of the built-in presets.

Parameters:
- modelSource: A fetchable resource pointing to the model binary.
- onDownloadProgress? ((progress) => void): Optional callback to monitor download progress, receiving a value between 0 and 1.

Returns: Promise<ImageEmbeddingsModule>. A Promise resolving to an ImageEmbeddingsModule instance.

Note: The native model contract for this method is not formally defined and may change between releases. Refer to the native source code for the current expected tensor interface.
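Loading code can be written against the documented static signature without committing to a concrete class. In this sketch, the EmbeddingsFactory interface and loadCustomEmbedder helper are our own names; only the fromCustomModel signature and the 0-to-1 progress contract come from the docs above:

```typescript
// Hypothetical wrapper around the documented static factory.
interface EmbeddingsFactory<M> {
  fromCustomModel(
    modelSource: string,
    onDownloadProgress?: (progress: number) => void
  ): Promise<M>;
}

async function loadCustomEmbedder<M>(
  factory: EmbeddingsFactory<M>,
  url: string,
  onPercent: (pct: number) => void
): Promise<M> {
  // progress is documented as a value between 0 and 1.
  return factory.fromCustomModel(url, (p) => onPercent(Math.round(p * 100)));
}
```

Passing the class itself (e.g. `loadCustomEmbedder(ImageEmbeddingsModule, url, setPct)`) keeps the download-progress plumbing out of component code.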
```typescript
static fromModelName(namedSources, onDownloadProgress?): Promise<ImageEmbeddingsModule>
```

Defined in: modules/computer_vision/ImageEmbeddingsModule.ts:24

Creates an image embeddings instance for a built-in model.

Parameters:
- namedSources: An object specifying which built-in model to load and where to fetch it from.
- onDownloadProgress? ((progress) => void): Optional callback to monitor download progress, receiving a value between 0 and 1.

Returns: Promise<ImageEmbeddingsModule>. A Promise resolving to an ImageEmbeddingsModule instance.
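Both factories take the same onDownloadProgress shape, so a throttling wrapper can limit UI updates to whole-percent changes. The throttleProgress helper below is ours, not part of the library, and the commented usage is a sketch (the namedSources value is a placeholder for the documented parameter):

```typescript
// Hypothetical helper: forward progress only when the integer percent changes.
function throttleProgress(onPercent: (pct: number) => void): (p: number) => void {
  let last = -1;
  return (p) => {
    const pct = Math.floor(p * 100);
    if (pct !== last) {
      last = pct;
      onPercent(pct);
    }
  };
}

// Sketch usage (namedSources is a placeholder):
// const model = await ImageEmbeddingsModule.fromModelName(
//   namedSources,
//   throttleProgress((pct) => console.log(`download: ${pct}%`))
// );
```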