**`docs/docs/02-benchmarks/inference-time.md`** (10 additions, 0 deletions)
processing. Resizing is typically fast for small images but may be noticeably slower for very large images, which can increase total time.
:::

:::warning
The times presented in the tables were measured for the `forward` method with an input size of 512. Other input sizes may yield slower or faster inference times.
:::

| Model / Device | iPhone 17 Pro [ms] | Google Pixel 10 [ms] |
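Timings like those in the table can be reproduced in application code by wrapping the call in a small stopwatch. A minimal sketch, assuming a generic async call; the `timeIt` helper and the `model` handle are illustrative, not part of the library:

```typescript
// Illustrative helper: measure the wall-clock latency of any async call
// (for example, a model's forward method). Not part of react-native-executorch.
async function timeIt<T>(fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = Date.now();
  const result = await fn();
  return { result, ms: Date.now() - start };
}

// Hypothetical usage:
// const { result: detections, ms } = await timeIt(() => model.forward(imageUri));
// console.log(`inference took ${ms} ms`);
```

Note that single measurements are noisy; averaging over many runs, as benchmark tables typically do, gives more stable numbers.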
**`docs/docs/03-hooks/02-computer-vision/useObjectDetection.md`** (25 additions, 13 deletions)
You need more details? Check the following resources:

- `error` - An error object if the model failed to load or encountered a runtime error.
- `downloadProgress` - A value between 0 and 1 representing the download progress of the model binary.
- `forward` - A function to run inference on an image.
- `getAvailableInputSizes` - A function that returns the available input sizes for multi-method models (YOLO). Returns `undefined` for single-method models.

## Running the model

To run the model, use the [`forward`](../../06-api-reference/interfaces/ObjectDetectionType.md#forward) method. It accepts two arguments:

- `input` (required) - The image to process. Can be a remote URL, a local file URI, a base64-encoded image (whole URI or only raw base64), or a [`PixelData`](../../06-api-reference/interfaces/PixelData.md) object (raw RGB pixel buffer).
- `options` (optional) - An [`ObjectDetectionOptions`](../../06-api-reference/interfaces/ObjectDetectionOptions.md) object with the following properties:
  - `detectionThreshold` (optional) - A number between 0 and 1 representing the minimum confidence score for a detection to be included in the results. Defaults to a model-specific value (typically `0.7`).
  - `iouThreshold` (optional) - The IoU threshold for non-maximum suppression (0-1). Defaults to a model-specific value (typically `0.55`).
  - `inputSize` (optional) - For multi-method models like YOLO, the input resolution (`384`, `512`, or `640`). Defaults to `384` for YOLO models.
  - `classesOfInterest` (optional) - An array of class labels to filter detections. Only detections matching these classes will be returned.

`forward` returns a promise resolving to an array of [`Detection`](../../06-api-reference/interfaces/Detection.md) objects, each containing:

| Model | Number of classes | Class list | Multi-size Support |
| ----- | ----------------- | ---------- | ------------------ |
| [SSDLite320 MobileNetV3 Large](https://huggingface.co/software-mansion/react-native-executorch-ssdlite320-mobilenet-v3-large) | 91 | [COCO](../../06-api-reference/enumerations/CocoLabel.md) | No |
| [RF-DETR Nano](https://huggingface.co/software-mansion/react-native-executorch-rf-detr-nano) | 80 | [COCO](../../06-api-reference/enumerations/CocoLabel.md) | No |

YOLO models support multiple input sizes (384px, 512px, 640px). Smaller sizes are faster but less accurate, while larger sizes are more accurate but slower. Choose based on your speed/accuracy requirements.
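The speed/accuracy trade-off above can be sketched as a small helper that picks the largest input size fitting a latency budget. The helper and the relative-cost numbers below are hypothetical illustrations, not library API or measured benchmarks:

```typescript
// The supported YOLO input sizes, per the docs above.
type YoloInputSize = 384 | 512 | 640;

// Rough relative inference cost per size: illustrative numbers only,
// not measured benchmarks. Larger inputs cost more but detect better.
const APPROX_RELATIVE_COST: Record<YoloInputSize, number> = {
  384: 1.0, // fastest, least accurate
  512: 1.8,
  640: 2.8, // slowest, most accurate
};

// Hypothetical helper: prefer the largest (most accurate) available size
// whose relative cost fits the budget; otherwise fall back to the smallest.
function pickInputSize(
  available: YoloInputSize[],
  maxRelativeCost: number,
): YoloInputSize {
  const sorted = [...available].sort((a, b) => b - a); // largest first
  for (const size of sorted) {
    if (APPROX_RELATIVE_COST[size] <= maxRelativeCost) return size;
  }
  return sorted[sorted.length - 1]; // nothing fits: take the smallest size
}
```

In a real app, the `available` array could come from `getAvailableInputSizes`, and the chosen size would be passed as the `inputSize` option to `forward`.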
**`docs/docs/04-typescript-api/02-computer-vision/ObjectDetectionModule.md`** (15 additions, 1 deletion)
To run the model, use the [`forward`](../../06-api-reference/classes/ObjectDetectionModule.md#forward) method. It accepts two arguments:

- `input` (required) - The image to process. Can be a remote URL, a local file URI, a base64-encoded image (whole URI or only raw base64), or a [`PixelData`](../../06-api-reference/interfaces/PixelData.md) object (raw RGB pixel buffer).
- `options` (optional) - An [`ObjectDetectionOptions`](../../06-api-reference/interfaces/ObjectDetectionOptions.md) object with:
  - `detectionThreshold` (optional) - A number between 0 and 1. Defaults to a model-specific value (typically `0.7`).
  - `iouThreshold` (optional) - The IoU threshold for NMS (0-1). Defaults to a model-specific value.
  - `inputSize` (optional) - For YOLO models: `384`, `512`, or `640`. Defaults to `384`.
  - `classesOfInterest` (optional) - An array of class labels to filter detections.

The method returns a promise resolving to an array of [`Detection`](../../06-api-reference/interfaces/Detection.md) objects, each containing the bounding box, label, and confidence score.

For real-time frame processing, use [`runOnFrame`](../../03-hooks/02-computer-vision/visioncamera-integration.md) instead.
### Example with Options

```typescript
const detections = await model.forward(imageUri, {
  detectionThreshold: 0.5,
  inputSize: 640, // YOLO models only
  classesOfInterest: ['PERSON', 'CAR'],
});
```
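For illustration, the effect of `detectionThreshold` and `classesOfInterest` can be emulated client-side on an already-obtained result. The `DetectionLike` shape below is a simplified stand-in for the library's `Detection` interface, assumed for this sketch:

```typescript
// Simplified stand-in for the Detection interface (label + confidence score).
interface DetectionLike {
  label: string;
  score: number;
}

// Hypothetical helper mirroring what the options do: keep detections that
// meet the confidence threshold and, if a class filter is given, match it.
function filterDetections(
  detections: DetectionLike[],
  classesOfInterest?: string[],
  detectionThreshold = 0.7,
): DetectionLike[] {
  return detections.filter(
    (d) =>
      d.score >= detectionThreshold &&
      (!classesOfInterest || classesOfInterest.includes(d.label)),
  );
}
```

Passing these options to `forward` directly is preferable, since the native side can then skip post-processing work for filtered-out classes.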
## Using a custom model

Use [`fromCustomModel`](../../06-api-reference/classes/ObjectDetectionModule.md#fromcustommodel) to load your own exported model binary instead of a built-in preset.