
Commit 9b77da7

committed
Update docs
1 parent 42aaf02 commit 9b77da7

5 files changed (+66 −14 lines)


docs/docs/02-benchmarks/inference-time.md

Lines changed: 10 additions & 0 deletions
@@ -43,11 +43,21 @@ processing. Resizing is typically fast for small images but may be noticeably
 slower for very large images, which can increase total time.
 :::
 
+:::warning
+Times presented in the tables are measured for the `forward` method with an input size of 512. Other input sizes may yield faster or slower inference times.
+:::
+
 | Model / Device                                | iPhone 17 Pro [ms] | Google Pixel 10 [ms] |
 | :-------------------------------------------- | :----------------: | :------------------: |
 | SSDLITE_320_MOBILENET_V3_LARGE (XNNPACK FP32) | 20                 | 18                   |
 | SSDLITE_320_MOBILENET_V3_LARGE (Core ML FP32) | 18                 | -                    |
 | SSDLITE_320_MOBILENET_V3_LARGE (Core ML FP16) | 8                  | -                    |
+| RF_DETR_NANO (XNNPACK FP32)                   | TBD                | TBD                  |
+| YOLO26N (XNNPACK FP32)                        | TBD                | TBD                  |
+| YOLO26S (XNNPACK FP32)                        | TBD                | TBD                  |
+| YOLO26M (XNNPACK FP32)                        | TBD                | TBD                  |
+| YOLO26L (XNNPACK FP32)                        | TBD                | TBD                  |
+| YOLO26X (XNNPACK FP32)                        | TBD                | TBD                  |
 
 ## Style Transfer

docs/docs/02-benchmarks/memory-usage.md

Lines changed: 10 additions & 0 deletions
@@ -25,11 +25,21 @@ loaded and actively running inference, relative to the baseline app memory
 before model initialization.
 :::
 
+:::warning
+Data presented for YOLO models is based on inference with the `forward_640` method.
+:::
+
 | Model / Device                                | iPhone 17 Pro [MB] | Google Pixel 10 [MB] |
 | --------------------------------------------- | :----------------: | :------------------: |
 | SSDLITE_320_MOBILENET_V3_LARGE (XNNPACK FP32) | 94                 | 104                  |
 | SSDLITE_320_MOBILENET_V3_LARGE (Core ML FP32) | 83                 | -                    |
 | SSDLITE_320_MOBILENET_V3_LARGE (Core ML FP16) | 62                 | -                    |
+| RF_DETR_NANO (XNNPACK FP32)                   | TBD                | TBD                  |
+| YOLO26N (XNNPACK FP32)                        | TBD                | TBD                  |
+| YOLO26S (XNNPACK FP32)                        | TBD                | TBD                  |
+| YOLO26M (XNNPACK FP32)                        | TBD                | TBD                  |
+| YOLO26L (XNNPACK FP32)                        | TBD                | TBD                  |
+| YOLO26X (XNNPACK FP32)                        | TBD                | TBD                  |
 
 ## Style Transfer

docs/docs/02-benchmarks/model-size.md

Lines changed: 6 additions & 0 deletions
@@ -13,6 +13,12 @@ title: Model Size
 | Model                          | XNNPACK FP32 [MB] | Core ML FP32 [MB] | Core ML FP16 [MB] |
 | ------------------------------ | :---------------: | :---------------: | :---------------: |
 | SSDLITE_320_MOBILENET_V3_LARGE | 13.9              | 15.6              | 8.46              |
+| RF_DETR_NANO                   | 112               | -                 | -                 |
+| YOLO26N                        | 10.3              | -                 | -                 |
+| YOLO26S                        | 38.6              | -                 | -                 |
+| YOLO26M                        | 82.3              | -                 | -                 |
+| YOLO26L                        | 100               | -                 | -                 |
+| YOLO26X                        | 224               | -                 | -                 |
 
 ## Instance Segmentation
docs/docs/03-hooks/02-computer-vision/useObjectDetection.md

Lines changed: 25 additions & 13 deletions
@@ -61,13 +61,18 @@ You need more details? Check the following resources:
 - `error` - An error object if the model failed to load or encountered a runtime error.
 - `downloadProgress` - A value between 0 and 1 representing the download progress of the model binary.
 - `forward` - A function to run inference on an image.
+- `getAvailableInputSizes` - A function that returns the available input sizes for multi-method models (YOLO). Returns `undefined` for single-method models.
 
 ## Running the model
 
 To run the model, use the [`forward`](../../06-api-reference/interfaces/ObjectDetectionType.md#forward) method. It accepts two arguments:
 
 - `input` (required) - The image to process. Can be a remote URL, a local file URI, a base64-encoded image (whole URI or only raw base64), or a [`PixelData`](../../06-api-reference/interfaces/PixelData.md) object (raw RGB pixel buffer).
-- `detectionThreshold` (optional) - A number between 0 and 1 representing the minimum confidence score for a detection to be included in the results. Defaults to `0.7`.
+- `options` (optional) - An [`ObjectDetectionOptions`](../../06-api-reference/interfaces/ObjectDetectionOptions.md) object with the following properties:
+  - `detectionThreshold` (optional) - A number between 0 and 1 representing the minimum confidence score. Defaults to a model-specific value (typically `0.7`).
+  - `iouThreshold` (optional) - The IoU threshold for non-maximum suppression (0-1). Defaults to a model-specific value (typically `0.55`).
+  - `inputSize` (optional) - For multi-method models like YOLO, the input resolution (`384`, `512`, or `640`). Defaults to `384` for YOLO models.
+  - `classesOfInterest` (optional) - An array of class labels to filter detections. Only detections matching these classes are returned.
 
 `forward` returns a promise resolving to an array of [`Detection`](../../06-api-reference/interfaces/Detection.md) objects, each containing:
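Since `getAvailableInputSizes` returns `undefined` for single-method models, callers need a fallback when choosing a resolution. A minimal sketch of that selection logic; the `DetectionModel` shape here is a hypothetical stand-in for the hook's return value, not the library's actual types:

```typescript
// Input sizes documented for the YOLO multi-method models.
type InputSize = 384 | 512 | 640;

// Hypothetical stand-in for the object returned by useObjectDetection;
// only the field discussed above is modeled.
interface DetectionModel {
  getAvailableInputSizes: () => InputSize[] | undefined;
}

// Pick the largest supported size, falling back to a default for
// single-method models, where getAvailableInputSizes returns undefined.
function pickInputSize(model: DetectionModel, fallback: InputSize = 384): InputSize {
  const sizes = model.getAvailableInputSizes();
  if (!sizes || sizes.length === 0) {
    return fallback;
  }
  return sizes.reduce((best, size) => (size > best ? size : best));
}
```

With a multi-size model this resolves to the largest advertised size; with a single-method model it falls back to the default.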
@@ -78,11 +83,11 @@ To run the model, use the [`forward`](../../06-api-reference/interfaces/ObjectDe
 ## Example
 
 ```typescript
-import { useObjectDetection, RF_DETR_NANO } from 'react-native-executorch';
+import { useObjectDetection, YOLO26N } from 'react-native-executorch';
 
 function App() {
   const model = useObjectDetection({
-    model: RF_DETR_NANO,
+    model: YOLO26N,
   });
 
   const handleDetect = async () => {
@@ -91,13 +96,12 @@ function App() {
     const imageUri = 'file:///Users/.../photo.jpg';
 
     try {
-      const detections = await model.forward(imageUri, 0.5);
+      const detections = await model.forward(imageUri, {
+        detectionThreshold: 0.5,
+        inputSize: 640,
+      });
 
-      for (const detection of detections) {
-        console.log('Label:', detection.label);
-        console.log('Score:', detection.score);
-        console.log('Bounding box:', detection.bbox);
-      }
+      console.log('Detected:', detections.length, 'objects');
     } catch (error) {
       console.error(error);
     }
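Beyond logging the count, a common next step once `forward` resolves is aggregating results per class. A small standalone sketch; the minimal `DetectionLike` shape is hypothetical, standing in for the documented `Detection` interface:

```typescript
// Hypothetical minimal shape of one detection result, for illustration.
interface DetectionLike {
  label: string;
  score: number;
}

// Count detections per class label, e.g. { PERSON: 2, CAR: 1 }.
function countByLabel(detections: DetectionLike[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const d of detections) {
    counts[d.label] = (counts[d.label] ?? 0) + 1;
  }
  return counts;
}
```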
@@ -113,7 +117,15 @@ See the full guide: [VisionCamera Integration](./visioncamera-integration.md).
 
 ## Supported models
 
-| Model | Number of classes | Class list |
-| ----- | ----------------- | ---------- |
-| [SSDLite320 MobileNetV3 Large](https://huggingface.co/software-mansion/react-native-executorch-ssdlite320-mobilenet-v3-large) | 91 | [COCO](../../06-api-reference/enumerations/CocoLabel.md) |
-| [RF-DETR Nano](https://huggingface.co/software-mansion/react-native-executorch-rf-detr-nano) | 80 | [COCO](../../06-api-reference/enumerations/CocoLabel.md) |
+| Model | Number of classes | Class list | Multi-size support |
+| ----- | ----------------- | ---------- | ------------------ |
+| [SSDLite320 MobileNetV3 Large](https://huggingface.co/software-mansion/react-native-executorch-ssdlite320-mobilenet-v3-large) | 91 | [COCO](../../06-api-reference/enumerations/CocoLabel.md) | No |
+| [RF-DETR Nano](https://huggingface.co/software-mansion/react-native-executorch-rf-detr-nano) | 80 | [COCO](../../06-api-reference/enumerations/CocoLabel.md) | No |
+| [YOLO26N](https://huggingface.co/software-mansion/react-native-executorch-yolo26) | 80 | [COCO YOLO](../../06-api-reference/enumerations/CocoLabel.md) | Yes (384/512/640) |
+| [YOLO26S](https://huggingface.co/software-mansion/react-native-executorch-yolo26) | 80 | [COCO YOLO](../../06-api-reference/enumerations/CocoLabel.md) | Yes (384/512/640) |
+| [YOLO26M](https://huggingface.co/software-mansion/react-native-executorch-yolo26) | 80 | [COCO YOLO](../../06-api-reference/enumerations/CocoLabel.md) | Yes (384/512/640) |
+| [YOLO26L](https://huggingface.co/software-mansion/react-native-executorch-yolo26) | 80 | [COCO YOLO](../../06-api-reference/enumerations/CocoLabel.md) | Yes (384/512/640) |
+| [YOLO26X](https://huggingface.co/software-mansion/react-native-executorch-yolo26) | 80 | [COCO YOLO](../../06-api-reference/enumerations/CocoLabel.md) | Yes (384/512/640) |
+
+:::tip
+YOLO models support multiple input sizes (384px, 512px, 640px). Smaller sizes are faster but less accurate, while larger sizes are more accurate but slower. Choose based on your speed/accuracy requirements.
+:::
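Conceptually, `detectionThreshold`, `iouThreshold`, and `classesOfInterest` act as a post-detection filter: drop low-confidence boxes, keep only the requested classes, then suppress overlapping duplicates. The sketch below is illustrative only; the library applies these filters natively, and the `Detection` shape, the class-wise NMS, and the `postProcess` helper are assumptions for the example, not its API:

```typescript
// Hypothetical detection shape for illustration (axis-aligned corner boxes).
interface Detection {
  label: string;
  score: number;
  bbox: { x1: number; y1: number; x2: number; y2: number };
}

// Intersection-over-union of two axis-aligned boxes.
function iou(a: Detection['bbox'], b: Detection['bbox']): number {
  const ix = Math.max(0, Math.min(a.x2, b.x2) - Math.max(a.x1, b.x1));
  const iy = Math.max(0, Math.min(a.y2, b.y2) - Math.max(a.y1, b.y1));
  const inter = ix * iy;
  const areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
  const areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
  return inter / (areaA + areaB - inter);
}

function postProcess(
  detections: Detection[],
  {
    detectionThreshold = 0.7,
    iouThreshold = 0.55,
    classesOfInterest,
  }: { detectionThreshold?: number; iouThreshold?: number; classesOfInterest?: string[] } = {}
): Detection[] {
  // 1) Confidence filter, 2) class filter, 3) sort by score for greedy NMS.
  const candidates = detections
    .filter((d) => d.score >= detectionThreshold)
    .filter((d) => !classesOfInterest || classesOfInterest.includes(d.label))
    .sort((a, b) => b.score - a.score);

  // Greedy NMS: keep the highest-scoring box; drop later boxes of the same
  // class that overlap a kept box beyond the IoU threshold (class-wise NMS
  // is an assumption here).
  const kept: Detection[] = [];
  for (const d of candidates) {
    if (!kept.some((k) => k.label === d.label && iou(k.bbox, d.bbox) > iouThreshold)) {
      kept.push(d);
    }
  }
  return kept;
}
```

Tightening `iouThreshold` removes more overlapping boxes, and raising `detectionThreshold` trades recall for precision, which matches how the documented defaults behave.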

docs/docs/04-typescript-api/02-computer-vision/ObjectDetectionModule.md

Lines changed: 15 additions & 1 deletion
@@ -43,12 +43,26 @@ For more information on loading resources, take a look at [loading models](../..
 To run the model, use the [`forward`](../../06-api-reference/classes/ObjectDetectionModule.md#forward) method. It accepts two arguments:
 
 - `input` (required) - The image to process. Can be a remote URL, a local file URI, a base64-encoded image (whole URI or only raw base64), or a [`PixelData`](../../06-api-reference/interfaces/PixelData.md) object (raw RGB pixel buffer).
-- `detectionThreshold` (optional) - A number between 0 and 1. Defaults to `0.7`.
+- `options` (optional) - An [`ObjectDetectionOptions`](../../06-api-reference/interfaces/ObjectDetectionOptions.md) object with:
+  - `detectionThreshold` (optional) - The minimum confidence score (0-1). Defaults to a model-specific value.
+  - `iouThreshold` (optional) - The IoU threshold for NMS (0-1). Defaults to a model-specific value.
+  - `inputSize` (optional) - For YOLO models: `384`, `512`, or `640`. Defaults to `384`.
+  - `classesOfInterest` (optional) - An array of class labels to filter detections.
 
 The method returns a promise resolving to an array of [`Detection`](../../06-api-reference/interfaces/Detection.md) objects, each containing the bounding box, label, and confidence score.
 
 For real-time frame processing, use [`runOnFrame`](../../03-hooks/02-computer-vision/visioncamera-integration.md) instead.
 
+### Example with Options
+
+```typescript
+const detections = await model.forward(imageUri, {
+  detectionThreshold: 0.5,
+  inputSize: 640, // YOLO models only
+  classesOfInterest: ['PERSON', 'CAR'],
+});
+```
+
 ## Using a custom model
 
 Use [`fromCustomModel`](../../06-api-reference/classes/ObjectDetectionModule.md#fromcustommodel) to load your own exported model binary instead of a built-in preset.
