Commit b71c041 ("Update docs")
Parent: 7b876df

5 files changed: +33 −27 lines

apps/computer-vision/app/index.tsx

0 additions & 1 deletion

@@ -41,7 +41,6 @@ export default function Home() {
         >
           <Text style={styles.buttonText}>Object Detection Live</Text>
         </TouchableOpacity>
-        <TouchableOpacity style={styles.button}></TouchableOpacity>
         <TouchableOpacity
           style={styles.button}
           onPress={() => router.navigate('instance_segmentation/')}

docs/docs/02-benchmarks/memory-usage.md

13 additions & 7 deletions

@@ -169,17 +169,23 @@ with higher resolutions.
 
 ## Instance Segmentation
 
+:::info
+All the below benchmarks were performed on iPhone 17 Pro (iOS) and OnePlus 12
+(Android).
+:::
+
 :::warning
 Data presented in the following sections is based on inference with forward_640 method.
 :::
 
-| Model       | Android (XNNPACK) [MB] | iOS (XNNPACK) [MB] |
-| ----------- | ---------------------- | ------------------ |
-| YOLO26N_SEG | 92                     | 668                |
-| YOLO26S_SEG | 220                    | 712                |
-| YOLO26M_SEG | 570                    | 815                |
-| YOLO26L_SEG | 680                    | 1024               |
-| YOLO26X_SEG | 1410                   | 1450               |
+| Model            | Android (XNNPACK) [MB] | iOS (XNNPACK) [MB] |
+| ---------------- | ---------------------- | ------------------ |
+| YOLO26N_SEG      | 92                     | 668                |
+| YOLO26S_SEG      | 220                    | 712                |
+| YOLO26M_SEG      | 570                    | 815                |
+| YOLO26L_SEG      | 680                    | 1024               |
+| YOLO26X_SEG      | 1410                   | 1450               |
+| RF_DETR_NANO_SEG | 620                    | 603                |
 
 ## Text to image

docs/docs/02-benchmarks/model-size.md

11 additions & 0 deletions

@@ -14,6 +14,17 @@ title: Model Size
 | ------------------------------ | :---------------: | :---------------: | :---------------: |
 | SSDLITE_320_MOBILENET_V3_LARGE | 13.9              | 15.6              | 8.46              |
 
+## Instance Segmentation
+
+| Model            | XNNPACK [MB] |
+| ---------------- | :----------: |
+| YOLO26N_SEG      |     11.6     |
+| YOLO26S_SEG      |     42.3     |
+| YOLO26M_SEG      |     95.4     |
+| YOLO26L_SEG      |     113      |
+| YOLO26X_SEG      |     252      |
+| RF_DETR_NANO_SEG |     124      |
+
 ## Style Transfer
 
 | Model | XNNPACK FP32 [MB] | XNNPACK INT8 [MB] | Core ML FP32 [MB] | Core ML FP16 [MB] |

docs/docs/03-hooks/02-computer-vision/useInstanceSegmentation.md

8 additions & 7 deletions

@@ -118,10 +118,11 @@ function App() {
 YOLO models use the [`CocoLabelYolo`](../../06-api-reference/enumerations/CocoLabelYolo.md) enum (80 classes, 0-indexed), which differs from [`CocoLabel`](../../06-api-reference/enumerations/CocoLabel.md) used by RF-DETR and SSDLite object detection models (91 classes, 1-indexed). When filtering with `classesOfInterest`, use the label names from `CocoLabelYolo`.
 :::
 
-| Model       | Number of classes | Class list                                                          | Available input sizes |
-| ----------- | ----------------- | ------------------------------------------------------------------- | --------------------- |
-| yolo26n-seg | 80                | [COCO (YOLO)](../../06-api-reference/enumerations/CocoLabelYolo.md) | 384, 512, 640         |
-| yolo26s-seg | 80                | [COCO (YOLO)](../../06-api-reference/enumerations/CocoLabelYolo.md) | 384, 512, 640         |
-| yolo26m-seg | 80                | [COCO (YOLO)](../../06-api-reference/enumerations/CocoLabelYolo.md) | 384, 512, 640         |
-| yolo26l-seg | 80                | [COCO (YOLO)](../../06-api-reference/enumerations/CocoLabelYolo.md) | 384, 512, 640         |
-| yolo26x-seg | 80                | [COCO (YOLO)](../../06-api-reference/enumerations/CocoLabelYolo.md) | 384, 512, 640         |
+| Model           | Number of classes | Class list                                                          | Available input sizes |
+| --------------- | ----------------- | ------------------------------------------------------------------- | --------------------- |
+| yolo26n-seg     | 80                | [COCO (YOLO)](../../06-api-reference/enumerations/CocoLabelYolo.md) | 384, 512, 640         |
+| yolo26s-seg     | 80                | [COCO (YOLO)](../../06-api-reference/enumerations/CocoLabelYolo.md) | 384, 512, 640         |
+| yolo26m-seg     | 80                | [COCO (YOLO)](../../06-api-reference/enumerations/CocoLabelYolo.md) | 384, 512, 640         |
+| yolo26l-seg     | 80                | [COCO (YOLO)](../../06-api-reference/enumerations/CocoLabelYolo.md) | 384, 512, 640         |
+| yolo26x-seg     | 80                | [COCO (YOLO)](../../06-api-reference/enumerations/CocoLabelYolo.md) | 384, 512, 640         |
+| rfdetr-nano-seg | 91                | [COCO](../../06-api-reference/enumerations/CocoLabel.md)            | N/A                   |
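The indexing note in this file (`CocoLabelYolo` is 0-indexed with 80 classes, `CocoLabel` is 1-indexed with 91) is easy to trip over when filtering. A minimal sketch of what name-based filtering like `classesOfInterest` amounts to — the `Detection` shape here is hypothetical, not the hook's actual result type:

```typescript
// Hypothetical detection shape; the hook's real result type may differ.
interface Detection {
  label: string;
  score: number;
}

// Keep only detections whose label is in classesOfInterest. For YOLO
// models the label names must come from CocoLabelYolo, not CocoLabel.
function filterByClasses(
  detections: Detection[],
  classesOfInterest: string[],
): Detection[] {
  const wanted = new Set(classesOfInterest);
  return detections.filter((d) => wanted.has(d.label));
}
```

Because the two enums differ in both size and starting index, passing `CocoLabel` names to a YOLO model (or vice versa) would silently match nothing.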

packages/react-native-executorch/common/rnexecutorch/models/VisionModel.h

1 addition & 12 deletions

@@ -90,25 +90,14 @@ class VisionModel : public BaseModel {
    */
   virtual cv::Mat preprocess(const cv::Mat &image) const;
 
-  /// Cached input tensor shape (getAllInputShapes()[0]).
-  /// Set once by each subclass constructor to avoid per-frame metadata lookups.
-  std::vector<int32_t> modelInputShape_;
-
   /**
    * @brief Get the spatial dimensions of the model input.
    *
    * By default, returns the last two dimensions of modelInputShape_.
    * Subclasses may override this for models with dynamic or multiple input
    * sizes.
    */
-  virtual cv::Size modelInputSize() const {
-    if (modelInputShape_.size() < 2) {
-      return {0, 0};
-    }
-    return cv::Size(modelInputShape_[modelInputShape_.size() - 1],
-                    modelInputShape_[modelInputShape_.size() - 2]);
-  }
-
+  virtual cv::Size modelInputSize() const;
   /**
    * @brief Extract an RGB cv::Mat from a VisionCamera frame
    *
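The inline body removed from the header presumably moves to an out-of-line definition (e.g. in a VisionModel.cpp not shown in this commit). A standalone sketch of the same logic — for an NCHW-style shape, width is the last dimension and height the second-to-last, with `{0, 0}` for shapes of fewer than two dimensions. The `Size` struct is a stand-in for `cv::Size` so the sketch compiles without OpenCV:

```cpp
#include <cstdint>
#include <vector>

// Stand-in for cv::Size; only here so the sketch has no OpenCV dependency.
struct Size {
  int width;
  int height;
};

// Same extraction the removed inline body performed: spatial size is
// (width = last dim, height = second-to-last dim) of the input shape.
Size modelInputSize(const std::vector<int32_t> &shape) {
  if (shape.size() < 2) {
    return {0, 0};
  }
  return {static_cast<int>(shape[shape.size() - 1]),
          static_cast<int>(shape[shape.size() - 2])};
}
```

Note that this commit also removes the cached `modelInputShape_` member from the header, so the real out-of-line definition may obtain the shape some other way.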
