## How it works
VisionCamera v5 delivers frames via [`useFrameOutput`](https://react-native-vision-camera-v5-docs.vercel.app/docs/frame-output). Inside the `onFrame` worklet you call `runOnFrame(frame, isFrontCamera)` synchronously, then use `scheduleOnRN` from `react-native-worklets` to post the result back to React state on the main thread.
The `isFrontCamera` parameter tells the native side whether the front camera is active so it can correctly mirror the results. The library handles all device orientation rotation internally — results are always returned in screen-space coordinates regardless of how the user holds their device.
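The per-frame flow can be sketched without any native dependencies. In the snippet below, `Frame`, `runOnFrame`, and `postToReact` are stand-ins for the real VisionCamera frame, the library's worklet function, and `scheduleOnRN`; it only illustrates the control flow (guard while the model loads, synchronous inference, post the result back, always dispose), not the actual APIs.

```tsx
// Dependency-free sketch of the onFrame control flow described above.
type Frame = { dispose(): void };

function makeFrame(): Frame & { disposed: boolean } {
  const frame = {
    disposed: false,
    dispose() {
      frame.disposed = true;
    },
  };
  return frame;
}

function processFrame(
  frame: Frame,
  // null while the model is still loading
  runOnFrame: ((f: Frame, isFrontCamera: boolean) => string | null) | null,
  isFrontCamera: boolean,
  // stand-in for scheduleOnRN(setState, result)
  postToReact: (result: string) => void
): void {
  try {
    if (!runOnFrame) return; // model not ready yet: skip this frame
    const result = runOnFrame(frame, isFrontCamera);
    if (result) postToReact(result);
  } finally {
    frame.dispose(); // release the frame buffer even if inference throws
  }
}
```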
:::warning
You **must** set `pixelFormat: 'rgb'` in `useFrameOutput`. Our extraction pipeline expects RGB pixel data — any other format (e.g. the default `yuv`) will produce incorrect results.
:::
:::warning
Always call `frame.dispose()` after processing to release the frame buffer. Wrap your inference in a `try/finally` to ensure it's always called even if `runOnFrame` throws.
:::
## Camera configuration
The `Camera` component requires specific props for correct orientation handling:
```tsx
<Camera
  device={device}
  outputs={[frameOutput]}
  isActive
  orientationSource="device"
/>
```
- **`orientationSource="device"`** ensures frame orientation metadata reflects the physical device orientation, which the library uses to rotate model inputs and outputs correctly.
- **Do not set `enablePhysicalBufferRotation`.** This prop must remain `false` (the default). If enabled, VisionCamera pre-rotates the pixel buffer, which conflicts with the library's own orientation handling and produces incorrect results.
## Full example

```tsx
// ... model loading and useFrameOutput setup elided in this excerpt ...
      if (!detRof) return;
      const isFrontCamera = false; // using the back camera
      const result = detRof(frame, isFrontCamera, 0.5);
      if (result) {
        scheduleOnRN(updateDetections, result);
      }
    } finally {
      frame.dispose();
    }
  },
  [detRof, updateDetections]
),
});

// ...

  if (!device) return null;

  return (
    <View style={styles.container}>
      <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        outputs={[frameOutput]}
        isActive
        orientationSource="device"
      />
      {detections.map((det, i) => (
        <Text key={i} style={styles.label}>
          {det.label} {(det.score * 100).toFixed(1)}%
        </Text>
      ))}
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  label: {
    position: 'absolute',
    bottom: 40,
    alignSelf: 'center',
    color: 'white',
    fontSize: 16,
  },
});
```
For a complete example showing how to render bounding boxes, segmentation masks, OCR overlays, and style transfer results on top of the camera preview, see the [example app's VisionCamera tasks](https://github.com/software-mansion/react-native-executorch/tree/main/apps/computer-vision/components/vision_camera).
## Handling front/back camera
When switching between front and back cameras, you need to pass the correct `isFrontCamera` value to `runOnFrame`. Since worklets cannot read React state directly, use a `Synchronizable` from `react-native-worklets`:
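The snippet below sketches this pattern with a local stand-in, `makeSynchronizable`, so the data flow between the two sides is visible. The real primitive and its method names (`createSynchronizable` with `getBlocking`/`setBlocking` in recent `react-native-worklets` releases) are an assumption here and should be verified against that library's documentation; the real one shares the value safely across threads, which this mock does not attempt.

```tsx
// Stand-in for the Synchronizable primitive from react-native-worklets.
// It only models the read/write API shape, not cross-thread sharing.
function makeSynchronizable<T>(initial: T) {
  let value = initial;
  return {
    getBlocking: (): T => value,
    setBlocking: (next: T) => {
      value = next;
    },
  };
}

const isFrontCamera = makeSynchronizable(false);

// React side: update the flag when the user flips the camera.
function onFlipCamera(nowFront: boolean) {
  isFrontCamera.setBlocking(nowFront);
}

// Worklet side: read the current flag inside onFrame and pass it along.
function onFrameWorklet(runOnFrame: (front: boolean) => string): string {
  return runOnFrame(isFrontCamera.getBlocking());
}
```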
## Using the Module API

If you use the TypeScript Module API (e.g. `ClassificationModule`) directly instead of a hook, `runOnFrame` is a worklet function and **cannot** be passed directly to `useState`, because React would invoke it as a state initializer. Use the functional updater form `() => module.runOnFrame`:
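To see why the bare form misbehaves, the sketch below mimics React's lazy-initializer rule for `useState` (a function argument is called once to produce the initial state). `initialStateOf` and `runOnFrame` here are illustrative stand-ins, not React internals or the library's actual module.

```tsx
// Mimics React's useState initial-value rule: if the argument is a
// function, it is invoked and its return value is stored instead.
function initialStateOf<T>(initial: T | (() => T)): T {
  return typeof initial === 'function' ? (initial as () => T)() : initial;
}

// Stand-in worklet function (the real one processes a camera frame).
const runOnFrame = () => 'inference result';

const wrong = initialStateOf(runOnFrame);       // invoked: stores the string it returned
const right = initialStateOf(() => runOnFrame); // updater form: stores the function itself
```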
## Troubleshooting
#### Bounding boxes or masks are rotated / misaligned
Make sure you have set `orientationSource="device"` on the `Camera` component. Without it, the frame orientation metadata won't match the actual device orientation, causing misaligned results.
Also verify that `enablePhysicalBufferRotation` is **not** set to `true` — this conflicts with the library's orientation handling.
#### Results look wrong or scrambled
You forgot to set `pixelFormat: 'rgb'`. The default VisionCamera pixel format is `yuv` — our frame extraction works only with RGB data.
#### Results are mirrored on front camera
You are not passing `isFrontCamera: true` when using the front camera. See [Handling front/back camera](#handling-frontback-camera) above.
#### App freezes or camera drops frames
Your model's inference time exceeds the frame interval. Enable `dropFramesWhileBusy: true` in `useFrameOutput`, or move inference off the worklet thread using VisionCamera's [async frame processing](https://react-native-vision-camera-v5-docs.vercel.app/docs/async-frame-processing).
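As a sketch, the flag sits in the `useFrameOutput` options (a fragment assuming the option names mentioned on this page):

```tsx
const frameOutput = useFrameOutput({
  pixelFormat: 'rgb',
  dropFramesWhileBusy: true, // skip new frames while inference is still running
  onFrame, // your worklet from above
});
```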
#### Memory leak / crash after many frames
You are not calling `frame.dispose()`. Always dispose the frame in a `finally` block.
#### `runOnFrame` is always null
The model hasn't finished loading yet. Guard with `if (!runOnFrame) return` inside `onFrame`, or check `model.isReady` before enabling the camera.
#### TypeError: `module.runOnFrame` is not a function (Module API)
You passed `module.runOnFrame` directly to `setState` instead of `() => module.runOnFrame`. React invoked it as a state initializer — see the [Module API section](#using-the-module-api) above.