## How it works

VisionCamera v5 delivers frames via [`useFrameOutput`](https://react-native-vision-camera-v5-docs.vercel.app/docs/frame-output). Inside the `onFrame` worklet you call `runOnFrame(frame, isFrontCamera)` synchronously, then use `scheduleOnRN` from `react-native-worklets` to post the result back to React state on the main thread.

The `isFrontCamera` parameter tells the native side whether the front camera is active so it can correctly mirror the results. The library handles all device orientation rotation internally — results are always returned in screen-space coordinates regardless of how the user holds their device.
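
The hand-off described above can be sketched in plain TypeScript. Everything here is an illustrative stand-in, not the library API: `scheduleOnRNStub` models `scheduleOnRN` with a simple callback queue, and `runOnFrameStub` models synchronous per-frame inference.

```typescript
// Simulation of the worklet -> React hand-off. A plain callback queue stands
// in for the React Native JS thread; runOnFrameStub stands in for the model.
const rnQueue: Array<() => void> = [];

function scheduleOnRNStub<T>(fn: (arg: T) => void, arg: T): void {
  rnQueue.push(() => fn(arg)); // defer: worklets must not touch React state directly
}

let topLabel = '';
const setTopLabel = (label: string) => { topLabel = label; };

// Stand-in for runOnFrame(frame, isFrontCamera): synchronous inference per frame.
const runOnFrameStub = (_frame: object, _isFrontCamera: boolean): string => 'cat';

// Inside the onFrame worklet: infer synchronously, post the result back.
const result = runOnFrameStub({}, false);
scheduleOnRNStub(setTopLabel, result);

// Later, the RN thread drains its queue and React state updates.
rnQueue.forEach((job) => job());
console.log(topLabel); // "cat"
```

The key point the sketch shows: the worklet never calls the state setter directly; it only enqueues the call for the main thread.
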
:::warning

You **must** set `pixelFormat: 'rgb'` in `useFrameOutput`. Our extraction pipeline expects RGB pixel data — any other format (e.g. the default `yuv`) will produce incorrect results.
:::

:::warning

Always call `frame.dispose()` after processing to release the frame buffer. Wrap your inference in a `try/finally` to ensure it's always called even if `runOnFrame` throws.

:::
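
To illustrate why the `try/finally` matters, here is a plain-TypeScript sketch; `FakeFrame` and `processFrame` are hypothetical stand-ins for a VisionCamera frame and your worklet body, not real APIs:

```typescript
// FakeFrame is a stand-in for a VisionCamera frame; it only tracks disposal.
class FakeFrame {
  disposed = false;
  dispose(): void { this.disposed = true; }
}

function processFrame(frame: FakeFrame, infer: (f: FakeFrame) => string): string | null {
  try {
    return infer(frame); // may throw, like runOnFrame
  } catch {
    return null;
  } finally {
    frame.dispose(); // the buffer is released on every code path
  }
}

const ok = new FakeFrame();
processFrame(ok, () => 'cat');
console.log(ok.disposed); // true

const failing = new FakeFrame();
processFrame(failing, () => { throw new Error('inference failed'); });
console.log(failing.disposed); // true: disposed even though inference threw
```

Without the `finally`, the throwing path would leak the frame buffer, and the camera eventually runs out of buffers.
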

## Camera configuration

The `Camera` component requires specific props for correct orientation handling:

```tsx
<Camera
  device={device}
  outputs={[frameOutput]}
  isActive
  orientationSource="device"
/>
```

- **`orientationSource="device"`** — ensures frame orientation metadata reflects the physical device orientation, which the library uses to rotate model inputs and outputs correctly.
- **Do not set `enablePhysicalBufferRotation`** — this prop must remain `false` (the default). If enabled, VisionCamera pre-rotates the pixel buffer, which conflicts with the library's own orientation handling and produces incorrect results.

In the full example, the frame callback and render then look like this (elided code is marked with `// …`):

```tsx
// …
        if (!detRof) return;
        const isFrontCamera = false; // using back camera
        const result = detRof(frame, isFrontCamera, 0.5);
        if (result) {
          scheduleOnRN(updateDetections, result);
        }
      } finally {
        frame.dispose();
      }
    },
    [detRof, updateDetections]
  ),
});

// …

  if (!device) return null;

  return (
    <View style={styles.container}>
      <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        outputs={[frameOutput]}
        isActive
        orientationSource="device"
      />
      {detections.map((det, i) => (
        <Text key={i} style={styles.label}>
          {det.label} {(det.score * 100).toFixed(1)}%
        </Text>
      ))}
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  label: {
    position: 'absolute',
    bottom: 40,
    alignSelf: 'center',
    color: 'white',
    fontSize: 16,
  },
});
```
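
If you render detection labels in more than one place, the percentage formatting above can be pulled into a small helper. `Detection` and `formatDetection` below are illustrative, not part of the library; `Detection` just mirrors the `det.label` / `det.score` shape used in the example:

```typescript
// Hypothetical detection shape matching the example's det.label / det.score usage.
interface Detection {
  label: string;
  score: number;
}

function formatDetection(det: Detection): string {
  // Mirrors the JSX: {det.label} {(det.score * 100).toFixed(1)}%
  return `${det.label} ${(det.score * 100).toFixed(1)}%`;
}

console.log(formatDetection({ label: 'dog', score: 0.873 })); // "dog 87.3%"
```
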

For a complete example showing how to render bounding boxes, segmentation masks, OCR overlays, and style transfer results on top of the camera preview, see the [example app's VisionCamera tasks](https://github.com/software-mansion/react-native-executorch/tree/main/apps/computer-vision/components/vision_camera).

## Handling front/back camera

When switching between front and back cameras, you need to pass the correct `isFrontCamera` value to `runOnFrame`. Since worklets cannot read React state directly, use a `Synchronizable` from `react-native-worklets`:
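
The idea behind this pattern can be illustrated in plain TypeScript; `createBox` below is only a hypothetical stand-in for a `Synchronizable`, which, unlike this plain box, is safe to share between the React and worklet runtimes:

```typescript
// Illustrative stand-in for a Synchronizable: a mutable box that both the
// React side and the worklet side access. The real Synchronizable from
// react-native-worklets provides this access pattern across runtimes.
function createBox<T>(initial: T) {
  let value = initial;
  return {
    set(next: T): void { value = next; }, // React side: update on camera flip
    get(): T { return value; },           // worklet side: read on every frame
  };
}

const isFrontCamera = createBox(false);

// User switches to the front camera in React:
isFrontCamera.set(true);

// The onFrame worklet reads the current value synchronously:
console.log(isFrontCamera.get()); // true
```

The point is that the worklet reads a shared, always-current value instead of a React state variable captured at closure-creation time.
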
If you use the TypeScript Module API (e.g. `ClassificationModule`) directly instead of a hook, `runOnFrame` is a worklet function and **cannot** be passed directly to `useState` — React would invoke it as a state initializer. Use the functional updater form `() => module.runOnFrame`:
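
Why the wrapper is needed can be shown by simulating React's setter semantics; `applySetState` below is a hypothetical model of `useState`'s setter, not React code:

```typescript
// Simulates React's setState semantics: a function argument is treated as an
// updater and invoked, not stored. This is why module.runOnFrame (a function)
// must be wrapped as () => module.runOnFrame.
type Updater<T> = T | ((prev: T) => T);

function applySetState<T>(prev: T, next: Updater<T>): T {
  return typeof next === 'function' ? (next as (p: T) => T)(prev) : next;
}

const runOnFrame = (): string => 'detections'; // stand-in for module.runOnFrame

// WRONG: React calls the worklet as an updater and stores its return value.
const wrong = applySetState<unknown>(null, runOnFrame);

// RIGHT: the wrapper is the updater; it returns the worklet itself.
const right = applySetState<unknown>(null, () => runOnFrame);

console.log(wrong);                // "detections" (not the function we wanted)
console.log(right === runOnFrame); // true
```
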

#### Bounding boxes or masks are rotated / misaligned

Make sure you have set `orientationSource="device"` on the `Camera` component. Without it, the frame orientation metadata won't match the actual device orientation, causing misaligned results.

Also verify that `enablePhysicalBufferRotation` is **not** set to `true` — this conflicts with the library's orientation handling.

#### Results look wrong or scrambled

You forgot to set `pixelFormat: 'rgb'`. The default VisionCamera pixel format is `yuv` — our frame extraction works only with RGB data.

#### Results are mirrored on front camera

You are not passing `isFrontCamera: true` when using the front camera. See [Handling front/back camera](#handling-frontback-camera) above.

#### App freezes or camera drops frames

Your model's inference time exceeds the frame interval. Enable `dropFramesWhileBusy: true` in `useFrameOutput`, or move inference off the worklet thread using VisionCamera's [async frame processing](https://react-native-vision-camera-v5-docs.vercel.app/docs/async-frame-processing).

#### Memory leak / crash after many frames

You are not calling `frame.dispose()`. Always dispose the frame in a `finally` block.

#### `runOnFrame` is always null

The model hasn't finished loading yet. Guard with `if (!runOnFrame) return` inside `onFrame`, or check `model.isReady` before enabling the camera.

#### TypeError: `module.runOnFrame` is not a function (Module API)

You passed `module.runOnFrame` directly to `setState` instead of `() => module.runOnFrame`. React invoked it as a state initializer — see the [Module API section](#using-the-module-api) above.