Commit 28aede1
fix(rocm): use PyTorch+HIP for inference instead of ONNX
Ultralytics' ONNX loader only supports CUDAExecutionProvider (NVIDIA).
On ROCm, it falls back to CPU even though ROCMExecutionProvider is
available. PyTorch + HIP runs natively on AMD GPUs via device='cuda'.
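As a quick illustration of the claim above (a sketch, not part of the commit): on ROCm builds of PyTorch the HIP backend is surfaced through the `torch.cuda` namespace, so `device='cuda'` targets the AMD GPU. The helper name `pick_device` is hypothetical.

```python
import torch

def pick_device() -> str:
    # On ROCm builds, torch.cuda.is_available() also returns True,
    # because HIP is exposed through the torch.cuda API surface.
    if torch.cuda.is_available():
        # torch.version.hip is a version string on ROCm builds and
        # None on CUDA builds; use it to tell the two apart.
        backend = "HIP" if getattr(torch.version, "hip", None) else "CUDA"
        print(f"GPU backend: {backend}")
        return "cuda"  # on ROCm this still means "the AMD GPU"
    return "cpu"
```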
- Change ROCm BackendSpec: onnx → pytorch (skip ONNX export entirely)
- Set YOLO_AUTOINSTALL=0 in detect.py to prevent ultralytics from
auto-installing onnxruntime-gpu (NVIDIA) at runtime

1 parent: 2d32d52
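The two bullet points can be sketched as follows (a minimal sketch under assumptions: `load_detector` and the weights filename are hypothetical, not the commit's actual code; the env var must be set before ultralytics is imported):

```python
import os

# Must be set before `from ultralytics import YOLO`: the auto-installer
# reads this flag at import time and would otherwise try to pip-install
# onnxruntime-gpu, whose wheels ship CUDA (NVIDIA) binaries only.
os.environ["YOLO_AUTOINSTALL"] = "0"

def load_detector(weights: str = "yolo11n.pt"):
    # Hypothetical helper: load the native PyTorch weights directly,
    # skipping the ONNX export path entirely on ROCm.
    from ultralytics import YOLO
    model = YOLO(weights)
    # On ROCm builds of PyTorch, 'cuda' maps to the AMD GPU via HIP.
    model.to("cuda")
    return model
```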
File tree (3 files changed: +10, −6 lines):
- skills
- detection/yolo-detection-2026/scripts
- lib

[diff content not captured: 4 lines added (new lines 18 and 25–27)]
Lines changed: 3 additions & 3 deletions
[diff content not captured: lines 54–56 modified]
Lines changed: 3 additions & 3 deletions
[diff content not captured: lines 54–56 modified]