- My other TensorRT project covers additional YOLOv8 tasks: detection, key points, segmentation, and tracking.
- Based on TensorRT-v8, deploys YOLOv8 + ByteTrack.
- Supports the Jetson series as well as Linux x86_64.
Main work I have done:
- Referring to tensorrtx, convert the model from `.pth` to `.engine`; extract the inference part of the code and encapsulate it into C++ classes so it is easy to call from other projects;
- Replaced the preprocessing with my own CUDA preprocessing;
- Removed the CUDA post-processing, because in tests it was not significantly faster than CPU post-processing;
- Greatly reduced the `conf_thres` hyperparameter for NMS in post-processing; because of the principle behind ByteTrack tracking (it also associates low-confidence boxes), this is very important;
- Compiled the YOLOv8 inference into a dynamic link library to decouple the projects;
- Referring to the official ByteTrack TensorRT deployment, modified its interface to match the YOLO detector; ByteTrack also compiles into a dynamic link library, further decoupling the projects;
- Added a category filtering function: you can set the categories you want to track at line 8 of `main.cpp`.
- Base requirements: TensorRT 8.0+, OpenCV 3.4.0+
- My running environment on Jetson Nano: the flashed system image is JetPack 4.6.1, whose stock environment is as follows:
| CUDA | cuDNN | TensorRT | OpenCV |
|---|---|---|---|
| 10.2 | 8.2 | 8.2.1 | 4.1.1 |
- Install Eigen:

```shell
apt install libeigen3-dev
```

Get the serialized TensorRT file (suffix `.engine`):
- First get the model file in `.wts` format, download link: yolov8s.wts, extraction code: gsqm
- Then follow these steps:

```shell
cd {TensorRT-YOLOv8-ByteTrack}/tensorrtx-yolov8/
mkdir build
cd build
cp {path/to/yolov8s.wts} .
cmake ..
make
./yolov8 -s yolov8s.wts yolov8s.engine s
cd ../../
mkdir yolo/engine
cp tensorrtx-yolov8/build/yolov8s.engine yolo/engine
```

- Then build and run the tracker:

```shell
mkdir build
cd build
cmake ..
make
./main ../videos/demo.mp4  # the path to your own video
```

The tracked output video, result.mp4, will then be written to the build directory.
If you want the tracked video to play back in real time, uncomment line 94 of `main.cpp`.
