The main 10-section course is designed around a clean learning spine:
- foundations
- methods
- tasks
- training
- deployment
- Jetson system concepts
- frontier outlook
An AI NVR project is extremely valuable, but it works best as an extension after the learner has already completed the main sequence. That is why it appears here as an appendix rather than as one of the main 10 sections.
Build an AI NVR workflow on reComputer that combines:
- camera ingestion
- live inference
- tracking
- event logic
- recording and storage
- operator-facing visualization
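As a mental model, the stages above can be sketched as a single frame loop. This is a toy simulation, not the DeepStream pipeline itself: the `ingest`, `infer`, and `run_pipeline` functions and the dummy every-third-frame detector are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    frame_id: int
    label: str

def ingest(n_frames):
    """Stand-in for camera ingestion: yields frame ids instead of real frames."""
    yield from range(n_frames)

def infer(frame_id):
    """Dummy detector: pretends a person appears on every third frame."""
    return ["person"] if frame_id % 3 == 0 else []

def run_pipeline(n_frames):
    events, recorded = [], []
    for frame_id in ingest(n_frames):      # camera ingestion
        for label in infer(frame_id):      # live inference feeding event logic
            events.append(Event(frame_id, label))
        recorded.append(frame_id)          # recording and storage
    return events, recorded

events, recorded = run_pipeline(9)
print(len(events), len(recorded))  # 3 9
```

The point of the sketch is the separation of concerns: ingestion, inference, event logic, and recording are independent stages, which is exactly how the JPS services divide the work.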
This appendix is useful because it combines ideas from multiple earlier chapters:
- 4.2: image and video input
- 4.5: vision tasks such as detection and tracking
- 4.6: model usage and evaluation
- 4.7: deployment formats and optimization
- 4.8: pipeline thinking
- 4.9: DeepStream and Jetson services
The project uses these Jetson Platform Services (JPS):
- VST for stream onboarding and video handling
- DeepStream Perception for inference
- Analytics for ROI, counting, or line-crossing logic
- Redis for metadata
- Ingress for service access
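To make "Redis for metadata" concrete, here is a sketch of what an analytics consumer does with a detection metadata message. The field names (`sensorId`, `objects`, `bbox`) are hypothetical placeholders; consult the JPS/DeepStream documentation for the real schema.

```python
import json

# Hypothetical metadata payload; the actual JPS/DeepStream schema differs.
raw = ('{"sensorId": "cam01", "frame": 120, '
       '"objects": [{"id": 7, "label": "person", "bbox": [100, 80, 160, 220]}]}')

msg = json.loads(raw)
# Downstream event logic typically filters by class label first.
people = [obj for obj in msg["objects"] if obj["label"] == "person"]
print(msg["sensorId"], len(people))  # cam01 1
```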
- prepare the Jetson baseline
- verify JPS services
- add one or more camera streams
- launch inference
- confirm tracking or event logic
- inspect recordings and storage behavior
- validate the operator experience
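For the "confirm tracking or event logic" step, a line-crossing event reduces to a geometric question: did the object's movement segment between two consecutive tracked positions intersect a configured tripwire? A generic segment-intersection sketch (not the Analytics service's actual implementation) looks like this:

```python
def ccw(a, b, c):
    # True if points a, b, c are in counter-clockwise order.
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def crosses(p, q, line_a, line_b):
    """Does the movement segment p->q intersect the tripwire line_a-line_b?"""
    return (ccw(p, line_a, line_b) != ccw(q, line_a, line_b)
            and ccw(p, q, line_a) != ccw(p, q, line_b))

# Vertical tripwire at x=5; a track moving from x=2 to x=8 crosses it.
tripwire = ((5, 0), (5, 10))
assert crosses((2, 5), (8, 5), *tripwire)       # crossing -> raise an event
assert not crosses((2, 5), (4, 5), *tripwire)   # stays on one side -> no event
```

Applied per track ID between consecutive frames, this kind of test is what turns raw tracking output into countable events.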
Start the supporting JPS services:

```bash
sudo systemctl start jetson-redis
sudo systemctl start jetson-ingress
sudo systemctl start jetson-vst
```

- Which parts of the project depend on model quality, and which depend on system design?
- What would fail first in a real deployment: the model, the stream, the storage, or the operator workflow?
- How would you extend this appendix into a production pilot?
This appendix gives learners a concrete project that connects the course's theory and system practice. It is not the center of the curriculum, but it is a powerful extension for learners who want to see how a complete edge vision application comes together.