This repository supports memory-based navigation research built around outdoor visual experience from a small RC platform: synchronized video and positioning cues are logged to HDF5, then analyzed offline to find decision-point landmarks in the steering signal and to organize the corresponding imagery.
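Because everything downstream reads the same HDF5 runs, a quick way to orient yourself is to list a run file's layout. This is a minimal sketch; the filename `run_2021.h5` is a hypothetical example, and the real dataset names come from the vehicle logger:

```python
"""List the groups and datasets inside a logged run (hypothetical filename)."""
import h5py

with h5py.File("run_2021.h5", "r") as f:
    # visititems walks the whole hierarchy; datasets report a shape, groups don't.
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```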
Autonomous navigation that reuses past experience needs repeatable, interpretable segments of a run (e.g., approach to a turn) tied to both perception (camera frames) and control (steering). The work here focuses on:
- Recording aligned streams (GoPro-class camera over UDP, GPS over serial, steering logged to HDF5 by the vehicle logger).
- Segmenting runs by detecting landmarks in the steering signal and associating windows of frames with those events (see the sketch after this list).
- Experimenting with richer signal processing and clustering on the same HDF5 representation.
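As a concrete illustration of the segmentation step, the sketch below finds large steering excursions in a logged run and maps a fixed window around each one back to frame indices. It is a minimal sketch, not the repository's detector: the dataset names (`steering`, `frame_index`), sample rate, threshold, and window width are all assumptions to adapt to your own HDF5 layout.

```python
"""Minimal steering-landmark segmentation sketch; dataset names are hypothetical."""
import h5py
import numpy as np
from scipy.signal import find_peaks

def steering_landmarks(h5_path, min_angle=0.3, min_gap_s=2.0,
                       rate_hz=30.0, window_s=1.5):
    """Return (frame_start, frame_end) windows around large steering events."""
    with h5py.File(h5_path, "r") as f:
        steering = f["steering"][:]      # hypothetical dataset name
        frame_idx = f["frame_index"][:]  # hypothetical steering-to-frame alignment
    # Landmarks = peaks in |steering| above a threshold, separated in time.
    peaks, _ = find_peaks(np.abs(steering),
                          height=min_angle,
                          distance=int(min_gap_s * rate_hz))
    half = int(window_s * rate_hz)
    windows = []
    for p in peaks:
        lo, hi = max(p - half, 0), min(p + half, len(steering) - 1)
        windows.append((int(frame_idx[lo]), int(frame_idx[hi])))
    return windows

if __name__ == "__main__":
    for start, end in steering_landmarks("run_2021.h5"):
        print(f"landmark frames {start}..{end}")
```

The fixed symmetric window is only a starting point; the false-positive handling experiments in the analysis code exist precisely because a single threshold over the raw signal is not robust on its own.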
- Autonomous Navigation — Core and extended analysis code: landmark detection, false-positive handling experiments, combined-dataset variants, and early image-reader evolution preserved under archive/phase1-image-reader-evolution/.
- GoPro — Acquisition-side probes (simple goprocam tests) and GoPro-embedded GPS CSV exports that document an alternate GPS path alongside the custom serial+NMEA pipeline in src/capture_gopro_gps.py (sketched below).
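For the serial+NMEA side, the following hedged sketch shows the general shape of such a capture loop: read $GPGGA sentences from a serial GPS and convert them to decimal-degree fixes. The port, baud rate, and parsing details are assumptions, not necessarily what src/capture_gopro_gps.py does, and checksum validation is omitted here although a real logger should keep it.

```python
"""Hedged sketch of a serial+NMEA capture loop (port/baud are assumptions)."""
import serial  # pyserial

def _dm_to_deg(dm, hemi):
    """Convert NMEA ddmm.mmmm (or dddmm.mmmm) plus hemisphere to signed degrees."""
    if not dm:
        return None
    dot = dm.index(".")
    degrees = float(dm[:dot - 2])
    minutes = float(dm[dot - 2:])
    value = degrees + minutes / 60.0
    return -value if hemi in ("S", "W") else value

def gga_fixes(port="/dev/ttyUSB0", baud=9600):
    """Yield (utc, lat, lon) tuples from $GPGGA sentences; skip everything else."""
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            line = ser.readline().decode("ascii", errors="replace").strip()
            if not line.startswith("$GPGGA"):
                continue
            fields = line.split(",")
            if len(fields) < 7 or fields[6] == "0":  # field 6: fix quality, 0 = no fix
                continue
            yield fields[1], _dm_to_deg(fields[2], fields[3]), \
                  _dm_to_deg(fields[4], fields[5])

if __name__ == "__main__":
    for utc, lat, lon in gga_fixes():
        print(utc, lat, lon)
```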
Together they form one timeline: sensing and logging → landmark extraction → optional deeper signal/ML experiments.
- Not a full SLAM or end-to-end autonomy stack.
- Not a guaranteed reproducible paper artifact; reproduction requires your own HDF5 datasets and a matching hardware profile.
- Not an installable Python package; scripts are the interface.
For structure and file locations, see repo-map.md and timeline.md.