
Research overview

This repository supports memory-based navigation research built around outdoor visual experience collected from a small RC platform: synchronized video and positioning cues are logged to HDF5, then analyzed offline to find decision-point landmarks in the steering data and to organize the corresponding imagery.
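Aligning the streams is the first practical step: frames, GPS fixes, and steering samples arrive on independent clocks, so each frame must be matched to the nearest sample from the other streams before any offline analysis. A minimal sketch of that nearest-timestamp matching (the function name and list-based inputs are illustrative, not the repo's actual API, which works on HDF5 datasets):

```python
import bisect

def align_to_frames(frame_ts, gps_ts, gps_vals):
    """For each frame timestamp, return the GPS sample whose timestamp
    is closest. Both timestamp lists must be sorted ascending."""
    aligned = []
    for t in frame_ts:
        # Index of the first GPS timestamp >= t; the nearest sample is
        # either that one or its predecessor.
        j = bisect.bisect_left(gps_ts, t)
        candidates = [k for k in (j - 1, j) if 0 <= k < len(gps_ts)]
        best = min(candidates, key=lambda k: abs(gps_ts[k] - t))
        aligned.append(gps_vals[best])
    return aligned
```

The same matching applies symmetrically to steering samples; a tolerance check (reject matches farther than half a frame interval) would guard against dropouts.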

Problem

Autonomous navigation that reuses past experience needs repeatable, interpretable segments of a run (e.g., approach to a turn) tied to both perception (camera frames) and control (steering). The work here focuses on:

  1. Recording aligned streams (GoPro-class camera over UDP, GPS on serial, steering inside HDF5 from the vehicle logger).
  2. Segmenting runs by detecting landmarks in the steering signal and associating windows of frames with those events.
  3. Experimenting with richer signal processing and clustering on the same HDF5 representation.
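The landmark-detection step (item 2) can be sketched as finding sustained excursions in the steering signal: a turn shows up as the steering value staying beyond a threshold for some minimum duration, and the surrounding frame window is then associated with that event. This is a simplified stand-in for the repo's actual detector, with hypothetical threshold and duration parameters:

```python
def detect_steering_landmarks(steering, threshold=0.3, min_len=5):
    """Return (start, end) sample-index pairs where |steering| stays at or
    above threshold for at least min_len consecutive samples."""
    events, start = [], None
    for i, s in enumerate(steering):
        if abs(s) >= threshold:
            if start is None:
                start = i          # excursion begins
        else:
            if start is not None and i - start >= min_len:
                events.append((start, i))
            start = None           # excursion ends or never started
    # Close out an excursion that runs to the end of the log
    if start is not None and len(steering) - start >= min_len:
        events.append((start, len(steering)))
    return events
```

Each returned index pair can then be mapped back through the aligned timestamps to pull the corresponding window of camera frames out of the HDF5 file.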

How the college-era folders fit together

  • Autonomous Navigation — Core and extended analysis code: landmark detection, false-positive handling experiments, combined-dataset variants, and early image-reader evolution preserved under archive/phase1-image-reader-evolution/.
  • GoPro — Acquisition-side probes (simple goprocam tests) and GoPro-embedded GPS CSV exports that document an alternate GPS path alongside the custom serial+NMEA pipeline in src/capture_gopro_gps.py.
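The serial+NMEA side of that GPS pipeline boils down to validating each sentence's XOR checksum and converting the NMEA degrees-plus-minutes fields to decimal degrees. A hedged sketch of a $GPRMC parser (the function and its field handling are illustrative; the actual src/capture_gopro_gps.py may parse more sentence types and fields):

```python
def parse_gprmc(sentence):
    """Parse a $GPRMC NMEA sentence; return (lat, lon) in decimal
    degrees, or None if the fix is invalid or the checksum fails."""
    body, _, cksum = sentence.strip().lstrip("$").partition("*")
    if cksum:
        # NMEA checksum: XOR of every byte between '$' and '*'
        calc = 0
        for ch in body:
            calc ^= ord(ch)
        if int(cksum[:2], 16) != calc:
            return None
    fields = body.split(",")
    if fields[0] != "GPRMC" or fields[2] != "A":  # 'A' = valid fix
        return None

    def dm_to_deg(dm, hemisphere):
        # NMEA encodes position as ddmm.mmmm (degrees * 100 + minutes)
        degrees = int(float(dm) / 100)
        minutes = float(dm) - degrees * 100
        value = degrees + minutes / 60.0
        return -value if hemisphere in "SW" else value

    return dm_to_deg(fields[3], fields[4]), dm_to_deg(fields[5], fields[6])
```

Reading lines off a serial port (e.g. with pyserial) and feeding each one through a parser like this, then logging the results next to the vehicle's steering stream, is the shape of the custom pipeline; the GoPro's CSV exports bypass all of it.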

Together they form one timeline: sensing and logging → landmark extraction → optional deeper signal/ML experiments.

What this repo is not

  • Not a full SLAM or end-to-end autonomy stack.
  • Not a guaranteed reproducible paper artifact without your HDF5 datasets and hardware profile.
  • Not an installable Python package; scripts are the interface.

For structure and file locations, see repo-map.md and timeline.md.