# Nav2 Ground Consistency Demo

A tutorial demonstrating terrain-aware navigation using 3D lidar ground segmentation with Nav2's [ground consistency costmap layer](https://github.com/dfki-ric/nav2_ground_consistency_costmap_plugin). Learn how to classify terrain into traversable ground and obstacles, then use that classification to build smarter costmaps for safer navigation.

## Overview

This tutorial demonstrates a complete workflow for terrain-aware navigation:

1. **Gazebo Simulation** - A Husky robot with a VLP-16 lidar in realistic terrain
2. **Ground Segmentation** - Classifies every lidar point as ground or obstacle
3. **Height-Based Filtering** - Distinguishes actual blocking obstacles from overhead structures
4. **Nav2 Integration** - Uses the segmentation to build costmaps that understand terrain
5. **Smart Navigation** - The robot avoids real obstacles while safely navigating complex terrain

## Requirements

It is assumed you have ROS 2 Jazzy installed. To install all required dependencies and clone the necessary repositories, run:

```bash
mkdir -p src && cd src
git clone https://github.com/haider8645/nav2_ground_consistency_demo.git
```

```bash
cd nav2_ground_consistency_demo
bash install_dependencies.bash ~/my_custom_workspace
```

This script will:
- Install the required ROS 2 packages (Gazebo, visualization, navigation)
- Clone all necessary repositories (KISS-ICP, ground segmentation, etc.)
- Provide the next steps for building

## Ground Segmentation Overview

**What is Ground Segmentation?**

Ground segmentation is the first step in terrain-aware navigation. It takes raw 3D lidar point cloud data and classifies every point into two categories:

- **Ground Points**: Points that lie on the terrain surface (traversable)
- **Non-Ground Points**: Points above the terrain surface (potential obstacles)

This is done by analyzing the geometry of the point cloud: points that are approximately at the same height and form a plane are ground, while points sticking up above this plane are classified as obstacles. The algorithm doesn't require a pre-built map; it works in real time as the robot moves.
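
The actual segmentation algorithm is more sophisticated than this (it has to cope with slopes and uneven ground), but the core idea can be sketched in a few lines of numpy. Everything here (the function name, the flat-plane assumption, the thresholds) is illustrative only:

```python
import numpy as np

def segment_ground(points, lidar_height=0.5, height_tolerance=0.15):
    """Naive ground segmentation: classify each point by its height above
    an assumed flat ground plane sitting lidar_height below the sensor."""
    # The assumed ground plane in the lidar frame is at z = -lidar_height.
    height_above_ground = points[:, 2] + lidar_height
    ground_mask = np.abs(height_above_ground) < height_tolerance
    return points[ground_mask], points[~ground_mask]

# Three sample returns: flat ground, a 20 cm bump, and a wall hit.
cloud = np.array([[2.0, 0.0, -0.5],
                  [3.0, 1.0, -0.3],
                  [4.0, 0.0,  0.8]])
ground, obstacles = segment_ground(cloud)
print(f"{len(ground)} ground / {len(obstacles)} obstacle points")  # 1 / 2
```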

**Why is this important?** Without ground segmentation, a navigation system can't tell the difference between actual obstacles and normal terrain variations (like slopes or small rocks). Ground consistency uses this classification to make smarter navigation decisions.

## Ground Consistency Overview

### What is Ground Consistency?

Traditional occupancy grids treat all obstacles equally. A 10 cm tall rock and a 2 m tall wall both show up as "occupied." This causes problems:

- **On slopes**: Real traversable ground is marked as obstacles
- **Under structures**: Bridges and overpasses block navigation unnecessarily
- **In clutter**: Small harmless debris at ground level makes navigation overly cautious

**Ground consistency** solves this by adding a height dimension to costmaps. It answers: *"Is this obstacle actually blocking my robot, or is it something I can navigate over or under?"*

### How It Works

The ground consistency layer works in three main phases:

**Phase 1: Collect Evidence**
- Each grid cell remembers two types of evidence:
  - How much ground evidence it has seen (from ground points)
  - How much obstacle evidence it has seen (from obstacle points)
- Every time new lidar data arrives, the cell's evidence scores go up
- Obstacle evidence is weighted slightly heavier than ground evidence (obstacles are harder to dismiss)
- This creates a confidence score: how likely is it that this cell is actually an obstacle? (A toy version of this bookkeeping is sketched after this list.)
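
A minimal sketch of that per-cell bookkeeping, with made-up weights (the plugin exposes its own parameters, and its actual update rule may differ):

```python
class CellEvidence:
    """Toy per-cell evidence counters; obstacle hits weigh more than ground hits."""

    GROUND_WEIGHT = 1.0
    OBSTACLE_WEIGHT = 1.5  # obstacles are harder to dismiss

    def __init__(self):
        self.ground = 0.0
        self.obstacle = 0.0

    def add_ground_hit(self):
        self.ground += self.GROUND_WEIGHT

    def add_obstacle_hit(self):
        self.obstacle += self.OBSTACLE_WEIGHT

    def obstacle_confidence(self):
        """Fraction of accumulated evidence that says 'obstacle' (0 to 1)."""
        total = self.ground + self.obstacle
        return self.obstacle / total if total > 0.0 else 0.0


cell = CellEvidence()
for _ in range(3):        # three scans hit an obstacle in this cell
    cell.add_obstacle_hit()
cell.add_ground_hit()     # one scan sees ground here
print(round(cell.obstacle_confidence(), 2))  # 0.82
```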

**Phase 2: Check If It's Really Blocking**
Before marking a cell as "don't go here", the layer asks:
1. Is there really an obstacle here? (check the confidence score)
2. How high is it compared to the actual ground?
3. Can the robot pass under it?

To find the ground level:
- First, look at actual ground points in the same cell
- If none exist, check nearby cells and average their ground heights (neighborhood voting; disabled by default, can be enabled via the `ground_neighbor_search_cells` parameter)
- If still nothing, assume the cell is dangerous and mark it as "don't go"

Then compare heights (both steps are combined in the sketch after this list):
- If the obstacle is taller than the robot → it's just overhead → mark as **safe to cross**
- If the obstacle is very small → ignore it → mark as **safe to cross**
- If the obstacle is at robot height → it blocks the robot → mark as **don't go**
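
Here is a simplified version of that decision as one function. The 0.5 confidence cut-off, the cost values, and the parameter defaults are illustrative, not the plugin's actual settings:

```python
LETHAL, FREE = 254, 0  # Nav2 costmap convention: 254 marks a lethal obstacle

def cell_cost(confidence, obstacle_z, ground_z,
              robot_height=0.8, min_obstacle_height=0.1):
    """Decide whether an obstacle observation actually blocks the robot.

    obstacle_z: height of the obstacle point (meters).
    ground_z: estimated ground height in this cell, or None if unknown.
    """
    if confidence < 0.5:     # not enough evidence that this is an obstacle
        return FREE
    if ground_z is None:     # no ground reference at all -> assume dangerous
        return LETHAL
    height = obstacle_z - ground_z
    if height > robot_height:         # overhead structure: pass underneath
        return FREE
    if height < min_obstacle_height:  # small debris: drive over it
        return FREE
    return LETHAL                     # obstacle at robot height: blocked

print(cell_cost(0.9, obstacle_z=0.5, ground_z=0.0))   # 254: blocks the robot
print(cell_cost(0.9, obstacle_z=1.5, ground_z=0.0))   # 0: overhead, safe
print(cell_cost(0.9, obstacle_z=0.05, ground_z=0.0))  # 0: debris, safe
```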

**Phase 3: Forget Old Observations**
Each update cycle, old evidence fades (illustrated below):
- Ground evidence fades faster (false ground classifications get forgotten quickly)
- Obstacle evidence fades slower (real obstacles stay in memory longer)
- This makes navigation smooth: sensor noise doesn't confuse the map, but real obstacles remain known
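
A toy illustration of this asymmetric forgetting, with made-up decay rates (the real plugin exposes its own decay parameters):

```python
GROUND_DECAY = 0.7     # ground evidence fades fast
OBSTACLE_DECAY = 0.95  # obstacle evidence lingers

ground_ev = obstacle_ev = 10.0
for _ in range(10):    # ten update cycles with no new observations
    ground_ev *= GROUND_DECAY
    obstacle_ev *= OBSTACLE_DECAY

print(round(ground_ev, 2), round(obstacle_ev, 2))  # 0.28 5.99
```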

**Result: Smart Navigation Map**
- Cells marked as **safe**: Empty (not visible in the costmap; the robot can freely navigate through them)
- Cells marked as **danger** (magenta): Real blocking obstacles at robot height (lethal)
- Cells slowly changing color: Evidence is fading and the cell's classification is transitioning (you can watch the map update in real time as old observations are forgotten)

## Tutorial Steps

### Step 0: Verify Installation

Test that everything is installed correctly:

```bash
# Check ROS 2 packages
cd ~/my_custom_workspace && source install/setup.bash
ros2 pkg list | grep -E "ground_segmentation|kiss_icp|nav2_ground_consistency"
```

You should see:
- `ground_segmentation_ros2`
- `kiss_icp`
- `nav2_ground_consistency_costmap_plugin`
- `nav2_ground_consistency_demo`

### Step 1: Launch Gazebo Simulation

In **Terminal 1**, start the Gazebo simulation with the Husky robot:

Note: Gazebo will take some time to download the Baylands terrain and the Husky model.

```bash
source install/setup.bash
ros2 launch nav2_ground_consistency_demo start.launch.py
```

You should see:
- Gazebo opens with the Baylands terrain
- The Husky robot appears on the ground
- The ROS topics `/husky/scan/points` and `/husky/imu` are being published

Verify the lidar is working:

```bash
ros2 topic hz /husky/scan/points
```

You should see updates at roughly 5-10 Hz, depending on your computer's specifications. Set the RViz "Frame Rate" to this value to avoid issues with the RViz visualizations.

### Step 2: Launch Navigation Stack

In **Terminal 2**, launch the stack with the ground consistency layer (this includes ground segmentation).

On some systems, the `ground_segmentation_ros2_node` may fail to start with: `error while loading shared libraries: libjawt.so: cannot open shared object file`

Fix this by registering the Java libraries with the dynamic linker:

```bash
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which javac))))
echo "$JAVA_HOME/lib" | sudo tee /etc/ld.so.conf.d/java.conf
echo "$JAVA_HOME/lib/server" | sudo tee -a /etc/ld.so.conf.d/java.conf
sudo ldconfig
```

Launch the stack:

```bash
source install/setup.bash
ros2 launch nav2_ground_consistency_demo full_stack.launch.py
```

Then bring up the controller server:

```bash
ros2 lifecycle set /controller_server configure
ros2 lifecycle set /controller_server activate
```

This starts:
- **Ground Segmentation** - Classifies lidar points in real time
- **KISS-ICP Odometry** - Estimates robot pose from lidar
- **Controller Server** - Path tracking and obstacle avoidance
- **RViz2** - Visualization (auto-starts)
- **Lifecycle Transitions** - Automatic state management

You should see RViz open with:
- Robot footprint (yellow rectangle)
- Lidar point cloud (cyan points)
- Local costmap
- Ground consistency layer visualization

**Verify that everything is working** in **Terminal 3**:

```bash
# Check KISS-ICP odometry
ros2 topic hz /tf

# Check ground segmentation output
ros2 topic hz /ground_segmentation/ground_points
ros2 topic hz /ground_segmentation/obstacle_points
```

You should see:
- `/tf` publishing at ~5-10 Hz (odometry transforms)
- Ground points publishing at ~5-10 Hz (from ground segmentation)
- Obstacle points publishing at ~5-10 Hz (from ground segmentation)

If any topic shows "0 Hz", something is not working; check the logs in the launch terminal for errors.

### Step 3: Move the Robot

To move the robot around and see ground consistency in action, use Gazebo's built-in teleop widget (a command-line alternative is sketched after these steps):

In **Gazebo**:
1. Click the menu (⋮) in the top-right corner
2. Search for "Teleop" and select the teleop widget
3. Change the topic field to `/model/husky/cmd_vel`
4. Use the sliders or buttons to move the robot forward/backward and rotate
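
If you prefer the terminal to the GUI widget, a minimal rclpy publisher can drive the robot. This sketch assumes `/model/husky/cmd_vel` is bridged into ROS 2 as a `geometry_msgs/Twist` topic; confirm with `ros2 topic list` and `ros2 topic info /model/husky/cmd_vel` before relying on it:

```python
import rclpy
from geometry_msgs.msg import Twist

# Assumes the demo bridges /model/husky/cmd_vel into ROS 2 as geometry_msgs/Twist;
# if the topic only exists on the Gazebo side, use the teleop widget instead.
rclpy.init()
node = rclpy.create_node('husky_teleop_sketch')
pub = node.create_publisher(Twist, '/model/husky/cmd_vel', 10)

msg = Twist()
msg.linear.x = 0.5  # drive forward at 0.5 m/s

node.create_timer(0.1, lambda: pub.publish(msg))  # republish at 10 Hz
try:
    rclpy.spin(node)  # Ctrl+C to stop sending commands
except KeyboardInterrupt:
    pass
finally:
    node.destroy_node()
    rclpy.try_shutdown()
```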

Watch the costmap in RViz update in real time as the robot moves:
- **On flat ground**: The costmap shows mostly empty (safe) space
- **On slopes**: Ground segmentation classifies the slope as traversable (not blocked)
- **Over small obstacles**: Debris below robot height doesn't appear as lethal obstacles
- **Under overpasses**: If your world has them, the robot can navigate underneath

Key observations:
- Magenta cells (lethal) appear only for real blocking obstacles at robot height
- As the robot moves away from an area, the evidence fades and the magenta cells gradually disappear
- Watch the costmap dynamically update as sensor data flows in

## Conclusion

You now understand:

1. ✅ **Ground Segmentation**: How lidar points are classified as ground or obstacles
2. ✅ **Height Filtering**: How the costmap distinguishes navigable from blocking obstacles
3. ✅ **Evidence Accumulation**: How temporal decay makes navigation robust and reactive
4. ✅ **Nav2 Integration**: How ground consistency layers improve costmap quality
5. ✅ **Real-World Application**: How to tune parameters for your terrain and robot

## Extended Topics

### Debugging

**No lidar points visible in RViz:**
```bash
ros2 topic echo /husky/scan/points
```
If nothing is printed, the simulation is not publishing lidar data; check Terminal 1 for errors.

**Ground segmentation very slow:**
- Enable downsampling in `parameters.yaml`
- Reduce the lidar resolution in the simulation config

**No ground points detected:**
- Check the `lidar_to_ground` parameter in `config/gseg3d_config.yaml`
- It should match your lidar's height above the ground, expressed as a negative value
- If it is incorrect, all points will be classified as obstacles
- Verify with: `ros2 topic echo /ground_segmentation/ground_points` (an empirical height check is sketched below)
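
To measure the lidar height empirically, you can read one raw cloud and look at its lowest z values; on flat ground they should cluster near the (negative) `lidar_to_ground` value. A one-shot rclpy sketch, assuming the raw cloud topic from Step 1 and that `read_points_numpy` from `sensor_msgs_py` is available (it is in recent ROS 2 releases):

```python
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from sensor_msgs_py import point_cloud2

class LidarHeightCheck(Node):
    """Print a robust ground-height estimate from one raw lidar cloud."""
    def __init__(self):
        super().__init__('lidar_height_check')
        self.create_subscription(PointCloud2, '/husky/scan/points',
                                 self.callback, 1)

    def callback(self, msg):
        pts = point_cloud2.read_points_numpy(
            msg, field_names=('x', 'y', 'z'), skip_nans=True)
        # The 5th percentile is more robust to stray returns than the minimum.
        self.get_logger().info(
            f'approx. ground z in lidar frame: {np.percentile(pts[:, 2], 5):.2f} m')

rclpy.init()
node = LidarHeightCheck()
rclpy.spin_once(node, timeout_sec=10.0)  # handle one message, then exit
node.destroy_node()
rclpy.shutdown()
```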

**Robot doesn't avoid obstacles:**
- Check that `full_stack.launch.py` is running and ground segmentation is publishing
- Verify the costmap is receiving points: `ros2 topic hz /ground_segmentation/obstacle_points`
- Check the Nav2 logs for errors

### Hardware Adaptation

The demo is tuned for a Husky with a VLP-16 lidar. To adapt it to your robot:

1. **Update the robot model** in `simulation/models/COSTAR_HUSKY_SENSOR_CONFIG_1/model.sdf`
2. **Update the lidar and IMU static transforms** in `simulation/start.launch.py`
3. **Update the robot dimensions** in `config/nav2_config.yaml` (robot_height, etc.)
4. **Retune the ground segmentation parameters** in `config/gseg3d_config.yaml` for your lidar's FOV and point density

## References

- [Nav2 Documentation](https://docs.nav2.org/)
- [Ground Consistency Layer Plugin](https://github.com/dfki-ric/nav2_ground_consistency_costmap_plugin)
- [Ground Segmentation ROS2](https://github.com/dfki-ric/ground_segmentation_ros2)
- [KISS-ICP Odometry](https://github.com/prbonn/kiss-icp)
- [Gazebo Simulator](https://gazebosim.org/)
- [Clearpath Husky Robot](https://clearpathrobotics.com/husky)
- [ROS 2 Jazzy Documentation](https://docs.ros.org/en/jazzy/)

### Husky Robot Model

This demo uses a local copy of the COSTAR_HUSKY_SENSOR_CONFIG_1 model from Gazebo Fuel. The local copy has been modified to enable IMU orientation output (`enable_orientation=1`) for proper quaternion-based orientation estimates, which ground segmentation requires for IMU integration. A DiffDrive plugin is included in the robot model to execute the motion commands sent to the robot.

**Original Model Citation:**
```bibtex
@online{GazeboFuel-OpenRobotics-COSTAR_HUSKY_SENSOR_CONFIG_1,
  title        = {COSTAR_HUSKY_SENSOR_CONFIG_1},
  organization = {Open Robotics},
  author       = {OpenRobotics},
  date         = {2023-09-03},
  url          = {https://fuel.gazebosim.org/1.0/OpenRobotics/models/COSTAR_HUSKY_SENSOR_CONFIG_1},
}
```

### Funding

Developed at the Robotics Innovation Center (DFKI), Bremen. Supported by the Robdekon2 project (grant 50RA1406) of the German Federal Ministry for Research and Technology.