6 changes: 6 additions & 0 deletions plugins/index.rst
@@ -81,6 +81,11 @@ Costmap Layers
| | | costmap with different isolated |
| | | combinations of costmap layers |
+--------------------------------+------------------------+----------------------------------+
| `Ground Consistency Layer`_ | Muhammad Haider Khan | Height-aware costmap layer using |
| | Lodhi | 3D ground segmentation. Pair |
| | | with Inflation Layer for terrain-|
| | | aware navigation. |
+--------------------------------+------------------------+----------------------------------+

.. _Voxel Layer: https://github.com/ros-navigation/navigation2/tree/main/nav2_costmap_2d/plugins/voxel_layer.cpp
.. _Static Layer: https://github.com/ros-navigation/navigation2/tree/main/nav2_costmap_2d/plugins/static_layer.cpp
@@ -92,6 +97,7 @@ Costmap Layers
.. _Non-Persistent Voxel Layer: https://github.com/SteveMacenski/nonpersistent_voxel_layer
.. _Denoise Layer: https://github.com/ryzhikovas/navigation2/tree/feature-costmap2d-denoise/nav2_costmap_2d/plugins/denoise_layer.cpp
.. _Plugin Container Layer: https://github.com/ros-navigation/navigation2/tree/main/nav2_costmap_2d/plugins/plugin_container_layer.cpp
.. _Ground Consistency Layer: https://github.com/dfki-ric/nav2_ground_consistency_costmap_plugin

Costmap Filters
===============
363 changes: 363 additions & 0 deletions tutorials/docs/navigation2_with_ground_consistency_layer.rst
@@ -0,0 +1,363 @@
============================
NAV2 Ground Consistency Demo
Member:
Perhaps change to Ground Terrain Segmentation using 3D Lidar?

Also the ==== has to match the char length of the title to render properly.

============================

A tutorial demonstrating terrain-aware navigation using 3D lidar ground segmentation with Nav2's `ground consistency costmap layer <https://github.com/dfki-ric/nav2_ground_consistency_costmap_plugin>`_. Learn how to classify terrain into traversable ground and obstacles, then use that classification to build smarter costmaps for safer navigation.
Member:
Suggested change
A tutorial demonstrating terrain-aware navigation using 3D lidar ground segmentation with Nav2's `ground consistency costmap layer <https://github.com/dfki-ric/nav2_ground_consistency_costmap_plugin>`_. Learn how to classify terrain into traversable ground and obstacles, then use that classification to build smarter costmaps for safer navigation.
This tutorial demonstrates terrain-aware navigation using 3D lidar ground segmentation with the `ground consistency costmap layer <https://github.com/dfki-ric/nav2_ground_consistency_costmap_plugin>`_. This allows users to classify terrain into traversable ground and obstacles, building smarter costmaps for safer outdoor navigation in non-planar environments.


.. image:: images/Navigation2_with_Ground_Consistency/ground_consistency_layer.gif
:alt: Ground Consistency Layer Demo

Overview
========

This tutorial demonstrates a complete workflow for terrain-aware navigation:

1. **Gazebo Simulation** - A Husky robot with VLP-16 lidar in realistic terrain
2. **Ground Segmentation** - Classifies every lidar point as ground or obstacle
3. **Height-Based Filtering** - Distinguishes actual blocking obstacles from overhead structures
4. **Nav2 Integration** - Uses segmentation to build costmaps that understand terrain
5. **Smart Navigation** - Robot avoids real obstacles while safely navigating complex terrain

Requirements
============

It is assumed you have ROS 2 Jazzy installed. To install all required dependencies and clone necessary repositories, run:
Member:
See other tutorials & link to the getting started guide if they don't have Nav2 setup/installed yet.

Then: "Then, install the required dependencies:"


.. code-block:: bash

cd ~/path_to_my_custom_workspace
mkdir -p src && cd src
git clone https://github.com/ros-navigation/navigation2_tutorials.git && cd navigation2_tutorials
git checkout -t remotes/origin/jazzy

.. code-block:: bash

cd nav2_ground_consistency_demo
bash install_dependencies.bash ~/path_to_my_custom_workspace
Member:
This needs to be updated/fixed. We should be using rosdep for any dependencies that are installed in apt. If we need to build other things from source, use .repos files.

Overall this installation should basically read:

  • Install ROS 2 / Nav2 --> if not link to the getting started pages (see other tutorials)
  • Then, install the source code for the tutorial used --> clone & rosdep & repos install


This script will:

- Install required ROS 2 packages (Gazebo, visualization, navigation)
- Clone all necessary repositories (KISS-ICP, ground segmentation, etc.)
- Provide next steps for building
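
As the review above suggests, the same setup can also be expressed with standard ROS tooling. A rough sketch, assuming every apt-installable dependency is declared in the cloned packages' ``package.xml`` files (source-only dependencies would go in a ``.repos`` file instead):

.. code-block:: bash

   cd ~/path_to_my_custom_workspace
   rosdep update
   rosdep install --from-paths src --ignore-src -r -y  # resolve apt dependencies from package.xml
   colcon build --symlink-install                      # build the workspace from source
   source install/setup.bash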

Ground Segmentation Overview
============================

**What is Ground Segmentation?**

Ground segmentation is the first step in terrain-aware navigation. It takes raw 3D lidar point cloud data and classifies every point into two categories:

- **Ground Points**: Points that lie on the terrain surface (traversable)
- **Non-Ground Points**: Points above the terrain surface (potential obstacles)
Member (on lines +49 to +52):
Suggested change
Ground segmentation is the first step in terrain-aware navigation. It takes raw 3D lidar point cloud data and classifies every point into two categories:
- **Ground Points**: Points that lie on the terrain surface (traversable)
- **Non-Ground Points**: Points above the terrain surface (potential obstacles)
Ground segmentation is the first step in terrain-aware navigation. It takes sensor data and classifies its readings into at least two categories: ground and non-ground. This can be from raw 3D lidar point clouds as used in this tutorial, but ground segmentation may also be the result of RGBD cameras, monocular vision, or other sources, using geometric or AI-based technologies.


This is done by analyzing the geometry of the point cloud - points that are approximately at the same height and form a plane are ground, while points sticking up above this plane are classified as obstacles. The algorithm doesn't require a pre-built map; it works in real-time as the robot moves.
Member:
Suggested change
This is done by analyzing the geometry of the point cloud - points that are approximately at the same height and form a plane are ground, while points sticking up above this plane are classified as obstacles. The algorithm doesn't require a pre-built map; it works in real-time as the robot moves.
This demonstration analyzes the geometry of a 3D lidar point cloud using the XYZ library (provide a link TODO). Points that are approximately at the same height and form a plane are ground, while points sticking up above this plane are classified as obstacles. The algorithm doesn't require a pre-built map; it works in real-time as the robot moves.

I'd add a paragraph after this briefly talking about your library / method. Keep it high level, but some detail may be nice :-)


**Why is this important?** Without ground segmentation, a navigation system can't tell the difference between actual obstacles and normal terrain variations (like slopes or small rocks). Ground consistency uses this classification to make smarter navigation decisions.

Ground Consistency Overview
===========================

What is Ground Consistency?
---------------------------

Traditional occupancy grids treat all obstacles equally. A 10cm tall rock and a 2m tall wall both show up as "occupied." This causes problems:

- **On slopes**: Real traversable ground is marked as obstacles
- **Under structures**: Bridges and overpasses block navigation unnecessarily
- **In clutter**: Small harmless debris at ground level makes navigation overly cautious

**Ground consistency** solves this by adding a height dimension to costmaps. It answers: *"Is this obstacle actually blocking my robot, or is it something I can navigate over/under?"*
Member (on lines +64 to +70):
Can this not just go under the Ground Segmentation Overview section?


How It Works
------------
Member (on lines +72 to +73):
This section is way too TMI. Please distill this into 2 short paragraphs instead. You don't need to describe the entire algorithm, just generally how to think about it and how this works.


The ground consistency layer works in three main phases:

**Phase 1: Collect Evidence**

- Each grid cell remembers two types of evidence:

- How much ground evidence it has seen (from ground points)
- How much obstacle evidence it has seen (from obstacle points)

- Every time new lidar data arrives, the cell's evidence scores go up
- Obstacle evidence is weighted slightly heavier than ground evidence (obstacles are harder to dismiss)
- This creates a confidence score: how likely is this cell actually an obstacle?

**Phase 2: Check If It's Really Blocking**

Before marking something as "don't go here", the layer asks:

1. Is there really an obstacle here? (check confidence score)
2. How high is it compared to the actual ground?
3. Is the robot tall enough to pass under it?

To find the ground level:

- First, look at actual ground points in the same cell
- If none exist, check nearby cells and average their ground heights (neighborhood voting - disabled by default, can be enabled via ``ground_neighbor_search_cells`` parameter)
- If still nothing, assume it's dangerous and mark it as "don't go"

Then compare heights:

- If the obstacle is taller than the robot → it's just overhead → mark as **safe to cross**
- If the obstacle is very small → ignore it → mark as **safe to cross**
- If the obstacle is at robot height → it blocks the robot → mark as **don't go**

**Phase 3: Forget Old Observations**

Each update cycle, old evidence fades:

- Ground evidence fades faster (false ground classifications get forgotten quickly)
- Obstacle evidence fades slower (real obstacles stay in memory longer)
- This makes navigation smooth: sensor noise doesn't confuse the map, but real obstacles remain known
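
One compact way to picture the bookkeeping in Phases 1 and 3 is as weighted accumulation with exponential decay. This is a sketch of the general idea only; the symbols below are illustrative, not the plugin's exact equations:

.. math::

   E_g \leftarrow \lambda_g \, E_g + w_g \, n_g,
   \qquad
   E_o \leftarrow \lambda_o \, E_o + w_o \, n_o,
   \qquad
   c = \frac{E_o}{E_o + E_g}

Here :math:`n_g` and :math:`n_o` count the ground and obstacle points hitting the cell this cycle, :math:`w_o > w_g` reflects that obstacle evidence is weighted more heavily, and :math:`0 < \lambda_g < \lambda_o < 1` makes ground evidence fade faster than obstacle evidence. The confidence :math:`c` is the score Phase 2 checks before treating a cell as a candidate obstacle.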

**Result: Smart Navigation Map**

- Cells marked as **safe**: Empty (not visible in costmap, the robot can freely navigate through)
- Cells marked as **danger** (magenta): Real blocking obstacles at robot height (lethal)
- Cells slowly changing colors: Evidence is fading and the cell's classification is transitioning (you can watch the map update in real-time as old observations are forgotten)

Tutorial Steps
==============

Step 0: Verify Installation
----------------------------

Test that everything is installed correctly:

.. code-block:: bash

# Check ROS 2 packages
ros2 pkg list | grep -E "ground_segmentation|kiss_icp|nav2_ground_consistency"

You should see:

- ``ground_segmentation_ros2``
- ``kiss_icp``
- ``nav2_ground_consistency_costmap_plugin``
- ``nav2_ground_consistency_demo``

Step 1: Launch Gazebo Simulation
---------------------------------

In **Terminal 1**, start the Gazebo simulation with the Husky robot:

Note: Gazebo will take some time to download the Baylands terrain and Husky model.

.. code-block:: bash

source install/setup.bash
ros2 launch nav2_ground_consistency_demo start.launch.py

You should see:

- Gazebo opens with Baylands terrain
- Husky robot appears on the ground
- ROS topics ``/husky/scan/points``, ``/husky/imu`` being published

.. image:: images/Navigation2_with_Ground_Consistency/husky_in_baylands.png
:alt: Husky in Baylands

Verify the lidar is working:

.. code-block:: bash

ros2 topic hz /husky/scan/points

You should see ~5-10 Hz updates depending on your computer specifications. You will need to set your RViz 'Frame Rate' to this value to avoid issues with RViz visualizations.

.. image:: images/Navigation2_with_Ground_Consistency/rviz_frame_settings.png
:alt: RViz Frame Settings
Member (on lines +171 to +172):
Remove this please, TMI


Step 2: Launch Navigation Stack
--------------------------------

In **Terminal 2**, launch the stack with the ground consistency layer (this includes ground segmentation):

On some systems, the ``ground_segmentation_ros2_node`` may fail to start with: ``error while loading shared libraries: libjawt.so: cannot open shared object file``

Fix this error by using the following:

.. code-block:: bash

export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which javac))))
echo "$JAVA_HOME/lib" | sudo tee /etc/ld.so.conf.d/java.conf
echo "$JAVA_HOME/lib/server" | sudo tee -a /etc/ld.so.conf.d/java.conf
sudo ldconfig
Member (on lines +183 to +188):
This... shouldn't be necessary. Why?


.. code-block:: bash

source install/setup.bash
ros2 launch nav2_ground_consistency_demo full_stack.launch.py

.. code-block:: bash

ros2 lifecycle set /controller_server configure
ros2 lifecycle set /controller_server activate
Member (on lines +190 to +198):
These should change if/when using Nav2. You also wouldn't need to manually enable the controller server.


This starts:

- **Ground Segmentation** - Classifies lidar points in real-time
- **KISS-ICP Odometry** - Estimates robot pose from lidar
- **Controller Server** - Path tracking and obstacle avoidance
- **RViz2** - Visualization (auto-starts)
- **Lifecycle Transitions** - Automatic state management

You should see RViz open with:

- Robot footprint (yellow rectangle)
- Lidar pointcloud (cyan points)
- Local costmap
- Ground consistency layer visualization

**Verify that everything is working** in **Terminal 3**:

.. code-block:: bash

# Check KISS-ICP odometry
ros2 topic hz /tf

# Check ground segmentation output
ros2 topic hz /ground_segmentation/ground_points
ros2 topic hz /ground_segmentation/obstacle_points

You should see:

- ``/tf`` publishing at ~5-10 Hz (odometry transforms)
- Ground points publishing at ~5-10 Hz (from ground segmentation)
- Obstacle points publishing at ~5-10 Hz (from ground segmentation)

If any topic shows "0 Hz", something is not working - check the logs in the launch terminal for errors.
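
If a topic is silent, two generic checks usually narrow things down (the node name patterns below are assumptions based on the package names in Step 0):

.. code-block:: bash

   # Is the relevant node running at all?
   ros2 node list | grep -i -E "ground_segmentation|kiss_icp"

   # Does the silent topic have a live publisher?
   ros2 topic info /ground_segmentation/ground_points --verbose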

Step 3: Move the Robot
----------------------

To move the robot around and see ground consistency in action:

In **Gazebo**:

1. Click the menu (⋮) in the top-right corner
2. Search for "Teleop" and select the teleop widget
3. Change the topic field to ``/model/husky/cmd_vel``
4. Use the sliders or buttons to move the robot forward/backward and rotate
Member (on lines +234 to +244):
Use Nav2 via the Navigate To Pose tool in RViz instead; this demo is to show how to use this layer with Nav2, so you should use Nav2 😉


Watch the costmap in RViz update in real-time as the robot moves:

- **On flat ground**: Costmap shows mostly empty (safe)
- **On slopes**: Ground segmentation classifies the slope as traversable (not blocked)
- **Over small obstacles**: Debris below robot height doesn't appear as lethal obstacles
- **Under overpasses**: If your world has them, the robot can navigate underneath
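
As the review above suggests, goals can also be driven through Nav2 itself rather than teleop, either with the Navigate-to-Pose tool in RViz or from the command line. A sketch, assuming the stack exposes Nav2's default ``navigate_to_pose`` action and a ``map`` frame:

.. code-block:: bash

   # Send a navigation goal 2 m ahead and 1 m to the left of the map origin
   ros2 action send_goal /navigate_to_pose nav2_msgs/action/NavigateToPose \
     "{pose: {header: {frame_id: map}, pose: {position: {x: 2.0, y: 1.0}, orientation: {w: 1.0}}}}"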

Member:
Here's a good place to add a YouTube video or two, embedded.

Key observations:

- Magenta cells (lethal) appear only for real blocking obstacles at robot height
- As the robot moves away from an area, the evidence fades and the magenta cells gradually disappear
- Watch the costmap dynamically update as sensor data flows in

Conclusion
==========

You now understand:

1. ✅ **Ground Segmentation**: How lidar points are classified as ground or obstacles
2. ✅ **Height Filtering**: How the costmap distinguishes navigable vs blocking obstacles
3. ✅ **Evidence Accumulation**: How temporal decay makes navigation robust and reactive
4. ✅ **Nav2 Integration**: How ground consistency layers improve costmap quality
5. ✅ **Real-World Application**: How to tune parameters for your terrain and robot

Extended Topics
===============

Debugging
---------

**No lidar points visible in RViz:**

.. code-block:: bash

ros2 topic echo /husky/scan/points

**Ground segmentation very slow:**

- Enable downsampling in ``parameters.yaml``
- Reduce lidar resolution in simulation config

**No ground points detected:**

- Check ``lidar_to_ground`` parameter in ``config/gseg3d_config.yaml``
- This should match your lidar's height above the ground (negative value)
- If incorrect, all points will be classified as obstacles
- Verify with: ``ros2 topic echo /ground_segmentation/ground_points``

**Robot doesn't avoid obstacles:**

- Check that full_stack.launch.py is running and ground segmentation is publishing
- Verify costmap is receiving points: ``ros2 topic hz /ground_segmentation/obstacle_points``
- Check Nav2 logs for errors

Hardware Adaptation
-------------------

The demo is tuned for a Husky with VLP-16 lidar. To adapt to your robot:

1. **Update robot model** in ``simulation/models/COSTAR_HUSKY_SENSOR_CONFIG_1/model.sdf``
2. **Update lidar and IMU static transforms** in ``simulation/start.launch.py``
3. **Update robot dimensions** in ``config/nav2_config.yaml`` (robot_height, etc.)
4. **Retune ground segmentation** parameters in ``config/gseg3d_config.yaml`` for your lidar's FOV and point density
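
For orientation, item 3 might translate into a costmap snippet along these lines. This is a hedged sketch: the plugin class string and most parameter names are illustrative assumptions rather than the plugin's documented interface; only ``robot_height``, ``ground_neighbor_search_cells``, and the two segmentation topics appear elsewhere in this tutorial. Check the plugin's README for the real names:

.. code-block:: yaml

   local_costmap:
     local_costmap:
       ros__parameters:
         plugins: ["ground_consistency_layer", "inflation_layer"]
         ground_consistency_layer:
           # Assumed class name - verify against the plugin repository
           plugin: "nav2_ground_consistency_costmap_plugin/GroundConsistencyLayer"
           ground_topic: /ground_segmentation/ground_points        # ground evidence input
           obstacle_topic: /ground_segmentation/obstacle_points    # obstacle evidence input
           robot_height: 0.8                   # clearance used for the Phase 2 height check
           ground_neighbor_search_cells: 0     # neighborhood voting; disabled by default
         inflation_layer:
           plugin: "nav2_costmap_2d::InflationLayer"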

References
==========

- `NAV2 Documentation <https://docs.nav2.org/>`_
- `Ground Consistency Layer Plugin <https://github.com/dfki-ric/nav2_ground_consistency_costmap_plugin>`_
- `Ground Segmentation ROS2 <https://github.com/dfki-ric/ground_segmentation_ros2>`_
- `KISS-ICP Odometry <https://github.com/prbonn/kiss-icp>`_
- `Gazebo Simulator <https://gazebosim.org/>`_
- `Clearpath Husky Robot <https://clearpathrobotics.com/husky>`_
- `ROS 2 Jazzy Documentation <https://docs.ros.org/en/jazzy/>`_

KISS-ICP Odometry
-----------------

KISS-ICP is used as an odometry solution in the demo.

.. code-block:: bibtex

@article{vizzo2023ral,
author = {Vizzo, Ignacio and Guadagnino, Tiziano and Mersch, Benedikt and Wiesmann, Louis and Behley, Jens and Stachniss, Cyrill},
title = {{KISS-ICP: In Defense of Point-to-Point ICP -- Simple, Accurate, and Robust Registration If Done the Right Way}},
journal = {IEEE Robotics and Automation Letters (RA-L)},
pages = {1029--1036},
doi = {10.1109/LRA.2023.3236571},
volume = {8},
number = {2},
year = {2023},
codeurl = {https://github.com/PRBonn/kiss-icp},
}
Member (on lines +321 to +338):

Remove please, this isn't an academic paper


Husky Robot Model
-----------------

This demo uses a local copy of the COSTAR_HUSKY_SENSOR_CONFIG_1 model from Gazebo Fuel. The local copy has been modified to enable IMU orientation output (``enable_orientation=1``) for proper quaternion-based orientation estimates, which is required for ground segmentation with IMU integration. A DiffDrive plugin was included in the robot model, which executes motion commands sent to the robot.
Member:
Huh?


**Original Model Citation:**

.. code-block:: bibtex

@online{GazeboFuel-OpenRobotics-COSTAR_HUSKY_SENSOR_CONFIG_1,
title={COSTAR_HUSKY_SENSOR_CONFIG_1},
organization={Open Robotics},
date={2023},
month={September},
day={3},
author={OpenRobotics},
url={https://fuel.gazebosim.org/1.0/OpenRobotics/models/COSTAR_HUSKY_SENSOR_CONFIG_1},
}
Member (on lines +345 to +357):
Remove this


Funding
-------

Developed at the Robotics Innovation Center (DFKI), Bremen. Supported by Robdekon2 (50RA1406), German Federal Ministry for Research and Technology.

1 change: 1 addition & 0 deletions tutorials/index.rst
@@ -33,3 +33,4 @@ Nav2 Tutorials
docs/navigation2_with_vector_objects.rst
docs/roscon_uk_2025_state_estimation.rst
docs/using_3laws_supervisor.rst
docs/navigation2_with_ground_consistency_layer.rst