
Commit d2f38ee

pepisg, EricoMeger, tonynajjar, mach0312, and SteveMacenski authored
Add semantic segmentation tutorial (#876)
* remove duplicated max_obstacle_height on voxel_layer in costmap2d config example (#874)
* bt_log_idle_transitions docs (#875)
* docs: Update collision monitor tutorial for holonomic robot (#871)
  - Update velocity polygon linear_min/max in the collision monitor tutorial
  - Update `direction_start_angle` and `direction_end_angle` parameters in YAML to match the reference diagram; refine `using_collision_monitor.rst` based on review feedback (removed redundant non-negative description)
* first draft semantic segmentation plugin
* readme fixes
* localization note
* fix codespell
* comments and clarifications
* Update tutorials/docs/navigation2_with_semantic_segmentation.rst (review suggestions from Steve Macenski, four commits)
* fix link

Signed-off-by: EricoMeger <ericomeger9@gmail.com>
Signed-off-by: pepisg <pedro.gonzalez@eia.edu.co>
Signed-off-by: Tony Najjar <tony.najjar@dexory.com>
Signed-off-by: Jaerak Son <sjr9017@naver.com>
Signed-off-by: Pedro Alejandro González <71234974+pepisg@users.noreply.github.com>
Co-authored-by: Érico Meger <86668447+EricoMeger@users.noreply.github.com>
Co-authored-by: Tony Najjar <tony.najjar.1997@gmail.com>
Co-authored-by: Jaerak Son <40343097+mach0312@users.noreply.github.com>
Co-authored-by: Steve Macenski <stevenmacenski@gmail.com>
1 parent 2cdef93 commit d2f38ee

7 files changed

Lines changed: 244 additions & 0 deletions

tutorials/docs/navigation2_with_semantic_segmentation.rst

Lines changed: 243 additions & 0 deletions

@@ -0,0 +1,243 @@
.. _navigation2_with_semantic_segmentation:

Navigating with Semantic Segmentation
*************************************

- `Overview`_
- `Requirements`_
- `Semantic Segmentation Overview`_
- `Tutorial Steps`_
- `Conclusion`_

Overview
========

This tutorial demonstrates how to use semantic segmentation in costmaps with stereo cameras, using a custom `semantic_segmentation_layer plugin <https://github.com/kiwicampus/semantic_segmentation_layer>`_ and a pre-trained segmentation model that works on Gazebo's Baylands world. It was written by Pedro Gonzalez at `robot.com <https://robot.com/>`_.

.. image:: images/Navigation2_with_segmentation/video.gif
   :width: 90%
   :align: center

Requirements
============

It is assumed that ROS 2 and the Nav2 dependent packages are installed or built locally. Additionally, you will need:

.. code-block:: bash

   source /opt/ros/<ros2-distro>/setup.bash
   sudo apt install ros-$ROS_DISTRO-nav2-minimal-tb4*
   sudo apt install ros-$ROS_DISTRO-ros-gz-sim
   sudo apt install ros-$ROS_DISTRO-ros-gz-interfaces

You will also need to compile the semantic_segmentation_layer package. To do so, clone the repo into your ROS 2 workspace's source directory, check out the appropriate branch, and build the package:

.. code-block:: bash

   # In your workspace's source directory. Replace rolling with your ROS distro;
   # branches are available for humble, jazzy, and rolling.
   git clone -b rolling https://github.com/kiwicampus/semantic_segmentation_layer.git
   cd <your workspace path>
   colcon build --symlink-install

The code for this tutorial is hosted in the `nav2_semantic_segmentation_demo <https://github.com/ros-navigation/navigation2_tutorials/tree/master/nav2_semantic_segmentation_demo>`_ directory. It's highly recommended that you clone and build these packages when setting up your development environment.

Finally, you will need:

- **ONNX Runtime**: for running the semantic segmentation model inference
- **OpenCV**: for image processing

We will install these through the tutorial.

NOTE: The semantic segmentation layer plugin currently requires the depth and color images to be fully aligned, such as those from stereo or depth cameras. However, AI-based depth estimators may be used to create depth from monocular cameras.

Semantic Segmentation Overview
==============================

What is Semantic Segmentation?
------------------------------

Semantic segmentation is a computer vision task that assigns a class label to every pixel in an image. Unlike object detection, which identifies and localizes objects with bounding boxes, semantic segmentation provides pixel-level understanding of the scene.

Modern semantic segmentation is typically solved using deep learning, specifically convolutional neural networks (CNNs) or vision transformers. These models are trained on large datasets of images where each pixel has been labeled with its corresponding class.
During training, the model learns to recognize patterns and features that distinguish different classes (e.g., the texture of grass vs. the smooth surface of a sidewalk). Common architectures include U-Net, DDRNet, and SegFormer.

As mentioned above, a pre-trained model is included in this tutorial, so you can skip the training part and go directly to the integration with Nav2.
However, if you want to train your own model, you can use the `Simple Segmentation Toolkit <https://github.com/pepisg/simple_segmentation_toolkit>`_ to easily prototype one with SAM-based auto-labeling (no manual annotation required).

.. image:: images/Navigation2_with_segmentation/segmentation_example.png
   :width: 600px
   :align: center
   :alt: Example of semantic segmentation showing original image and segmented mask

Once trained, the output of a semantic segmentation model is typically an image of the same size as the input, where each pixel holds the probability of that pixel belonging to each class.
For instance, the model provided in this tutorial has 3 classes: sidewalk, grass, and background; hence its raw output is a 3-channel tensor, where each channel corresponds to the probability of the pixel belonging to that class.
Note that a model with more classes (e.g. 100 classes) would output a 100-channel tensor. At the end, the class with the highest probability is selected for each pixel, and a confidence value is calculated as the probability of the class that was selected.
That logic is usually performed downstream of the inference itself, and in this tutorial it is performed by a ROS 2 semantic segmentation node.

A perfectly working model would have a confidence value of 1 for the class that was selected, and 0 for the other classes; however, this is rarely the case. Pixels with lower confidence usually correspond to classifications that may be wrong.
For that reason, both the class and the confidence are important inputs for deciding how to assign a cost to a pixel, and both are taken into account by the semantic segmentation layer. You can refer to its `README <https://github.com/kiwicampus/semantic_segmentation_layer>`_ for a detailed explanation of how this is done.
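As a concrete sketch of that post-processing step, here is roughly what the class/confidence extraction looks like in numpy. This illustrates the general technique rather than the tutorial node's actual code; the array shapes match the 3-class model described above:

.. code-block:: python

   import numpy as np

   # Stand-in for the raw model output: shape (num_classes, H, W),
   # e.g. (3, H, W) for the sidewalk/grass/background model.
   logits = np.random.rand(3, 480, 640).astype(np.float32)

   # Softmax over the class axis turns raw scores into per-pixel probabilities.
   shifted = np.exp(logits - logits.max(axis=0, keepdims=True))
   probs = shifted / shifted.sum(axis=0, keepdims=True)

   # For each pixel, pick the most likely class and keep its probability
   # as the confidence value.
   class_mask = probs.argmax(axis=0).astype(np.uint8)       # (H, W), values = class IDs
   confidence = (probs.max(axis=0) * 255).astype(np.uint8)  # (H, W), scaled to mono8 0-255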

Tutorial Steps
==============

0- Setup Simulation Environment
-------------------------------

To navigate using semantic segmentation, we first need to set up a simulation environment with a robot equipped with a camera sensor. For this tutorial, we will use the Baylands outdoor world in Gazebo with a TurtleBot 4 robot.
Everything is already set up in the `nav2_semantic_segmentation_demo <https://github.com/ros-navigation/navigation2_tutorials/tree/master/nav2_semantic_segmentation_demo>`_ directory, so clone the repo and build it if you haven't already:

.. code-block:: bash

   # In your workspace's source directory
   git clone https://github.com/ros-navigation/navigation2_tutorials.git
   cd <your workspace path>
   colcon build --symlink-install --packages-up-to semantic_segmentation_sim
   source install/setup.bash

Test that the simulation launches correctly:

.. code-block:: bash

   ros2 launch semantic_segmentation_sim simulation_launch.py headless:=0

You should see Gazebo launch with the TurtleBot 4 in the Baylands world.

.. image:: images/Navigation2_with_segmentation/gazebo_baylands.png
   :width: 700px
   :align: center
   :alt: Gazebo Baylands world

1- Setup Semantic Segmentation Inference Node
---------------------------------------------

The semantic segmentation node performs real-time inference on camera images using an ONNX model. It subscribes to camera images, runs inference, and publishes segmentation masks, confidence maps, and label information.
To run the semantic segmentation node, you need to install the dependencies from the `requirements.txt <https://github.com/ros-navigation/navigation2_tutorials/blob/master/nav2_semantic_segmentation_demo/semantic_segmentation_node/requirements.txt>`_ file in the semantic_segmentation_node package:

.. code-block:: bash

   pip install -r <your workspace path>/src/navigation2_tutorials/nav2_semantic_segmentation_demo/semantic_segmentation_node/requirements.txt --break-system-packages
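For reference, the inference step such a node performs looks roughly like the sketch below. This is a simplified illustration using ONNX Runtime and OpenCV, not the package's actual code; the model path, input size, and normalization are assumptions, and the real node reads them from its configuration:

.. code-block:: python

   import cv2
   import numpy as np
   import onnxruntime as ort

   # CPU provider for compatibility; swap in "CUDAExecutionProvider" with onnxruntime-gpu.
   session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
   input_name = session.get_inputs()[0].name

   def infer(bgr_image: np.ndarray) -> np.ndarray:
       # Assumed preprocessing: resize, scale to [0, 1], and reorder to NCHW.
       img = cv2.resize(bgr_image, (640, 480)).astype(np.float32) / 255.0
       tensor = img.transpose(2, 0, 1)[None]                   # (1, 3, H, W)
       logits = session.run(None, {input_name: tensor})[0][0]  # (num_classes, H, W)
       return logits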

The segmentation node is configured through an ontology YAML file that defines:

- **Classes to detect**: Each class has a name and a color for visualization. Classes should be defined in the same order as the model output; 0 is always the background class.
- **Model settings**: Device (CPU/CUDA) and image preprocessing parameters. We use the CPU for inference for greater compatibility; however, if you have a GPU you can install `onnxruntime-gpu <https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements>`_ and its dependencies according to your hardware, and set the device to cuda.

An example configuration file (`config/ontology.yaml`):

.. code-block:: yaml

   ontology:
     classes:
       - name: sidewalk
         color: [255, 0, 0] # BGR format
       - name: grass
         color: [0, 255, 0] # BGR format

   model:
     device: cpu # cuda or cpu

The node publishes several topics:

- ``/segmentation/mask``: Segmentation mask image (mono8, pixel values = class IDs)
- ``/segmentation/confidence``: Confidence map (mono8, 0-255)
- ``/segmentation/label_info``: Label information message with class metadata
- ``/segmentation/overlay``: Visual overlay showing the segmentation on the original image (optional)
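If you want to consume these topics from your own code, a minimal subscriber could decode the mask and confidence images as shown below. This is a hypothetical snippet (node name and QoS depth are placeholders), but the encodings match the list above:

.. code-block:: python

   import rclpy
   from rclpy.node import Node
   from sensor_msgs.msg import Image
   from cv_bridge import CvBridge

   class SegmentationViewer(Node):
       def __init__(self):
           super().__init__('segmentation_viewer')
           self.bridge = CvBridge()
           self.create_subscription(Image, '/segmentation/mask', self.on_mask, 10)
           self.create_subscription(Image, '/segmentation/confidence', self.on_confidence, 10)

       def on_mask(self, msg):
           mask = self.bridge.imgmsg_to_cv2(msg, 'mono8')  # pixel values = class IDs
           self.get_logger().info(f'classes present: {sorted(set(mask.flatten().tolist()))}')

       def on_confidence(self, msg):
           confidence = self.bridge.imgmsg_to_cv2(msg, 'mono8')  # 0-255
           self.get_logger().info(f'mean confidence: {confidence.mean():.1f}')

   rclpy.init()
   rclpy.spin(SegmentationViewer())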

Launch the segmentation node (with the simulation running):

.. code-block:: bash

   ros2 run semantic_segmentation_node segmentation_node

Verify that the segmentation topics are being published:

.. code-block:: bash

   ros2 topic list | grep segmentation
   ros2 topic echo /segmentation/label_info --once

You should see the label information message with the classes defined in your ontology.

2- Configure Nav2 with Semantic Segmentation Layer
--------------------------------------------------

Now we need to configure Nav2 to use the semantic segmentation layer in its costmaps. This involves adding the layer plugin to both the global and local costmaps and configuring the cost assignment for the different segmentation classes. Key parameters include:

- **Observation Sources**: Defines which camera/segmentation topics to subscribe to
- **Class Types**: Defines terrain categories (traversable, intermediate, danger)
- **Cost Assignment**: Maps semantic classes to navigation costs
- **Temporal Parameters**: Controls how long observations persist in the costmap

Currently, the costmap plugin works only with pointclouds from a stereo camera, which are aligned with the color image and thus with the segmentation mask.
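To make that alignment requirement concrete: with an organized pointcloud (one point per pixel, same width and height as the image), the class of each 3D point can be looked up directly by its pixel index. A minimal sketch, assuming the cloud has already been converted to a numpy array:

.. code-block:: python

   import numpy as np

   def label_cloud(points_xyz: np.ndarray, mask: np.ndarray):
       """points_xyz: (H, W, 3) organized cloud; mask: (H, W) class IDs.

       Returns the finite 3D points paired with their semantic class,
       which is the kind of point-label association the layer needs.
       """
       valid = np.isfinite(points_xyz).all(axis=-1)  # drop NaN points (no depth)
       return points_xyz[valid], mask[valid]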

Here's an example configuration for the local costmap:

.. code-block:: yaml

   local_costmap:
     local_costmap:
       ros__parameters:
         plugins: ["semantic_segmentation_layer", "inflation_layer"]
         semantic_segmentation_layer:
           plugin: "semantic_segmentation_layer::SemanticSegmentationLayer"
           enabled: True
           observation_sources: camera
           camera:
             segmentation_topic: "/segmentation/mask"
             confidence_topic: "/segmentation/confidence"
             labels_topic: "/segmentation/label_info"
             pointcloud_topic: "/rgbd_camera/depth/points"
             max_obstacle_distance: 5.0
             min_obstacle_distance: 0.3
             tile_map_decay_time: 2.0
             class_types: ["traversable", "intermediate", "danger"]
             traversable:
               classes: ["sidewalk"]
               base_cost: 0
               max_cost: 0
             intermediate:
               classes: ["background"]
               base_cost: 127
               max_cost: 127
             danger:
               classes: ["grass"]
               base_cost: 254
               max_cost: 254

The tutorial provides a pre-configured `nav2_params.yaml <https://github.com/ros-navigation/navigation2_tutorials/blob/master/nav2_semantic_segmentation_demo/semantic_segmentation_sim/config/nav2_params.yaml>`_ file in the semantic_segmentation_sim package. You can use it as a starting point for configuring the Nav2 costmaps in your own application.

3- Run everything together
--------------------------

The tutorial provides a complete launch file that starts the simulation, the semantic segmentation node, and the Nav2 navigation stack. To run it, simply launch the `segmentation_simulation_launch.py <https://github.com/ros-navigation/navigation2_tutorials/blob/master/nav2_semantic_segmentation_demo/semantic_segmentation_sim/launch/segmentation_simulation_launch.py>`_ file:

.. code-block:: bash

   ros2 launch semantic_segmentation_sim segmentation_simulation_launch.py

The Baylands simulation and RViz should appear. You should be able to send navigation goals via RViz, and the robot should navigate the Baylands world, preferring sidewalks and avoiding grass:

.. image:: images/Navigation2_with_segmentation/demo.gif
   :width: 90%
   :align: center
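Goals can also be sent programmatically instead of through RViz, for example with the ``nav2_simple_commander`` package. A minimal sketch; the goal coordinates are arbitrary:

.. code-block:: python

   import rclpy
   from geometry_msgs.msg import PoseStamped
   from nav2_simple_commander.robot_navigator import BasicNavigator

   rclpy.init()
   navigator = BasicNavigator()
   # waitUntilNav2Active() waits for amcl by default; this demo uses a static
   # map->odom transform instead, so only wait on the navigator itself.
   navigator.waitUntilNav2Active(localizer='bt_navigator')

   goal = PoseStamped()
   goal.header.frame_id = 'map'
   goal.header.stamp = navigator.get_clock().now().to_msg()
   goal.pose.position.x = 5.0   # arbitrary example coordinates
   goal.pose.position.y = 2.0
   goal.pose.orientation.w = 1.0

   navigator.goToPose(goal)
   while not navigator.isTaskComplete():
       feedback = navigator.getFeedback()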

To better see what the plugin is doing, you can enable the segmentation tile map visualization in RViz. This shows a pointcloud of the segmentation observations for each tile, colored by their confidence.
Again, you can refer to the picture in the layer's `README <https://github.com/kiwicampus/semantic_segmentation_layer>`_ for a visual explanation of how observations are accumulated in the costmap tiles and how that translates into the cost assigned to each tile.

.. image:: images/Navigation2_with_segmentation/tile_map.gif
   :width: 90%
   :align: center

**IMPORTANT NOTE:** For the sake of simplicity, this tutorial publishes a static transform between the ``map`` and ``odom`` frames. In a real-world application, you should have a proper localization system (e.g. GPS) to provide the ``map`` => ``odom`` transform.
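For reference, a static identity transform like the one this demo uses can be declared in a Python launch file roughly as follows (a sketch, not the tutorial's actual launch code):

.. code-block:: python

   from launch import LaunchDescription
   from launch_ros.actions import Node

   def generate_launch_description():
       return LaunchDescription([
           # Identity map->odom transform: fine for a demo, but a real robot
           # should get this transform from a localization system instead.
           Node(
               package='tf2_ros',
               executable='static_transform_publisher',
               arguments=['--frame-id', 'map', '--child-frame-id', 'odom'],
           ),
       ])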

Conclusion
==========

This tutorial demonstrated how to integrate semantic segmentation with Nav2 for terrain-aware navigation, using a pre-trained model that works on Gazebo's Baylands world and a custom semantic segmentation layer plugin.

To go further, you can train your own model using the `Simple Segmentation Toolkit <https://github.com/pepisg/simple_segmentation_toolkit>`_ and tune the costmap parameters for your own application.

Happy terrain-aware navigating!

tutorials/index.rst

Lines changed: 1 addition & 0 deletions
@@ -13,6 +13,7 @@ Nav2 Tutorials

   docs/navigation2_with_stvl.rst
   docs/navigation2_with_gps.rst
   docs/using_isaac_perceptor.rst
+  docs/navigation2_with_semantic_segmentation.rst
   docs/using_groot.rst
   docs/integrating_vio.rst
   docs/navigation2_dynamic_point_following.rst
