Commit b944c47

readme fixes

Signed-off-by: pepisg <pedro.gonzalez@eia.edu.co>
1 parent ab3134b commit b944c47

1 file changed: tutorials/docs/navigation2_with_semantic_segmentation.rst

Lines changed: 15 additions & 18 deletions
@@ -33,6 +33,7 @@ It is assumed ROS2 and Nav2 dependent packages are installed or built locally. A
 You will also need to compile the semantic_segmentation_layer package. To do it, clone the repo to your ROS 2 workspace source, check out the appropriate branch, and build the package:
 
 .. code-block:: bash
+
     # On your workspace source, replace rolling with your ROS distro. Branches are available for humble, jazzy and rolling.
     git clone -b rolling https://github.com/kiwicampus/semantic_segmentation_layer.git
     cd <your workspace path>
@@ -66,29 +67,30 @@ However, if you want to train your own model, you can use the `Simple Segmentati
    :align: center
    :alt: Example of semantic segmentation showing original image and segmented mask
 
-Once trained, The output of a semantic segmentation model is typically an image with the same size as the input, where each pixel holds the probability of that pixel belonging to each class.
-For instance, in the model provided in this tutorial has 3 classes: sidewalk, grass, and background; hence its raw output is a 3-channel image, where each channel corresponds to the probability of the pixel belonging to each class.
+Once trained, the output of a semantic segmentation model is typically an image with the same size as the input, where each pixel holds the probability of that pixel belonging to each class.
+For instance, the model provided in this tutorial has 3 classes: sidewalk, grass, and background; hence its raw output is a 3-channel image, where each channel corresponds to the probability of the pixel belonging to that class.
 At the end, the class with the highest probability is selected for each pixel, and a confidence value is calculated as the probability of the class that was selected.
 
-A perfectly working model should have a confidence value of 1 for the class that was selected, and 0 for the other classes, however this is rarely the case. Pixels with lower confidence usually correspond to classifications that may be wrong,
-for that reason both the class and the confidence are important inputs for deciding how to assign a cost to a pixel, and both are taken into account by the semantic segmentation layer. You can refer to its `README <https://github.com/kiwicampus/semantic_segmentation_layer>`_ for a detailed explanation on how this is done
+A perfectly working model should have a confidence value of 1 for the class that was selected, and 0 for the other classes; however, this is rarely the case. Pixels with lower confidence usually correspond to classifications that may be wrong.
+For that reason, both the class and the confidence are important inputs for deciding how to assign a cost to a pixel, and both are taken into account by the semantic segmentation layer. You can refer to its `README <https://github.com/kiwicampus/semantic_segmentation_layer>`_ for a detailed explanation on how this is done.
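
As a quick illustration of the argmax-and-confidence step described above, the following numpy sketch shows the idea (illustrative only; the array names are hypothetical, not the layer's actual code):

.. code-block:: python

    import numpy as np

    # "probs" stands in for the raw model output: shape (num_classes, H, W),
    # one channel per class (e.g. sidewalk, grass, background).
    probs = np.random.rand(3, 240, 320).astype(np.float32)
    probs /= probs.sum(axis=0, keepdims=True)    # channels sum to 1 per pixel

    class_mask = probs.argmax(axis=0)            # winning class id per pixel
    confidence = np.take_along_axis(probs, class_mask[None], axis=0)[0]

    # A perfect model would give confidence == 1.0 everywhere; in practice,
    # low-confidence pixels mark classifications that may be wrong, which is
    # why the layer considers both class and confidence when assigning cost.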
 
 
 Tutorial Steps
 ==============
 
 0- Setup Simulation Environment
-------------------------------
+-------------------------------
 
 To navigate using semantic segmentation, we first need to set up a simulation environment with a robot equipped with a camera sensor. For this tutorial, we will use the Baylands outdoor world in Gazebo with a TurtleBot 4 robot.
 Everything is already set up in the `nav2_semantic_segmentation_demo <https://github.com/ros-navigation/navigation2_tutorials/tree/master/nav2_semantic_segmentation_demo>`_ package, so clone the repo and build it if you haven't already:
 
-
 .. code-block:: bash
-    # on your workspace source folder
-    git clone https://github.com/navigation2-tutorials/navigation2_tutorials.git
+
+    # On your workspace source folder
+    git clone https://github.com/ros-navigation/navigation2_tutorials.git
     cd <your workspace path>
     colcon build --symlink-install --packages-up-to nav2_semantic_segmentation_demo
+
     source install/setup.bash
 
 Test that the simulation launches correctly:
@@ -108,7 +110,7 @@ You should see Gazebo launch with the TurtleBot 4 in the Baylands world.
 -----------------------------------------------
 
 The semantic segmentation node performs real-time inference on camera images using an ONNX model. It subscribes to camera images, runs inference, and publishes segmentation masks, confidence maps, and label information.
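
To make that data flow concrete, a minimal subscribe-infer-publish loop could look roughly like this (a sketch only: the model path and topic names are placeholders, and the real node also handles preprocessing, confidence maps, and label info):

.. code-block:: python

    import numpy as np
    import onnxruntime as ort
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image

    class SegmentationNode(Node):
        def __init__(self):
            super().__init__('segmentation_node')
            # Placeholder model path; CPU provider for broad compatibility.
            self.session = ort.InferenceSession('model.onnx',
                                                providers=['CPUExecutionProvider'])
            self.input_name = self.session.get_inputs()[0].name
            self.sub = self.create_subscription(Image, '/camera/image_raw',
                                                self.on_image, 1)
            self.mask_pub = self.create_publisher(Image, '/segmentation/mask', 1)

        def on_image(self, msg):
            # Assumes an rgb8 image; real code would use cv_bridge and the
            # preprocessing parameters defined in the ontology file.
            img = np.frombuffer(msg.data, np.uint8).reshape(msg.height, msg.width, 3)
            blob = img.astype(np.float32).transpose(2, 0, 1)[None] / 255.0
            probs = self.session.run(None, {self.input_name: blob})[0][0]
            class_mask = probs.argmax(axis=0).astype(np.uint8)

            out = Image(height=msg.height, width=msg.width, encoding='mono8',
                        step=msg.width, data=class_mask.tobytes())
            out.header = msg.header
            self.mask_pub.publish(out)

    rclpy.init()
    rclpy.spin(SegmentationNode())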
-To run the semantic segmentation node, you need to install the `requirements.txt <https://github.com/ros-navigation/navigation2_tutorials/blob/master/nav2_semantic_segmentation_demo/semantic_segmentation_node/requirements.txt>`_ file in the semantic_segmentation_node package:
+To run the semantic segmentation node, you need to install the dependencies from the `requirements.txt <https://github.com/ros-navigation/navigation2_tutorials/blob/master/nav2_semantic_segmentation_demo/semantic_segmentation_node/requirements.txt>`_ file in the semantic_segmentation_node package:
 
 .. code-block:: bash
@@ -117,8 +119,8 @@ To run the semantic segmentation node, you need to install the `requirements.txt
 
 The segmentation node is configured through an ontology YAML file that defines:
 
-- **Classes to detect**: Each class has a name, text prompt (for certain model types), and color for visualization. classes should be defined in the same order as the model output. 0 is always the background class.
-- **Model settings**: Device (CPU/CUDA), image preprocessing parameters. we use the CPU for inference for greater compatibility, however if you have a GPU you can install onnxruntime-gpu and set the device to cuda.
+- **Classes to detect**: Each class has a name and color for visualization. Classes should be defined in the same order as the model output. 0 is always the background class.
+- **Model settings**: Device (CPU/CUDA), image preprocessing parameters. We use the CPU for inference for greater compatibility; however, if you have a GPU you can install onnxruntime-gpu and set the device to cuda.
 
 An example configuration file (`config/ontology.yaml`):
 
@@ -127,17 +129,12 @@ An example configuration file (`config/ontology.yaml`):
     ontology:
       classes:
         - name: sidewalk
-          prompt: sidewalk
           color: [255, 0, 0] # BGR format
         - name: grass
-          prompt: grass
           color: [0, 255, 0] # BGR format
 
     model:
       device: cpu # cuda or cpu
-      max_image_dim: 1024
-      mask_opacity: 0.15
-      border_width: 1
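
For reference, loading this ontology file might look like the following (a hypothetical loader assuming PyYAML; it follows the convention above that 0 is background and the listed classes match the model output order):

.. code-block:: python

    import yaml  # PyYAML

    with open('config/ontology.yaml') as f:
        cfg = yaml.safe_load(f)

    # Id 0 is reserved for background; listed classes map to ids 1, 2, ...
    names = {i + 1: c['name'] for i, c in enumerate(cfg['ontology']['classes'])}
    colors = {i + 1: c['color'] for i, c in enumerate(cfg['ontology']['classes'])}
    device = cfg['model']['device']   # 'cpu' or 'cuda'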
 
 The node publishes several topics:
 
@@ -216,7 +213,7 @@ The tutorial provides a complete launch file that launches the simulation, the s
 
     ros2 launch semantic_segmentation_sim segmentation_simulation_launch.py
 
-The baylands simulation and `rviz` should appear. You should be able to send navigation goals via `rviz` and the robot should navigate the Baylands world, preferring sidewalks and avoiding grass:
+The Baylands simulation and `rviz` should appear. You should be able to send navigation goals via `rviz` and the robot should navigate the Baylands world, preferring sidewalks and avoiding grass:
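
If you prefer sending a goal programmatically instead of through `rviz`, something along these lines should work (a sketch using nav2_simple_commander; the coordinates are placeholders and the localization setup may differ in this demo):

.. code-block:: python

    import rclpy
    from geometry_msgs.msg import PoseStamped
    from nav2_simple_commander.robot_navigator import BasicNavigator

    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()  # adjust for your localization setup

    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = 5.0   # placeholder coordinates
    goal.pose.position.y = 2.0
    goal.pose.orientation.w = 1.0

    navigator.goToPose(goal)
    while not navigator.isTaskComplete():
        feedback = navigator.getFeedback()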
 
 .. image:: images/Navigation2_with_segmentation/demo.gif
    :width: 90%
@@ -234,7 +231,7 @@ Again, you can refer to the picture on the Layer's `README <https://github.com/k
 Conclusion
 ==========
 
-This tutorial demonstrated how to integrate semantic segmentation with Nav2 for terrain-aware navigation. using a pretrained model that works on gazebo's Baylands world and a custom semantic segmentation layer plugin.
+This tutorial demonstrated how to integrate semantic segmentation with Nav2 for terrain-aware navigation using a pretrained model that works on Gazebo's Baylands world and a custom semantic segmentation layer plugin.
 
 To go further, you can train your own model using the `Simple Segmentation Toolkit <https://github.com/pepisg/simple_segmentation_toolkit>`_, and tune the costmap parameters to your own application.
