**Task description**

The Navigation Stack has long provided robust navigation in a wide range of environments.
Controllers have been developed to operate effectively in the presence of dynamic obstacles without explicitly modeling their characteristics.
However, as the field has progressed and more and more robots using ROS are deployed in human-filled spaces,
more consideration must be given to dynamic obstacles such as people, carts, animals, and vehicles.

Your task will be to create integrations with existing machine learning tools that produce dynamic obstacle information (ComplexYolo, Yolo3D, etc.) and tie them into the navigation stack for use.
It is not in scope for you to retrain models or otherwise become an expert in 3D machine learning, but some basic knowledge will be helpful.
We already have a starting point in the project links below that needs to be driven to completion.
This includes completing the ongoing work to integrate yolact edge into this project to replace detectron2, benchmarking these capabilities on GPUs to verify sufficient run-time performance,
and other tangential feature development.

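Most per-frame detectors output only positions, so velocity information typically has to be recovered by associating detections across frames and differencing them over time. As a minimal sketch of that idea, assuming a hypothetical `Detection` structure (not the actual output format of ComplexYolo or Yolo3D):

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """One timestamped 2D ground-plane detection of an obstacle.

    Hypothetical structure for illustration; real detectors emit
    full 3D bounding boxes with orientation and class labels.
    """
    x: float      # meters, in a fixed world frame
    y: float      # meters
    stamp: float  # seconds


def estimate_velocity(prev: Detection, curr: Detection) -> tuple[float, float]:
    """Estimate (vx, vy) by finite-differencing two associated detections
    of the same obstacle. Assumes the association step already happened."""
    dt = curr.stamp - prev.stamp
    if dt <= 0.0:
        raise ValueError("detections must be time-ordered")
    return ((curr.x - prev.x) / dt, (curr.y - prev.y) / dt)
```

In practice this differencing would sit behind a tracker (e.g. a Kalman filter per obstacle) to smooth noise, but the core signal is the same.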
This task will involve identifying a few techniques that produce position and velocity information about dynamic obstacles, can run on a mobile robot
(using a high-power Intel CPU, an Nvidia Jetson SoC, external GPUs, etc.), and getting them running with ROS and Navigation.
Next, you will help create a new costmap layer that uses this information to mark dynamic obstacles in the costmap, so that the robot does not collide with the future trajectory of an obstacle.

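One simple way such a costmap layer could use position and velocity is a constant-velocity forward projection: sample the obstacle's predicted positions over a short time horizon and mark the corresponding cells as lethal. The sketch below is a simplified stand-alone illustration on a plain NumPy grid, not the actual costmap layer plugin API; all names and parameters are hypothetical:

```python
import numpy as np

LETHAL = 254  # lethal-obstacle cost convention used by costmap_2d


def mark_dynamic_obstacle(costmap, resolution, x, y, vx, vy,
                          horizon=2.0, step=0.1):
    """Mark the cells an obstacle is predicted to occupy over `horizon`
    seconds, assuming a constant-velocity motion model.

    `costmap` is a 2D uint8 array whose origin sits at world (0, 0);
    `resolution` is meters per cell. A real layer would also inflate
    around each predicted pose and decay cost with prediction time.
    """
    t = 0.0
    while t <= horizon:
        px, py = x + vx * t, y + vy * t          # predicted world position
        i, j = int(py / resolution), int(px / resolution)  # row, col
        if 0 <= i < costmap.shape[0] and 0 <= j < costmap.shape[1]:
            costmap[i, j] = LETHAL
        t += step
    return costmap
```

A production layer would implement this inside the plugin's update callbacks so the marks refresh every costmap cycle and stale predictions are cleared, but the projection math is the essence of it.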
If time permits, you may also work to integrate this dynamic obstacle information into a path planner and/or controller for direct motion consideration.
This will likely be in collaboration with another community member.
**Project difficulty: High**