# dddmr_perception_3d
Perception 3D is a graph-based framework that lets users develop applications for mobile robots, such as path planning, marking/clearing obstacles, and creating no-enter/speed-limit zones. It is a substitute for costmap_2d.
You can reference:
- [dddmr_global_planner](https://github.com/dddmobilerobot/dddmr_global_planner)
- [dddmr_local_planner](https://github.com/dddmobilerobot/dddmr_local_planner)


Demo highlights: global planning in a 3D map, marking/tracking/clearing obstacles, and speed-limit/no-enter zones.

Perception 3D:
- Sensor support:
  - [x] Multilayer spinning lidar (Velodyne/Ouster/Leishen)
  - [x] Depth camera (Realsense/OAK)
  - [x] Scanning lidar (Livox Mid-360/Unitree 4D LiDAR L1)
- Zone feature support:
  - [x] Static layer
  - [x] Speed-limit layer
  - [x] No-enter layer
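
Once one of the demos below is running, a quick way to see how these layers are configured is the standard ROS 2 parameter CLI. This is a generic inspection sketch, not part of the official tutorial, and the node name below is an assumption:

```
# Run inside the demo container after a launch is up.
source /opt/ros/humble/setup.bash
ros2 param list                   # parameters of every running node
ros2 param dump /perception_3d    # node name is an assumption; pick one from `ros2 node list`
```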

## Multilayer Lidar Demo (Leishen Lidar C16)




### 1. Create the Docker image
The package runs in Docker, so we need to build the image first. Both x64 (tested on an Intel NUC) and arm64 (tested on an NVIDIA Jetson with JetPack 5.1.3/6) are supported.
```
cd ~
git clone https://github.com/dddmobilerobot/dddmr_navigation.git
cd ~/dddmr_navigation && git submodule init && git submodule update
cd ~/dddmr_navigation/dddmr_docker/docker_file && ./build.bash
```
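
To confirm the image built successfully, you can list your local images. The name filter below is an assumption; check the output of build.bash for the actual tag:

```
# The "dddmr" filter is an assumption about the tag chosen by build.bash.
docker images | grep -i dddmr
```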
### 2. Download essential files
A ROS 2 bag containing multilayer lidar data from a Leishen C16 will be downloaded to run the demo.
```
cd ~/dddmr_navigation/src/dddmr_perception_3d && ./download_files.bash
```
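
To sanity-check the download, `ros2 bag info` prints the bag's topics and duration. Run it inside the demo container created in the next step, or in any shell with ROS 2 Humble sourced; the bag path is an assumption, so adjust it to wherever download_files.bash places the file:

```
# Bag path is an assumption; check where download_files.bash saved it.
source /opt/ros/humble/setup.bash
ros2 bag info ~/dddmr_navigation/src/dddmr_perception_3d/bags/<bag_name>
```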
### 3. Run demo
#### Create a Docker container
> [!NOTE]
> The following command creates an interactive Docker container from the image we built. We will launch the demo manually inside the container.
```
cd ~/dddmr_navigation/dddmr_docker && ./run_demo.bash
```
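
If you later need a second shell in the same container (for example, to inspect topics while the demo runs), `docker exec` attaches one; the container name is whatever run_demo.bash created, so look it up first:

```
docker ps                              # find the running container's name
docker exec -it <container_name> bash  # substitute the name from `docker ps`
```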
##### Launch everything in the container
The bag file will start playing automatically 3 seconds after launch.
```
cd ~/dddmr_navigation && source /opt/ros/humble/setup.bash && colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
ros2 launch perception_3d multilayer_spinning_lidar_3d_ros_launch.py
```
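
Once the launch is up, you can verify that the pipeline is publishing by using the standard ROS 2 CLI from a second shell inside the container (see the `docker exec` sketch above). The exact node and topic names depend on the launch configuration:

```
# In a second shell inside the container:
source /opt/ros/humble/setup.bash
source ~/dddmr_navigation/install/setup.bash
ros2 node list    # the perception node should appear here
ros2 topic list   # look for point cloud and marking topics
```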

## Multiple Depth Cameras Demo (Realsense D455)




### 1. Create the Docker image
The package runs in Docker, so we need to build the image first. Both x64 (tested on an Intel NUC) and arm64 (tested on an NVIDIA Jetson with JetPack 5.1.3/6) are supported.
```
cd ~
git clone https://github.com/dddmobilerobot/dddmr_navigation.git
cd ~/dddmr_navigation && git submodule init && git submodule update
cd ~/dddmr_navigation/dddmr_docker/docker_file && ./build.bash
```
### 2. Download essential files
A ROS 2 bag containing depth images from two cameras will be downloaded to run the demo.
```
cd ~/dddmr_navigation/src/dddmr_perception_3d && ./download_files.bash
```
### 3. Run demo
#### Create a Docker container
> [!NOTE]
> The following command creates an interactive Docker container from the image we built. We will launch the demo manually inside the container.
```
cd ~/dddmr_navigation/dddmr_docker && ./run_demo.bash
```
##### Launch everything in the container
The bag file will start playing automatically 3 seconds after launch.
```
cd ~/dddmr_navigation && source /opt/ros/humble/setup.bash && colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
ros2 launch perception_3d multi_depth_camera_3d_ros_launch.py
```
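
To check that both depth cameras are streaming from the bag, `ros2 topic hz` reports the publish rate. The topic name below is an assumption based on typical Realsense naming; pick a real one from `ros2 topic list`:

```
# Topic name is an assumption; substitute one from `ros2 topic list`.
source /opt/ros/humble/setup.bash
ros2 topic hz /camera/depth/color/points
```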

## Scanning Lidar Demo (Unitree 4D LiDAR L1)




### 1. Create the Docker image
The package runs in Docker, so we need to build the image first. Both x64 (tested on an Intel NUC) and arm64 (tested on an NVIDIA Jetson with JetPack 5.1.3/6) are supported.
```
cd ~
git clone https://github.com/dddmobilerobot/dddmr_navigation.git
cd ~/dddmr_navigation && git submodule init && git submodule update
cd ~/dddmr_navigation/dddmr_docker/docker_file && ./build.bash
```
### 2. Download essential files
A ROS 2 bag containing scanning lidar data will be downloaded to run the demo.
```
cd ~/dddmr_navigation/src/dddmr_perception_3d && ./download_files.bash
```
### 3. Run demo
#### Create a Docker container
> [!NOTE]
> The following command creates an interactive Docker container from the image we built. We will launch the demo manually inside the container.
```
cd ~/dddmr_navigation/dddmr_docker && ./run_demo.bash
```
##### Launch everything in the container
The bag file will start playing automatically 3 seconds after launch.
```
cd ~/dddmr_navigation && source /opt/ros/humble/setup.bash && colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
ros2 launch perception_3d scanning_lidar_3d_ros_launch.py
```
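
If you want to replay the bag yourself, for example slower or in a loop instead of relying on the auto-play, `ros2 bag play` supports rate and loop flags; the bag path is an assumption, so use the path from the download step:

```
# Bag path is an assumption; use the path from the download step.
source /opt/ros/humble/setup.bash
ros2 bag play <path_to_bag> --rate 0.5 --loop
```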