Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/tentone/monodepth
Python ROS depth estimation from RGB image based on code from the paper "High Quality Monocular Depth Estimation via Transfer Learning"
- Host: GitHub
- URL: https://github.com/tentone/monodepth
- Owner: tentone
- License: gpl-3.0
- Created: 2019-12-14T13:39:12.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2020-11-15T11:26:37.000Z (about 4 years ago)
- Last Synced: 2024-10-03T12:16:10.927Z (4 months ago)
- Topics: depth-estimation, machine-vision, rgbd, robotics, ros, vision
- Language: Python
- Homepage: https://tentone.github.io/monodepth/
- Size: 1.55 MB
- Stars: 52
- Watchers: 5
- Forks: 8
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
## Mono Depth ROS
- ROS node used to estimate depth from monocular RGB data.
- Should be used with Python 2.x and ROS.
- The original code is at the repository [Dense Depth Original Code](https://github.com/ialhashim/DenseDepth)
- [High Quality Monocular Depth Estimation via Transfer Learning](https://arxiv.org/abs/1812.11941) by Ibraheem Alhashim and Peter Wonka

### Configuration
- Topics subscribed by the ROS node
- /image/camera_raw - Input image from the camera (can be changed via the parameter topic_color)
- Topics published by the ROS node, containing the generated depth and point cloud data.
- /image/depth - Image message containing the estimated depth image (can be changed via the parameter topic_depth).
- /pointcloud - PointCloud2 message containing the estimated point cloud (can be changed via the parameter topic_pointcloud).
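The point cloud output is a back-projection of the estimated depth image through the camera model. The underlying geometry can be sketched in plain numpy (the pinhole intrinsics below are hypothetical placeholders; the node itself packs the result into a PointCloud2 message):

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy); real values come from
# the camera calibration, not from this package.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def depth_to_points(depth):
    """Back-project an (H, W) depth image in meters to an (H*W, 3) point array."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# A flat wall 2 m in front of the camera maps to a plane at z = 2.
points = depth_to_points(np.full((480, 640), 2.0))
```

The depth values here would come from the network's output, rescaled into the [min_depth, max_depth] range described below.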
- Parameters that can be configured
- frame_id - TF Frame id to be published in the output messages.
- debug - If set to true, a window with the output result is displayed.
- min_depth, max_depth - Min and max depth values considered for scaling.
- batch_size - Batch size used when predicting the depth image using the model provided.
- model_file - Keras model file used, relative to the monodepth package.

### Setup
- Install Python 2 and ROS dependencies
```bash
apt-get install python python-pip curl
pip install rosdep rospkg rosinstall_generator rosinstall wstool vcstools catkin_tools catkin_pkg
```

- Install project dependencies
```bash
pip install tensorflow keras pillow matplotlib scikit-learn scikit-image opencv-python pydot GraphViz tk
```

- Clone the project into your ROS workspace and download pretrained models
```bash
git clone https://github.com/tentone/monodepth.git
cd monodepth/models
curl -o nyu.h5 https://s3-eu-west-1.amazonaws.com/densedepth/nyu.h5
```

### Launch
- Example ROS launch entry provided below, for easier integration into your existing ROS launch pipeline.
```xml
```
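A minimal entry, assuming the package and node script are both named monodepth (hypothetical names and values; check the repository's own launch files for the exact ones), might look like:

```xml
<node pkg="monodepth" type="monodepth.py" name="monodepth" output="screen">
  <param name="topic_color" value="/image/camera_raw"/>
  <param name="topic_depth" value="/image/depth"/>
  <param name="topic_pointcloud" value="/pointcloud"/>
  <param name="frame_id" value="camera"/>
  <param name="debug" value="false"/>
  <param name="model_file" value="models/nyu.h5"/>
</node>
```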
### Pretrained models
- Pre-trained Keras models can be downloaded from the following links and placed in the /models folder:
- [NYU Depth V2 (165MB)](https://s3-eu-west-1.amazonaws.com/densedepth/nyu.h5)
- [KITTI (165MB)](https://s3-eu-west-1.amazonaws.com/densedepth/kitti.h5)

### Datasets for training
- [NYU Depth V2 (50K)](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)
- The NYU-Depth V2 dataset comprises video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft Kinect.
- [Download dataset](https://s3-eu-west-1.amazonaws.com/densedepth/nyu_data.zip) (4.1 GB)
- [KITTI Dataset (80K)](http://www.cvlibs.net/datasets/kitti/)
- Datasets captured by driving around the mid-size city of [Karlsruhe](http://maps.google.com/?ie=UTF8&z=15&ll=49.010627,8.405871&spn=0.018381,0.029826&t=k&om=1), in rural areas and on highways. Up to 15 cars and 30 pedestrians are visible per image.