{"id":15646235,"url":"https://github.com/tentone/monodepth","last_synced_at":"2025-04-30T11:56:59.139Z","repository":{"id":62530374,"uuid":"228028952","full_name":"tentone/monodepth","owner":"tentone","description":"Python ROS depth estimation from RGB image based on code from the paper \"High Quality Monocular Depth Estimation via Transfer Learning\"","archived":false,"fork":false,"pushed_at":"2020-11-15T11:26:37.000Z","size":1630,"stargazers_count":56,"open_issues_count":3,"forks_count":7,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-03-30T15:51:16.366Z","etag":null,"topics":["depth-estimation","machine-vision","rgbd","robotics","ros","vision"],"latest_commit_sha":null,"homepage":"https://tentone.github.io/monodepth/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tentone.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2019-12-14T13:39:12.000Z","updated_at":"2025-03-15T03:14:41.000Z","dependencies_parsed_at":"2022-11-02T14:31:06.752Z","dependency_job_id":null,"html_url":"https://github.com/tentone/monodepth","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tentone%2Fmonodepth","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tentone%2Fmonodepth/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tentone%2Fmonodepth/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tentone%2Fmonodepth/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tentone","download_url":"https://codeload.github.com/tentone/monodepth/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":251694820,"owners_count":21628913,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["depth-estimation","machine-vision","rgbd","robotics","ros","vision"],"created_at":"2024-10-03T12:11:59.021Z","updated_at":"2025-04-30T11:56:59.071Z","avatar_url":"https://github.com/tentone.png","language":"Python","readme":"## Mono Depth ROS\n - ROS node used to estimated depth from monocular RGB data.\n - Should be used with Python 2.X and ROS\n - The original code is at the repository [Dense Depth Original Code](https://github.com/ialhashim/DenseDepth)\n - [High Quality Monocular Depth Estimation via Transfer Learning](https://arxiv.org/abs/1812.11941) by Ibraheem Alhashim and Peter Wonka\n\n\u003cimg src=\"https://raw.githubusercontent.com/tentone/monodepth/master/readme/c.png\" width=\"370\"\u003e\u003cimg src=\"https://raw.githubusercontent.com/tentone/monodepth/master/readme/d.png\" width=\"370\"\u003e\n\n### Configuration\n\n- Topics subscribed by the ROS node\n  - 
### Setup

- Install Python 2 and ROS dependencies

```bash
apt-get install python python-pip curl
pip install rosdep rospkg rosinstall_generator rosinstall wstool vcstools catkin_tools catkin_pkg
```

- Install project dependencies

```bash
pip install tensorflow keras pillow matplotlib scikit-learn scikit-image opencv-python pydot graphviz tk
```

- Clone the project into your ROS workspace and download the pretrained models

```bash
git clone https://github.com/tentone/monodepth.git
cd monodepth/models
curl -o nyu.h5 https://s3-eu-west-1.amazonaws.com/densedepth/nyu.h5
```

### Launch

- An example ROS launch entry is provided below, for easier integration into your existing ROS launch pipeline.

```xml
<node pkg="monodepth" type="monodepth.py" name="monodepth" output="screen" respawn="true">
    <param name="topic_color" value="/camera/image_raw"/>
    <param name="topic_depth" value="/camera/depth"/>
</node>
```

### Pretrained models

- Pre-trained Keras models can be downloaded from the following links and placed in the /models folder (a quick offline sanity check is sketched after the dataset list below):
  - [NYU Depth V2 (165MB)](https://s3-eu-west-1.amazonaws.com/densedepth/nyu.h5)
  - [KITTI (165MB)](https://s3-eu-west-1.amazonaws.com/densedepth/kitti.h5)

### Datasets for training

- [NYU Depth V2 (50K)](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)
  - The NYU-Depth V2 dataset is comprised of video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft Kinect.
  - [Download dataset](https://s3-eu-west-1.amazonaws.com/densedepth/nyu_data.zip) (4.1 GB)
- [KITTI Dataset (80K)](http://www.cvlibs.net/datasets/kitti/)
  - Datasets captured by driving around the mid-size city of [Karlsruhe](http://maps.google.com/?ie=UTF8&z=15&ll=49.010627,8.405871&spn=0.018381,0.029826&t=k&om=1), in rural areas and on highways. Up to 15 cars and 30 pedestrians are visible per image.
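To sanity-check a downloaded model outside of ROS, a minimal prediction sketch is shown below. The 640x480 input size follows the original DenseDepth prediction code, and sample.png is a placeholder for any RGB test image; note that the upstream models define a custom BilinearUpSampling2D layer, so load_model may need the custom_objects dictionary from the DenseDepth repository.

```python
# Offline sanity check for a downloaded pretrained model (a sketch,
# independent of the ROS node). sample.png is a placeholder image path.
import numpy as np
from PIL import Image
from keras.models import load_model

# DenseDepth models include custom layers; if loading fails, pass the
# custom_objects dictionary from the original DenseDepth repository.
model = load_model("models/nyu.h5", compile=False)

# Scale a 640x480 RGB image to [0, 1], as in the original prediction code.
rgb = np.asarray(Image.open("sample.png").convert("RGB").resize((640, 480)),
                 dtype=np.float32) / 255.0
depth = model.predict(np.expand_dims(rgb, axis=0), batch_size=1)
print(depth.shape)  # estimated depth map, at a lower resolution than the input
```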