https://github.com/cansik/midas-converter
Utility to convert RGB frames into depth frames using MiDaS.
- Host: GitHub
- URL: https://github.com/cansik/midas-converter
- Owner: cansik
- Created: 2020-10-30T19:36:40.000Z (almost 5 years ago)
- Default Branch: main
- Last Pushed: 2020-10-30T21:57:26.000Z (almost 5 years ago)
- Last Synced: 2025-02-05T06:43:31.787Z (9 months ago)
- Topics: depth-estimation, midas, openvino, python
- Language: Python
- Homepage:
- Size: 161 KB
- Stars: 2
- Watchers: 3
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# MiDas Converter
Utility to convert RGB frames into depth frames using MiDaS.
*Office demo image by [fauxels](https://www.pexels.com/@fauxels) from Pexels*
### Installation
It is recommended to use a separate Python environment ([virtualenv](https://virtualenv.pypa.io/en/latest/)) and Python `3.6.X` (Windows 10 & macOS).

1. Download and install the latest [OpenVINO framework](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) (`2021.1`).
2. Run the following script to download the models, convert them, and set up the environment:

```bash
# one-script setup of the full environment (grab a coffee ☕)

# windows
setup.bat

# unix
./setup.sh
```
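
As a quick sanity check after the setup script finishes, you can open the converted IR with OpenVINO's Python API. This is a minimal sketch and not part of the repository; the model path follows the commands used below, the rest is generic OpenVINO 2021.x usage:

```python
from pathlib import Path

from openvino.inference_engine import IECore  # installed with the OpenVINO toolkit

model_xml = Path("public/midasnet/FP32/midasnet.xml")
model_bin = model_xml.with_suffix(".bin")

# The setup script should have downloaded and converted the model into `public/`.
assert model_xml.exists() and model_bin.exists(), "run setup.bat / setup.sh first"

ie = IECore()
print("available devices:", ie.available_devices)  # e.g. ['CPU']

net = ie.read_network(model=str(model_xml), weights=str(model_bin))
print("inputs:", list(net.input_info), "outputs:", list(net.outputs))
```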
### Demo

Either use one of the predefined batch scripts to run the examples, or call the demo script directly.
```bash
# demo
run.bat
./run.sh

# on your own image
python monodepth_demo.py -m public\midasnet\FP32\midasnet.xml -i yourimage.jpg
```
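
Under the hood, a MiDaS depth pass with OpenVINO follows the usual read-network / load-network / infer pattern. The sketch below is a rough approximation of what `monodepth_demo.py` does, not the demo itself; the `CPU` device, the output file name, and the pre-/post-processing details (the converted IR usually has color and normalization handling baked in) are assumptions:

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO 2021.x Python API

# Read the converted MiDaS IR and load it onto the CPU (device is an assumption).
ie = IECore()
net = ie.read_network(model="public/midasnet/FP32/midasnet.xml",
                      weights="public/midasnet/FP32/midasnet.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
_, _, h, w = net.input_info[input_name].input_data.shape

# Preprocess: resize to the network input size and reorder HWC -> NCHW.
frame = cv2.imread("yourimage.jpg")
blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1))[np.newaxis, ...].astype(np.float32)

# Inference: MiDaS predicts relative inverse depth.
disparity = exec_net.infer({input_name: blob})[output_name].squeeze()

# Postprocess: normalize to 0-255 and resize back to the original frame size.
d_min, d_max = disparity.min(), disparity.max()
depth = (255 * (disparity - d_min) / max(d_max - d_min, 1e-6)).astype(np.uint8)
depth = cv2.resize(depth, (frame.shape[1], frame.shape[0]))
cv2.imwrite("yourimage_depth.png", depth)
```

If the result looks inverted or washed out, compare against the pre- and post-processing in the bundled `monodepth_demo.py`.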
There is also a live webcam-feed inference example.

```bash
run_webcam.bat
./run_webcam.sh
```
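
The webcam scripts apply the same inference in a loop over a live capture. A minimal sketch, reusing the names from the single-image sketch above (`exec_net`, `input_name`, `output_name`, `h`, `w`); the camera index and window handling are assumptions:

```python
import cv2

# Assumes exec_net, input_name, output_name, h, w from the single-image sketch above.
cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1))[None, ...].astype("float32")
    disparity = exec_net.infer({input_name: blob})[output_name].squeeze()
    depth = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
    cv2.imshow("depth", cv2.resize(depth, (frame.shape[1], frame.shape[0])))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```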
#### Batch Convert

To batch convert a large number of frames, run the `monodepth_convert.py` script:
```bash
python monodepth_convert.py -m public\midasnet\FP32\midasnet.xml -i frames
```
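
Conceptually, batch conversion just runs the single-image pipeline over every file in the input folder and writes the resulting depth maps out again. A rough sketch, assuming a hypothetical `estimate_depth()` wrapper around the OpenVINO inference shown earlier and an output folder name chosen for this example:

```python
from pathlib import Path

import cv2

frames_dir = Path("frames")        # input folder, as in the command above
output_dir = Path("frames_depth")  # output location chosen for this sketch
output_dir.mkdir(exist_ok=True)

for image_path in sorted(frames_dir.glob("*.jpg")):
    frame = cv2.imread(str(image_path))
    # estimate_depth() is a hypothetical wrapper around the OpenVINO
    # inference shown in the demo sketch above.
    depth = estimate_depth(frame)
    cv2.imwrite(str(output_dir / (image_path.stem + "_depth.png")), depth)
```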
### OpenCV Demo

If you are interested in an OpenCV implementation (in Java), have a look at [the code here](https://github.com/cansik/deep-vision-processing/blob/master/src/main/java/ch/bildspur/vision/MidasNetwork.java#L37-L89).
### About
The [MiDaS network](https://github.com/intel-isl/MiDaS) and the pre-trained weights are provided by [Intel ISL](https://github.com/intel-isl). This repository just adds some scripts and utilities for working with them.