Depth Pro: Sharp Monocular Metric Depth in Less Than a Second.
https://github.com/apple/ml-depth-pro
- Host: GitHub
- URL: https://github.com/apple/ml-depth-pro
- Owner: apple
- License: other
- Created: 2024-08-26T20:26:05.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-04-21T12:13:43.000Z (8 months ago)
- Last Synced: 2025-04-30T12:16:59.668Z (8 months ago)
- Language: Python
- Size: 2.43 MB
- Stars: 4,392
- Watchers: 46
- Forks: 318
- Open Issues: 70
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
README
## Depth Pro: Sharp Monocular Metric Depth in Less Than a Second
This software project accompanies the research paper:
**[Depth Pro: Sharp Monocular Metric Depth in Less Than a Second](https://arxiv.org/abs/2410.02073)**,
*Aleksei Bochkovskii, Amaël Delaunoy, Hugo Germain, Marcel Santos, Yichao Zhou, Stephan R. Richter, and Vladlen Koltun*.

We present a foundation model for zero-shot metric monocular depth estimation. Our model, Depth Pro, synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency details. The predictions are metric, with absolute scale, without relying on the availability of metadata such as camera intrinsics. And the model is fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction, a training protocol that combines real and synthetic datasets to achieve high metric accuracy alongside fine boundary tracing, dedicated evaluation metrics for boundary accuracy in estimated depth maps, and state-of-the-art focal length estimation from a single image.
The model in this repository is a reference implementation, which has been re-trained. Its performance is close to the model reported in the paper but does not match it exactly.
## Getting Started
We recommend setting up a virtual environment. Using e.g. miniconda, the `depth_pro` package can be installed via:
```bash
conda create -n depth-pro -y python=3.9
conda activate depth-pro
pip install -e .
```
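If you want to confirm the editable install succeeded before moving on, a quick import check works; this is just a sketch, not part of the official setup:
```python
# Sanity check that the `depth_pro` package resolves from the new environment.
import depth_pro

print("depth_pro loaded from:", depth_pro.__file__)
```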
To download the pretrained checkpoints, run the snippet below:
```bash
source get_pretrained_models.sh # Files will be downloaded to `checkpoints` directory.
```
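After the script finishes, the weights should sit under the `checkpoints` directory. A small hedged check (the exact checkpoint file name is not spelled out here, so this just lists whatever was downloaded):
```python
from pathlib import Path

# List whatever the download script placed in the checkpoints directory.
ckpt_dir = Path("checkpoints")
for f in sorted(ckpt_dir.glob("*")):
    print(f.name, f.stat().st_size, "bytes")
```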
### Running from commandline
We provide a helper script to directly run the model on a single image:
```bash
# Run prediction on a single image:
depth-pro-run -i ./data/example.jpg
# Run `depth-pro-run -h` for available options.
```
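For more than one image, a simple loop over the CLI works. Only the `-i` flag is documented above, so any other option (output paths, batch flags) should be checked via `depth-pro-run -h`; a minimal sketch:
```python
import subprocess
from pathlib import Path

# Invoke the documented CLI once per JPEG in ./data.
for img in sorted(Path("./data").glob("*.jpg")):
    subprocess.run(["depth-pro-run", "-i", str(img)], check=True)
```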
### Running from python
```python
from PIL import Image
import depth_pro
# Load model and preprocessing transform
model, transform = depth_pro.create_model_and_transforms()
model.eval()
# Load and preprocess an image.
image_path = "./data/example.jpg"  # Example image referenced in the CLI section above.
image, _, f_px = depth_pro.load_rgb(image_path)
image = transform(image)
# Run inference.
prediction = model.infer(image, f_px=f_px)
depth = prediction["depth"] # Depth in [m].
focallength_px = prediction["focallength_px"] # Focal length in pixels.
```
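Because the depth is metric and the focal length is returned in pixels, the prediction can be back-projected into a 3D point cloud with the standard pinhole model. The sketch below assumes `depth` behaves like a torch tensor of shape (H, W) in meters and that the principal point lies at the image center; both are assumptions, not part of the documented API.
```python
import numpy as np

# Move the depth map to a NumPy array of shape (H, W).
depth_np = depth.detach().cpu().numpy() if hasattr(depth, "detach") else np.asarray(depth)
depth_np = np.squeeze(depth_np)
h, w = depth_np.shape
f = float(focallength_px)

# Pinhole back-projection: x = (u - cx) * z / f, y = (v - cy) * z / f.
v, u = np.mgrid[0:h, 0:w]
x = (u - w / 2.0) * depth_np / f
y = (v - h / 2.0) * depth_np / f
points = np.stack([x, y, depth_np], axis=-1)  # (H, W, 3), in meters
```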
### Evaluation (boundary metrics)
Our boundary metrics can be found under `eval/boundary_metrics.py` and used as follows:
```python
# The metrics live in eval/boundary_metrics.py (import path assumes the repo root is on PYTHONPATH).
from eval.boundary_metrics import SI_boundary_F1, SI_boundary_Recall

# For a depth-based dataset:
boundary_f1 = SI_boundary_F1(predicted_depth, target_depth)

# For a mask-based dataset (image matting / segmentation):
boundary_recall = SI_boundary_Recall(predicted_depth, target_mask)
```
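To report a dataset-level number, the per-image scores are typically averaged. A hedged sketch follows; the `pairs` iterable and array shapes are placeholders, and the metric functions are assumed to accept NumPy arrays of matching shape:
```python
import numpy as np

# `pairs` is a hypothetical iterable of (predicted_depth, target_depth) arrays.
scores = [SI_boundary_F1(pred, gt) for pred, gt in pairs]
print("mean SI-boundary F1:", float(np.mean(scores)))
```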
## Citation
If you find our work useful, please cite the following paper:
```bibtex
@inproceedings{Bochkovskii2024:arxiv,
author = {Aleksei Bochkovskii and Ama\"{e}l Delaunoy and Hugo Germain and Marcel Santos and
Yichao Zhou and Stephan R. Richter and Vladlen Koltun},
title = {Depth Pro: Sharp Monocular Metric Depth in Less Than a Second},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://arxiv.org/abs/2410.02073},
}
```
## License
The sample code and the model weights are released under the terms in [LICENSE](LICENSE).
## Acknowledgements
Our codebase is built using multiple open-source contributions; please see [Acknowledgements](ACKNOWLEDGEMENTS.md) for more details.
Please check the paper for a complete list of references and datasets used in this work.