https://github.com/LiheYoung/Depth-Anything
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
- Host: GitHub
- URL: https://github.com/LiheYoung/Depth-Anything
- Owner: LiheYoung
- License: apache-2.0
- Created: 2024-01-22T01:09:25.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-07-17T10:46:05.000Z (11 months ago)
- Last Synced: 2024-10-27T07:32:18.899Z (8 months ago)
- Topics: depth-estimation, image-synthesis, metric-depth-estimation, monocular-depth-estimation
- Language: Python
- Homepage: https://depth-anything.github.io
- Size: 232 MB
- Stars: 6,928
- Watchers: 49
- Forks: 531
- Open Issues: 122
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-Reasoning-Foundation-Models - code
- AiTreasureBox - LiheYoung/Depth-Anything - Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data (Repos)
- awesome-osml-for-devs - Depth Anything
- awesome - LiheYoung/Depth-Anything - \[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation (Python)
- awesome-scientific-image-analysis - Depth Anything (🛸 Other / 🛠️ Utilities)
- StarryDivineSky - LiheYoung/Depth-Anything - Depth Anything is a monocular depth estimation foundation model released at CVPR 2024. It is trained on large-scale unlabeled data with the aim of unlocking the potential of unlabeled images, and provides a strong depth-estimation solution that predicts a depth map from a single image. The model generalizes zero-shot to a wide range of images without scene-specific fine-tuning, and achieves accurate and robust depth estimation through its network design and training strategy. The project releases pretrained models and code for researchers and developers; its core advantage is pretraining on massive unlabeled data, which improves generalization and accuracy. The code and models can be used in applications such as robot navigation, autonomous driving, and 3D reconstruction, and developers can build on the project to further improve depth-estimation performance (see the usage sketch below). (Object detection & segmentation / Resource download)
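
As a rough illustration of the single-image inference described above, here is a minimal sketch using the Hugging Face `transformers` depth-estimation pipeline. The checkpoint name `LiheYoung/depth-anything-small-hf` and the image path are assumptions chosen for illustration, not details taken from this listing; adjust them to whichever released checkpoint you actually use.

```python
# Minimal sketch: monocular depth estimation with a Depth Anything checkpoint.
# Assumes `pip install transformers torch pillow` and the (assumed) model ID
# "LiheYoung/depth-anything-small-hf"; swap in the checkpoint you intend to use.
from transformers import pipeline
from PIL import Image

# Load a pretrained Depth Anything model behind the generic depth-estimation task.
depth_estimator = pipeline(
    task="depth-estimation",
    model="LiheYoung/depth-anything-small-hf",
)

# Predict a dense depth map from a single RGB image (path is a placeholder).
image = Image.open("example.jpg")
result = depth_estimator(image)

# `result["depth"]` is a PIL image of the predicted (relative) depth map;
# `result["predicted_depth"]` holds the raw tensor if you need numeric values.
result["depth"].save("example_depth.png")
```

The pipeline handles preprocessing and postprocessing internally, so for finer control (metric depth, custom resolutions) the model classes and scripts in the repository itself are the more direct route.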