{"id":18317328,"url":"https://github.com/compvis/depth-fm","last_synced_at":"2025-04-08T04:10:38.169Z","repository":{"id":229061508,"uuid":"775661985","full_name":"CompVis/depth-fm","owner":"CompVis","description":"[AAAI 2025] DepthFM: Fast Monocular Depth Estimation with Flow Matching","archived":false,"fork":false,"pushed_at":"2024-12-21T14:42:47.000Z","size":4778,"stargazers_count":526,"open_issues_count":20,"forks_count":37,"subscribers_count":14,"default_branch":"main","last_synced_at":"2025-04-01T03:25:59.366Z","etag":null,"topics":["depth-estimation","diffusion-model","flow-matching","stochastic-interpolants"],"latest_commit_sha":null,"homepage":"https://depthfm.github.io/","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/CompVis.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-03-21T20:03:02.000Z","updated_at":"2025-03-31T04:59:22.000Z","dependencies_parsed_at":"2024-03-25T11:50:44.060Z","dependency_job_id":"e44045c4-4fa6-4515-aec4-b6435a52f731","html_url":"https://github.com/CompVis/depth-fm","commit_stats":null,"previous_names":["compvis/depth-fm"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CompVis%2Fdepth-fm","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CompVis%2Fdepth-fm/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CompVis%2Fdepth-fm/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CompVis%2Fdepth-fm
/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/CompVis","download_url":"https://codeload.github.com/CompVis/depth-fm/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247773719,"owners_count":20993639,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["depth-estimation","diffusion-model","flow-matching","stochastic-interpolants"],"created_at":"2024-11-05T18:05:46.384Z","updated_at":"2025-04-08T04:10:38.151Z","avatar_url":"https://github.com/CompVis.png","language":"Jupyter Notebook","readme":"\u003cp align=\"center\"\u003e\n \u003c!-- \u003ch2 align=\"center\"\u003e📻 DepthFM: Fast Monocular Depth Estimation with Flow Matching\u003c/h2\u003e --\u003e\n \u003ch2 align=\"center\"\u003e\u003cimg src=assets/figures/radio.png width=28\u003e DepthFM: Fast Monocular Depth Estimation with Flow Matching\u003c/h2\u003e\n \u003cp align=\"center\"\u003e \n    Ming Gui\u003csup\u003e*\u003c/sup\u003e · Johannes Schusterbauer\u003csup\u003e*\u003c/sup\u003e · Ulrich Prestel · Pingchuan Ma\n \u003c/p\u003e\u003cp align=\"center\"\u003e \n    Dmytro Kotovenko · Olga Grebenkova · Stefan A. 
Baumann · Vincent Tao Hu · Björn Ommer\n \u003c/p\u003e\n \u003cp align=\"center\"\u003e \n    \u003cb\u003eCompVis Group @ LMU Munich\u003c/b\u003e\n \u003c/p\u003e\n \u003cp align=\"center\"\u003e \n    \u003cb\u003eAAAI 2025\u003c/b\u003e\n \u003c/p\u003e\n  \u003cp align=\"center\"\u003e \u003csup\u003e*\u003c/sup\u003e \u003ci\u003eequal contribution\u003c/i\u003e \u003c/p\u003e\n\u003c/p\u003e\n\n \u003c/p\u003e\n\n[![Website](assets/figures/badge-website.svg)](https://depthfm.github.io)\n[![Paper](https://img.shields.io/badge/arXiv-PDF-b31b1b)](https://arxiv.org/abs/2403.13788)\n\n\n![Cover](/assets/figures/dfm-cover.png)\n\n\n## 📻 Overview\n\nWe present **DepthFM**, a state-of-the-art, versatile, and fast monocular depth estimation model. DepthFM is efficient and can synthesize realistic depth maps within *a single inference step*. Beyond conventional depth estimation tasks, DepthFM also demonstrates state-of-the-art capabilities in downstream tasks such as depth inpainting and depth-conditional synthesis.\n\nWith our work, we demonstrate the successful transfer of strong image priors from a foundation image synthesis diffusion model (Stable Diffusion v2-1) to a flow matching model. Instead of starting from noise, we directly map from input image to depth map.\n\n\n## 🛠️ Setup\n\nThis setup was tested with `Ubuntu 22.04.4 LTS`, `CUDA Version: 12.4`, and `Python 3.10.12`.\n\nFirst, clone the GitHub repo:\n\n```bash\ngit clone git@github.com:CompVis/depth-fm.git\ncd depth-fm\n```\n\nThen download the weights via\n\n```bash\nwget https://ommer-lab.com/files/depthfm/depthfm-v1.ckpt -P checkpoints/\n```\n\nNow you can either set up a virtual environment and install all required packages with `pip`\n\n```bash\npip install -r requirements.txt\n```\n\nor, if you prefer to use `conda`, create the conda environment via\n\n```bash\nconda env create -f environment.yml\n```\n\nNow you should be able to listen to DepthFM! 
📻 🎶\n\n\n## 🚀 Usage\n\nYou can either use the notebook `inference.ipynb` or run the Python script `inference.py` as follows:\n\n```bash\npython inference.py \\\n   --num_steps 2 \\\n   --ensemble_size 4 \\\n   --img assets/dog.png \\\n   --ckpt checkpoints/depthfm-v1.ckpt\n```\n\nThe argument `--num_steps` sets the number of function evaluations. We find that our model already gives very good results with as few as one or two steps. Ensembling also improves performance; you can set the ensemble size via the `--ensemble_size` argument. Currently, the inference code only supports a batch size of one for ensembling.\n\n## 📈 Results\n\nOur quantitative analysis shows that, despite being substantially more efficient, DepthFM performs on par with, or even outperforms, the current state-of-the-art generative depth estimator Marigold **zero-shot** on a range of benchmark datasets. Below you can find a quantitative comparison of DepthFM against other affine-invariant depth estimators on several benchmarks.\n\n![Results](/assets/figures/sota-comparison.jpg)\n\n\n\n## Trend\n\n[![Star History Chart](https://api.star-history.com/svg?repos=CompVis/depth-fm\u0026type=Date)](https://star-history.com/#CompVis/depth-fm\u0026Date)\n\n\n\n\n## 🎓 Citation\n\nPlease cite our paper:\n\n```bibtex\n@misc{gui2024depthfm,\n      title={DepthFM: Fast Monocular Depth Estimation with Flow Matching}, \n      author={Ming Gui and Johannes Schusterbauer and Ulrich Prestel and Pingchuan Ma and Dmytro Kotovenko and Olga Grebenkova and Stefan Andreas Baumann and Vincent Tao Hu and Björn Ommer},\n      year={2024},\n      eprint={2403.13788},\n      archivePrefix={arXiv},\n      
primaryClass={cs.CV}\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcompvis%2Fdepth-fm","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcompvis%2Fdepth-fm","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcompvis%2Fdepth-fm/lists"}