{"id":13633742,"url":"https://github.com/nianticlabs/diffusionerf","last_synced_at":"2025-04-07T13:06:21.087Z","repository":{"id":108734173,"uuid":"605662936","full_name":"nianticlabs/diffusionerf","owner":"nianticlabs","description":"[CVPR 2023] DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models","archived":false,"fork":false,"pushed_at":"2023-11-23T09:38:20.000Z","size":2544,"stargazers_count":303,"open_issues_count":10,"forks_count":17,"subscribers_count":22,"default_branch":"main","last_synced_at":"2025-03-31T11:04:26.118Z","etag":null,"topics":["deep-learning","diffusion","diffusion-models","nerf","neuralradiance-fields","radiance-field","reconstruction","regularization"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/nianticlabs.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-02-23T16:28:19.000Z","updated_at":"2025-03-15T13:06:32.000Z","dependencies_parsed_at":null,"dependency_job_id":"529e1508-01a4-44ac-bc26-cebcc9dcdb2a","html_url":"https://github.com/nianticlabs/diffusionerf","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nianticlabs%2Fdiffusionerf","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nianticlabs%2Fdiffusionerf/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nianticlabs%2Fdiffusionerf/releases","manifests_url":"https://repos.ecosyste.ms
/api/v1/hosts/GitHub/repositories/nianticlabs%2Fdiffusionerf/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/nianticlabs","download_url":"https://codeload.github.com/nianticlabs/diffusionerf/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247657276,"owners_count":20974344,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},
"keywords":["deep-learning","diffusion","diffusion-models","nerf","neuralradiance-fields","radiance-field","reconstruction","regularization"],"created_at":"2024-08-01T23:00:50.877Z","updated_at":"2025-04-07T13:06:21.062Z","avatar_url":"https://github.com/nianticlabs.png","language":"Python","readme":"# [DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models](https://arxiv.org/abs/2302.12231)\n\n**[Jamie Wynn](https://scholar.google.com/citations?user=ASP-uu4AAAAJ\u0026hl=en\u0026oi=ao) and [Daniyar Turmukhambetov](https://scholar.google.com/citations?user=ELFm0CgAAAAJ\u0026hl=en\u0026oi=ao) – CVPR 2023**\n\n\n[Paper](https://arxiv.org/abs/2302.12231) | [Supplementary material](https://storage.googleapis.com/niantic-lon-static/research/diffusionerf/diffusionerf_supplemental.pdf)\n\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://youtu.be/zyRbeBbM-mw\"\u003e\n  \u003cimg src=\"assets/video_thumbnail.png\" alt=\"2 minute video\" width=\"500\"\u003e\n  \u003c/a\u003e\n\u003c/p\u003e\n\n
## Update 1:\nIt was [brought to our attention](https://github.com/nianticlabs/diffusionerf/issues/13) that we incorrectly used AlexNet to compute LPIPS (and subsequently Average) metrics for our networks in Table 1 of the CVPR version of the paper. We have now updated the table using VGG network scores and updated the paper on [arXiv](https://arxiv.org/abs/2302.12231).\n\n
## Quickstart\n\nThis section will walk you through setting up DiffusioNeRF and using it to fit a NeRF to a scene from LLFF.\n\n### Hardware requirements\n\nYou will need a relatively powerful graphics card to run DiffusioNeRF, in part due to the use of the [tiny-cuda-nn](https://github.com/NVlabs/tiny-cuda-nn) framework. All of our experiments were performed on an A100.\n\n### Conda environment\nCreate the DiffusioNeRF Conda environment using:\n\n```\nconda env create -f environment.yml\n```\n\n
### Downloading the pretrained diffusion model\n\nTo download the RGBD patch diffusion model which we trained for our experiments, run (from the root of this repo):\n\n```\nmkdir models \u0026\u0026 wget https://storage.googleapis.com/niantic-lon-static/research/diffusionerf/rgbd-patch-diffusion.pt -O models/rgbd-patch-diffusion.pt\n```\n\n
### Prepare the LLFF dataset\n\nFirst, acquire the LLFF dataset by downloading and extracting `nerf_llff_data.zip` from the [official link](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1) provided by the [NeRF repository](https://github.com/bmild/nerf). Extract it to `./data/nerf_llff_data` relative to the root of this repo.\n\nAfter downloading it, you must preprocess it into the required format by running the included `scripts/preprocess_llff.sh` from inside the root directory of the extracted LLFF dataset. This will generate a `transforms.json` for each scene.\n\n
### Run on the LLFF dataset\n\nYou can now fit a NeRF to an LLFF scene using our regularizers by running from the root of the repo:\n\n```\nbash scripts/run_diffusionerf_example.sh\n```\n\nThe arguments passed in this script correspond to the configuration reported as ours in the paper.\n\nImage-by-image metrics will be written to the output folder (which, with the above script, will be `./runs/example/3_poses/room/`) under `metrics.json`. You should obtain an average test PSNR of about 21.6 with this script.\n\n
To change the script to run a full LLFF evaluation, delete the `--only_run_on room` argument to run on all scenes, and change `--num_train 3` to `--num_train 3 6 9` to run each scene with 3, 6 and 9 training views.\n\nTo run without our learned diffusion model regularizer, drop the `--patch_regulariser_path` argument; to run without the mip-NeRF 360 loss, drop the `--spread_loss_strength 1.e-5` argument.\n\n
### Run on other scenes\n\n`nerf/evaluate.py`, which is used in the above steps, is just a wrapper around `main.py`; if you want to run on other data, you should use `main.py`. The data should be in the NeRF 'blender' format, i.e. it should contain a `transforms.json` file.\n\n
## Citation\n\nIf you find our work useful or interesting, please consider citing [our paper](https://arxiv.org/abs/2302.12231):\n\n```\n@inproceedings{wynn-2023-diffusionerf,\n title   = {{DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models}},\n author  = {Jamie Wynn and\n            Daniyar Turmukhambetov\n           },\n booktitle = {CVPR},\n year = {2023}\n}\n```\n\n
## Acknowledgements\n\nThis code is built on [torch-ngp](https://github.com/ashawkey/torch-ngp). It also uses functions from [denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch).\n\n## License\nCopyright © Niantic, Inc. 2023. Patent Pending. All rights reserved. Please see the license file for terms.\n",
"funding_links":[],"categories":["Awesome NeRF with Geometry Losses"],"sub_categories":["Existing data loaders"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnianticlabs%2Fdiffusionerf","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnianticlabs%2Fdiffusionerf","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnianticlabs%2Fdiffusionerf/lists"}