https://github.com/Karbo123/RGBD-Diffusion
RGBD2: Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models
- Host: GitHub
- URL: https://github.com/Karbo123/RGBD-Diffusion
- Owner: Karbo123
- License: MIT
- Created: 2022-12-13T11:47:27.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-03-17T06:24:13.000Z (almost 2 years ago)
- Last Synced: 2024-08-01T18:33:36.874Z (6 months ago)
- Topics: 3d-reconstruction, diffusion-models, inpainting, scene-reconstruction
- Language: Python
- Homepage: https://jblei.site/proj/rgbd-diffusion
- Size: 28.3 KB
- Stars: 88
- Watchers: 6
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-diffusion-categorized
README
RGBD2: Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models
CVPR 2023
In this work, we present a new solution, termed RGBD2, that sequentially
generates novel RGBD views along a camera trajectory; the scene geometry
is simply the fusion of these views.
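The incremental pipeline described above can be sketched as follows. This is a hypothetical illustration with assumed function names (`backproject`, `fuse_views`), not the repo's actual code: each generated RGBD view is back-projected into world space using its camera intrinsics `K` and camera-to-world pose `T`, and the per-view points are accumulated into one cloud.

```python
# Hypothetical sketch (not the repo's code): fuse RGBD views into a point
# cloud by back-projecting each depth map with intrinsics K and pose T.
import numpy as np

def backproject(depth, K, T):
    """Lift a depth map of shape (H, W) to world-space points (H*W, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                 # camera-space rays
    pts_cam = rays * depth.reshape(-1, 1)           # scale rays by depth
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    return (pts_h @ T.T)[:, :3]                     # camera -> world

def fuse_views(views):
    """Concatenate back-projected points from a sequence of (depth, K, T) views."""
    return np.concatenate([backproject(d, K, T) for d, K, T in views], axis=0)
```

In the real system the fused geometry additionally drives the inpainting of the next view; this sketch only shows the fusion step.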
# Preparation
```bash
# download this repo
git clone git@github.com:Karbo123/RGBD-Diffusion.git --depth=1
cd RGBD-Diffusion
git submodule update --init --recursive

# set up environment
conda create -n RGBD2 python=3.8
conda activate RGBD2

# install packages
pip install torch # tested on 1.12.1+cu116
pip install torchvision
pip install matplotlib # tested on 3.5.3
pip install opencv-python einops trimesh diffusers ninja open3d

# install dependencies
cd ./third_party/nvdiffrast && pip install . && cd ../..
cd ./third_party/recon && pip install . && cd ../..
```
Download some files:
1. [the preprocessed ScanNetV2 dataset](https://drive.google.com/file/d/12MUFPsLxJakr5bnLO5XsyGQ4lEN9q2Wb/view?usp=share_link). Extract via `mkdir data_file && unzip scans_keyframe.zip -d data_file && mv data_file/scans_keyframe data_file/ScanNetV2`.
2. [model checkpoint](https://drive.google.com/file/d/1R2fvrnVx4ORh3d9Z5n_NHf97X93S78vo/view?usp=share_link). Extract via `mkdir -p out/RGBD2/checkpoint && unzip model.zip -d out/RGBD2/checkpoint`.

Copy the config file to an output folder:
```
mkdir -p out/RGBD2/backup/config
cp ./config/cfg_RGBD2.py out/RGBD2/backup/config
```

# Training
We provide a checkpoint, so you don't need to train a model from scratch.
To launch training, simply run:
```bash
CUDA_VISIBLE_DEVICES=0 python -m recon.runner.train --cfg config/cfg_RGBD2.py
```
If you want to train with multiple GPUs, set, e.g., `CUDA_VISIBLE_DEVICES=0,1,2,3`.
Note that training progress is visualized via TensorBoard log files.

# Inference
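Before generating, it can help to sanity-check that the dataset and checkpoint landed where the preparation steps put them. The paths below are assumed from the extraction commands earlier, not taken from the repo's code:

```python
# Sanity check for the assumed on-disk layout produced by the
# preparation commands (hypothetical helper, not part of the repo).
import os

EXPECTED = [
    "data_file/ScanNetV2",    # preprocessed ScanNetV2 keyframes
    "out/RGBD2/checkpoint",   # extracted model checkpoint
]

def missing_paths(root="."):
    """Return the expected directories that do not exist under `root`."""
    return [p for p in EXPECTED if not os.path.isdir(os.path.join(root, p))]
```

If `missing_paths()` returns a non-empty list, revisit the download and extraction steps above.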
To generate a test scene, simply run:
```bash
CUDA_VISIBLE_DEVICES=0 python experiments/run.py
```

By additionally passing `--interactive`, you can steer the generation process manually through a GUI.
Our GUI code uses Matplotlib, so you can even run the code on a remote server and use an X server (e.g. MobaXterm) for graphical control!

![GUI](https://i.328888.xyz/2023/03/17/LD7h3.png)
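A Matplotlib-based GUI like this boils down to registering event callbacks on a figure, which works transparently over X forwarding. A minimal sketch under assumed key bindings (this is not the repo's actual GUI code): key presses mutate a state dict that a generation loop could poll between steps.

```python
# Hypothetical sketch of a Matplotlib-driven control loop (not the repo's
# GUI): key presses update a shared state dict via a figure callback.
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch; drop for real use
import matplotlib.pyplot as plt

state = {"paused": False, "step": 0}

def on_key(event):
    """Toggle pause with space; advance one generation step with 'n'."""
    if event.key == " ":
        state["paused"] = not state["paused"]
    elif event.key == "n":
        state["step"] += 1

fig, ax = plt.subplots()
fig.canvas.mpl_connect("key_press_event", on_key)
```

Because it is plain Matplotlib, no extra GUI toolkit is needed on the remote machine.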
# About
If you find our work useful, please consider citing our paper:
```
@InProceedings{Lei_2023_CVPR,
author = {Lei, Jiabao and Tang, Jiapeng and Jia, Kui},
title = {RGBD2: Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023}
}
```

This repo is still an **early-access** version under active development.
If you have any questions or needs, feel free to contact me, or just create a GitHub issue.