Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
[arXiv 2023] DreamGaussian4D: Generative 4D Gaussian Splatting
- Host: GitHub
- URL: https://github.com/jiawei-ren/dreamgaussian4d
- Owner: jiawei-ren
- License: mit
- Created: 2023-12-28T08:17:40.000Z (12 months ago)
- Default Branch: main
- Last Pushed: 2024-06-10T15:30:49.000Z (6 months ago)
- Last Synced: 2024-08-04T01:22:23.877Z (4 months ago)
- Topics: image-to-4d
- Language: Python
- Homepage: https://jiawei-ren.github.io/projects/dreamgaussian4d/
- Size: 6.88 MB
- Stars: 487
- Watchers: 15
- Forks: 30
- Open Issues: 7
- Metadata Files:
- Readme: readme.md
- License: LICENSE
Awesome Lists containing this project
- ai-game-devtools - DreamGaussian4D
README
DreamGaussian4D: Generative 4D Gaussian Splatting
Jiawei Ren* Liang Pan* Jiaxiang Tang Chi Zhang Ang Cao Gang Zeng Ziwei Liu†
S-Lab, Nanyang Technological University
Shanghai AI Laboratory
Peking University
University of Michigan
*equal contribution
†corresponding author
arXiv 2023

https://github.com/jiawei-ren/dreamgaussian4d/assets/72253125/8fdadc58-1ad8-4664-a6f8-70e20c612c10
---
[Project Page](https://jiawei-ren.github.io/projects/dreamgaussian4d/) | [Paper](https://arxiv.org/abs/2312.17142)
### News
- 2024.6.10: add Gradio demo.
- 2024.6.9:
- support [LGM](https://github.com/3DTopia/LGM) for static 3D generation.
- support video-to-4d generation. Add evaluation scripts for the [Consistent4D](https://consistent4d.github.io/) benchmark. Results are in our updated [project page](https://jiawei-ren.github.io/projects/dreamgaussian4d/) and [report](https://arxiv.org/abs/2312.17142).
- improve the implementation for better speed and quality. Add a gradio demo for image-to-4d.

## Install
```bash
# python 3.10 cuda 11.8
conda create -n dg4d python=3.10 -y && conda activate dg4d
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
pip install xformers==0.0.23 --no-deps --index-url https://download.pytorch.org/whl/cu118

# other dependencies
pip install -r requirements.txt

# a modified gaussian splatting (+ depth, alpha rendering)
git clone --recursive https://github.com/ashawkey/diff-gaussian-rasterization
pip install ./diff-gaussian-rasterization

# simple-knn
pip install ./simple-knn

# for mesh extraction
pip install git+https://github.com/NVlabs/nvdiffrast/
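
# (optional sanity check, not part of the original instructions: confirm the
# PyTorch/CUDA install; a PyTorch older than 2.0 is a known cause of the
# "black video" issue noted under Tips below)
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"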
```

To use pretrained LGM:
```bash
# for LGM
mkdir pretrained && cd pretrained
wget https://huggingface.co/ashawkey/LGM/resolve/main/model_fp16_fixrot.safetensors
cd ..
```

## Image-to-4D
##### (Optional) Preprocess input image
```bash
python scripts/process.py data/anya.png
```
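
For reference, this step turns `data/anya.png` into the `data/anya_rgba.png` used below by removing the background. A minimal sketch of that operation, assuming the `rembg` matting library (the actual `scripts/process.py` may also recenter and rescale the subject):

```python
# Rough sketch of the preprocessing step, assuming rembg for background
# removal; scripts/process.py may additionally recenter/resize the subject.
from PIL import Image
from rembg import remove

img = Image.open("data/anya.png").convert("RGB")
rgba = remove(img)  # RGBA image with the background made transparent
rgba.save("data/anya_rgba.png")
```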
##### Step 1: Generate driving videos
```bash
python scripts/gen_vid.py --path data/anya_rgba.png --seed 42 --bg white
```
##### Step 2: static generation
Static generation with [LGM](https://github.com/3DTopia/LGM):
```bash
python lgm/infer.py big --test_path data/anya_rgba.png
```
Optionally, we support static generation with [DreamGaussian](https://github.com/dreamgaussian/dreamgaussian):
```bash
python dg.py --config configs/dg.yaml input=data/anya_rgba.png
```
See `configs/dghd.yaml` for high-quality DreamGaussian training configurations.

##### Step 3: dynamic generation
```bash
# load static 3D from LGM
python main_4d.py --config configs/4d.yaml input=data/anya_rgba.png

# (Optional) to load static 3D from DreamGaussian, add `radius=2`
python main_4d.py --config configs/4d.yaml input=data/anya_rgba.png radius=2

# (Optional) to turn on viser GUI, add `gui=True`, e.g.:
python main_4d.py --config configs/4d.yaml input=data/anya_rgba.png gui=True
```
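
The trailing `key=value` arguments (`input=...`, `radius=2`, `gui=True`) override fields of the YAML file passed with `--config`. A minimal sketch of how such overrides are commonly merged, assuming OmegaConf-style dot-list parsing rather than this repo's exact argument handling:

```python
# Sketch of `--config file.yaml key=value` handling, assuming OmegaConf;
# the repo's own parsing may differ in details.
import argparse
from omegaconf import OmegaConf

parser = argparse.ArgumentParser()
parser.add_argument("--config", required=True, help="base YAML config, e.g. configs/4d.yaml")
args, extras = parser.parse_known_args()

# `extras` collects trailing overrides such as ["input=data/anya_rgba.png", "radius=2"]
cfg = OmegaConf.merge(OmegaConf.load(args.config), OmegaConf.from_dotlist(extras))
print(OmegaConf.to_yaml(cfg))
```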
See `configs/4d_low.yaml` and `configs/4d_demo.yaml` for more memory-friendly and faster optimization configurations.

##### (Optional) Step 4: mesh refinement
```bash
# export mesh after temporal optimization by adding `mesh_format=obj`
python main_4d.py --config configs/4d.yaml input=data/anya_rgba.png mesh_format=obj

# mesh refinement
python main2_4d.py --config configs/refine.yaml input=data/anya_rgba.png

# (Optional) to load static 3D from DreamGaussian, add `radius=2`
python main2_4d.py --config configs/refine.yaml input=data/anya_rgba.png radius=2
```

## Video-to-4D
##### Prepare Data
Download the [Consistent4D data](https://consistent4d.github.io/) to `data/CONSISTENT4D_DATA`. Running `python scripts/add_bg_to_gt.py` will add a white background to the ground-truth novel views.
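
For reference, a minimal sketch of the white-background compositing that `add_bg_to_gt.py` is described as doing, assuming the ground-truth views are RGBA PNGs (the real script's paths and options may differ):

```python
# Illustrative only: composite RGBA ground-truth views onto a white background.
# The actual add_bg_to_gt.py may use different paths and output locations.
from pathlib import Path
from PIL import Image

for path in Path("data/CONSISTENT4D_DATA").rglob("*.png"):
    img = Image.open(path)
    if img.mode == "RGBA":
        white = Image.new("RGBA", img.size, (255, 255, 255, 255))
        Image.alpha_composite(white, img).convert("RGB").save(path)
```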
##### Step 1: static generation
```bash
python lgm/infer.py big --test_path data/CONSISTENT4D_DATA/in-the-wild/blooming_rose/0.png

# (Optional) static 3D generation with DG
python dg.py --config configs/dg.yaml input=data/CONSISTENT4D_DATA/in-the-wild/blooming_rose/0.png
```

##### Step 2: dynamic generation
```bash
python main_4d.py --config configs/4d_c4d.yaml input=data/CONSISTENT4D_DATA/in-the-wild/blooming_rose

# (Optional) to load static 3D from DG, add `radius=2`
python main_4d.py --config configs/4d_c4d.yaml input=data/CONSISTENT4D_DATA/in-the-wild/blooming_rose radius=2
```
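
To evaluate more than one scene, the same command can be looped over every scene directory in the split. A small, assumed convenience wrapper (not part of the repo):

```python
# Assumed convenience wrapper: run dynamic generation for every scene in the
# Consistent4D in-the-wild split by repeating the command shown above.
import subprocess
from pathlib import Path

data_root = Path("data/CONSISTENT4D_DATA/in-the-wild")
for scene in sorted(p for p in data_root.iterdir() if p.is_dir()):
    subprocess.run(
        ["python", "main_4d.py", "--config", "configs/4d_c4d.yaml", f"input={scene}"],
        check=True,
    )
```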
## Run demo locally
```bash
gradio gradio_app.py
```

## Load exported meshes in Blender
- Install the [Stop-motion-OBJ](https://github.com/neverhood311/Stop-motion-OBJ) add-on
- File -> Import -> Mesh Sequence
- Go to the `logs` directory, type in the file name (e.g., 'anya'), and tick `Material per Frame`.

https://github.com/jiawei-ren/dreamgaussian4d/assets/72253125/a558a475-e2db-4cdf-9bbf-e0e8d031e232
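
The exported sequence can also be checked outside Blender. A hedged sketch assuming `trimesh` and a `logs/anya*.obj` naming pattern inferred from the steps above (the exact filenames may differ):

```python
# Assumes meshes exported with mesh_format=obj end up under logs/ named after
# the input (e.g. anya); adjust the glob pattern if the layout differs.
import glob
import trimesh

for path in sorted(glob.glob("logs/anya*.obj")):
    mesh = trimesh.load(path, force="mesh")
    print(path, len(mesh.vertices), "vertices,", len(mesh.faces), "faces")
```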
## Tips
- Black video after running `gen_vid.py`: make sure the PyTorch version is >= 2.0.

## Acknowledgement
This work is built on many amazing research works and open-source projects, thanks a lot to all the authors for sharing!
* [4DGaussians](https://github.com/hustvl/4DGaussians)
* [DreamGaussian](https://github.com/dreamgaussian/dreamgaussian)
* [gaussian-splatting](https://github.com/graphdeco-inria/gaussian-splatting) and [diff-gaussian-rasterization](https://github.com/graphdeco-inria/diff-gaussian-rasterization)
* [threestudio](https://github.com/threestudio-project/threestudio)
* [nvdiffrast](https://github.com/NVlabs/nvdiffrast)

## Citation
```
@article{ren2023dreamgaussian4d,
  title={DreamGaussian4D: Generative 4D Gaussian Splatting},
  author={Ren, Jiawei and Pan, Liang and Tang, Jiaxiang and Zhang, Chi and Cao, Ang and Zeng, Gang and Liu, Ziwei},
  journal={arXiv preprint arXiv:2312.17142},
  year={2023}
}
```