# 3D-Adapter
Official PyTorch implementation of the papers:
**3D-Adapter: Geometry-Consistent Multi-View Diffusion for High-Quality 3D Generation**

[Hansheng Chen](https://lakonik.github.io/)<sup>1</sup>,
[Bokui Shen](https://cs.stanford.edu/people/bshen88/)<sup>2</sup>,
[Yulin Liu](https://liuyulinn.github.io/)<sup>3,4</sup>,
[Ruoxi Shi](https://rshi.top/)<sup>3</sup>,
[Linqi Zhou](https://alexzhou907.github.io/)<sup>2</sup>,
[Connor Z. Lin](https://connorzlin.com/)<sup>2</sup>,
[Jiayuan Gu](https://pages.ucsd.edu/~ztu/)<sup>3</sup>,
[Hao Su](https://cseweb.ucsd.edu/~haosu/)<sup>3,4</sup>,
[Gordon Wetzstein](http://web.stanford.edu/~gordonwz/)<sup>1</sup>,
[Leonidas Guibas](https://geometry.stanford.edu/member/guibas/)<sup>1</sup>

<sup>1</sup>Stanford University, <sup>2</sup>Apparate Labs, <sup>3</sup>UCSD, <sup>4</sup>Hillbot

[[Project page](https://lakonik.github.io/3d-adapter)] [[🤗Demo](https://huggingface.co/spaces/Lakonik/3D-Adapter)] [[Paper](https://arxiv.org/abs/2410.18974)]

**Generic 3D Diffusion Adapter Using Controlled Multi-View Editing**

[Hansheng Chen](https://lakonik.github.io/)<sup>1</sup>,
[Ruoxi Shi](https://rshi.top/)<sup>2</sup>,
[Yulin Liu](https://liuyulinn.github.io/)<sup>2</sup>,
[Bokui Shen](https://cs.stanford.edu/people/bshen88/)<sup>3</sup>,
[Jiayuan Gu](https://pages.ucsd.edu/~ztu/)<sup>2</sup>,
[Gordon Wetzstein](http://web.stanford.edu/~gordonwz/)<sup>1</sup>,
[Hao Su](https://cseweb.ucsd.edu/~haosu/)<sup>2</sup>,
[Leonidas Guibas](https://geometry.stanford.edu/member/guibas/)<sup>1</sup>

<sup>1</sup>Stanford University, <sup>2</sup>UCSD, <sup>3</sup>Apparate Labs

[[Project page](https://lakonik.github.io/mvedit)] [[🤗Demo](https://huggingface.co/spaces/Lakonik/MVEdit)] [[Paper](https://arxiv.org/abs/2403.12032)]

https://github.com/user-attachments/assets/6cba3a92-04fe-46ee-88ca-e6dfe5443c36
## Todos
- [ ] Release GRM-based 3D-Adapters (unfortunately, we cannot release these models before the official release of [GRM](https://github.com/justimyhxu/GRM))
## Installation
The code has been tested in the environment described as follows:
- Linux (tested on Ubuntu 20 and above)
- [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive) 11.8 and above
- [PyTorch](https://pytorch.org/get-started/previous-versions/) 2.1 and above
- FFmpeg, x264 (optional, for exporting videos)

Other dependencies can be installed via `pip install -r requirements.txt`.
An example of the installation commands is shown below (adjust the CUDA version to match your setup):
```bash
# Export the PATH of CUDA toolkit
export PATH=/usr/local/cuda-12.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64:$LD_LIBRARY_PATH

# Create conda environment
conda create -y -n mvedit python=3.10 numpy=1.26 ninja
conda activate mvedit

# Install FFmpeg (optional)
conda install -c conda-forge ffmpeg x264

# Install PyTorch
conda install pytorch==2.1.2 torchvision==0.16.2 pytorch-cuda=12.1 -c pytorch -c nvidia

# Clone this repo and install other dependencies
git clone https://github.com/Lakonik/MVEdit && cd MVEdit
pip install -r requirements.txt
```

This codebase also works on Windows systems if the environment is configured correctly. Certain packages (e.g., tiny-cuda-nn) may require adjustments for installation on Windows.
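After installing, you can run a quick sanity check to confirm that the GPU build of PyTorch is working before launching the Web UI. This is a generic PyTorch snippet, not part of this repo:

```python
# Generic PyTorch sanity check (assumes you run it inside the mvedit conda env).
import torch

print(torch.__version__)           # expect 2.1.x or newer
print(torch.version.cuda)          # CUDA version PyTorch was built against
print(torch.cuda.is_available())   # should print True on a working setup
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the 24GB GPU needed for the Web UI
```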
## Inference
We recommend using the Gradio Web UI and its APIs for inference. A GPU with at least 24GB of VRAM is required to run the Web UI.
### Web UI
Run the following command to start the Web UI:
```bash
python app.py --unload-models
```

The Web UI will be available at [http://localhost:7860](http://localhost:7860). If you add the `--share` flag, a temporary public URL will be generated for you to share the Web UI with others.
All models are loaded automatically on demand. The first run can take a long time while the models download; if a download fails, check your network connection to GitHub, Google Drive, and Hugging Face.
To view other options, run:
```bash
python app.py -h
```

### API
After starting the Web UI, the API docs will be available at [http://localhost:7860/?view=api](http://localhost:7860/?view=api).
Alternatively, you can access the APIs provided by the official demo ([https://mvedit.hanshengchen.com/?view=api](https://mvedit.hanshengchen.com/?view=api)), including the APIs for GRM-based 3D-Adapters.
The API docs are automatically generated by Gradio, and the data types and default values may be incorrect. Please use the default values in the Web UI as a reference.
**Please refer to our [examples](scripts/) for API usage in Python.**
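For orientation, the sketch below shows the general shape of a `gradio_client` call against a running Web UI. The endpoint name `/text_to_3d` and its arguments are placeholders assumed for illustration, not this repo's actual API; take the real signatures from the generated API docs and the example scripts above.

```python
# Hypothetical sketch of calling the Web UI through gradio_client.
# The api_name and arguments below are placeholders; the real endpoint
# signatures come from http://localhost:7860/?view=api or scripts/.
from gradio_client import Client

# Point at a local Web UI, or at the hosted demo:
# Client("https://mvedit.hanshengchen.com/")
client = Client("http://localhost:7860/")

# List every endpoint with its argument names, types, and defaults.
client.view_api()

# Placeholder call -- "/text_to_3d" and its single prompt argument are
# assumptions for illustration only.
result = client.predict(
    "a wooden treasure chest",
    api_name="/text_to_3d",
)
print(result)
```

Since the Gradio-generated docs may report incorrect types and defaults, cross-check each argument against the corresponding control in the Web UI before scripting calls.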
## Training
Optimization-based 3D-Adapters (a.k.a. MVEdit adapters) use only off-the-shelf models and require no further training.
The training code for GRM-based 3D-Adapters will be released after the official release of [GRM](https://github.com/justimyhxu/GRM).
## Acknowledgements
This codebase is built upon the following repositories:
- Base library modified from [SSDNeRF](https://github.com/Lakonik/SSDNeRF)
- NeRF renderer and DMTet modified from [Stable-DreamFusion](https://github.com/ashawkey/stable-dreamfusion)
- Gaussian Splatting renderer modified from [3DGS](https://github.com/graphdeco-inria/gaussian-splatting) and [Differential Gaussian Rasterization](https://github.com/ashawkey/diff-gaussian-rasterization)
- Mesh I/O modified from [DreamGaussian](https://github.com/dreamgaussian/dreamgaussian)
- [GRM](https://github.com/justimyhxu/GRM) for Gaussian reconstruction
- [Zero123++](https://github.com/SUDO-AI-3D/zero123plus) for image-to-3D initialization
- [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter) for extra conditioning
- [TRACER](https://github.com/Karel911/TRACER) for background removal
- [LoFTR](https://github.com/zju3dv/LoFTR) for pose estimation in image-to-3D
- [Omnidata](https://github.com/EPFL-VILAB/omnidata) for normal prediction in image-to-3D
- [Image Packer](https://github.com/theFroh/imagepacker) for mesh preprocessing

## Citation
```bibtex
@misc{3dadapter2024,
    title={3D-Adapter: Geometry-Consistent Multi-View Diffusion for High-Quality 3D Generation},
    author={Hansheng Chen and Bokui Shen and Yulin Liu and Ruoxi Shi and Linqi Zhou and Connor Z. Lin and Jiayuan Gu and Hao Su and Gordon Wetzstein and Leonidas Guibas},
    year={2024},
    eprint={2410.18974},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2410.18974},
}

@misc{mvedit2024,
    title={Generic 3D Diffusion Adapter Using Controlled Multi-View Editing},
    author={Hansheng Chen and Ruoxi Shi and Yulin Liu and Bokui Shen and Jiayuan Gu and Gordon Wetzstein and Hao Su and Leonidas Guibas},
    year={2024},
    eprint={2403.12032},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2403.12032},
}
```