Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Stability-AI/stable-fast-3d
Last synced: about 1 month ago
JSON representation
- Host: GitHub
- URL: https://github.com/Stability-AI/stable-fast-3d
- Owner: Stability-AI
- License: other
- Created: 2024-07-17T10:09:07.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2024-08-02T08:04:17.000Z (about 2 months ago)
- Last Synced: 2024-08-02T18:06:26.035Z (about 2 months ago)
- Language: Python
- Size: 23.7 MB
- Stars: 338
- Watchers: 8
- Forks: 17
- Open Issues: 7
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- ai-game-devtools - SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement. [arXiv](https://arxiv.org/abs/2408.00653) (3D Model / Tool)
README
# SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement
This is the official codebase for **Stable Fast 3D**, a state-of-the-art open-source model for **fast** feedforward 3D mesh reconstruction from a single image.
Stable Fast 3D is based on [TripoSR](https://github.com/VAST-AI-Research/TripoSR) but introduces several key new techniques. First, we explicitly optimize our model to produce good meshes without artifacts, alongside UV-unwrapped textures. Second, we delight the color (removing baked-in illumination) and predict material parameters, so the assets can be more easily integrated into a game. We achieve all of this while keeping the fast inference speed of TripoSR.
## Getting Started
### Installation
Ensure your environment meets the following requirements:
- Python >= 3.8
- CUDA available
- PyTorch installed for your platform: https://pytorch.org/get-started/locally/ (make sure the PyTorch CUDA version matches your system's)

Update setuptools with `pip install -U setuptools==69.5.1`, then install the remaining requirements with `pip install -r requirements.txt`.
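As a quick sanity check, the requirements above can be verified with a short helper script. This sketch is illustrative and not part of the repository; the function names are hypothetical:

```python
import sys

def check_python(min_version=(3, 8)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= min_version

def check_cuda():
    """Return True if PyTorch is installed and can see a CUDA device."""
    try:
        import torch  # only importable after the installation step above
    except ImportError:
        return False
    return torch.cuda.is_available()

if __name__ == "__main__":
    print("Python OK:", check_python())
    print("CUDA OK:", check_cuda())
```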
For the gradio demo, an additional `pip install -r requirements-demo.txt` is required.

### Manual Inference
```sh
python run.py demo_files/examples/chair1.png --output-dir output/
```
This will save the reconstructed 3D model as a GLB file in `output/`. You can also specify more than one image path, separated by spaces. The default options take about **6GB VRAM** for a single image input.

You may also use `--texture-resolution` to specify the resolution in pixels of the output texture and `--remesh_option` to specify the remeshing operation (None, Triangle, Quad).
For detailed usage of this script, use `python run.py --help`.
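If you prefer to drive `run.py` from Python (e.g. for batch jobs), a small wrapper can assemble the command line from the flags documented above. The helper itself is hypothetical; only the flag names come from this README:

```python
import subprocess

def build_sf3d_command(images, output_dir="output/",
                       texture_resolution=None, remesh_option=None):
    """Assemble a run.py invocation from the documented CLI flags."""
    cmd = ["python", "run.py", *images, "--output-dir", output_dir]
    if texture_resolution is not None:
        cmd += ["--texture-resolution", str(texture_resolution)]
    if remesh_option is not None:
        cmd += ["--remesh_option", remesh_option]
    return cmd

# Example: two input images, quad remeshing, 1024 px texture.
cmd = build_sf3d_command(["chair1.png", "table1.png"],
                         texture_resolution=1024, remesh_option="Quad")
# subprocess.run(cmd, check=True)  # uncomment to actually run
```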
### Local Gradio App
```sh
python gradio_app.py
```

## Citation
```BibTeX
@article{sf3d2024,
  title={SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement},
  author={Boss, Mark and Huang, Zixuan and Vasishta, Aaryaman and Jampani, Varun},
  journal={arXiv preprint},
  year={2024}
}
```