https://github.com/pythonicforge/triposr-implementation
🧠 Reimplementation of TripoSR for 2D-to-3D object reconstruction using Colab and Hugging Face.
- Host: GitHub
- URL: https://github.com/pythonicforge/triposr-implementation
- Owner: pythonicforge
- Created: 2025-06-12T03:59:47.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2025-06-12T04:40:55.000Z (7 months ago)
- Last Synced: 2025-06-12T05:34:47.720Z (7 months ago)
- Topics: computer-vision, deep-learning, huggingface, lrm, paper-reproduction, python, triposr
- Language: Jupyter Notebook
- Homepage:
- Size: 36.1 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# TripoSR Reimplementation
This is a clean and reproducible **Colab-based reimplementation** of [Stability AI's TripoSR](https://huggingface.co/stabilityai/TripoSR) — a powerful zero-shot model that generates 3D object meshes from a single 2D image.
Built on top of the [PyImageSearch blog tutorial](https://pyimagesearch.com/2024/11/25/create-a-3d-object-from-your-images-with-triposr-in-python/), this notebook integrates the following (a condensed code sketch follows the list):
- 2D input image upload
- Background removal via `rembg`
- Inference with TripoSR weights pulled from Hugging Face
- 360° turntable render output
- Mesh export in `.obj` format
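In code, the pipeline is roughly the following. This is a minimal sketch, assuming the `tsr` package from the official TripoSR repository and `rembg` are installed; the names mirror TripoSR's `run.py` and may differ slightly between repo versions, and `input.jpg` is just a placeholder filename.

```python
import torch
from PIL import Image
from rembg import remove
from tsr.system import TSR

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the pretrained TripoSR checkpoint from Hugging Face
# (config/weight names follow the official run.py).
model = TSR.from_pretrained(
    "stabilityai/TripoSR",
    config_name="config.yaml",
    weight_name="model.ckpt",
).to(device)

# 1. Background removal: rembg returns an RGBA image with the subject cut out.
image = remove(Image.open("input.jpg")).convert("RGB")  # "input.jpg" is a placeholder

# 2. Zero-shot inference: encode the single image into TripoSR scene codes.
with torch.no_grad():
    scene_codes = model([image], device=device)

# 3. Extract the mesh and export it as .obj.
#    `True` requests vertex colors; older versions of the repo omit this argument.
mesh = model.extract_mesh(scene_codes, True)[0]
mesh.export("mesh.obj")
```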
## Project Highlights
- **Zero-shot inference**: No need for fine-tuning — just drop an image and get a 3D model
- **Background cleanup**: Uses `rembg` for cleaner foreground object extraction
- **Smooth renders**: Generates 30-angle renders + MP4 video
- **Mesh export**: Outputs ready-to-use 3D `.obj` files
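The render step above boils down to asking TripoSR's renderer for evenly spaced views and stitching them into a video. A hedged sketch, reusing `model` and `scene_codes` from the snippet earlier and assuming `imageio` with ffmpeg support is available:

```python
import imageio
import numpy as np

# 30 evenly spaced views around the object, matching render_000.png ... render_029.png.
render_images = model.render(scene_codes, n_views=30, return_type="pil")[0]

frames = []
for i, frame in enumerate(render_images):
    frame.save(f"render_{i:03d}.png")   # one PNG per view
    frames.append(np.asarray(frame))    # collect frames for the video

# Stitch the views into a short turntable MP4.
imageio.mimwrite("render.mp4", frames, fps=30)
```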
## How to Use (in Colab)
1. **Open the Colab notebook**
   > [▶️ Click here to run in Colab](https://colab.research.google.com/drive/127g5BFZoHsj4dt6nspENpLUN6RRX9Ldr?usp=sharing)
2. **Upload an image** (preferably product-style or with a clear foreground)
3. **Run all cells**
   Sit back and let the notebook:
   - Process the image
   - Run the TripoSR model
   - Render 360° views
   - Export a `.obj` 3D mesh
4. **Preview your result!**
   A video preview will auto-play inside the notebook 🎬
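Under the hood, the auto-playing preview in step 4 can be as simple as embedding the rendered MP4 with IPython's display utilities (the path assumes the default output layout shown in the next section):

```python
from IPython.display import Video, display

# Embed the turntable video directly in the notebook output.
display(Video("output/0/render.mp4", embed=True, width=480))
```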
## Output Structure
After running the notebook, your `output/` folder will look like this:
```
output/
└── 0/
    ├── input.png        # Processed input image
    ├── render_000.png   # 30 rendered views
    ├── render_001.png
    ├── ...
    ├── render.mp4       # Turntable video
    └── mesh.obj         # Exported mesh
```
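If you want a quick sanity check on the exported mesh beyond the video, `trimesh` (a TripoSR dependency) can load and inspect it. This snippet is illustrative and not part of the notebook:

```python
import trimesh

# force="mesh" flattens any scene wrapper into a single Trimesh object.
mesh = trimesh.load("output/0/mesh.obj", force="mesh")
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces, "
      f"watertight: {mesh.is_watertight}")

# In a notebook, show() renders an interactive three.js viewer inline.
mesh.show()
```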
## References
- [TripoSR on Hugging Face](https://huggingface.co/stabilityai/TripoSR)
- [Original PyImageSearch tutorial](https://pyimagesearch.com/2024/11/25/create-a-3d-object-from-your-images-with-triposr-in-python/)
- [Paper: TripoSR: Ultra-Fast 3D Reconstruction from a Single Image](https://arxiv.org/pdf/2403.02151)
## Future Plans
- [ ] Wrap this into an end-to-end web app
- [ ] Dockerize for easier deployment
- [ ] Add sample gallery + download links