https://github.com/hujiecpp/PE3R
PE3R: Perception-Efficient 3D Reconstruction. Take 2 - 3 photos with your phone, upload them, wait a few minutes, and then start exploring your 3D world via text!
- Host: GitHub
- URL: https://github.com/hujiecpp/PE3R
- Owner: hujiecpp
- License: cc0-1.0
- Created: 2025-03-06T08:06:49.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2025-03-12T07:10:38.000Z (about 1 month ago)
- Last Synced: 2025-03-12T08:22:45.716Z (about 1 month ago)
- Language: Python
- Homepage:
- Size: 15.8 MB
- Stars: 149
- Watchers: 5
- Forks: 4
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-and-novel-works-in-slam
README
# PE3R: Perception-Efficient 3D Reconstruction
(Figure: PE3R reconstructs 3D scenes using only 2D images and enables semantic understanding through language.)
> **PE3R: Perception-Efficient 3D Reconstruction**
> [Jie Hu](https://hujiecpp.github.io), [Shizun Wang](https://littlepure2333.github.io/home/), [Xinchao Wang](https://sites.google.com/site/sitexinchaowang/)
> [xML Lab, National University of Singapore](https://sites.google.com/view/xml-nus/home?authuser=0)
> 📔 [[paper]](https://arxiv.org/abs/2503.07507) 🎥 [[video]](https://youtu.be/iFRijE4GQv4) 🤗 [[demo]](https://huggingface.co/spaces/hujiecpp/PE3R)

### Why PE3R
* 🚀 Input efficiency: operates solely on 2D images.
* 🚀 Time efficiency: accelerated 3D semantic reconstruction.
* 🚀 Generalizability: zero-shot generalization across scenes and objects.

### Quick Start
#### Install
```bash
conda create --name pe3r python=3.11  # Python version is not pinned in the README; any recent 3.x should work
conda activate pe3r
git clone https://github.com/hujiecpp/PE3R.git
cd PE3R
pip install -r requirements.txt
```
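PE3R builds on GPU-heavy components such as DUSt3R/MASt3R and SAM (see Acknowledgements), so it is worth checking that the freshly created environment can see a CUDA device before launching the demo. This is an optional sanity check, not part of the official instructions, and it assumes `requirements.txt` pulls in PyTorch:

```python
# Optional sanity check (assumes requirements.txt installs PyTorch);
# run inside the activated `pe3r` environment.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```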
#### Usage
```bash
python pe3r_demo.py
```
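The hosted demo linked above runs as a Hugging Face Space, so the same pipeline can also be driven programmatically with `gradio_client`. The sketch below only takes the Space name from the demo link; the endpoint name and argument layout are assumptions, so query `view_api()` for the real signature before calling `predict()`:

```python
# Minimal sketch for scripting the hosted demo (Hugging Face Space "hujiecpp/PE3R").
# The Space name comes from the demo link above; the endpoint name and inputs
# below are assumptions -- check client.view_api() for the actual interface.
from gradio_client import Client, handle_file

client = Client("hujiecpp/PE3R")
client.view_api()  # prints the Space's endpoints and their parameters

# Hypothetical call shape: upload 2-3 photos, then query the scene with text.
# Replace api_name and the arguments with what view_api() reports.
# result = client.predict(
#     [handle_file("photo_1.jpg"), handle_file("photo_2.jpg")],
#     "the red chair",          # example text query
#     api_name="/reconstruct",  # hypothetical endpoint name
# )
```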
### Acknowledgements

- [DUSt3R](https://github.com/naver/dust3r) / [MASt3R](https://github.com/naver/mast3r)
- [SAM](https://github.com/facebookresearch/segment-anything) / [SAM2](https://github.com/facebookresearch/sam2) / [MobileSAM](https://github.com/ChaoningZhang/MobileSAM)
- [SigLIP](https://github.com/google-research/big_vision)