# PE3R: Perception-Efficient 3D Reconstruction

PE3R reconstructs 3D scenes using only 2D images and enables semantic understanding through language. Take 2-3 photos with your phone, upload them, wait a few minutes, and then start exploring your 3D world via text!

> **PE3R: Perception-Efficient 3D Reconstruction**
> [Jie Hu](https://hujiecpp.github.io), [Shizun Wang](https://littlepure2333.github.io/home/), [Xinchao Wang](https://sites.google.com/site/sitexinchaowang/)
> [xML Lab, National University of Singapore](https://sites.google.com/view/xml-nus/home?authuser=0)
> 📔 [[paper]](https://arxiv.org/abs/2503.07507) 🎥 [[video]](https://youtu.be/iFRijE4GQv4) 🤗 [[demo]](https://huggingface.co/spaces/hujiecpp/PE3R)

### Why PE3R
* 🚀 Input efficiency: Operates solely on 2D images.
* 🚀 Time efficiency: Accelerated 3D semantic reconstruction.
* 🚀 Generalizability: Zero-shot generalization across scenes and objects.

### Quick Start
#### Install
```bash
conda create --name pe3r
conda activate pe3r
git clone https://github.com/hujiecpp/PE3R.git
cd PE3R
pip install -r requirements.txt
```
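
If the PyTorch build pulled in by `requirements.txt` does not match your GPU driver (assuming the requirements pin `torch`, which is typical for reconstruction code), it usually helps to install a CUDA-matched build first. The CUDA tag below is illustrative, not a version pinned by this repo:
```bash
# Illustrative only: pick the index URL that matches your CUDA setup
# (see pytorch.org for the current compatibility matrix).
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```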
#### Usage
```bash
python pe3r_demo.py
```
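
The hosted demo linked above lives on Hugging Face Spaces. Spaces are plain git repositories, so if you want to inspect or run the hosted demo's code locally, you can clone it directly (assuming the Space is public):
```bash
# Hugging Face Spaces are git repositories; this fetches the hosted demo's code.
git clone https://huggingface.co/spaces/hujiecpp/PE3R pe3r-space
```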

### Acknowledgements
- [DUSt3R](https://github.com/naver/dust3r) / [MASt3R](https://github.com/naver/mast3r)
- [SAM](https://github.com/facebookresearch/segment-anything) / [SAM2](https://github.com/facebookresearch/sam2) / [MobileSAM](https://github.com/ChaoningZhang/MobileSAM)
- [SigLIP](https://github.com/google-research/big_vision)