Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/zju3dv/neumesh
Code for "MeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing", ECCV 2022 Oral
3d-vision nerf neural-rendering
- Host: GitHub
- URL: https://github.com/zju3dv/neumesh
- Owner: zju3dv
- License: mit
- Created: 2022-07-23T16:21:07.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-07-16T17:03:21.000Z (4 months ago)
- Last Synced: 2024-07-16T20:57:44.723Z (4 months ago)
- Topics: 3d-vision, nerf, neural-rendering
- Language: Python
- Homepage: https://zju3dv.github.io/neumesh/
- Size: 11.8 MB
- Stars: 380
- Watchers: 26
- Forks: 13
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-NeRF - [NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing](https://arxiv.org/pdf/2207.11911.pdf) | [Project Page](https://zju3dv.github.io/neumesh/) | Torch (Papers / NeRF Related Tasks)
README
# NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing
### [Project Page](https://zju3dv.github.io/neumesh/) | [Video](https://www.youtube.com/watch?v=8Td3Oy7y_Sc) | [Paper](http://www.cad.zju.edu.cn/home/gfzhang/papers/neumesh/neumesh.pdf)
> [NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing](http://www.cad.zju.edu.cn/home/gfzhang/papers/neumesh/neumesh.pdf)
>
> [Bangbang Yang](https://ybbbbt.com) and [Chong Bao](https://github.com/1612190130/) (joint first authors), [Junyi Zeng](https://github.com/LangHiKi/), [Hujun Bao](http://www.cad.zju.edu.cn/home/bao/), [Yinda Zhang](https://www.zhangyinda.com/), [Zhaopeng Cui](https://zhpcui.github.io/), [Guofeng Zhang](http://www.cad.zju.edu.cn/home/gfzhang/).
>
> ECCV 2022 Oral
## Installation
We have tested the code with Python 3.8.0 and PyTorch 1.8.1; newer versions of PyTorch should also work.
The installation steps are as follows:

* create the virtual environment: `conda env create --file environment.yml`
* install pytorch 1.8.1: `pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html`
* install [open3d **development**](http://www.open3d.org/docs/latest/getting_started.html) version: `pip install [open3d development package url]`
* install [FRNN](https://github.com/lxxue/FRNN), a fixed-radius nearest neighbors search implemented in CUDA (a quick import check is sketched below).
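After installation, an import check along the following lines can confirm that the main dependencies are usable. This is only a sketch: the `frnn` import name is assumed from the FRNN repository, and the CUDA check simply reflects that FRNN is a CUDA implementation.

```python
# Minimal environment sanity check (sketch). The `frnn` import name is assumed
# from the FRNN repository; adjust it if your installation differs.
import torch
import open3d as o3d
import frnn  # fixed-radius nearest neighbors on CUDA

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("Open3D:", o3d.__version__)

# FRNN runs on the GPU, so a CUDA device should be visible.
assert torch.cuda.is_available(), "A CUDA-capable GPU is required."
```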
## Data

We use the DTU data of the [NeuS version](https://github.com/Totoro97/NeuS) and the [NeRF synthetic data](https://www.dropbox.com/scl/fi/i06pz7b6frvfvqtzmv74o/nerf_synthetic.zip?rlkey=je4q2vfen166jcxrqw86nfbqj&st=9s62dxiz&dl=0).

[Update]: We have released the [test image names](https://www.dropbox.com/scl/fo/o7vvobhspv6r2uw08p4hm/AGUlk0SpTLHme8cFcOoSKm0?rlkey=7s15mi3qr0ku85xmwqq6i2157&st=6iwn840t&dl=0) used with our pre-trained models on the DTU dataset; these images were randomly selected for evaluating PSNR/SSIM/LPIPS. Each sequence has a `val_names.txt` that lists the names of its test images.
P.S. Please set `intrinsic_from_cammat: True` for `hotdog`, `chair`, and `mic` if you use the provided NeRF synthetic dataset.
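For clarity, a small sketch of how the released test split could be consumed is shown below; the directory layout (`data/DTU/scan63` with an `image` folder) is an assumption and should be adapted to wherever your data actually lives.

```python
# Hypothetical sketch: collect the held-out test images named in a sequence's
# val_names.txt. The paths below are assumptions, not the repository's layout.
from pathlib import Path

seq_dir = Path("data/DTU/scan63")           # hypothetical sequence directory
val_names = (seq_dir / "val_names.txt").read_text().split()

image_dir = seq_dir / "image"               # assumed image folder name
val_images = [image_dir / name for name in val_names]
print(f"{len(val_images)} held-out images for PSNR/SSIM/LPIPS")
```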
## Train
Here we show how to run our code on one example scene.
Note that `data_dir` should be specified in `configs/*.yaml`.

1. Train the teacher network (NeuS) from multi-view images.
```bash
python train.py --config configs/neus_dtu_scan63.yaml
```
2. Extract a triangle mesh from a trained teacher network (a quick way to inspect the result is sketched after this list).
```bash
python extract_mesh.py --config configs/neus_dtu_scan63.yaml --ckpt_path logs/neus_dtuscan63/ckpts/latest.pt --output_dir out/neus_dtuscan63/mesh
```
3. Train NeuMesh from multi-view images and the teacher network. Note that `prior_mesh`, `teacher_ckpt`, and `teacher_config` should be specified in the `neumesh*.yaml`.
```bash
python train.py --config configs/neumesh_dtu_scan63.yaml
```
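As mentioned in step 2, it can be worth inspecting the extracted mesh before using it as the `prior_mesh` for NeuMesh. The sketch below loads it with Open3D; the filename inside the `--output_dir` is an assumption.

```python
# Optional sanity check after step 2: load the extracted mesh with Open3D.
# The exact filename inside out/neus_dtuscan63/mesh is an assumption; adjust it.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("out/neus_dtuscan63/mesh/mesh.ply")
print(f"{len(mesh.vertices)} vertices, {len(mesh.triangles)} triangles")

mesh.compute_vertex_normals()
o3d.visualization.draw_geometries([mesh])  # visual check (requires a display)
```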
## Evaluation

Here we provide all [pre-trained models](https://www.dropbox.com/scl/fo/3pmq3139vtifnaak3h41a/AMOH18OVsLBp9M72WyPjitI?rlkey=q77k3bbkl1bcil3qrvrsvz7se&st=ay9fm5t8&dl=0) for the DTU and NeRF synthetic datasets.
You can evaluate images with the trained models.
```bash
python -m render --config configs/neumesh_dtu_scan63.yaml --load_pt logs/neumesh_dtuscan63/ckpts/latest.pt --camera_path spiral --background 1 --test_frame 24 --spiral_rad 1.2
```
P.S. If inference takes too long, the `--downscale` option can be enabled for acceleration.
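For a quick numeric check of a rendered frame against its ground-truth image, PSNR can be computed as below. This is only a sketch: both file paths are hypothetical placeholders and the repository's own evaluation pipeline may differ.

```python
# Rough PSNR check between a rendered frame and its ground-truth image
# (8-bit RGB assumed). Both paths are hypothetical placeholders.
import numpy as np
import imageio.v2 as imageio

def psnr(pred: np.ndarray, gt: np.ndarray) -> float:
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

pred = imageio.imread("logs/neumesh_dtuscan63/render/000024.png")  # hypothetical
gt = imageio.imread("data/DTU/scan63/image/000024.png")            # hypothetical
print(f"PSNR: {psnr(pred, gt):.2f} dB")
```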
## Manipulation
Please refer to [`editing/README.md`](editing/README.md).

## Citing
```
@inproceedings{neumesh,
title={NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing},
author={{Chong Bao and Bangbang Yang} and Zeng Junyi and Bao Hujun and Zhang Yinda and Cui Zhaopeng and Zhang Guofeng},
booktitle={European Conference on Computer Vision (ECCV)},
year={2022}
}
```
Note: joint first-authorship is not really supported in BibTeX; you may need to modify the entry above if you are not using CVPR's format. For the SIGGRAPH (or ACM) format, you can try the following:
```
@inproceedings{neumesh,
title={NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing},
author={{Bao and Yang} and Zeng Junyi and Bao Hujun and Zhang Yinda and Cui Zhaopeng and Zhang Guofeng},
booktitle={European Conference on Computer Vision (ECCV)},
year={2022}
}
```
## Acknowledgement
In this project we use parts of the implementations of the following works:

* [NeuS](https://github.com/Totoro97/NeuS) by Peng Wang
* [neurecon](https://github.com/ventusff/neurecon) by ventusff

We thank the respective authors for open sourcing their methods.