https://github.com/dsaurus/nerf-test

# NeRF-pytorch

[NeRF](http://www.matthewtancik.com/nerf) (Neural Radiance Fields) is a method that achieves state-of-the-art results for synthesizing novel views of complex scenes. Here are some videos generated by this repository (pre-trained models are provided below):

![](https://user-images.githubusercontent.com/7057863/78472232-cf374a00-7769-11ea-8871-0bc710951839.gif)
![](https://user-images.githubusercontent.com/7057863/78472235-d1010d80-7769-11ea-9be9-51365180e063.gif)

This project is a faithful PyTorch implementation of [NeRF](http://www.matthewtancik.com/nerf) that **reproduces** the results while running **1.3 times faster**. The code is based on the authors' TensorFlow implementation [here](https://github.com/bmild/nerf) and has been tested to match it numerically.

## Installation

```
cd nerf-pytorch
pip install -r requirements.txt
cd torchsearchsorted
pip install .
cd ../
```


## Dependencies
- PyTorch 1.4
- matplotlib
- numpy
- imageio
- imageio-ffmpeg
- configargparse

## How To Run?

### Quick Start

To train a `multihuman` NeRF:
```
python run_nerf.py --config configs/multihuman.txt
```

To test the NeRF by rendering the test dataset:
```
python run_nerf.py --config configs/multihuman.txt --render_only --render_test
```
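The file passed via `--config` is a plain-text configargparse file. The repository's actual `configs/multihuman.txt` is not reproduced here; the sketch below uses keys common to nerf-pytorch config files (e.g. `configs/lego.txt` upstream), and the specific paths and values are illustrative assumptions:

```
expname = multihuman_test
basedir = ./logs
datadir = ./data/multihuman
dataset_type = blender

no_batching = True
use_viewdirs = True
white_bkgd = True
lrate_decay = 500

N_samples = 64
N_importance = 128
N_rand = 1024
```

Any of these keys can also be overridden on the command line, e.g. `python run_nerf.py --config configs/multihuman.txt --N_rand 512`.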
---
## Method

[NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis](http://tancik.com/nerf)
[Ben Mildenhall](https://people.eecs.berkeley.edu/~bmild/)\*<sup>1</sup>,
[Pratul P. Srinivasan](https://people.eecs.berkeley.edu/~pratul/)\*<sup>1</sup>,
[Matthew Tancik](http://tancik.com/)\*<sup>1</sup>,
[Jonathan T. Barron](http://jonbarron.info/)<sup>2</sup>,
[Ravi Ramamoorthi](http://cseweb.ucsd.edu/~ravir/)<sup>3</sup>,
[Ren Ng](https://www2.eecs.berkeley.edu/Faculty/Homepages/yirenng.html)<sup>1</sup>

<sup>1</sup>UC Berkeley, <sup>2</sup>Google Research, <sup>3</sup>UC San Diego
\*denotes equal contribution

> A neural radiance field is a simple fully connected network (weights are ~5MB) trained to reproduce input views of a single scene using a rendering loss. The network directly maps from spatial location and viewing direction (5D input) to color and opacity (4D output), acting as the "volume", so we can use volume rendering to differentiably render new views.
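The quoted description can be made concrete. Given per-sample densities σ and colors along a ray, volume rendering composites them with weights w_i = T_i (1 − exp(−σ_i δ_i)), where T_i is the accumulated transmittance and δ_i the spacing between samples. Below is a minimal NumPy sketch of this compositing step for a single ray; it is an illustration of the formula, not the repository's own implementation:

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite samples along one ray (NeRF volume rendering).

    sigmas: (N,) non-negative densities
    colors: (N, 3) RGB values in [0, 1]
    deltas: (N,) distances between consecutive samples
    """
    # Opacity contributed by each segment of the ray.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                       # sums to <= 1
    rgb = (weights[:, None] * colors).sum(axis=0)  # rendered pixel color
    acc = weights.sum()                            # accumulated opacity
    return rgb, acc

# A fully opaque first sample blocks everything behind it:
sigmas = np.array([1e6, 1.0])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
deltas = np.array([0.1, 0.1])
rgb, acc = composite_ray(sigmas, colors, deltas)
```

Because every term is differentiable, gradients from a rendering loss flow back through the weights into the network that predicted `sigmas` and `colors`.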

## Citation
Kudos to the authors for their amazing results:
```
@misc{mildenhall2020nerf,
  title={NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis},
  author={Ben Mildenhall and Pratul P. Srinivasan and Matthew Tancik and Jonathan T. Barron and Ravi Ramamoorthi and Ren Ng},
  year={2020},
  eprint={2003.08934},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

However, if you find this implementation or the pre-trained models helpful, please consider citing:
```
@software{pytorchnerf2020github,
  author = {Yen-Chen, Lin},
  title = {{PyTorchNeRF}: a {PyTorch} implementation of {NeRF}},
  url = {https://github.com/yenchenlin/nerf-pytorch/},
  version = {0.0},
  year = {2020},
}
```