
# OccNeRF: Rendering Humans from Object-Occluded Monocular Videos, ICCV 2023
Project Page: https://cs.stanford.edu/~xtiange/projects/occnerf/
Paper: https://arxiv.org/pdf/2308.04622.pdf

![framework](./teaser.png)

## Prerequisite

### `Configure environment`

1. Create and activate a virtual environment.

   ```
   conda create --name occnerf python=3.7
   conda activate occnerf
   ```

2. Install the required packages (the same set required by HumanNeRF).

   ```
   pip install -r requirements.txt
   ```

3. Install [PyTorch3D](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md) by following its official instructions.

4. Build `gridencoder` by following the instructions provided in [torch-ngp](https://github.com/ashawkey/torch-ngp/tree/main#build-extension-optional).
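After the four steps above, a quick way to confirm the environment resolved is to probe for the key packages without importing them. This is a hypothetical convenience, not part of the repo; the package names are taken from the installation steps above.

```python
# Hypothetical sanity check (not part of the OccNeRF repo): report which of
# the expected packages are missing from the active environment.
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be found."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    # torch, pytorch3d, and the built gridencoder extension are the usual
    # failure points after the installation steps above.
    print(missing_packages(["torch", "pytorch3d", "gridencoder"]))
```

An empty list means all three dependencies are importable.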

### `Download SMPL model`

Download the gender-neutral SMPL model from [here](https://smplify.is.tue.mpg.de/), and unpack **mpips_smplify_public_v2.zip**.

Copy the SMPL model:

```
SMPL_DIR=/path/to/smpl
MODEL_DIR=$SMPL_DIR/smplify_public/code/models
cp $MODEL_DIR/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl third_parties/smpl/models
```
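If the copy succeeded, the pickle should now sit under `third_parties/smpl/models`. A small hypothetical check of that expectation (not part of the repo; the filename matches the copy command above):

```python
# Hypothetical check (not in the OccNeRF repo) that the gender-neutral SMPL
# pickle ended up where the config expects it.
import os
import tempfile

def smpl_model_ready(root="third_parties/smpl/models"):
    """True once the gender-neutral SMPL pickle is in place under root."""
    return os.path.isfile(
        os.path.join(root, "basicModel_neutral_lbs_10_207_0_v1.0.0.pkl"))

# demonstrate with a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "basicModel_neutral_lbs_10_207_0_v1.0.0.pkl"), "wb").close()
    print(smpl_model_ready(d))  # → True
```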

Follow [this page](https://github.com/vchoutas/smplx/tree/master/tools) to remove Chumpy objects from the SMPL model.

## Run on ZJU-Mocap Dataset

Below, we use subject 387 as a running example.

### `Prepare the ZJU-Mocap dataset`

First, download the ZJU-Mocap dataset from [here](https://github.com/zju3dv/neuralbody/blob/master/INSTALL.md#zju-mocap-dataset).

Second, modify the yaml file of subject 387 at `tools/prepare_zju_mocap/387.yaml`. In particular, `zju_mocap_path` should be the directory path of the ZJU-Mocap dataset.

```yaml
dataset:
  zju_mocap_path: /path/to/zju_mocap
  subject: '387'
  sex: 'neutral'

...
```
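If you would rather script the edit than open the yaml by hand, a minimal sketch (hypothetical, not part of the repo) that rewrites only the `zju_mocap_path` line and leaves the rest of the file untouched, so no yaml library is needed:

```python
# Hypothetical convenience (not in the OccNeRF repo): point zju_mocap_path
# at the downloaded dataset with a plain line rewrite. Assumes the key
# appears exactly as shown in the snippet above.
def set_mocap_path(yaml_text, new_path):
    out = []
    for line in yaml_text.splitlines():
        if line.strip().startswith("zju_mocap_path:"):
            indent = line[: len(line) - len(line.lstrip())]
            line = indent + "zju_mocap_path: " + new_path
        out.append(line)
    return "\n".join(out)

# demonstrate on an in-memory snippet:
demo = "dataset:\n  zju_mocap_path: /path/to/zju_mocap\n  subject: '387'"
print(set_mocap_path(demo, "/data/zju_mocap"))
```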

Finally, run the data preprocessing script.

```
cd tools/prepare_zju_mocap
python prepare_dataset.py --cfg 387.yaml
cd ../../
```

### `Prepare the two OcMotion sequences`

Download the two preprocessed sequences used in the paper from [here](https://drive.google.com/drive/folders/1xH9dvrA7_-pCCF29vTKs7YAoFTVpnpoR?usp=sharing).

Please respect OcMotion's original [usage license](https://github.com/boycehbz/CHOMP).

### `Train`

To train a model:

```
python train.py --cfg configs/occnerf/zju_mocap/387/occnerf.yaml
```

Alternatively, you can adapt the provided training script `train.sh`.

### `Render output`

Render the input frames (i.e., the observed motion sequence):

```
python run.py \
    --type movement \
    --cfg configs/occnerf/zju_mocap/387/occnerf.yaml
```

Run free-viewpoint rendering on a particular frame (e.g., frame 128):

```
python run.py \
    --type freeview \
    --cfg configs/occnerf/zju_mocap/387/occnerf.yaml \
    freeview.frame_idx 128
```

Render the learned canonical appearance (T-pose):

```
python run.py \
    --type tpose \
    --cfg configs/occnerf/zju_mocap/387/occnerf.yaml
```
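All three render modes share one CLI shape, so batch jobs (e.g. sweeping free-viewpoint rendering over several frames) reduce to formatting argument lists. A hypothetical helper, not part of the repo, assuming only the flags shown in the commands above:

```python
# Hypothetical batching helper (not part of the OccNeRF repo): build run.py
# command lines for the render modes shown above. Only the flags that appear
# in the examples are assumed.
CFG = "configs/occnerf/zju_mocap/387/occnerf.yaml"

def run_cmd(render_type, cfg=CFG, frame_idx=None):
    cmd = ["python", "run.py", "--type", render_type, "--cfg", cfg]
    if frame_idx is not None:  # only the freeview mode takes a frame index
        cmd += ["freeview.frame_idx", str(frame_idx)]
    return cmd

# e.g. free-viewpoint renders for a few frames (pass each list to subprocess.run):
for idx in (0, 64, 128):
    print(" ".join(run_cmd("freeview", frame_idx=idx)))
```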

In addition, you can find the rendering scripts in `scripts/zju_mocap`.

## Acknowledgement

The implementation is based on [HumanNeRF](https://github.com/chungyiweng/humannerf/tree/main), which in turn took reference from [Neural Body](https://github.com/zju3dv/neuralbody), [Neural Volume](https://github.com/facebookresearch/neuralvolumes), [LPIPS](https://github.com/richzhang/PerceptualSimilarity), and [YACS](https://github.com/rbgirshick/yacs). We thank the authors for generously releasing their code.

## Citation

If you find our work useful, please consider citing:

```BibTeX
@InProceedings{Xiang_2023_OccNeRF,
  author    = {Xiang, Tiange and Sun, Adam and Wu, Jiajun and Adeli, Ehsan and Li, Fei-Fei},
  title     = {Rendering Humans from Object-Occluded Monocular Videos},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023}
}
```