# TensoIR: Tensorial Inverse Rendering (CVPR 2023)

## [Project Page](https://haian-jin.github.io/TensoIR/) | [Paper](https://arxiv.org/abs/2304.12461)

This repository contains a PyTorch implementation of the paper [TensoIR: Tensorial Inverse Rendering](https://arxiv.org/abs/2304.12461).

**The code runs correctly, but it is not yet well organized. I may reorganize it when I have time.**

https://user-images.githubusercontent.com/79512936/235218355-0d4177c1-7614-4772-a8ec-44d76a95743f.mp4

#### Tested on Ubuntu 20.04 + PyTorch 1.10.1

Install environment:

```bash
conda create -n TensoIR python=3.8
conda activate TensoIR
pip install torch==1.10 torchvision
pip install tqdm scikit-image opencv-python configargparse lpips imageio-ffmpeg kornia tensorboard loguru plyfile
```
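
To confirm the environment is usable, a quick sanity check (not part of the official setup) is to import PyTorch and report whether CUDA is visible:

```bash
python -c "import torch; print(torch.__version__, 'CUDA available:', torch.cuda.is_available())"
```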

## Dataset

### Downloading

**Please download the dataset and environment maps from the links below and put them in the `./data` folder (see the layout sketch after the list):**

* [TensoIR-Synthetic](https://zenodo.org/record/7880113#.ZE68FHZBz18)
We provide the TensoIR-Synthetic dataset for training and testing. The dataset is rendered with Blender and consists of four complex synthetic scenes (ficus, lego, armadillo, and hotdog). We use the same camera settings as NeRFactor, so there are 100 training views and 200 test views. For each view, we provide a normal map, an albedo map, and 11 RGB images rendered under different lighting conditions. The lighting conditions used for quantitative relighting comparisons are 'bridge', 'city', 'fireplace', 'forest', and 'night'. Please use this [link](https://drive.google.com/file/d/10WLc4zk2idf4xGb6nPL43OXTTHvAXSR3/view?usp=share_link) to download the ground-truth relighting environment maps.

**More details about the dataset and our multi-light settings can be found in the supplementary material of our paper.**
* [NeRF-Synthetic](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1)
The original NeRF-Synthetic dataset is not widely used for inverse rendering work, as some of its scenes are not lit entirely by an environment map and some objects' materials cannot be handled well by the simplified BRDF model (as discussed in the "limitations" section of our paper's supplementary material). We nevertheless provide the original NeRF-Synthetic dataset to facilitate further analysis of our work.
* [Environment Maps](https://drive.google.com/file/d/10WLc4zk2idf4xGb6nPL43OXTTHvAXSR3/view?usp=share_link)
The folder contains environment maps at two resolutions ($2048 \times 1024$ and $1024 \times 512$). We use the lower-resolution environment maps for relighting tests because of limited GPU memory, even though the ground-truth data is rendered with the high-resolution maps. You can use the higher-resolution maps for relighting tests if you have enough GPU memory.
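
After downloading, extract everything into `./data`. A minimal sketch of the steps is below; the archive names are assumptions based on the links above, so substitute the filenames you actually downloaded:

```bash
mkdir -p ./data
# Archive names are illustrative; use the files downloaded from Zenodo / Google Drive.
unzip TensoIR_Synthetic.zip -d ./data
unzip Environment_Maps.zip -d ./data
ls ./data   # the scene folders and environment maps should now appear here
```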

### Generating your own synthetic dataset

We provide code for generating your own synthetic dataset from your own Blender files and Blender software. Please download this [file](https://drive.google.com/file/d/1mnh81gvxSzCl_2-S2jAnXxsyZSpz0Kga/view?usp=sharing) and follow the readme.md inside it to render your own dataset. The Blender rendering scripts rely heavily on code provided by [NeRFactor](https://github.com/google/nerfactor). Thanks to its authors for their great work!

## Training

### Note:

1. After finishing all training iterations, the training script will automatically render all test images under the learned lighting condition and save them in the log folder. It will also compute all metrics related to geometry, materials, and novel view synthesis (except relighting) and save those results in the log folder as well.
2. Different scenes have different config files. The main difference between them is the value of `normals_diff_weight`, which controls how closely the predicted normals should match the derived normals. A larger weight helps prevent the normals prediction from overfitting the supervised colors, but it also limits the normals prediction network's ability to capture high-frequency details. **We recommend trying three values, `0.0005`, `0.002`, and `0.005`, when you train TensoIR on your own dataset (see the sketch after this list).**
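
A minimal sketch of such a sweep is shown below. It assumes that, as is usual with `configargparse`, the `normals_diff_weight` config key can also be overridden on the command line; the `--expname` override is likewise an assumption. If these flags differ in your checkout, edit the config files directly instead:

```bash
# Hypothetical sweep over the recommended normals_diff_weight values.
for w in 0.0005 0.002 0.005; do
    export PYTHONPATH=. && python train_tensoIR.py \
        --config ./configs/single_light/armadillo.txt \
        --normals_diff_weight "$w" \
        --expname "armadillo_ndw_${w}"   # keep runs from overwriting each other
done
```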

### Pre-trained checkpoints and results

* [Checkpoints](https://drive.google.com/file/d/1kGCuXo64n_35jjWTG9fHQEhvHQx8ABch/view?usp=sharings)
* [Results](https://drive.google.com/drive/folders/1bRCiXIs-0wcNm3MNIYihpVKQoPxFPzI8?usp=drive_link)

### Training under single lighting condition

```bash
export PYTHONPATH=. && python train_tensoIR.py --config ./configs/single_light/armadillo.txt
```

### Training under rotated multi-lighting conditions

```bash
export PYTHONPATH=. && python train_tensoIR_rotated_multi_lights.py --config ./configs/multi_light_rotated/hotdog.txt
```

### Training under general multi-lighting conditions

```bash
export PYTHONPATH=. && python train_tensoIR_general_multi_lights.py --config ./configs/multi_light_general/ficus.txt
```

### (Optional) Training for the original NeRF-Synthetic dataset

We don't report quantitative or qualitative comparisons on the original NeRF-Synthetic dataset in our paper (for the reasons discussed above), but you can still train TensoIR on it for your own analysis.

```bash
export PYTHONPATH=. && python train_tensoIR_simple.py --config ./configs/single_light/blender.txt
```

## Testing and Validation

### Rendering with a pre-trained model under the learned lighting condition

```bash
export PYTHONPATH=. && python "$training_file" --config "$config_path" --ckpt "$ckpt_path" --render_only 1 --render_test 1
```

`"$training_file"` is the training script you used for training, e.g. `train_tensoIR.py` or `train_tensoIR_rotated_multi_lights.py` or `train_tensoIR_general_multi_lights.py`.

`"$config_path"` is the path to the config file you used for training, e.g. `./configs/single_light/armadillo.txt` or `./configs/multi_light_rotated/hotdog.txt` or `./configs/multi_light_general/ficus.txt`.

`"$ckpt_path"` is the path to the checkpoint you want to test.

The results will be stored under the `--basedir` defined in the config file.
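
For example, to re-render the armadillo scene trained under a single lighting condition (the checkpoint path below is illustrative; substitute your own trained or downloaded checkpoint):

```bash
export PYTHONPATH=. && python train_tensoIR.py \
    --config ./configs/single_light/armadillo.txt \
    --ckpt ./log/armadillo_single_light/armadillo_single_light.th \
    --render_only 1 --render_test 1
```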

### Relighting with a pre-trained model under unseen lighting conditions

```bash
export PYTHONPATH=. && python scripts/relight_importance.py --ckpt "$ckpt_path" --config configs/relighting_test/"$scene".txt --batch_size 800
```

We do light-intensity importance sampling for relighting. The sampling results are stored in `--geo_buffer_path` defined in the config file.

`"$ckpt_path"` is the path to the checkpoint you want to test.

`"$scene"` is the name of the scene you want to relight, e.g. `armadillo` or `ficus` or `hotdog` or `lego`.

Reduce the `--batch_size` if you have limited GPU memory.

Line 370 of `scripts/relight_importance.py` specifies the names of the environment maps used for relighting. Change it if you want to test other unseen lighting conditions.
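
For example (the checkpoint path is again illustrative):

```bash
export PYTHONPATH=. && python scripts/relight_importance.py \
    --ckpt ./log/lego_single_light/lego_single_light.th \
    --config configs/relighting_test/lego.txt \
    --batch_size 400
```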

### Extracting mesh

```bash
export PYTHONPATH=. && python scripts/export_mesh.py --ckpt "$ckpt_path"
```

The extracted mesh will be stored in the same folder as the checkpoint.
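
Since `plyfile` is already part of the environment, and assuming the exported mesh is written as a `.ply` file (an assumption; check the actual filename in the checkpoint folder), a quick sanity check is:

```bash
# The path is illustrative; point it at the mesh file that export_mesh.py actually writes.
python -c "from plyfile import PlyData; m = PlyData.read('./log/armadillo_single_light/mesh.ply'); print('vertices:', m['vertex'].count)"
```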

## Citations

If you find our code or paper helpful, please consider citing:

```bibtex
@inproceedings{Jin2023TensoIR,
  title     = {TensoIR: Tensorial Inverse Rendering},
  author    = {Jin, Haian and Liu, Isabella and Xu, Peijia and Zhang, Xiaoshuai and Han, Songfang and Bi, Sai and Zhou, Xiaowei and Xu, Zexiang and Su, Hao},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2023}
}
```

## Acknowledgement

The code was built on [TensoRF](https://github.com/apchenstu/TensoRF). Thanks for this great project!