https://github.com/snap-research/unsupervised-volumetric-animation
The repository for paper Unsupervised Volumetric Animation
- Host: GitHub
- URL: https://github.com/snap-research/unsupervised-volumetric-animation
- Owner: snap-research
- License: other
- Created: 2023-01-25T21:07:34.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-09-22T16:57:12.000Z (over 1 year ago)
- Last Synced: 2025-03-27T14:22:01.004Z (19 days ago)
- Language: Python
- Size: 91 MB
- Stars: 69
- Watchers: 25
- Forks: 1
- Open Issues: 4
- Metadata Files:
    - Readme: README.md
    - License: LICENSE.md
Awesome Lists containing this project
- awesome-avatar - Unsupervised-Volumetric-Animation CVPR'23, listed alongside related animation works: [SGAN ECCV'22](https://arxiv.org/abs/2112.01422), [Articulated-Animation CVPR'21](https://arxiv.org/abs/2104.11280), [Monkey-Net CVPR'19](https://arxiv.org/abs/1812.08861), [FOMM NeurIPS'19](http://papers.nips.cc/paper/8935-first-order-motion-model-for-image-animation) (Researchers and labs)
README
# Unsupervised Volumetric Animation
This repository contains the source code for the CVPR'2023 paper [Unsupervised Volumetric Animation](https://arxiv.org/abs/2301.11326).
For more qualitative examples, visit our [project page](https://snap-research.github.io/unsupervised-volumetric-animation/). Here is an example of several images produced by our method.
On the left is the sample visualization: the first column shows the driving video; in the remaining columns, the top image is animated using motions extracted from the driving video.
On the right is the rotation visualization: we show the source image as well as the rotated RGB, depth, normals and segments.

*(Figure: Sample animation | Rotation visualization)*

### Installation
We support `python3`. To install the dependencies, run:
```bash
pip install -r requirements.txt
```

### YAML configs
There are several configuration files, one per dataset, in the `config` folder, named `config/dataset_name_stage.yaml`. The configurations are adjusted to run on 8 A100 GPUs.
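
If you are not running on 8 A100 GPUs, you will likely need to lower the batch size. Below is a minimal sketch of loading and tweaking a config with PyYAML; the key names used in the override (e.g. `train_params`, `batch_size`) are assumptions and may differ from the actual YAML layout.

```python
# Minimal sketch: inspect a config and save a locally adjusted copy.
# The key names below (e.g. "train_params", "batch_size") are assumptions;
# check the actual YAML files before overriding anything.
import yaml

with open("config/vox_first_stage.yaml") as f:
    config = yaml.safe_load(f)

print(sorted(config))  # list the top-level sections first

# Hypothetical override for a smaller setup:
# config["train_params"]["batch_size"] = 4

with open("config/vox_first_stage_local.yaml", "w") as f:
    yaml.safe_dump(config, f)
```
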
### Pre-trained checkpoints
Checkpoints can be found under this [link](https://drive.google.com/drive/folders/1RKbzSRRQvJ0bsEMDq1ed9Fk3x8Pw-clE?usp=sharing).
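
As a quick sanity check after downloading, a checkpoint can be opened with plain PyTorch. Here is a minimal sketch; the internal key layout is an assumption, so inspect it rather than relying on specific names.

```python
# Minimal sketch: peek inside a downloaded checkpoint file.
# The internal layout (model weights, optimizer state, epoch, ...) is an
# assumption; print the keys to see what is actually stored.
import torch

checkpoint = torch.load("path/to/last.cpkt", map_location="cpu")
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))
```
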
### Inversion

To run inversion on your own image, use:
```bash
python inversion.py --config config/dataset_name.yaml --driving_video path/to/driving --source_image path/to/source --checkpoint tb-logs/vox_second_stage/{time}/checkpoints/last.cpkt
```
The results can be viewed with TensorBoard.
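
If you prefer to pull the logged results out of the event files programmatically rather than through the TensorBoard UI, here is a small sketch using TensorBoard's `EventAccumulator`; the run directory and image tag names are assumptions, so list the available tags first.

```python
# Minimal sketch: read logged images back out of a TensorBoard run directory.
# The run directory and tag names are assumptions; print the tags to find the real ones.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

run_dir = "tb-logs/vox_second_stage/{time}"  # replace {time} with the actual run folder
accumulator = EventAccumulator(run_dir, size_guidance={"images": 0})
accumulator.Reload()
print(accumulator.Tags()["images"])  # available image tags

# Each image event carries a PNG-encoded string that can be written to disk:
# for event in accumulator.Images("some_tag"):
#     open(f"frame_{event.step}.png", "wb").write(event.encoded_image_string)
```
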
### Training

To train a model, first download the MRAA checkpoints and place them in `./`.
Then run the following commands:

```bash
python train.py --config config/vox_first_stage.yaml
# [Optional] To save time, one could first train at 128 resolution:
python train.py --config config/vox_second_stage_128.yaml --checkpoint tb-logs/vox_first_stage/{time}/checkpoints/last.cpkt
python train.py --config config/vox_second_stage.yaml --checkpoint tb-logs/vox_first_stage/{time}/checkpoints/last.cpkt
```
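
The `{time}` part of the checkpoint paths is the per-run directory created under `tb-logs/`. A small hypothetical helper (not part of the repository) can resolve the most recent checkpoint of a stage:

```python
# Hypothetical helper (not part of the repo): find the newest "last.cpkt"
# produced by a given training stage, to pass via --checkpoint.
from pathlib import Path

def latest_checkpoint(stage: str = "vox_first_stage") -> Path:
    candidates = sorted(
        Path("tb-logs", stage).glob("*/checkpoints/last.cpkt"),
        key=lambda p: p.stat().st_mtime,
    )
    if not candidates:
        raise FileNotFoundError(f"no checkpoints found for {stage}")
    return candidates[-1]

print(latest_checkpoint("vox_first_stage"))
```
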
### Citation

```
@article{siarohin2023unsupervised,
  author  = {Siarohin, Aliaksandr and Menapace, Willi and Skorokhodov, Ivan and Olszewski, Kyle and Lee, Hsin-Ying and Ren, Jian and Chai, Menglei and Tulyakov, Sergey},
  title   = {Unsupervised Volumetric Animation},
  journal = {arXiv preprint arXiv:2301.11326},
  year    = {2023},
}
```