# TEACH: Temporal Action Compositions for 3D Humans

ArXiv PDF · [Project Page](https://teach.is.tue.mpg.de)

Nikos Athanasiou · Mathis Petrovich · Michael J. Black · Gül Varol

3DV 2022

Check our upcoming YouTube video for a quick overview and our paper for more details.

### Video

## Features

This implementation includes:
- Instructions on how to prepare the datasets used in the experiments.
- The training code:
  - for both baselines,
  - for the TEACH method.
- A simple interactive demo that, given text prompts and durations, returns:
  - an `npy` file containing the vertices of the bodies generated by TEACH,
  - a video demonstrating the result.

## Updates

To be uploaded:
- Instructions for the baselines and how to run them.
- Instructions for sampling from and evaluating all of the models with the code.
- The rendering code for the Blender renderings used in the paper.

## Getting Started
TEACH has been implemented and tested on Ubuntu 20.04 with Python >= 3.9.

Clone the repo:
```bash
git clone https://github.com/athn-nik/teach.git
```

Then, install DistilBERT:

```shell
cd deps/
git lfs install
git clone https://huggingface.co/distilbert-base-uncased
cd ..
```

Install the requirements using `virtualenv` :
```bash
# pip
source scripts/install.sh
```
You can do something equivalent with `conda` as well.
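If you prefer `conda`, a minimal equivalent might look like the following sketch (it assumes `scripts/install.sh` simply pip-installs the project's requirements; check that script for the exact steps):

```bash
# Hypothetical conda setup; the environment name and Python version are illustrative.
conda create -n teach python=3.9
conda activate teach
# Run the same install script inside the conda environment:
source scripts/install.sh
```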

## Running the Demo

We have prepared a nice demo to run TEACH on arbitrary text prompts.
First, you need to download the required data (i.e., our trained model) from our [website](https://teach.is.tue.mpg.de).
The `path/to/experiment` directory should look like:

```
experiment
├── .hydra
│   ├── config.yaml
│   ├── overrides.yaml
│   └── hydra.yaml
└── checkpoints
    └── last.ckpt
```

Then, running the demo is as simple as:

```bash
python interact_teach.py folder=/path/to/experiment output=/path/to/yourfname texts='[text prompt1, text prompt2, text prompt3, ...]' durs='[dur1, dur2, dur3, ...]'
```
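For example, a hypothetical run with two consecutive actions (the prompts, durations, and output name below are made up for illustration):

```bash
# Two actions with one duration each; all values are illustrative only.
python interact_teach.py folder=/path/to/experiment output=demo_out \
    texts='[walk forward, sit down]' durs='[2.0, 3.0]'

# Inspect the generated vertices (assuming the result is saved as demo_out.npy;
# depending on how the file was written, np.load may need allow_pickle=True):
python -c "import numpy as np; v = np.load('demo_out.npy'); print(v.shape)"
```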

## Data

Download the data from the [AMASS website](https://amass.is.tue.mpg.de). Then, run this command to extract the AMASS sequences that are annotated in BABEL:

```shell
python scripts/process_amass.py --input-path /path/to/data --output-path path/of/choice/default_is_/babel/babel-smplh-30fps-male --use-betas --gender male
```

Download the data from the [TEACH website](https://teach.is.tue.mpg.de), after signing in. The data TEACH was trained on is a processed version of BABEL; hence, we provide it directly to you via our website, where you will also find more relevant details.
Finally, download the male SMPL-H body model from the [SMPLX website](https://smpl-x.is.tue.mpg.de/), specifically the AMASS version of the SMPL-H model. Then, follow the instructions [here](https://github.com/vchoutas/smplx/blob/main/tools/README.md#smpl-h-version-used-in-amass) to extract the SMPL-H model in pickle format.

Then run this script, changing the paths inside it accordingly, to extract the different BABEL splits from AMASS:

```shell
python scripts/amass_splits_babel.py
```

Then create a directory named `data` and put the BABEL data and the processed AMASS data in it.
You should end up with a `data` folder with a structure like this:

```
data
├── amass
│   └── your-processed-amass-data
├── babel
│   ├── babel-teach
│   │   └── ...
│   └── babel-smplh-30fps-male
│       └── ...
└── smpl_models
    └── smplh
        └── SMPLH_MALE.pkl
```

Be careful not to push any data!
Then, softlink your data inside this repo:

`ln -s /path/to/data`
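For example, from the repository root (the target path is a placeholder):

```bash
# Creates a symlink named `data` in the repo root pointing at your data directory.
ln -s /path/to/data
ls -l data   # should show: data -> /path/to/data
```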

## Training
To start training, after activating your environment, run:

```shell
python train.py experiment=baseline logger=none
```

Explore `configs/train.yaml` to change basic settings such as where your output is stored, which data to use, or whether to run a small experiment on a subset of the data.
[TODO]: More on this coming soon.
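Since each experiment folder carries a `.hydra` directory, configuration is managed with Hydra, so values from `configs/train.yaml` can also be overridden on the command line. A hypothetical example (`experiment` and `logger` appear above; `hydra.run.dir` is a standard Hydra key, but any other key names must be checked against `configs/train.yaml`):

```bash
# Override config values at launch; key names beyond experiment/logger are assumptions.
python train.py experiment=baseline logger=none hydra.run.dir=outputs/my_small_run
```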

### Sampling & Evaluation

Here are some commands if you want to sample from the validation set and evaluate the metrics reported in the paper:

```shell
python sample_seq.py folder=/path/to/experiment align=full slerp_ws=8
```

In general, the folder is `folder_our////`.
This folder should contain a `checkpoints` directory with a `last.ckpt` file inside, and a `.hydra` directory from which the configuration and the relevant checkpoint will be pulled. This folder is created during training in the output directory, and is provided on our website for the experiments in the paper.

- `align=trans`: aligns only the translation; with `align=full`, the global orientation is aligned as well.
- `slerp_ws`: sets the size of the slerp window; `slerp_ws=null` disables slerp.

Then for the evaluation you should do:

```shell
python eval.py folder=/path/to/experiment align=true slerp=true
```

The two extra parameters select the samples on which the evaluation will be performed.
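Presumably, evaluating samples generated without alignment or slerp means flipping the corresponding flags; this is an assumption based on the `sample_seq.py` options above, so verify it against `eval.py`:

```bash
# Assumed usage: evaluate the unaligned, non-slerped samples.
python eval.py folder=/path/to/experiment align=false slerp=false
```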

### Transition distance

Without the alignment column:

```shell
python compute_td.py folder=/path/to/experiment align_full_bodies=false align_only_trans=true
```

With the alignment column:

```shell
python compute_td.py folder=/path/to/experiment align_full_bodies=true align_only_trans=false
```

[TODO]: More on this coming soon.

## Citation

```bibtex
@inproceedings{TEACH:3DV:2022,
  title = {TEACH: Temporal Action Compositions for 3D Humans},
  author = {Athanasiou, Nikos and Petrovich, Mathis and Black, Michael J. and Varol, G\"{u}l},
  booktitle = {International Conference on 3D Vision (3DV)},
  month = {September},
  year = {2022}
}
```

## License
This code is available for **non-commercial scientific research purposes** as defined in the [LICENSE file](LICENSE). By downloading and using this code you agree to the terms in the [LICENSE](LICENSE). Third-party datasets and software are subject to their respective licenses.

## Acknowledgments
We thank [Benjamin Pellkofer](https://is.mpg.de/person/bpellkofer) for his IT support.

## References
Many parts of this code are based on the official implementation of [TEMOS](https://github.com/Mathux/TEMOS). Here are some great resources we benefited from:

- The SMPL models and layer are from the [SMPL-X model](https://github.com/vchoutas/smplx).

## Contact

This code repository was implemented mainly by [Nikos Athanasiou](https://is.mpg.de/~nathanasiou) with the help of [Mathis Petrovich](https://mathis.petrovich.fr/).

Give a ⭐ if you like this repo.

For commercial licensing (and all related questions for business applications), please contact [email protected].