Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/TaeryungLee/MultiAct_RELEASE
Official PyTorch implementation of "MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels", in AAAI 2023 (Oral presentation).
- Host: GitHub
- URL: https://github.com/TaeryungLee/MultiAct_RELEASE
- Owner: TaeryungLee
- License: mit
- Created: 2023-01-16T08:13:57.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-01-19T04:25:56.000Z (almost 2 years ago)
- Last Synced: 2024-04-05T01:51:58.980Z (7 months ago)
- Language: Python
- Size: 6.04 MB
- Stars: 50
- Watchers: 3
- Forks: 0
- Open Issues: 2
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
README
# **MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels**
## Introduction
This repo is the official **[PyTorch](https://pytorch.org)** implementation of **[MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels](https://arxiv.org/abs/2212.05897)** (AAAI 2023, Oral).

## Quick demo
* Install **[PyTorch](https://pytorch.org)** and Python >= 3.8.13. Run `sh requirements.sh` to install the required Python packages. You should slightly modify the `torchgeometry` kernel code following [here](https://github.com/mks0601/I2L-MeshNet_RELEASE/issues/6#issuecomment-675152527) (see the sketch after this list).
* Download the pre-trained model from [here](https://drive.google.com/file/d/1opOAjbExu1v8_frMOST7SZV7PMD6SP_G/view?usp=share_link) and unzip it into `${ROOT}/output`.
* Prepare BABEL dataset following [here](https://github.com/TaeryungLee/MultiAct_RELEASE#babel-dataset).
* Prepare SMPL-H body model following [here](https://github.com/TaeryungLee/MultiAct_RELEASE#smplh-body-model).
* Run `python generate.py --env gen --gpu 0 --mode gen_short` for the short-term generation.
* Run `python generate.py --env gen --gpu 0 --mode gen_long` for the long-term generation.
* Generated motions are stored in `${ROOT}/output/gen_release/vis/`.
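
For reference, the `torchgeometry` change described in the linked issue is small: on recent PyTorch versions the comparison masks in `torchgeometry/core/conversions.py` (function `rotation_matrix_to_quaternion`) are boolean tensors, so expressions of the form `1 - mask` fail and are replaced with `~mask`. A sketch of the patch as described in the issue (check the issue for the exact lines):

```
# torchgeometry/core/conversions.py, inside rotation_matrix_to_quaternion()
# Replace the (1 - mask) combinations with bitwise negation on boolean masks:
mask_c0 = mask_d2 * mask_d0_d1
mask_c1 = mask_d2 * ~mask_d0_d1          # was: mask_d2 * (1 - mask_d0_d1)
mask_c2 = ~mask_d2 * mask_d0_nd1         # was: (1 - mask_d2) * mask_d0_nd1
mask_c3 = ~mask_d2 * ~mask_d0_nd1        # was: (1 - mask_d2) * (1 - mask_d0_nd1)
```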
## Preparation

### BABEL dataset
* Prepare BABEL dataset following [here](https://babel.is.tue.mpg.de).
* Unzip the AMASS and babel_v1.0_release folders into the dataset directory as below (an optional sanity check follows the layout).
```
${ROOT}
|-- dataset
| |-- BABEL
| | |-- AMASS
| | |-- babel_v1.0_release
```
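
Optionally, you can confirm that the annotation files parse before training. The file name below assumes the standard BABEL v1.0 release layout; this snippet is illustrative and not part of the repo:

```
import json

# Illustrative check (not part of this repo): the BABEL v1.0 release ships
# per-split JSON annotation files such as train.json and val.json.
with open('dataset/BABEL/babel_v1.0_release/train.json') as f:
    train = json.load(f)
print(len(train), 'annotated sequences in the training split')
```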
### SMPLH body model

* Prepare the SMPL-H body model from [here](https://mano.is.tue.mpg.de).
* Place the human body 3D model files in the `human_models` directory as below (an optional check follows the layout).
```
${ROOT}
|-- human_models
| |-- SMPLH_MALE.pkl
| |-- SMPLH_FEMALE.pkl
| |-- SMPLH_NEUTRAL.npz
```
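
Optionally, a quick check that the neutral model file is readable (illustrative, not part of the repo):

```
import numpy as np

# Illustrative check (not part of this repo): load the neutral SMPL-H model
# and list the stored parameter arrays.
smplh = np.load('human_models/SMPLH_NEUTRAL.npz', allow_pickle=True)
print(sorted(smplh.files))           # expect entries such as 'v_template', 'shapedirs', 'J_regressor'
print(smplh['v_template'].shape)     # SMPL-H template mesh: (6890, 3)
```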
### Body visualizer

* We use the body visualizer code released in this [repo](https://github.com/nghorbani/body_visualizer.git).
* Running `requirements.sh` installs the body visualizer in `${ROOT}/body_visualizer/`.

## Running MultiAct
### Train
* Run `python train.py --env train --gpu 0`.
* Running this code will overwrite the downloaded checkpoints.

### Test
* Run `python test.py --env test --gpu 0`.
* Note that generation results vary because the latent vector is randomly sampled from the estimated prior Gaussian distribution (see the illustrative sketch after this list); the evaluation results may therefore differ slightly from the metric scores reported in our [paper](https://arxiv.org/abs/2212.05897).
* Evaluation results are stored in the log file in `${ROOT}/output/test_release/log/`.
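
As a rough illustration of where the run-to-run variation comes from (names and sizes below are illustrative, not this repo's actual code): each evaluation pass draws a fresh latent vector from the predicted prior Gaussian via the reparameterization trick.

```
import torch

# Illustrative only: generic CVAE-style prior sampling, not this repo's code.
# A different z is drawn on every run, so the generated motions (and hence
# the metrics computed on them) differ slightly between runs.
mu = torch.zeros(1, 64)          # predicted prior mean (illustrative size)
logvar = torch.zeros(1, 64)      # predicted prior log-variance
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
```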
### Short-term generation

* Run `python generate.py --env gen --gpu 0 --mode gen_short` for the short-term generation.
* Generated motions are stored in `${ROOT}/output/gen_release/vis/single_step_unseen`.

### Long-term generation
#### Generating long-term motion at once
* Run `python generate.py --env gen --gpu 0 --mode gen_long` for the long-term generation.
* Generated motions are stored in `${ROOT}/output/gen_release/vis/long_term/(exp_no)/(sample_no)/(step-by-step motion)`.

#### Generating long-term motion step-by-step via resuming from previous generation results
* Modify the environment file `${ROOT}/envs/gen.yaml` to match your purpose.
* Set `resume: True` in the environment file.
* Specify `resume_exp`, `resume_sample`, and `resume_step` to determine the point from which to continue the generation (see the example after this list).
* Generated motions are stored in `${ROOT}/output/gen_release/vis/long_term/(next_exp_no)/(sample_no)/(step-by-step motion)`.
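
For reference, the resume-related part of `${ROOT}/envs/gen.yaml` would look roughly like this. Only the key names come from the steps above; the values and the surrounding structure of the actual file may differ:

```
# Illustrative excerpt of envs/gen.yaml -- key names from the steps above,
# values are placeholders.
resume: True        # continue from a previous long-term generation
resume_exp: 0       # experiment number (exp_no) to resume from
resume_sample: 0    # sample number (sample_no) within that experiment
resume_step: 1      # generation step to continue from
```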
## Reference

```
@InProceedings{Lee2023MultiAct,
author = {Lee, Taeryung and Moon, Gyeongsik and Lee, Kyoung Mu},
title = {MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels},
booktitle = {AAAI Conference on Artificial Intelligence (AAAI)},
year = {2023}
}
```