Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/akira-l/SEEG
Code for SEEG: Semantic Energized Co-speech Gesture Generation
- Host: GitHub
- URL: https://github.com/akira-l/SEEG
- Owner: akira-l
- License: other
- Created: 2022-03-07T07:44:27.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2022-12-03T11:14:13.000Z (almost 2 years ago)
- Last Synced: 2024-04-05T09:34:44.598Z (7 months ago)
- Language: Python
- Size: 451 KB
- Stars: 29
- Watchers: 2
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE.md
README
# SEEG
This project is a PyTorch implementation of *SEEG: Semantic Energized Co-speech Gesture Generation*.
# Insight
* Learning only beat gestures already performs comparably to state-of-the-art (SOTA) methods.
* Introducing additional semantic-aware supervision can influence gesture expressions.

## Environment & Training
This repository is developed and tested on Ubuntu 18.04, Python 3.6+, and PyTorch 1.3+. The environment is the same as [Trimodal Context](https://github.com/ai4r/Gesture-Generation-from-Trimodal-Context).
This project is mainly developed based on [Trimodal Context](https://github.com/ai4r/Gesture-Generation-from-Trimodal-Context). You can train by running `bash train.sh`, or use the same commands as in Trimodal Context.
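A minimal quickstart sketch of the steps above, assuming the environment from the linked Trimodal Context repository (Python 3.6+, PyTorch 1.3+); the full dependency list and dataset preparation steps live in that repository, not here:

```shell
# Clone this repository (train.sh is provided at the repo root per the README)
git clone https://github.com/akira-l/SEEG
cd SEEG

# Install PyTorch matching the tested versions (1.3+); remaining dependencies
# follow the Trimodal Context setup and are an assumption here, not pinned by this README
pip install "torch>=1.3"

# Launch training with the provided script
bash train.sh
```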
## Citation
Please cite our CVPR 2022 paper if you find SEEG helpful in your work:
```
@inproceedings{liang2022seeg,
title={SEEG: Semantic Energized Co-speech Gesture Generation},
author={Liang, Yuanzhi and Feng, Qianyu and Zhu, Linchao and Hu, Li and Pan, Pan and Yang, Yi},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2022}
}
```