Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/zshyang/amg
Last synced: 2 months ago
- Host: GitHub
- URL: https://github.com/zshyang/amg
- Owner: zshyang
- Created: 2024-08-26T22:12:02.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2024-08-28T06:40:19.000Z (3 months ago)
- Last Synced: 2024-08-29T01:44:35.061Z (3 months ago)
- Language: Python
- Size: 2.37 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-diffusion-categorized
README
# AMG: Avatar Motion Guided Video Generation
Zhangsihao Yang¹ · Mengyi Shan² · Mohammad Farazi¹ · Wenhui Zhu¹ · Yanxi Chen¹ · Xuanzhao Dong¹ · Yalin Wang¹

¹Arizona State University · ²University of Washington
_Human video generation is a challenging task due to the complexity of human body movements and the need for photorealism. While 2D methods excel in realism, they lack 3D control, and 3D avatar-based approaches struggle with seamless background integration. We introduce AMG, a method that merges 2D photorealism with 3D control by conditioning video diffusion models on 3D avatar renderings. AMG enables multi-person video generation with precise control over camera positions, human motions, and background style, outperforming existing methods in realism and adaptability._
## 0. Getting Started
Create a `vgen` virtual environment.
Download [model.zip](https://drive.google.com/file/d/1n979-fIwIBlxqavI_lJQFFrMUKcJwqjI/view?usp=sharing) to `_runtime` and unzip it.
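A minimal sketch of these two steps, assuming conda is available and using the `gdown` utility to fetch the Google Drive file (gdown, the Python version, and the exact paths are assumptions, not part of the original instructions):

```bash
# create and activate the `vgen` environment (Python version is an assumption)
conda create -n vgen python=3.10 -y
conda activate vgen

# fetch model.zip from the Google Drive link above and unzip it inside _runtime
pip install gdown
mkdir -p _runtime
gdown "https://drive.google.com/uc?id=1n979-fIwIBlxqavI_lJQFFrMUKcJwqjI" -O _runtime/model.zip
unzip _runtime/model.zip -d _runtime
```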
## 1. Inference
### 1.1 Preparation
1. Download weights
The weights for inference can be downloaded from [here](https://drive.google.com/file/d/1g274tXyfaA45cy8IkaUJF39iVg5sQNTU/view?usp=sharing) (5.28 GB).
2. Install `amg` package
Run the following command to install the `amg` package:
```bash
pip install -e .
```
### 1.2 Try it out!
Once you have finished the steps above, you can try any of the following examples:
1. change background
2. move camera
3. change motion

#### 1.2.1 Change Background
Run the command below to get **change background** results:
```bash
python applications/change_background.py --cfg configs/applications/change_background/demo.yaml
```
The results are stored under the newly created folder `_demo_results/change_background`.
You should be able to see exactly the same results as the following:
*(demo images: Input / Reference / Generated)*
#### 1.2.2 Move Camera
Run the command below to get **move camera** results:
```bash
python applications/move_camera.py --cfg configs/applications/move_camera/demo.yaml
```
The results are stored under the newly created folder `_demo_results/move_camera`.
You should be able to see exactly the same results as the following:
*(demo images: Input / Generated)*
#### 1.2.3 Change Motion
## Training
### Download data
1. Fill out [this Google form](https://forms.gle/xrx4sfAn7QAWgiXq9) to request the processed dataset used in the paper.
2. Put the downloaded data under `_data`.
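A minimal sketch of this step, assuming the requested data arrives as a single archive (the archive name `amg_data.zip` and its download location are hypothetical):

```bash
# place the downloaded dataset archive under _data (archive name is hypothetical)
mkdir -p _data
unzip ~/Downloads/amg_data.zip -d _data
```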
### Start training
```bash
python train_net_ddp.py --cfg configs/train.yaml
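# The script name suggests DistributedDataParallel training. Restricting the visible
# GPUs with CUDA_VISIBLE_DEVICES is a generic option; whether the script picks up the
# GPU count this way is an assumption, not documented here.
CUDA_VISIBLE_DEVICES=0,1 python train_net_ddp.py --cfg configs/train.yaml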
```
## Folder Structure
- configs
- demo_data