Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/mayuelala/FollowYourCanvas
[AAAI 2025] Follow-Your-Canvas: This repo is the official implementation of "Follow-Your-Canvas: Higher-Resolution Video Outpainting with Extensive Content Generation"
- Host: GitHub
- URL: https://github.com/mayuelala/FollowYourCanvas
- Owner: mayuelala
- Created: 2024-08-22T02:20:47.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-10-15T06:52:05.000Z (3 months ago)
- Last Synced: 2024-12-24T10:19:11.529Z (18 days ago)
- Language: Python
- Homepage: https://follow-your-canvas.github.io/
- Size: 9.26 MB
- Stars: 102
- Watchers: 5
- Forks: 4
- Open Issues: 2
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- ai-game-devtools - [Follow-Your-Canvas](https://github.com/mayuelala/FollowYourCanvas): Higher-Resolution Video Outpainting with Extensive Content Generation. [arXiv](https://arxiv.org/abs/2409.01055) (Video / Tool (AI LLM))
- awesome-diffusion-categorized - [Code](https://github.com/mayuelala/FollowYourCanvas)
README
# Follow-Your-Canvas 🖼️

**Higher-Resolution Video Outpainting with Extensive Content Generation**

[Qihua Chen*](https://scholar.google.com/citations?user=xjWP9gEAAAAJ&hl=en), [Yue Ma*](https://github.com/mayuelala), [Hongfa Wang*](https://scholar.google.com.hk/citations?user=q9Fn50QAAAAJ&hl=zh-CN), [Junkun Yuan*✉️](https://scholar.google.com/citations?user=j3iFVPsAAAAJ&hl=zh-CN), [Wenzhe Zhao](https://github.com/mayuelala/FollowYourCanvas), [Qi Tian](https://github.com/mayuelala/FollowYourCanvas), [Hongmei Wang](https://github.com/mayuelala/FollowYourCanvas), [Shaobo Min](https://github.com/mayuelala/FollowYourCanvas), [Qifeng Chen](https://cqf.io), and [Wei Liu✉️](https://scholar.google.com/citations?user=AjxoEpIAAAAJ&hl=zh-CN)

![visitors](https://visitor-badge.laobi.icu/badge?page_id=mayuelala.FollowYourCanvas&left_color=green&right_color=red) [![GitHub](https://img.shields.io/github/stars/mayuelala/FollowYourCanvas?style=social)](https://github.com/mayuelala/FollowYourCanvas)

## 📣 Updates
- **[2024.09.18]** 🔥 Release `training/inference code`, `config` and `checkpoints`!
- **[2024.09.07]** 🔥 Release paper and project page!

## Introduction
Follow-Your-Canvas enables higher-resolution video outpainting with rich content generation, overcoming GPU memory constraints and maintaining spatial-temporal consistency.
## 🛠️ Environment
Before running the code, make sure you have set up the environment and installed the required packages. Since each outpainting window covers 512×512 pixels over 64 frames, you need a GPU with at least 60 GB of memory for both training and inference.
```bash
pip install -r requirements.txt
```
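For intuition on why the window size bounds memory: the full target frame is never denoised at once, only one 512×512 window at a time. A rough sketch of how many spatial windows a target resolution implies; the 128-pixel overlap is a hypothetical value for illustration, not the repo's actual setting:

```python
import math

def num_windows(target_h, target_w, window=512, overlap=128):
    """Rough count of spatial outpainting windows needed to tile a target
    resolution. The 512x512 window comes from the README; the 128-pixel
    overlap is an assumption, not the repo's configured value."""
    stride = window - overlap
    rows = math.ceil(max(target_h - window, 0) / stride) + 1
    cols = math.ceil(max(target_w - window, 0) / stride) + 1
    return rows * cols

print(num_windows(512, 512))    # a single window covers the base resolution
print(num_windows(1024, 1024))  # doubling both sides already needs a 3x3 grid
```

Peak memory is driven by one window's denoising pass, not by the window count, which is why 2K outputs fit on a single GPU.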
Download our checkpoints [here](https://drive.google.com/file/d/1CIiEYxo6Sfe0NSTr14_W9gSKePVsyIlQ/view?usp=drive_link). You also need to download [sam_vit_b_01ec64](https://github.com/facebookresearch/segment-anything/tree/main?tab=readme-ov-file#model-checkpoints), [stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1), and [Qwen-VL-Chat](https://huggingface.co/Qwen/Qwen-VL-Chat).
Finally, these pretrained models should be organized as follows:
```text
pretrained_models
├── sam
│   └── sam_vit_b_01ec64.pth
├── follow-your-canvas
│   └── checkpoint-40000.ckpt
├── stable-diffusion-2-1
└── Qwen-VL-Chat
```
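Before launching training or inference, you may want to confirm the layout above is in place. A minimal stdlib sketch (the path list mirrors the tree above; this helper is not part of the repo):

```python
from pathlib import Path

# Relative paths from the layout above; the two Hugging Face models are
# directories rather than single checkpoint files.
REQUIRED = [
    "sam/sam_vit_b_01ec64.pth",
    "follow-your-canvas/checkpoint-40000.ckpt",
    "stable-diffusion-2-1",
    "Qwen-VL-Chat",
]

def missing_checkpoints(root="pretrained_models"):
    """Return every required path that does not yet exist under root."""
    return [rel for rel in REQUIRED if not (Path(root) / rel).exists()]

if __name__ == "__main__":
    print(missing_checkpoints() or "all pretrained models found")
```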
## Train

We also provide the training code for Follow-Your-Canvas. In our implementation, eight NVIDIA A800 GPUs are used for training (50K steps).
First, download the [Panda-70M](https://snap-research.github.io/Panda-70M/) dataset. Our dataset class (`animatediff/dataset.py`) requires a CSV file that lists each video file name and its prompt.
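A minimal sketch of producing such a CSV with the standard library. The column names here are assumptions for illustration; check `animatediff/dataset.py` for the exact header the dataset class expects:

```python
import csv

# Hypothetical column names and file paths -- verify against
# animatediff/dataset.py before training on real data.
rows = [
    {"video": "videos/panda_001.mp4", "prompt": "a panda eating bamboo"},
    {"video": "videos/polar_001.mp4", "prompt": "a polar bear walking on ice"},
]

with open("train_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["video", "prompt"])
    writer.writeheader()
    writer.writerows(rows)
```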
```bash
# config the csv path and video path in train_outpainting-SAM.yaml
torchrun --nnodes=1 --nproc_per_node=8 --master_port=8888 train.py --config train_outpainting-SAM.yaml
```

## Inference
We support outpainting both with and without a prompt (when no prompt is given, one is generated by Qwen).
```bash
# outpaint the video in demo_video/panda to 2k with prompt 'a panda sitting on a grassy area in a lake, with forest mountain in the background'.
python3 inference_outpainting-dir.py --config infer-configs/infer-9-16.yaml
# outpaint the video in demo_video/polar to 2k without prompt.
python3 inference_outpainting-dir-with-prompt.py --config infer-configs/prompt-panda.yaml
```
The results will be saved in `/infer`.

## Evaluation
We evaluate Follow-Your-Canvas on the DAVIS 2017 dataset. [Here](https://drive.google.com/file/d/1u4I9ca35mNIG4b1b8aZaHn6nmGrITw7e/view?usp=sharing) we provide the inputs for each experimental setting, the ground-truth videos, and our outpainting results.
The code for the PSNR, SSIM, LPIPS, and FVD metrics is in `/video_metics/demo.py` and `fvd2.py`. To compute aesthetic quality (AQ) and imaging quality (IQ) from VBench:
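As a reference for what the fidelity comparison measures, here is a minimal stdlib sketch of PSNR; the repo's actual implementation lives in `video_metics/demo.py`, and this toy version operates on flat pixel lists rather than video tensors:

```python
import math

def psnr(frame_a, frame_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length flat pixel
    sequences. Higher means closer; identical frames give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / len(frame_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

print(psnr([255, 255], [0, 0]))  # maximally different pixels -> 0.0 dB
```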
```bash
cd video_metics
git clone https://github.com/Vchitect/VBench.git
pip install -r VBench/requirements.txt
pip install VBench
# change the video dir in evaluate-quality.sh
bash evaluate-quality.sh
```

## 👨‍👩‍👧‍👦 Follow Family
[Follow-Your-Pose](https://github.com/mayuelala/FollowYourPose): Pose-Guided Text-to-Video Generation.
[Follow-Your-Click](https://github.com/mayuelala/FollowYourClick): Open-Domain Regional Image Animation via Short Prompts.
[Follow-Your-Handle](https://github.com/mayuelala/FollowYourHandle): Controllable Video Editing via Control Handle Transformations.
[Follow-Your-Emoji](https://github.com/mayuelala/FollowYourEmoji): Fine-Controllable and Expressive Freestyle Portrait Animation.
[Follow-Your-Canvas](https://github.com/mayuelala/FollowYourCanvas): High-resolution video outpainting with rich content generation.
## Acknowledgement

We acknowledge the following open-source projects:

[AnimateDiff](https://github.com/guoyww/AnimateDiff)
[VBench](https://github.com/Vchitect/VBench)

## Citation
If you find Follow-Your-Canvas useful for your research, please star this repo and cite our work using the following BibTeX:
```bibtex
@article{chen2024followyourcanvas,
  title={Follow-Your-Canvas: Higher-Resolution Video Outpainting with Extensive Content Generation},
  author={Chen, Qihua and Ma, Yue and Wang, Hongfa and Yuan, Junkun and Zhao, Wenzhe and Tian, Qi and Wang, Hongmei and Min, Shaobo and Chen, Qifeng and Liu, Wei},
  journal={arXiv preprint arXiv:2409.01055},
  year={2024}
}
```