Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://videoverses.github.io/videotuna/
Let's finetune video generation models!
Topics: ai, aigc, content-production, fine-tuning-diffusion, text-to-video, video-generation, visual-art
Last synced: about 1 month ago
- Host: GitHub
- URL: https://videoverses.github.io/videotuna/
- Owner: VideoVerses
- License: other
- Created: 2024-11-01T03:15:30.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2024-11-09T05:44:00.000Z (about 1 month ago)
- Last Synced: 2024-11-09T06:27:16.690Z (about 1 month ago)
- Topics: ai, aigc, content-production, fine-tuning-diffusion, text-to-video, video-generation, visual-art
- Language: Python
- Homepage:
- Size: 74.9 MB
- Stars: 144
- Watchers: 5
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-Video-Diffusion - VideoTuna
README
# VideoTuna
![Version](https://img.shields.io/badge/version-0.1.0-blue) ![visitors](https://visitor-badge.laobi.icu/badge?page_id=VideoVerses.VideoTuna&left_color=green&right_color=red) [![](https://dcbadge.limes.pink/api/server/AammaaR2?style=flat)](https://discord.gg/AammaaR2) [![Homepage](https://img.shields.io/badge/Homepage-VideoTuna-orange)](https://videoverses.github.io/videotuna/) [![GitHub](https://img.shields.io/github/stars/VideoVerses/VideoTuna?style=social)](https://github.com/VideoVerses/VideoTuna)
VideoTuna is a useful codebase for text-to-video applications.
To the best of our knowledge, VideoTuna is the first repo that integrates multiple AI video generation models for text-to-video, image-to-video, and text-to-image generation.
To the best of our knowledge, VideoTuna is the first repo that provides comprehensive pipelines for video generation, including pre-training, continuous training, post-training (alignment), and fine-tuning.
The models in VideoTuna cover both U-Net and DiT architectures for visual generation tasks.
A new 3D video VAE and a controllable facial video generation model will be released soon.

## Features
- **All-in-one framework:** Inference and fine-tune up-to-date video generation models.
- **Pre-training:** Build your own foundational text-to-video model.
- **Continuous training:** Keep improving your model with new data.
- **Domain-specific fine-tuning:** Adapt models to your specific scenario.
- **Concept-specific fine-tuning:** Teach your models unique concepts.
- **Enhanced language understanding:** Improve model comprehension through continuous training.
- **Post-processing:** Enhance generated videos with a video-to-video enhancement model.
- **Post-training/Human preference alignment:** Post-training with RLHF for more attractive results.

## Updates
- [2024-11-01] We made VideoTuna v0.1.0 public!

## Demo
### Model Inference and Comparison

![combined_video_29_A_mountain_biker_racing_down_a_trail__dust_flying_behind](https://github.com/user-attachments/assets/f8249049-e0d8-47b9-a5b3-511994779cb1)
![combined_video_22_Fireworks_exploding_over_a_historic_river__reflections_twinkling_in_the_water](https://github.com/user-attachments/assets/868c02fc-1e44-4636-b4e7-d9f2287bc89f)
![combined_video_20_Waves_crashing_against_a_rocky_shore_under_a_stormy_sky__spray_misting_the_air](https://github.com/user-attachments/assets/ab04d3c6-2d5d-40e5-be64-5d8373f12402)
![combined_video_17_A_butterfly_landing_delicately_on_a_wildflower_in_a_vibrant_meadow](https://github.com/user-attachments/assets/247212e5-0d5a-4f93-b47f-ee9c8ba945fb)
![combined_video_12_Sunlight_piercing_through_a_dense_canopy_in_a_tropical_rainforest__illuminating_a_](https://github.com/user-attachments/assets/f66551ca-7d18-4c73-9656-3d2757ea4fb5)
![combined_video_3_Divers_observing_a_group_of_tuna_as_they_navigate_through_a_vibrant_coral_reef_teem](https://github.com/user-attachments/assets/6c084832-5a0d-42ac-b7b8-1d914b8a35dc)

### 3D Video VAE
The 3D video VAE from VideoTuna can accurately compress and reconstruct the input videos with fine details.
(Demo videos: ground-truth clips shown side by side with their VAE reconstructions.)
### Face domain
(Demo videos: three input faces, each animated with six emotions: Anger, Disgust, Fear, Happy, Sad, Surprise.)
### Storytelling
The picture shows a cozy room with a little girl telling her travel story to her teddybear beside the bed.
As night falls, teddybear sits by the window, his eyes sparkling with longing for the distant place.
Teddybear was in a corner of the room, making a small backpack out of old cloth strips, with a map, a compass and dry food next to it.
The first rays of sunlight in the morning came through the window, and teddybear quietly opened the door and embarked on his adventure.
In the forest, the sun shines through the treetops, and teddybear moves among various animals and communicates with them.
Teddybear leaves his mark on the edge of a clear lake, surrounded by exotic flowers, and the picture is full of mystery and exploration.
Teddybear climbs the rugged mountain road, the weather is changeable, but he is determined.
The picture switches to the top of the mountain, where teddybear stands in the glow of the sunrise, with a magnificent mountain view in the background.
On the way home, teddybear helps a wounded bird, the picture is warm and touching.
Teddybear sits by the little girl's bed and tells her his adventure story, and the little girl is fascinated.
The scene shows a peaceful village, with moonlight shining on the roofs and streets, creating a peaceful atmosphere.
cat sits by the window, her eyes twinkling in the night, reflecting her special connection with the moon and stars.
Villagers gather in the center of the village for the annual Moon Festival celebration, with lanterns and colored lights adorning the night sky.
cat feels the call of the moon, and her beard trembles with the excitement in her heart.
cat quietly leaves her home in the night and embarks on a path illuminated by the silver moonlight.
A group of forest elves dance around glowing mushrooms, their costumes and movements full of magic and vitality.
cat joins the celebration and dances with the elves, the picture is full of joy and freedom.
A wise old owl reveals the secret power of the moon to cat and the light of the moon in the picture becomes brighter.
cat closes her eyes in the moonlight, puts her hands together, and makes a wish, surrounded by the light of stars and the moon.
cat feels the surge of power, and her eyes become more determined.
## TODOs
- [ ] More demos and applications
- [ ] More functionalities such as control modules. (Suggestions are welcome!)

## Information
### Code Structure
```
VideoTuna/
├── assets           # images for the README
├── checkpoints      # put model checkpoints here
├── configs          # model and experiment configs
├── data             # data processing scripts and dataset files
├── docs             # documentation
├── eval             # evaluation scripts
├── inputs           # input examples for testing
├── scripts          # training and inference Python scripts
├── shscripts        # training and inference shell scripts
├── src              # model-related source code
├── tests            # testing scripts
└── tools            # tool scripts
```

### Supported Models
|T2V-Models|HxWxL|Checkpoints|
|:---------|:---------|:--------|
|CogVideoX-2B|720x480, 6s|[Hugging Face](https://huggingface.co/THUDM/CogVideoX-2b)|
|CogVideoX-5B|720x480, 6s|[Hugging Face](https://huggingface.co/THUDM/CogVideoX-5b)|
|Open-Sora 1.0|512x512x16|[Hugging Face](https://huggingface.co/hpcai-tech/Open-Sora/blob/main/OpenSora-v1-HQ-16x512x512.pth)|
|Open-Sora 1.0|256x256x16|[Hugging Face](https://huggingface.co/hpcai-tech/Open-Sora/blob/main/OpenSora-v1-HQ-16x256x256.pth)|
|Open-Sora 1.0|256x256x16|[Hugging Face](https://huggingface.co/hpcai-tech/Open-Sora/blob/main/OpenSora-v1-16x256x256.pth)|
|VideoCrafter2|320x512x16|[Hugging Face](https://huggingface.co/VideoCrafter/VideoCrafter2/blob/main/model.ckpt)|
|VideoCrafter1|576x1024x16|[Hugging Face](https://huggingface.co/VideoCrafter/Text2Video-1024/blob/main/model.ckpt)|
|VideoCrafter1|320x512x16|[Hugging Face](https://huggingface.co/VideoCrafter/Text2Video-512/blob/main/model.ckpt)|

|I2V-Models|HxWxL|Checkpoints|
|:---------|:---------|:--------|
|CogVideoX-5B-I2V|720x480, 6s|[Hugging Face](https://huggingface.co/THUDM/CogVideoX-5b-I2V)|
|DynamiCrafter|576x1024x16|[Hugging Face](https://huggingface.co/Doubiiu/DynamiCrafter_1024/blob/main/model.ckpt)|
|VideoCrafter1|320x512x16|[Hugging Face](https://huggingface.co/VideoCrafter/Image2Video-512/blob/main/model.ckpt)|

* Note: H: height; W: width; L: length
Please check [docs/CHECKPOINTS.md](docs/CHECKPOINTS.md) to download all the model checkpoints.
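If you only need one model, a minimal sketch using the Hugging Face CLI is shown below; the target directory is an assumption for illustration, so match it to the layout described in [docs/CHECKPOINTS.md](docs/CHECKPOINTS.md):

``` shell
# Sketch: fetch a single checkpoint with the Hugging Face CLI.
# The --local-dir path is an assumption; follow docs/CHECKPOINTS.md for the expected layout.
pip install -U "huggingface_hub[cli]"
huggingface-cli download THUDM/CogVideoX-2b --local-dir checkpoints/cogvideo/CogVideoX-2b
```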
## Get started
### 1. Prepare environment
``` shell
conda create --name videotuna python=3.10 -y
conda activate videotuna
pip install -U poetry pip
poetry config virtualenvs.create false
poetry install
pip install optimum-quanto==0.2.1
pip install -r requirements.txt
# SwissArmyTransformer (sat), needed by the CogVideo family of models
git clone https://github.com/JingyeChen/SwissArmyTransformer
# install non-editable so the cloned source can be removed afterwards
pip install ./SwissArmyTransformer
rm -rf SwissArmyTransformer
# HPSv2 (Human Preference Score v2), used for preference evaluation
git clone https://github.com/tgxs002/HPSv2.git
cd ./HPSv2
pip install -e .
cd ..
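# Optional sanity check (not part of the original instructions): confirm that
# PyTorch resolved correctly and a CUDA device is visible.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"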
```

### 2. Prepare checkpoints
Please follow [docs/CHECKPOINTS.md](https://github.com/VideoVerses/VideoTuna/blob/main/docs/CHECKPOINTS.md) to download model checkpoints.
After downloading, place the model checkpoints following the [Checkpoint Structure](https://github.com/VideoVerses/VideoTuna/blob/main/docs/CHECKPOINTS.md#checkpoint-orgnization-structure).

### 3. Inference state-of-the-art T2V/I2V/T2I models
- Inference a set of text-to-video models **in one command**: `bash tools/video_comparison/compare.sh`
  - The default mode is to run all models, e.g., `inference_methods="videocrafter2;dynamicrafter;cogvideo-t2v;cogvideo-i2v;opensora"`
  - If you want to inference only specific models, modify the `inference_methods` variable in `compare.sh` and list the desired models separated by semicolons.
  - Also specify the input directory via the `input_dir` variable. This directory should contain a `prompts.txt` file, where each line is a prompt for video generation. The default `input_dir` is `inputs/t2v` (see the sketch below).
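A minimal sketch of that setup (the directory name and prompt file contents below are illustrative examples, not files shipped with the repo):

``` shell
# Sketch: prepare a custom prompt list and point compare.sh at it.
# Directory name and prompts are illustrative examples.
mkdir -p inputs/my_t2v
cat > inputs/my_t2v/prompts.txt << 'EOF'
A mountain biker racing down a trail, dust flying behind
Fireworks exploding over a historic river, reflections twinkling in the water
EOF
# Then edit tools/video_comparison/compare.sh, e.g.:
#   inference_methods="videocrafter2;cogvideo-t2v"
#   input_dir="inputs/my_t2v"
bash tools/video_comparison/compare.sh
```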
- Inference a set of image-to-video models **in one command**: `bash tools/video_comparison/compare_i2v.sh`
- Inference a specific model by running the corresponding command below:

|Task|Models|Commands|
|:---------|:---------|:---------|
|T2V|CogvideoX|`bash shscripts/inference_cogVideo_diffusers.sh`|
|T2V|Open Sora V1.0|`bash shscripts/inference_opensora_v10_16x256x256.sh`|
|T2V|VideoCrafter-V2-320x512|`bash shscripts/inference_vc2_t2v_320x512.sh`|
|T2V|VideoCrafter-V1-576x1024|`bash shscripts/inference_vc1_t2v_576x1024.sh`|
|I2V|DynamiCrafter|`bash shscripts/inference_dc_i2v_576x1024.sh`|
|I2V|VideoCrafter|`bash shscripts/inference_vc1_i2v_320x512.sh`|
|T2I|Flux|`bash shscripts/inference_flux.sh`|

### 4. Finetune T2V models
#### (1). Prepare Dataset
Please follow [docs/datasets.md](docs/datasets.md) to try the provided toy dataset or to build your own dataset.

#### (2). Finetune
#### Open-Sora finetuning
We support Open-Sora finetuning; simply run the following commands:
``` shell
# finetune the Open-Sora v1.0
bash shscripts/train_opensorav10.sh
```

#### LoRA finetuning
We support LoRA finetuning so the model can learn new concepts, characters, or styles.
- Example config file: `configs/001_videocrafter2/vc2_t2v_lora.yaml`
- Training lora based on VideoCrafter2: `bash shscripts/train_videocrafter_lora.sh`
- Inference the trained models: `bash shscripts/inference_vc2_t2v_320x512_lora.sh`

#### Finetuning for enhanced language understanding
### 5. Evaluation
We support VBench evaluation to evaluate the T2V generation performance.
Please check [eval/README.md](docs/evaluation.md) for details.

## Acknowledgement
We thank the following repos for sharing their awesome models and codes!
* [VideoCrafter2](https://github.com/AILab-CVC/VideoCrafter): Overcoming Data Limitations for High-Quality Video Diffusion Models
* [VideoCrafter1](https://github.com/AILab-CVC/VideoCrafter): Open Diffusion Models for High-Quality Video Generation
* [DynamiCrafter](https://github.com/Doubiiu/DynamiCrafter): Animating Open-domain Images with Video Diffusion Priors
* [Open-Sora](https://github.com/hpcaitech/Open-Sora): Democratizing Efficient Video Production for All
* [CogVideoX](https://github.com/THUDM/CogVideo): Text-to-Video Diffusion Models with An Expert Transformer
* [VADER](https://github.com/mihirp1998/VADER): Video Diffusion Alignment via Reward Gradients
* [VBench](https://github.com/Vchitect/VBench): Comprehensive Benchmark Suite for Video Generative Models
* [Flux](https://github.com/black-forest-labs/flux): Text-to-image models from Black Forest Labs.
* [SimpleTuner](https://github.com/bghira/SimpleTuner): A fine-tuning kit for text-to-image generation.

## Some Resources
* [LLMs-Meet-MM-Generation](https://github.com/YingqingHe/Awesome-LLMs-meet-Multimodal-Generation): A paper collection of utilizing LLMs for multimodal generation (image, video, 3D and audio).
* [MMTrail](https://github.com/litwellchi/MMTrail): A multimodal trailer video dataset with language and music descriptions.
* [Seeing-and-Hearing](https://github.com/yzxing87/Seeing-and-Hearing): A versatile framework for Joint VA generation, V2A, A2V, and I2A.
* [Self-Cascade](https://github.com/GuoLanqing/Self-Cascade): A Self-Cascade model for higher-resolution image and video generation.
* [ScaleCrafter](https://github.com/YingqingHe/ScaleCrafter) and [HiPrompt](https://liuxinyv.github.io/HiPrompt/): Free method for higher-resolution image and video generation.
* [FreeTraj](https://github.com/arthur-qiu/FreeTraj) and [FreeNoise](https://github.com/AILab-CVC/FreeNoise): Free method for video trajectory control and longer-video generation.
* [Follow-Your-Emoji](https://github.com/mayuelala/FollowYourEmoji), [Follow-Your-Click](https://github.com/mayuelala/FollowYourClick), and [Follow-Your-Pose](https://follow-your-pose.github.io/): Follow family for controllable video generation.
* [Animate-A-Story](https://github.com/AILab-CVC/Animate-A-Story): A framework for storytelling video generation.
* [LVDM](https://github.com/YingqingHe/LVDM): Latent Video Diffusion Model for long video generation and text-to-video generation.

## Contributors
## License
Please follow [CC-BY-NC-ND](./LICENSE). If you need a license authorization, please contact [email protected] and [email protected].

## Citation
```
To be updated...
```

## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=VideoVerses/VideoTuna&type=Date)](https://star-history.com/#VideoVerses/VideoTuna&Date)