https://github.com/kabachuha/sd-webui-text2video
Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies
- Host: GitHub
- URL: https://github.com/kabachuha/sd-webui-text2video
- Owner: kabachuha
- License: other
- Created: 2023-03-19T20:40:23.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-07-14T07:14:03.000Z (about 1 year ago)
- Last Synced: 2025-04-01T03:35:22.085Z (6 months ago)
- Topics: automatic1111, extension, gradio, modelscope, stable-diffusion, text2video, videocrafter, webui
- Language: Python
- Homepage:
- Size: 798 KB
- Stars: 1,312
- Watchers: 26
- Forks: 111
- Open Issues: 49
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
Awesome Lists containing this project
- awesome-stable-diffusion-webui - sd-webui-text2video - generation models with the existing UI. (GitHub projects)
- StarryDivineSky - kabachuha/sd-webui-text2video - sd-webui-text2video is an extension for the Auto1111 webui that implements text-to-video diffusion models such as ModelScope and VideoCrafter. It needs no extra dependencies and uses only the webui's existing functionality. The project lets users generate video content from text descriptions, simplifying the video-creation workflow: given a text prompt, the diffusion model progressively generates a high-quality sequence of video frames. The extension integrates text-to-video generation directly into the Auto1111 webui interface and supports several text-to-video models, giving users flexible options. It aims to lower the barrier to text-to-video generation so more users can easily create videos, makes full use of the Auto1111 webui ecosystem for a convenient experience, and can be used after a simple configuration. The project is still under active development and will support more models and features in the future. (Video generation / frame interpolation / summarization; resource transfer and download)
README
# text2video Extension for AUTOMATIC1111's StableDiffusion WebUI
**~~Warning: as of 2023-11-21 this extension is not maintained. If you'd like to continue devving/remaking it, please contact me on Discord @kabachuha (you can also find me on [camenduru's server's text2video channel](https://discord.gg/TYk6rfT9)) and we'll figure it out~~**
**~~Maintained starting on 2023-11-21 by [Deforum-art](https://github.com/deforum-art)~~**
**Maintained by me again**
Auto1111 extension implementing various text2video models, such as ModelScope and VideoCrafter, using only Auto1111 webui dependencies and downloadable models (so no logins are required anywhere).
## Requirements
### ModelScope
6 GB of VRAM should be enough to run on GPU at 256x256 with the low-VRAM VAE enabled (and there are already reports of people launching 192x192 videos [with 4 GB of VRAM](https://github.com/deforum-art/sd-webui-modelscope-text2video/discussions/27)). A 24-frame 256x256 video definitely fits into the 12 GB of an NVIDIA GeForce RTX 2080 Ti, and if you have a video card that supports the Torch2 attention optimization, you can fit a whopping 125-frame (8 second) video into the same 12 GB of VRAM! 250 frames (16 seconds) under the same conditions take 20 GB.
Prompt: `best quality, anime girl dancing`
https://user-images.githubusercontent.com/14872007/232229730-82df36cc-ac8b-46b3-949d-0e1dfc10a975.mp4
We would appreciate *any* help with this extension, *especially* pull requests.
### LoRA Support
Currently, there is support for LoRAs trained with the [Text-To-Video-Finetuning](https://github.com/ExponentialML/Text-To-Video-Finetuning#updates) repository; please follow the instructions there on how to train them.
After training, simply place the resulting files into the default LoRA directory defined by your webui installation.
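As a concrete example, here is a minimal sketch of dropping a trained LoRA into place; both paths are illustrative assumptions, so adjust them to your training output and your webui installation.

```python
# Minimal sketch: copy a LoRA trained with Text-To-Video-Finetuning into the
# webui's default LoRA folder. Both paths are assumptions -- adjust to your setup.
import shutil
from pathlib import Path

trained_lora = Path("Text-To-Video-Finetuning/outputs/my_t2v_lora.safetensors")  # hypothetical training output
lora_dir = Path("stable-diffusion-webui/models/Lora")  # default webui LoRA directory

lora_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(trained_lora, lora_dir / trained_lora.name)
```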
### VideoCrafter (WIP, needs more devs to maintain properly as well)
VideoCrafter runs with around 9.2 GB of VRAM at the default settings.
## Major changes between versions
Update 2023-03-27: VAE settings and "Keep model in VRAM" moved to the general webui settings under the 'ModelScopeTxt2Vid' section.
Update 2023-03-26: prompt weights **implemented**! (ModelScope only, as of 2023-04-05)
Update 2023-04-05: added VideoCrafter support, renamed the extension to plainly 'sd-webui-text2video'
Update 2023-04-13: in-framing/in-painting support: allows you to 'animate' an existing picture or even seamlessly loop videos!
Update 2023-04-15: **MEGA-UPDATE**: Torch2/xformers optimizations; it is now possible to make a 125-frame video on 12 GB of VRAM. CPU offloading no longer happens if keep_pipe_in_vram is checked.
Update 2023-04-16: WebAPI is available! (see the request sketch below)
Update 2023-07-02: Alternate samplers, model hotswitch.
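Since the WebAPI is only mentioned in passing above, here is a minimal sketch of calling it from Python with the `requests` library. The route name and payload fields are assumptions modelled on typical webui extension APIs; check the extension's API script for the actual endpoint, parameter names, and response format.

```python
# Minimal sketch of calling the text2video WebAPI on a locally running webui
# launched with the --api flag. The route name and payload fields below are
# assumptions -- verify them against the extension's API script before use.
import requests

payload = {
    "prompt": "best quality, anime girl dancing",
    "n_prompt": "text, watermark",  # negative prompt
    "frames": 24,
    "width": 256,
    "height": 256,
    "steps": 30,
    "cfg_scale": 12.5,
}

resp = requests.post("http://127.0.0.1:7860/t2v/run", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json())  # inspect the returned structure for the generated video data
```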
## Test examples:
### ModelScope
Prompt: `cinematic explosion by greg rutkowski`
https://user-images.githubusercontent.com/14872007/226345611-a1f0601f-db32-41bd-b983-80d363eca4d5.mp4
Prompt: `really attractive anime girl skating, by makoto shinkai, cinematic lighting`
https://user-images.githubusercontent.com/14872007/226468406-ce43fa0c-35f2-4625-a892-9fb3411d96bb.mp4
**'Continuing' an existing image**
Prompt: `best quality, astronaut dog`
https://user-images.githubusercontent.com/14872007/232073361-bdb87a47-85ec-44d8-9dc4-40dab0bd0555.mp4
Prompt: `explosion`
https://user-images.githubusercontent.com/14872007/232073687-b7e78b06-182b-4ce6-b565-d6738c4890d1.mp4
**In-painting and looping back the videos**
Prompt: `nuclear explosion`
https://user-images.githubusercontent.com/14872007/232073842-84860a3e-fa82-43a6-a411-5cfc509b5355.mp4
Prompt: `best quality, lots of cheese`
https://user-images.githubusercontent.com/14872007/232073876-16895cae-0f26-41bc-a575-0c811219cf88.mp4
### VideoCrafter
Prompt: `anime 1girl reimu touhou`
https://user-images.githubusercontent.com/14872007/230231253-2fd9b7af-3f05-41c8-8c92-51042b269116.mp4
## Where to get the weights
### ModelScope
Download the following files from the [original HuggingFace repository](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis/tree/main). Alternatively, [download the half-precision fp16 pruned weights (they are smaller and use less VRAM on loading)](https://huggingface.co/kabachuha/modelscope-damo-text2video-pruned-weights/tree/main):
- VQGAN_autoencoder.pth
- configuration.json
- open_clip_pytorch_model.bin
- text2video_pytorch_model.pth

Put them in `stable-diffusion-webui/models/ModelScope/t2v`. Create those two folders if they are missing.
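If you prefer to script the download, the sketch below uses the `huggingface_hub` package (`pip install huggingface_hub`) to fetch the four files listed above straight into the folder the extension expects. The repo id and file names come from the original HuggingFace repository linked above; swap in the pruned-weights repository for the smaller fp16 files (its file names may differ).

```python
# Minimal sketch: download the ModelScope weights listed above into the folder
# the extension expects. Adjust target_dir to point at your webui installation.
from huggingface_hub import hf_hub_download

repo_id = "damo-vilab/modelscope-damo-text-to-video-synthesis"
target_dir = "stable-diffusion-webui/models/ModelScope/t2v"

for filename in [
    "VQGAN_autoencoder.pth",
    "configuration.json",
    "open_clip_pytorch_model.bin",
    "text2video_pytorch_model.pth",
]:
    hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target_dir)
```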
### VideoCrafter
Download the pretrained T2V models either via [this link](https://drive.google.com/file/d/13ZZTXyAKM3x0tObRQOQWdtnrI2ARWYf_/view?usp=share_link) or as [pruned half-precision weights](https://huggingface.co/kabachuha/videocrafter-pruned-weights/tree/main), and put the `model.ckpt` in `models/VideoCrafter/model.ckpt`.
## Fine-tunes and how to use them
Thanks to https://github.com/ExponentialML/Text-To-Video-Finetuning you can fine-tune your models!
To utilize a fine-tuned model here, use [this script](https://github.com/ExponentialML/Text-To-Video-Finetuning/pull/52), which converts the Diffusers-formatted model that repo outputs into the original weights format.
### Prominent Fine-tunes
**ZeroScope v2**
Trained by @cerspense on high quality YouTube videos. Download the files from the folder named `zs2_XL` at [cerspense/zeroscope_v2_XL](https://huggingface.co/cerspense/zeroscope_v2_XL/tree/main/zs2_XL) and then add the missing `VQGAN_autoencoder.pth` and `configuration.json` from [any other ModelScope model](https://huggingface.co/kabachuha/modelscope-damo-text2video-pruned-weights/tree/main).
https://github.com/kabachuha/sd-webui-text2video/assets/14872007/6fa39221-3608-415e-b8ce-04a2bad11d30
**Potat1**
[Potat1](https://huggingface.co/camenduru/potat1) is a ModelScope-based model trained by @camenduru on 2197 clips at a resolution of 1024x576, which makes it the first open-source hi-res text2video model.
https://github.com/kabachuha/sd-webui-text2video/assets/14872007/ff01c6cb-0000-40a2-ac7e-ec3edc5f9713
To download the plug-and-play weights for the extension, use this link: https://huggingface.co/kabachuha/potat1-with-text-encoder-original-format.
**Animov-0.1**
[Animov-0.1 by strangeman3107](https://huggingface.co/datasets/strangeman3107/animov-0.1). The converted weights for this model reside [here](https://huggingface.co/kabachuha/animov-0.1-modelscope-original-format).
https://user-images.githubusercontent.com/14872007/232611542-600cec38-d944-4530-bc5c-3595a115c2be.mp4
## Screenshots
txt2vid with img2vid

vid2vid

## Dev resources
### ModelScope
HuggingFace space:
https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis
The model PyTorch implementation from ModelScope:
https://github.com/modelscope/modelscope/tree/master/modelscope/models/multi_modal/video_synthesis
Google Colab from the devs:
https://colab.research.google.com/drive/1uW1ZqswkQ9Z9bp5Nbo5z59cAn7I0hE6R?usp=sharing
### VideoCrafter
GitHub:
https://github.com/VideoCrafter/VideoCrafter