Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Easily run text-to-video diffusion with customized video length, fps, and dimensions on 4GB video cards or on CPU.
https://github.com/kpthedev/ez-text2video
artificial-intelligence huggingface pytorch text-to-video
- Host: GitHub
- URL: https://github.com/kpthedev/ez-text2video
- Owner: kpthedev
- License: gpl-3.0
- Created: 2023-03-31T06:07:49.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-09-08T21:44:48.000Z (4 months ago)
- Last Synced: 2025-01-01T22:35:47.406Z (9 days ago)
- Topics: artificial-intelligence, huggingface, pytorch, text-to-video
- Language: Python
- Homepage:
- Size: 31.3 KB
- Stars: 104
- Watchers: 6
- Forks: 20
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# ez-text2vid
A Streamlit app to easily run the [ModelScope text-to-video](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis) diffusion model with customized video length, fps, and dimensions. It can run on 4GB video cards, as well as CPU and Apple M chips.
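The video length and fps controls together fix how many frames the diffusion model has to generate. As a rough sketch of that arithmetic (the function name is illustrative, not taken from the app's code):

```python
def clip_frames(seconds: float, fps: int) -> int:
    """Total frames to generate for a clip of the given duration and
    frame rate (always at least one frame)."""
    return max(1, round(seconds * fps))

# A 2-second clip at 8 fps requires 16 frames; longer clips and higher
# frame rates raise the frame count, and with it VRAM use and runtime.
```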
**Built with:**
* [Huggingface Diffusers](https://github.com/huggingface/diffusers) 🧨
* [Pytorch](https://github.com/pytorch/pytorch)
* [Streamlit](https://github.com/streamlit/streamlit)

## Installation
Before installing, make sure you have working [git](https://git-scm.com/downloads) and [conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html) installations. If you have an Nvidia graphics card, you should also install [CUDA](https://developer.nvidia.com/cuda-downloads).

### Install Steps:
1. Open a terminal on your machine. On Windows, you should use the Anaconda Prompt terminal.

2. Clone this repo using git:
```terminal
git clone https://github.com/kpthedev/ez-text2video.git
```

3. Open the folder:
```terminal
cd ez-text2video
```

4. Create the conda environment:
```terminal
conda env create -f environment.yaml
```

## Running
To run the app, make sure you are in the `ez-text2video` folder in your terminal. Then run these two commands to activate the conda environment and start the Streamlit app:

```bash
conda activate t2v
streamlit run app.py
```
This should open the webUI in your browser automatically.

> The very first time you run the app, it will automatically download the models from Huggingface. This may take a couple of minutes (~5 mins).
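For readers who want to skip the UI, the model can also be driven directly through Huggingface Diffusers. The following is a minimal sketch, not the app's actual code; it assumes the diffusers port of the ModelScope model under the id `damo-vilab/text-to-video-ms-1.7b`, and uses model CPU offloading, which is the usual way diffusers pipelines are made to fit on ~4 GB cards:

```python
def generate_clip(prompt: str, num_frames: int = 16,
                  out_path: str = "clip.mp4", fps: int = 8) -> str:
    """Generate a short video from a text prompt and write it to out_path.

    Assumes the diffusers port of the ModelScope text-to-video model;
    imports are deferred so merely defining this function stays cheap.
    """
    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import export_to_video

    pipe = DiffusionPipeline.from_pretrained(
        "damo-vilab/text-to-video-ms-1.7b",
        torch_dtype=torch.float16,
        variant="fp16",
    )
    # Keep submodules on the CPU and move them to the GPU only while they
    # run, trading speed for a much smaller VRAM footprint.
    pipe.enable_model_cpu_offload()

    frames = pipe(prompt, num_frames=num_frames).frames[0]
    export_to_video(frames, out_path, fps=fps)
    return out_path
```

Calling `generate_clip("a dog chasing a ball")` would download the model on first use, just like the app does.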
## License
All the original code that I have written is licensed under a GPL license. For the text-to-video model license and conditions, please refer to the [model card](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis).

## Changelog
* Mar 31, 2023 - Initial release
* April 1, 2023 - Switch to conda install
* June 2, 2023 - Move to stable version of diffusers