{"id":13489049,"url":"https://github.com/rese1f/stablevideo","last_synced_at":"2025-05-16T07:02:47.838Z","repository":{"id":189173061,"uuid":"603740126","full_name":"rese1f/StableVideo","owner":"rese1f","description":"[ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing","archived":false,"fork":false,"pushed_at":"2023-09-07T04:02:23.000Z","size":69229,"stargazers_count":1430,"open_issues_count":17,"forks_count":87,"subscribers_count":21,"default_branch":"master","last_synced_at":"2025-05-16T07:01:40.759Z","etag":null,"topics":["aigc","computer-vision","controlnet","diffusion-model","video-editing"],"latest_commit_sha":null,"homepage":"https://rese1f.github.io/StableVideo/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/rese1f.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-02-19T12:48:30.000Z","updated_at":"2025-05-15T09:01:03.000Z","dependencies_parsed_at":"2024-10-31T01:31:42.788Z","dependency_job_id":"03bff842-00e6-4f00-aa6c-09b72a0fdf09","html_url":"https://github.com/rese1f/StableVideo","commit_stats":null,"previous_names":["rese1f/stablevideo"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rese1f%2FStableVideo","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rese1f%2FStableVideo/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rese1f%2FStableVideo/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposit
ories/rese1f%2FStableVideo/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/rese1f","download_url":"https://codeload.github.com/rese1f/StableVideo/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254485045,"owners_count":22078767,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["aigc","computer-vision","controlnet","diffusion-model","video-editing"],"created_at":"2024-07-31T18:01:27.907Z","updated_at":"2025-05-16T07:02:47.788Z","avatar_url":"https://github.com/rese1f.png","language":"Python","readme":"# StableVideo\n\n[![](http://img.shields.io/badge/cs.CV-arXiv%3A2308.09592-B31B1B.svg)](https://arxiv.org/abs/2308.09592)\n[![](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-orange)](https://huggingface.co/spaces/Reself/StableVideo)\n\n\u003e **StableVideo: Text-driven Consistency-aware Diffusion Video Editing**  \n\u003e Wenhao Chai, Xun Guo✉️, Gaoang Wang, Yan Lu  \n\u003e _ICCV 2023_\n\nhttps://github.com/rese1f/StableVideo/assets/58205475/558555f1-711c-46f0-85bc-9c229ff1f511\n\nhttps://github.com/rese1f/StableVideo/assets/58205475/c152d0fa-16d3-4528-b9c2-ad2ec53944b9\n\nhttps://github.com/rese1f/StableVideo/assets/58205475/0edbefdd-9b5f-4868-842c-9bf3156a54d3\n\n\n## VRAM requirement\n|Setting|VRAM (MiB)|\n|---|---|\n|float32|29145|\n|amp|23005|\n|amp + cpu|17639|\n|amp + cpu + xformers|14185|\n\n- cpu: use the CPU cache (argument: `save_memory`)\n\nMeasured under the default settings (*e.g.* resolution) in `app.py`.\n\n## Installation\n```\ngit clone 
https://github.com/rese1f/StableVideo.git\ncd StableVideo\nconda create -n stablevideo python=3.11\nconda activate stablevideo\npip install -r requirements.txt\n(optional) pip install xformers\n```\n\n(Optional) We also provide a CPU-only version as a [Hugging Face demo](https://huggingface.co/spaces/Reself/StableVideo).\n```\ngit lfs install\ngit clone https://huggingface.co/spaces/Reself/StableVideo\npip install -r requirements.txt\n```\n\n## Download Pretrained Model\n\nAll models and detectors can be downloaded from the ControlNet Hugging Face page: [Download Link](https://huggingface.co/lllyasviel/ControlNet).\n\n\n## Download example videos\nDownload the example atlases for car-turn, boat, libby, blackswan, bear, bicycle_tali, giraffe, kite-surf, lucia and motorbike at [Download Link](https://www.dropbox.com/s/oiyhbiqdws2p6r1/nla_share.zip?dl=0), shared by the [Text2LIVE](https://github.com/omerbt/Text2LIVE) authors.\n\nYou can also train atlases on your own video following [NLA](https://github.com/ykasten/layered-neural-atlases).\n\nUnzipping creates a `data` folder, giving the following layout:\n```\nStableVideo\n├── ...\n├── ckpt\n│   ├── cldm_v15.yaml\n│   ├── dpt_hybrid-midas-501f0c75.pt\n│   ├── control_sd15_canny.pth\n│   └── control_sd15_depth.pth\n├── data\n│   ├── car-turn\n│   │   ├── checkpoint # NLA models are stored here\n│   │   ├── car-turn # contains video frames\n│   │   └── ...\n│   ├── blackswan\n│   └── ...\n└── ...\n```\n\n## Run and Play!\nRun the following command to start.\n```\npython app.py\n```\nThe resulting `.mp4` video and keyframe will be stored in the directory `./log` after clicking the `render` button.\n\nYou can also edit the mask region for the foreground atlas as follows. There is currently a possible bug in Gradio: please carefully check that the `editable output foreground atlas block` looks the same as the one above. 
If not, restart the program.\n\n\u003cimg width=\"916\" alt=\"\" src=\"https://github.com/rese1f/StableVideo/assets/58205475/ec8dd9f0-84fb-43ca-baaa-fb6c58da0d77\"\u003e\n\n\n## Citation\nIf our work is useful for your research, please consider citing it as below. Many thanks :)\n```\n@article{chai2023stablevideo,\n  title={StableVideo: Text-driven Consistency-aware Diffusion Video Editing},\n  author={Chai, Wenhao and Guo, Xun and Wang, Gaoang and Lu, Yan},\n  journal={arXiv preprint arXiv:2308.09592},\n  year={2023}\n}\n```\n\n## Acknowledgement\n\nThis implementation is built partly on [Text2LIVE](https://github.com/omerbt/Text2LIVE) and [ControlNet](https://github.com/lllyasviel/ControlNet).\n","funding_links":[],"categories":["Video Editing","\u003cspan id=\"video\"\u003eVideo\u003c/span\u003e"],"sub_categories":["\u003cspan id=\"tool\"\u003eLLM (LLM \u0026 Tool)\u003c/span\u003e"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frese1f%2Fstablevideo","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Frese1f%2Fstablevideo","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frese1f%2Fstablevideo/lists"}