{"id":21990663,"url":"https://github.com/wz0919/DreamRunner","last_synced_at":"2025-07-23T00:31:42.378Z","repository":{"id":264726261,"uuid":"894197827","full_name":"wz0919/DreamRunner","owner":"wz0919","description":"Official implementation of DreamRunner: Fine-Grained Storytelling Video Generation with Retrieval-Augmented Motion Adaptation","archived":false,"fork":false,"pushed_at":"2025-04-04T08:09:38.000Z","size":18042,"stargazers_count":66,"open_issues_count":2,"forks_count":7,"subscribers_count":3,"default_branch":"main","last_synced_at":"2025-04-04T09:24:13.273Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/wz0919.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-11-25T23:38:12.000Z","updated_at":"2025-04-04T08:09:43.000Z","dependencies_parsed_at":"2025-04-04T09:32:56.505Z","dependency_job_id":null,"html_url":"https://github.com/wz0919/DreamRunner","commit_stats":null,"previous_names":["wz0919/dreamrunner"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/wz0919/DreamRunner","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wz0919%2FDreamRunner","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wz0919%2FDreamRunner/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wz0919%2FDreamRunner/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wz0919%2FDreamRunner/manifests","owner_url":"https://repos.ecosyste.
ms/api/v1/hosts/GitHub/owners/wz0919","download_url":"https://codeload.github.com/wz0919/DreamRunner/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wz0919%2FDreamRunner/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":266596689,"owners_count":23953891,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-07-22T02:00:09.085Z","response_time":66,"last_error":null,"robots_txt_status":null,"robots_txt_updated_at":null,"robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-29T20:01:19.262Z","updated_at":"2025-07-23T00:31:37.333Z","avatar_url":"https://github.com/wz0919.png","language":"Python","readme":"# DreamRunner: Fine-Grained Storytelling Video Generation with Retrieval-Augmented Motion Adaptation\n\n[![Project Website](https://img.shields.io/badge/Project-Website-blue)](https://dreamrunner-story2video.github.io)  [![arXiv](https://img.shields.io/badge/arXiv-2411.16657-b31b1b.svg)](https://arxiv.org/pdf/2411.16657)\n\n#### [Zun Wang](https://zunwang1.github.io/), [Jialu Li](https://jialuli-luka.github.io/), [Han Lin](https://hl-hanlin.github.io/), [Jaehong Yoon](https://jaehong31.github.io), [Mohit Bansal](https://www.cs.unc.edu/~mbansal/)\n\n\u003cbr\u003e\n\u003cimg width=\"950\" src=\"files/teaser.gif\"/\u003e\n\u003cbr\u003e\n\n\n#### Code coming soon! 
Expected before December 4th, 2024.\n\n## ToDos\n- [x] Release the inference code on T2V-CompBench.\n- [ ] Release the code for retrieving videos and training character and motion LoRAs.\n- [ ] Release the inference code for storytelling video generation.\n\n## Setup\n\n### Environment Setup\n```shell\nconda create -n dreamrunner python=3.10\nconda activate dreamrunner\npip install -r requirements.txt\n```\n\n### Download Models\nDreamRunner is implemented on top of CogVideoX-2B. You can download it [here](https://huggingface.co/THUDM/CogVideoX-2b) and place it under `pretrained_models/CogVideoX-2b`.\n\n## Running the Code\n\n### T2V-CompBench\n\n#### Inference\nWe provide the plans we used for T2V-CompBench in `MotionDirector_SR3AI/t2v-combench/plan`.\nYou can specify the GPUs you want to use in `MotionDirector_SR3AI/t2v-combench-2b.sh` for parallel inference.\nThen directly infer 600 videos over the 6 dimensions of T2V-CompBench with the following script:\n```\ncd MotionDirector_SR3AI\nbash run_bench_2b.sh\n```\nThe generated videos will be saved at `MotionDirector_SR3AI/T2V-CompBench`.\n\n#### Evaluation\nPlease follow [T2V-CompBench](https://github.com/KaiyueSun98/T2V-CompBench) for evaluating the generated videos.\n\n### Storytelling Video Generation\n#### Coming soon!\n\n# Citation\n\nIf you find our project useful in your research, please cite the following paper:\n\n```bibtex\n@article{zun2024dreamrunner,\n    author  = {Zun Wang and Jialu Li and Han Lin and Jaehong Yoon and Mohit Bansal},\n    title   = {DreamRunner: Fine-Grained Storytelling Video Generation with Retrieval-Augmented Motion Adaptation},\n    journal = {arXiv preprint arXiv:2411.16657},\n    year    = {2024},\n    url     = {https://arxiv.org/abs/2411.16657}\n}\n```\n","funding_links":[],"categories":["Personalized 
Restoration"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwz0919%2FDreamRunner","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fwz0919%2FDreamRunner","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwz0919%2FDreamRunner/lists"}