{"id":13488894,"url":"https://github.com/open-mmlab/PIA","last_synced_at":"2025-03-28T02:31:19.663Z","repository":{"id":213671670,"uuid":"734171107","full_name":"open-mmlab/PIA","owner":"open-mmlab","description":"[CVPR 2024] PIA, your Personalized Image Animator. Animate your images by text prompt, combining with DreamBooth to achieve stunning videos.","archived":false,"fork":false,"pushed_at":"2024-08-05T09:11:48.000Z","size":82829,"stargazers_count":950,"open_issues_count":7,"forks_count":76,"subscribers_count":24,"default_branch":"main","last_synced_at":"2025-03-26T04:05:46.269Z","etag":null,"topics":["aigc","animation","diffusion-models","image-to-video","image-to-video-generation","personalized-generation","stable-diffusion"],"latest_commit_sha":null,"homepage":"https://pi-animator.github.io/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/open-mmlab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-12-21T03:29:34.000Z","updated_at":"2025-03-25T07:19:33.000Z","dependencies_parsed_at":"2024-10-31T01:40:51.123Z","dependency_job_id":null,"html_url":"https://github.com/open-mmlab/PIA","commit_stats":null,"previous_names":["open-mmlab/pia"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2FPIA","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2FPIA/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/op
en-mmlab%2FPIA/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2FPIA/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/open-mmlab","download_url":"https://codeload.github.com/open-mmlab/PIA/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245957680,"owners_count":20700316,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["aigc","animation","diffusion-models","image-to-video","image-to-video-generation","personalized-generation","stable-diffusion"],"created_at":"2024-07-31T18:01:23.694Z","updated_at":"2025-03-28T02:31:19.646Z","avatar_url":"https://github.com/open-mmlab.png","language":"Python","readme":"# CVPR 2024 | PIA：Personalized Image Animator\n\n[**PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models**](https://arxiv.org/abs/2312.13964)\n\n[Yiming Zhang*](https://github.com/ymzhang0319), [Zhening Xing*](https://github.com/LeoXing1996/), [Yanhong Zeng†](https://zengyh1900.github.io/), [Youqing Fang](https://github.com/FangYouqing), [Kai Chen†](https://chenkai.site/)\n\n(*equal contribution, †corresponding Author)\n\n\n[![arXiv](https://img.shields.io/badge/arXiv-2312.13964-b31b1b.svg)](https://arxiv.org/abs/2312.13964)\n[![Project Page](https://img.shields.io/badge/PIA-Website-green)](https://pi-animator.github.io)\n[![Open in OpenXLab](https://cdn-static.openxlab.org.cn/app-center/openxlab_app.svg)](https://openxlab.org.cn/apps/detail/zhangyiming/PiaPia)\n[![Third Party 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/PIA-colab/blob/main/PIA_colab.ipynb)\n[![HuggingFace Model](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue)](https://huggingface.co/Leoxing/PIA)\n\u003ca target=\"_blank\" href=\"https://huggingface.co/spaces/Leoxing/PIA\"\u003e\n  \u003cimg src=\"https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg\" alt=\"Open in HuggingFace\"/\u003e\n\u003c/a\u003e\n[![Replicate](https://replicate.com/cjwbw/pia/badge)](https://replicate.com/cjwbw/pia)\n\n\nPIA is a personalized image animation method that generates videos with **high motion controllability** and **strong text and image alignment**.\n
\nIf you find our project helpful, please give it a star :star: or [cite](#bibtex) it; we would be very grateful :sparkling_heart:.\n\n\u003cimg src=\"__assets__/image_animation/teaser/teaser.gif\"\u003e\n\n\n## What's New\n- [x] `2024/01/03` [Replicate Demo \u0026 API](https://replicate.com/cjwbw/pia) support!\n- [x] `2024/01/03` [Colab](https://github.com/camenduru/PIA-colab) support from [camenduru](https://github.com/camenduru)!\n- [x] `2023/12/28` Support `scaled_dot_product_attention` for 1024x1024 images with just 16GB of GPU memory.\n- [x] `2023/12/25` HuggingFace demo is available now! [🤗 Hub](https://huggingface.co/spaces/Leoxing/PIA/)\n
- [x] `2023/12/22` Release the demo of PIA on [OpenXLab](https://openxlab.org.cn/apps/detail/zhangyiming/PiaPia) and checkpoints on [Google Drive](https://drive.google.com/file/d/1RL3Fp0Q6pMD8PbGPULYUnvjqyRQXGHwN/view?usp=drive_link) or [![Open in OpenXLab](https://cdn-static.openxlab.org.cn/header/openxlab_models.svg)](https://openxlab.org.cn/models/detail/zhangyiming/PIA)\n\n## Setup\n### Prepare Environment\n\nUse the following command to install a conda environment for PIA from scratch:\n\n```\nconda env create -f pia.yml\nconda activate pia\n```\nTo install on top of an existing environment instead, use `environment-pt2.yaml` for PyTorch==2.0.0, or `environment.yaml` for a lower PyTorch version (e.g., 1.13.1):\n\n```\nconda env create -f environment.yaml\nconda activate pia\n```\n\nWe strongly recommend PyTorch==2.0.0, which supports `scaled_dot_product_attention` for memory-efficient image animation.\n\n### Download checkpoints\n\u003cli\u003eDownload Stable Diffusion v1-5\u003c/li\u003e\n\n```\nconda install git-lfs\ngit lfs install\ngit clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/\n```\n\n\u003cli\u003eDownload PIA\u003c/li\u003e\n\n```\ngit clone https://huggingface.co/Leoxing/PIA models/PIA/\n```\n\n\u003cli\u003eDownload Personalized Models\u003c/li\u003e\n\n```\nbash download_bashscripts/1-RealisticVision.sh\nbash download_bashscripts/2-RcnzCartoon.sh\nbash download_bashscripts/3-MajicMix.sh\n```\n\n\nYou can also download *pia.ckpt* manually via the links on [Google Drive](https://drive.google.com/file/d/1RL3Fp0Q6pMD8PbGPULYUnvjqyRQXGHwN/view?usp=drive_link)\nor [HuggingFace](https://huggingface.co/Leoxing/PIA).\n\nArrange the checkpoints as follows:\n```\n└── models\n    ├── DreamBooth_LoRA\n    │   ├── ...\n    ├── PIA\n    │   ├── pia.ckpt\n    └── StableDiffusion\n        ├── vae\n        ├── unet\n        └── ...\n```\n
\n## Inference\n### Image Animation\nImage-to-video results can be obtained by running:\n```\npython inference.py --config=example/config/lighthouse.yaml\npython inference.py --config=example/config/harry.yaml\npython inference.py --config=example/config/majic_girl.yaml\n```\nAfter running the commands above, you can find the results in `example/result`:\n\u003ctable class=\"center\"\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eInput Image\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003elightning, lighthouse\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003esun rising, lighthouse\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003efireworks, lighthouse\u003c/p\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cimg src=\"example/img/lighthouse.jpg\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/real/1.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/real/2.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/real/3.gif\"\u003e
\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eInput Image\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1boy smiling\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1boy playing the magic fire\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1boy is waving hands\u003c/p\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cimg src=\"example/img/harry.png\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/rcnz/1.gif\"\u003e\u003c/td\u003e\n
\u003ctd\u003e\u003cimg src=\"__assets__/image_animation/rcnz/2.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/rcnz/3.gif\"\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eInput Image\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1girl is smiling\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1girl is crying\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1girl, snowing \u003c/p\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cimg src=\"example/img/majic_girl.jpg\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/majic/1.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/majic/2.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/majic/3.gif\"\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n\n\u003c/table\u003e\n\n\u003c!-- More results:\n\n\u003ctable class=\"center\"\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eInput Image\u003c/p\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n    \u003c/tr\u003e\n\u003c/table\u003e --\u003e\n\n### Motion Magnitude\nYou can control the motion magnitude through the parameter **magnitude**:\n```sh\npython inference.py --config=example/config/xxx.yaml --magnitude=0 # Small Motion\npython inference.py --config=example/config/xxx.yaml --magnitude=1 # Moderate Motion\npython inference.py --config=example/config/xxx.yaml --magnitude=2 # Large Motion\n```\nExamples:\n\n```sh\npython inference.py --config=example/config/labrador.yaml\npython inference.py --config=example/config/bear.yaml\npython inference.py --config=example/config/genshin.yaml\n```\n\n\u003ctable 
class=\"center\"\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eInput Image\u003cbr\u003e\u0026 Prompt\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eSmall Motion\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eModerate Motion\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eLarge Motion\u003c/p\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n    \u003ctd\u003e\u003cimg src=\"example/img/labrador.png\" style=\"width: 220px\"\u003ea golden labrador is running\u003c/td\u003e\n     \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/magnitude/labrador/1.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/magnitude/labrador/2.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/magnitude/labrador/3.gif\"\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n    \u003ctd\u003e\u003cimg src=\"example/img/bear.jpg\" style=\"width: 220px\"\u003e1bear is walking, ...\u003c/td\u003e\n     \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/magnitude/bear/1.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/magnitude/bear/2.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/magnitude/bear/3.gif\"\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n    \u003ctd\u003e\u003cimg src=\"example/img/genshin.jpg\" style=\"width: 220px\"\u003echerry blossom, ...\u003c/td\u003e\n     \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/magnitude/genshin/1.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/magnitude/genshin/2.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/magnitude/genshin/3.gif\"\u003e\u003c/td\u003e\n    
\u003c/tr\u003e\n\u003c/table\u003e\n\n### Style Transfer\nTo achieve style transfer, run the command below (*please don't forget to set the base model in xxx.yaml*).\n\nExamples:\n\n```sh\npython inference.py --config example/config/concert.yaml --style_transfer\npython inference.py --config example/config/anya.yaml --style_transfer\n```\n\u003ctable class=\"center\"\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eInput Image\u003cbr\u003e \u0026 Base Model\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1man is smiling\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1man is crying\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1man is singing\u003c/p\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd style=\"text-align: center\"\u003e\u003cimg src=\"example/img/concert.png\" style=\"width:220px\"\u003eRealistic Vision\u003c/td\u003e
\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/style_transfer/concert/1.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/style_transfer/concert/2.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/style_transfer/concert/3.gif\"\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd style=\"text-align: center\"\u003e\u003cimg src=\"example/img/concert.png\" style=\"width:220px\"\u003eRCNZ Cartoon 3d\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/style_transfer/concert/4.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/style_transfer/concert/5.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/style_transfer/concert/6.gif\"\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n
\u003ctd\u003e\u003cp style=\"text-align: center\"\u003e\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1girl smiling\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1girl open mouth\u003c/p\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cp style=\"text-align: center\"\u003e1girl is crying, pout\u003c/p\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd style=\"text-align: center\"\u003e\u003cimg src=\"example/img/anya.jpg\" style=\"width:220px\"\u003eRCNZ Cartoon 3d\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/style_transfer/anya/1.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/style_transfer/anya/2.gif\"\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/style_transfer/anya/3.gif\"\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n\u003c/table\u003e\n
\n### Loop Video\n\nYou can generate a looping video by using the `--loop` flag:\n\n```sh\npython inference.py --config=example/config/xxx.yaml --loop\n```\n\nExamples:\n```sh\npython inference.py --config=example/config/lighthouse.yaml --loop\npython inference.py --config=example/config/labrador.yaml --loop\n```\n\n\u003ctable\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eInput Image\u003c/p\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cp style=\"text-align: center\"\u003elightning, lighthouse\u003c/p\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cp style=\"text-align: center\"\u003esun rising, lighthouse\u003c/p\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cp style=\"text-align: center\"\u003efireworks, lighthouse\u003c/p\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd style=\"text-align: center\"\u003e\u003cimg src=\"example/img/lighthouse.jpg\" style=\"width:auto\"\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/loop/lighthouse/1.gif\"\u003e
\u003c/td\u003e\n    \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/loop/lighthouse/2.gif\"\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/loop/lighthouse/3.gif\"\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003cp style=\"text-align: center\"\u003eInput Image\u003c/p\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cp style=\"text-align: center\"\u003elabrador jumping\u003c/p\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cp style=\"text-align: center\"\u003elabrador walking\u003c/p\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cp style=\"text-align: center\"\u003elabrador running\u003c/p\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd style=\"text-align: center\"\u003e\u003cimg src=\"example/img/labrador.png\" style=\"width:auto\"\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/loop/labrador/1.gif\"\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/loop/labrador/2.gif\"\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cimg src=\"__assets__/image_animation/loop/labrador/3.gif\"\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n\u003c/table\u003e\n
\n\n## Training\n\nWe provide a [training script](train.py) for PIA. It borrows heavily from [AnimateDiff](https://github.com/guoyww/AnimateDiff/tree/main), so please prepare the dataset and configuration files according to the [guideline](https://github.com/guoyww/AnimateDiff/blob/main/__assets__/docs/animatediff.md#steps-for-training).\n\nAfter preparation, you can train the model by running the following command using `torchrun`:\n\n```shell\ntorchrun --nnodes=1 --nproc_per_node=1 train.py --config example/config/train.yaml\n```\n\nor with Slurm:\n```shell\nsrun --quotatype=reserved --job-name=pia --gres=gpu:8 --ntasks-per-node=8 --ntasks=8  --cpus-per-task=4 --kill-on-bad-exit=1 python train.py --config example/config/train.yaml\n```\n
\n\n## AnimateBench\nWe have open-sourced AnimateBench on [HuggingFace](https://huggingface.co/datasets/ymzhang319/AnimateBench), which includes the images, prompts, and configs used to evaluate PIA and other image animation methods.\n\n\n## BibTeX\n```\n@inproceedings{zhang2024pia,\n  title={Pia: Your personalized image animator via plug-and-play modules in text-to-image models},\n  author={Zhang, Yiming and Xing, Zhening and Zeng, Yanhong and Fang, Youqing and Chen, Kai},\n  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},\n  pages={7747--7756},\n  year={2024}\n}\n```\n
\n\n\n\n## Contact Us\n**Yiming Zhang**: zhangyiming@pjlab.org.cn\n\n**Zhening Xing**: xingzhening@pjlab.org.cn\n\n**Yanhong Zeng**: zengyh1900@gmail.com\n\n## Acknowledgements\nThe code is built upon [AnimateDiff](https://github.com/guoyww/AnimateDiff), [Tune-a-Video](https://github.com/showlab/Tune-A-Video) and [PySceneDetect](https://github.com/Breakthrough/PySceneDetect).\n\nYou may also want to try other projects from our team:\n\u003ca target=\"_blank\" href=\"https://github.com/open-mmlab/mmagic\"\u003e\n  \u003cimg src=\"https://github.com/open-mmlab/mmagic/assets/28132635/15aab910-f5c4-4b76-af9d-fe8eead1d930\" height=20 alt=\"MMagic\"/\u003e\n\u003c/a\u003e\n",
"funding_links":[],"categories":["Video Generation","HarmonyOS"],"sub_categories":["Windows Manager"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopen-mmlab%2FPIA","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopen-mmlab%2FPIA","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopen-mmlab%2FPIA/lists"}