# Awesome-Controllable-Video-Diffusion
[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/hee9joon/Awesome-Diffusion-Models)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)

Awesome Controllable Video Generation with Diffusion Models.

## Table of Contents
- [Pose Control](#pose-control)
- [Audio Control](#audio-control)
- [Universal Control](#universal-control)
- [Camera Control](#camera-control)
- [Trajectory Control](#trajectory-control)
- [Subject Control](#subject-control)
- [Area Control](#area-control)
- [Video Control](#video-control)
- [Brain Control](#brain-control)
- [ID Control](#id-control)

## Pose Control

EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation

[📄 Paper](https://arxiv.org/abs/2411.10061) | [🌐 Project Page](https://github.com/antgroup/echomimic_v2) | [💻 Code](https://github.com/antgroup/echomimic_v2)

MikuDance: Animating Character Art with Mixed Motion Dynamics

[📄 Paper](https://arxiv.org/abs/2411.08656) | [🌐 Project Page](https://kebii.github.io/MikuDance/)

Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control

[📄 Paper](https://arxiv.org/abs/2501.03847) | [🌐 Project Page](https://igl-hkust.github.io/das/) | [💻 Code](https://github.com/IGL-HKUST/DiffusionAsShader)

TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio-Motion Embedding and Diffusion Interpolation

[📄 Paper](https://arxiv.org/abs/2410.04221) | [🌐 Project Page](https://pantomatrix.github.io/TANGO/) | [💻 Code](https://github.com/CyberAgentAILab/TANGO)

DynamicPose: A robust image-to-video framework for portrait animation driven by pose sequences

[💻 Code](https://github.com/dynamic-X-LAB/DynamicPose)

Alignment is All You Need: A Training-free Augmentation Strategy for Pose-guided Video Generation

[📄 Paper](https://arxiv.org/abs/2408.16506)

Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos

[📄 Paper](https://arxiv.org/abs/2304.01186) | [🌐 Project Page](https://follow-your-pose.github.io/) | [💻 Code](https://github.com/mayuelala/FollowYourPose)

Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation

[📄 Paper](https://arxiv.org/pdf/2311.17117.pdf) | [🌐 Project Page](https://humanaigc.github.io/animate-anyone/)

DreaMoving: A Human Video Generation Framework based on Diffusion Models

[📄 Paper](https://arxiv.org/abs/2312.05107) | [🌐 Project Page](https://dreamoving.github.io/dreamoving/) | [💻 Code](https://github.com/dreamoving/dreamoving-project)

MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion

[📄 Paper](https://arxiv.org/abs/2311.12052) | [🌐 Project Page](https://boese0601.github.io/magicdance/) | [💻 Code](https://github.com/Boese0601/MagicDance)

MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model

[📄 Paper](https://arxiv.org/abs/2311.16498) | [🌐 Project Page](https://showlab.github.io/magicanimate/) | [💻 Code](https://github.com/magic-research/magic-animate)

Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance

[📄 Paper](https://arxiv.org/pdf/2403.14781) | [🌐 Project Page](https://fudan-generative-vision.github.io/champ/#/) | [💻 Code](https://github.com/fudan-generative-vision/champ)

Magic-Me: Identity-Specific Video Customized Diffusion

[📄 Paper](https://arxiv.org/abs/2402.09368) | [🌐 Project Page](https://magic-me-webpage.github.io/) | [💻 Code](https://github.com/Zhen-Dong/Magic-Me)

DisCo: Disentangled Control for Referring Human Dance Generation in Real World

[📄 Paper](https://arxiv.org/abs/2307.00040) | [🌐 Project Page](https://disco-dance.github.io/) | [💻 Code](https://github.com/Wangt-CN/DisCo)

Human4DiT: Free-view Human Video Generation with 4D Diffusion Transformer

[📄 Paper](https://arxiv.org/abs/2405.17405) | [🌐 Project Page](https://human4dit.github.io/)

MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance

[📄 Paper](https://arxiv.org/abs/2406.19680) | [🌐 Project Page](https://tencent.github.io/MimicMotion/) | [💻 Code](https://github.com/tencent/MimicMotion)

Follow-Your-Pose v2: Multiple-Condition Guided Character Image Animation for Stable Pose Control

[📄 Paper](https://arxiv.org/abs/2406.03035) | [🌐 Project Page](https://follow-your-pose-v2.github.io/)

HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation

[📄 Paper](https://arxiv.org/abs/2407.17438) | [🌐 Project Page](https://humanvid.github.io/) | [💻 Code](https://github.com/zhenzhiwang/HumanVid)

MusePose: A Pose-Driven Image-to-Video Framework for Virtual Human Generation

[💻 Code](https://github.com/TMElyralab/MusePose)

MDM: Human Motion Diffusion Model

[📄 Paper](https://arxiv.org/abs/2209.14916) | [🌐 Project Page](https://guytevet.github.io/mdm-page/) | [💻 Code](https://github.com/GuyTevet/motion-diffusion-model)
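
A practical note: almost every method in this section consumes the same conditioning signal, a per-frame skeleton map extracted from a driving video. The sketch below is an illustration of that preprocessing step, not any paper's exact pipeline; it assumes the `controlnet_aux` and `imageio` packages, and the file names are placeholders.

```python
# Hypothetical preprocessing sketch: turn a driving video into per-frame
# OpenPose skeleton maps, the conditioning signal most pose-guided models share.
# Assumes controlnet_aux and imageio (with an ffmpeg/pyav backend) are installed.
import imageio.v3 as iio
from controlnet_aux import OpenposeDetector

detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

frames = iio.imread("driving_video.mp4")           # (T, H, W, 3) uint8 frames
pose_maps = [detector(frame) for frame in frames]  # one skeleton image per frame

for i, pose in enumerate(pose_maps):
    pose.save(f"pose_{i:04d}.png")                 # feed these to the animator
```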

## Audio Control

MEMO: Memory-Guided Diffusion for Expressive Talking Video Generation

[📄 Paper](https://arxiv.org/abs/2412.04448) | [🌐 Project Page](https://memoavatar.github.io/) | [💻 Code](https://github.com/memoavatar/memo)

Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation

[📄 Paper](https://arxiv.org/abs/2410.07718) | [🌐 Project Page](https://fudan-generative-vision.github.io/hallo2/#/) | [💻 Code](https://github.com/fudan-generative-vision/hallo2)

Co-Speech Gesture Video Generation via Motion-Decoupled Diffusion Model

[📄 Paper](https://arxiv.org/pdf/2404.01862) | [🌐 Project Page](https://thuhcsi.github.io/S2G-MDDiffusion/) | [💻 Code](https://github.com/thuhcsi/S2G-MDDiffusion)

Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptation

[📄 Paper](https://arxiv.org/abs/2309.16429) | [🌐 Project Page](https://pages.cs.huji.ac.il/adiyoss-lab/TempoTokens/) | [💻 Code](https://github.com/guyyariv/TempoTokens)

MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation

[📄 Paper](https://arxiv.org/abs/2212.09478) | [💻 Code](https://github.com/researchmm/MM-Diffusion)

Speech Driven Video Editing via an Audio-Conditioned Diffusion Model

[📄 Paper](https://arxiv.org/abs/2301.04474) | [🌐 Project Page](https://danbigioi.github.io/DiffusionVideoEditing/) | [💻 Code](https://github.com/DanBigioi/DiffusionVideoEditing)

Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation

[📄 Paper](https://arxiv.org/pdf/2406.08801) | [🌐 Project Page](https://fudan-generative-vision.github.io/hallo/#/) | [💻 Code](https://github.com/fudan-generative-vision/hallo)

Listen, denoise, action! Audio-driven motion synthesis with diffusion models

[📄 Paper](https://arxiv.org/abs/2211.09707) | [🌐 Project Page](https://www.speech.kth.se/research/listen-denoise-action/) | [💻 Code](https://github.com/simonalexanderson/ListenDenoiseAction/)

CoDi: Any-to-Any Generation via Composable Diffusion

[📄 Paper](http://arxiv.org/abs/2305.11846) | [🌐 Project Page](https://codi-gen.github.io/) | [💻 Code](https://github.com/microsoft/i-Code/tree/main/i-Code-V3)

Generative Disco: Text-to-Video Generation for Music Visualization

[📄 Paper](https://arxiv.org/abs/2304.08551)

AADiff: Audio-Aligned Video Synthesis with Text-to-Image Diffusion

[📄 Paper](https://arxiv.org/abs/2305.04001)

EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions

[📄 Paper](https://arxiv.org/abs/2402.17485) | [🌐 Project Page](https://humanaigc.github.io/emote-portrait-alive/) | [💻 Code](https://github.com/HumanAIGC/EMO)

Context-aware Talking Face Video Generation

[📄 Paper](https://arxiv.org/abs/2402.18092)
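
Most talking-head entries above condition on frame-aligned speech features rather than raw waveforms, with wav2vec 2.0 embeddings a common choice (e.g., in Hallo-style pipelines). Below is a minimal sketch assuming the `transformers` and `torchaudio` packages; the pooling-based alignment to the video frame rate is a simplified, hypothetical illustration.

```python
# Hedged sketch: extract speech features as audio conditioning for a
# talking-head diffusion model. Model choice and alignment are illustrative.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h").eval()

wav, sr = torchaudio.load("speech.wav")
wav = torchaudio.functional.resample(wav.mean(0), sr, 16_000)  # mono, 16 kHz

inputs = extractor(wav.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    feats = encoder(**inputs).last_hidden_state    # (1, T_audio, 768) at ~50 Hz

# Average-pool pairs of features to match a 25 fps video (hypothetical alignment).
per_frame = feats[0].unfold(0, 2, 2).mean(-1)      # (T_video, 768)
```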

## Universal Control

ControlNeXt: Powerful and Efficient Control for Image and Video Generation

[📄 Paper](https://arxiv.org/abs/2408.06070) | [🌐 Project Page](https://pbihao.github.io/projects/controlnext/index.html) | [💻 Code](https://github.com/dvlab-research/ControlNeXt)

Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models

[📄 Paper](https://arxiv.org/abs/2305.13840) | [🌐 Project Page](https://controlavideo.github.io/) | [💻 Code](https://github.com/Weifeng-Chen/control-a-video)

ControlVideo: Training-free Controllable Text-to-Video Generation

[📄 Paper](https://arxiv.org/abs/2305.13077) | [💻 Code](https://github.com/YBYBZhang/ControlVideo)

TrackGo: A Flexible and Efficient Method for Controllable Video Generation

[📄 Paper](https://arxiv.org/abs/2408.11475) | [🌐 Project Page](https://zhtjtcz.github.io/TrackGo-Page/)

VideoComposer: Compositional Video Synthesis with Motion Controllability

[📄 Paper](https://arxiv.org/abs/2306.02018) | [🌐 Project Page](https://videocomposer.github.io/) | [💻 Code](https://github.com/damo-vilab/videocomposer)

Make-Your-Video: Customized Video Generation Using Textual and Structural Guidance

[📄 Paper](https://arxiv.org/abs/2306.00943) | [🌐 Project Page](https://doubiiu.github.io/projects/Make-Your-Video/) | [💻 Code](https://github.com/VideoCrafter/Make-Your-Video)

UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified Attention Control

[📄 Paper](https://arxiv.org/pdf/2403.02332.pdf) | [🌐 Project Page](https://unified-attention-control.github.io/) | [💻 Code](https://github.com/XuweiyiChen/UniCtrl)

SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models

[📄 Paper](https://arxiv.org/abs/2311.16933) | [🌐 Project Page](https://guoyww.github.io/projects/SparseCtrl/) | [💻 Code](https://github.com/guoyww/AnimateDiff#202312-animatediff-v3-and-sparsectrl)

VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet

[📄 Paper](https://arxiv.org/abs/2307.14073) | [🌐 Project Page](https://vcg-aigc.github.io/) | [💻 Code](https://github.com/ZhihaoHu/VideoControlNet)

Cinemo: Consistent and Controllable Image Animation with Motion Diffusion Models

[📄 Paper](https://arxiv.org/abs/2407.15642) | [🌐 Project Page](https://maxin-cn.github.io/cinemo_project/) | [💻 Code](https://github.com/maxin-cn/Cinemo)
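
Many of the entries above inherit ControlNet's core trick: the control branch feeds into a frozen backbone through a convolution initialized to zero, so training starts from an identity mapping and the control signal blends in only as the weights move away from zero. A minimal PyTorch sketch of that "zero convolution", with illustrative shapes:

```python
# Minimal sketch of ControlNet-style zero-initialized residual injection.
import torch
import torch.nn as nn

class ZeroConvInjection(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.zero_conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.zero_conv.weight)  # output is exactly zero at init
        nn.init.zeros_(self.zero_conv.bias)

    def forward(self, backbone_feat, control_feat):
        # At step 0 this returns backbone_feat unchanged; the control signal
        # enters gradually as the zero conv learns nonzero weights.
        return backbone_feat + self.zero_conv(control_feat)

x = torch.randn(1, 320, 40, 64)  # hypothetical latent feature map
c = torch.randn(1, 320, 40, 64)  # features from the control encoder
assert torch.allclose(ZeroConvInjection(320)(x, c), x)
```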

## Camera Control

MotionMaster: Training-free Camera Motion Transfer For Video Generation

[📄 Paper](https://arxiv.org/pdf/2404.15789) | [🌐 Project Page](https://sjtuplayer.github.io/projects/MotionMaster/) | [💻 Code](https://github.com/sjtuplayer/MotionMaster)

CinePreGen: Camera Controllable Video Previsualization via Engine-powered Diffusion

[📄 Paper](https://arxiv.org/html/2408.17424v1)

CamViG: Camera Aware Image-to-Video Generation with Multimodal Transformers

[📄 Paper](https://arxiv.org/abs/2405.13195)

Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion

[📄 Paper](https://arxiv.org/abs/2402.03162) | [🌐 Project Page](https://direct-a-video.github.io/) | [💻 Code](https://github.com/ysy31415/direct_a_video)

MotionCtrl: A Unified and Flexible Motion Controller for Video Generation

[📄 Paper](https://arxiv.org/pdf/2312.03641.pdf) | [🌐 Project Page](https://wzhouxiff.github.io/projects/MotionCtrl/) | [💻 Code](https://github.com/TencentARC/MotionCtrl)

CameraCtrl: Enabling Camera Control for Text-to-Video Generation

[📄 Paper](https://arxiv.org/abs/2404.02101) | [🌐 Project Page](https://hehao13.github.io/projects-CameraCtrl/) | [💻 Code](https://github.com/hehao13/CameraCtrl)

VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control

[📄 Paper](https://arxiv.org/abs/2407.12781) | [🌐 Project Page](https://snap-research.github.io/vd3d/)

Controlling Space and Time with Diffusion Models

[📄 Paper](https://arxiv.org/pdf/2407.07860) | [🌐 Project Page](https://4d-diffusion.github.io/)

CamCo: Camera-Controllable 3D-Consistent Image-to-Video Generation

[📄 Paper](https://arxiv.org/abs/2406.02509) | [🌐 Project Page](https://ir1d.github.io/CamCo/)

Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control

[📄 Paper](https://arxiv.org/pdf/2405.17414) | [🌐 Project Page](https://collaborativevideodiffusion.github.io/)

HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation

[📄 Paper](https://arxiv.org/abs/2407.17438) | [🌐 Project Page](https://humanvid.github.io/) | [💻 Code](https://github.com/zhenzhiwang/HumanVid)

Training-free Camera Control for Video Generation

[📄 Paper](https://arxiv.org/pdf/2406.10126) | [🌐 Project Page](https://lifedecoder.github.io/CamTrol/)

Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text

[📄 Paper](https://arxiv.org/pdf/2406.17601) | [🌐 Project Page](https://imlixinyang.github.io/director3d-page/) | [💻 Code](https://github.com/imlixinyang/director3d)

MotionBooth: Motion-Aware Customized Text-to-Video Generation

[📄 Paper](http://arxiv.org/abs/2406.17758v1) | [💻 Code](https://github.com/jianzongwu/MotionBooth)

DiffDreamer: Towards Consistent Unsupervised Single-view Scene Extrapolation with Conditional Diffusion Models

[📄 Paper](https://primecai.github.io/static/pdfs/diffdreamer.pdf) | [🌐 Project Page](https://primecai.github.io/diffdreamer)
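
Several of these methods (CameraCtrl and CamCo among them) condition the video model on per-pixel Plücker embeddings of the camera rays, which encode each frame's pose as a dense map. A minimal NumPy sketch, assuming pinhole intrinsics `K` and a camera-to-world pose `(R, t)`; all values are illustrative:

```python
# Hedged sketch of the per-pixel Plücker ray embedding used by
# CameraCtrl-style methods: each pixel becomes (d, o x d) for its camera ray.
import numpy as np

def plucker_embedding(K, R, t, H, W):
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)  # (H, W, 3) homogeneous
    dirs = pix @ np.linalg.inv(K).T @ R.T             # ray directions, world frame
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    origin = np.broadcast_to(t, dirs.shape)           # camera center as ray origin
    moment = np.cross(origin, dirs)                   # o x d
    return np.concatenate([dirs, moment], axis=-1)    # (H, W, 6)

K = np.array([[500.0, 0, 128], [0, 500.0, 128], [0, 0, 1]])
emb = plucker_embedding(K, np.eye(3), np.zeros(3), 256, 256)
print(emb.shape)  # (256, 256, 6), one embedding per frame of the clip
```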

## Trajectory Control

FreeTraj: Tuning-Free Trajectory Control in Video Diffusion Models

[📄 Paper](https://arxiv.org/abs/2406.16863) | [🌐 Project Page](http://haonanqiu.com/projects/FreeTraj.html) | [💻 Code](https://github.com/arthur-qiu/FreeTraj)

TrailBlazer: Trajectory Control for Diffusion-Based Video Generation

[📄 Paper](http://arxiv.org/abs/2401.00896) | [🌐 Project Page](https://hohonu-vicml.github.io/Trailblazer.Page/) | [💻 Code](https://github.com/hohonu-vicml/Trailblazer)

DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory

[📄 Paper](https://arxiv.org/abs/2308.08089) | [🌐 Project Page](https://www.microsoft.com/en-us/research/project/dragnuwa/) | [💻 Code](https://github.com/ProjectNUWA/DragNUWA)

Tora: Trajectory-oriented Diffusion Transformer for Video Generation

[📄 Paper](https://arxiv.org/abs/2407.21705) | [🌐 Project Page](https://ali-videoai.github.io/tora_video/)

Controllable Longer Image Animation with Diffusion Models

[📄 Paper](https://arxiv.org/abs/2405.17306) | [🌐 Project Page](https://wangqiang9.github.io/Controllable.github.io/)

MotionCtrl: A Unified and Flexible Motion Controller for Video Generation

[📄 Paper](https://arxiv.org/pdf/2312.03641.pdf) | [🌐 Project Page](https://wzhouxiff.github.io/projects/MotionCtrl/) | [💻 Code](https://github.com/TencentARC/MotionCtrl)

MotionBooth: Motion-Aware Customized Text-to-Video Generation

[📄 Paper](http://arxiv.org/abs/2406.17758v1) | [💻 Code](https://github.com/jianzongwu/MotionBooth)

Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level Dynamics

[📄 Paper](https://arxiv.org/pdf/2408.04631) | [🌐 Project Page](https://vgg-puppetmaster.github.io/) | [💻 Code](https://github.com/RuiningLi/puppet-master)

Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion

[📄 Paper](https://arxiv.org/abs/2402.03162) | [🌐 Project Page](https://direct-a-video.github.io/) | [💻 Code](https://github.com/ysy31415/direct_a_video)

Generative Image Dynamics

[📄 Paper](https://arxiv.org/abs/2309.07906) | [🌐 Project Page](https://generative-dynamics.github.io/)

Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based Video Generation

[📄 Paper](https://arxiv.org/abs/2401.10150)

Video Diffusion Models are Training-free Motion Interpreter and Controller

[📄 Paper](https://arxiv.org/abs/2405.14864) | [🌐 Project Page](https://xizaoqu.github.io/moft/)
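
A recurring design in this section is turning a sparse user-drawn track into a dense per-frame conditioning map (DragNUWA, for instance, rasterizes trajectories into sparse optical flow). The sketch below uses per-frame Gaussian heatmaps as one simple variant of that encoding; the sizes and the heatmap choice are illustrative assumptions:

```python
# Hedged sketch: rasterize a user-drawn trajectory into per-frame Gaussian
# heatmaps that can be stacked with the latent as extra conditioning channels.
import numpy as np

def trajectory_heatmaps(points, T=16, H=64, W=64, sigma=2.0):
    """points: (T, 2) array of (x, y) positions, one per frame."""
    ys, xs = np.mgrid[0:H, 0:W]
    maps = np.zeros((T, H, W), dtype=np.float32)
    for f, (x, y) in enumerate(points):
        maps[f] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma**2))
    return maps

# A straight left-to-right drag across 16 frames.
track = np.stack([np.linspace(8, 56, 16), np.full(16, 32)], axis=-1)
print(trajectory_heatmaps(track).shape)  # (16, 64, 64)
```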

## Subject Control

Tunnel Try-on: Excavating Spatial-temporal Tunnels for High-quality Virtual Try-on in Videos

[📄 Paper](https://arxiv.org/pdf/2404.17571)

Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion

[📄 Paper](https://arxiv.org/abs/2402.03162) | [🌐 Project Page](https://direct-a-video.github.io/) | [💻 Code](https://github.com/ysy31415/direct_a_video)

ActAnywhere: Subject-Aware Video Background Generation

[📄 Paper](https://arxiv.org/abs/2401.10822) | [🌐 Project Page](https://actanywhere.github.io/)

MotionBooth: Motion-Aware Customized Text-to-Video Generation

[📄 Paper](http://arxiv.org/abs/2406.17758v1) | [💻 Code](https://github.com/jianzongwu/MotionBooth)

Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation

[📄 Paper](https://arxiv.org/abs/2307.06940) | [💻 Code](https://github.com/AILab-CVC/Animate-A-Story)

One-Shot Learning Meets Depth Diffusion in Multi-Object Videos

[📄 Paper](https://arxiv.org/abs/2408.16704)

## Area Control

Boximator: Generating Rich and Controllable Motions for Video Synthesis

[📄 Paper](https://arxiv.org/abs/2402.01566) | [🌐 Project Page](https://boximator.github.io/)

Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts

[📄 Paper](https://arxiv.org/abs/2403.08268) | [🌐 Project Page](https://follow-your-click.github.io/) | [💻 Code](https://github.com/mayuelala/FollowYourClick)

AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance

[📄 Paper](https://arxiv.org/pdf/2311.12886.pdf) | [🌐 Project Page](https://animationai.github.io/AnimateAnything/) | [💻 Code](https://github.com/alibaba/animate-anything)

Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling

[📄 Paper](https://arxiv.org/abs/2401.15977) | [🌐 Project Page](https://xiaoyushi97.github.io/Motion-I2V/)

Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion

[📄 Paper](https://arxiv.org/abs/2407.13759) | [🌐 Project Page](https://boyangdeng.com/streetscapes/)
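
Region-level methods such as Boximator and Follow-Your-Click ultimately condition on per-frame spatial masks. Here is a hedged sketch of the box-to-mask rasterization such a pipeline might use; the box format and sizes are assumptions, not any paper's exact interface:

```python
# Hedged sketch: turn per-frame bounding boxes into a binary mask video for
# region-controlled generation. Format is an illustrative assumption.
import numpy as np

def boxes_to_masks(boxes, T=16, H=64, W=64):
    """boxes: (T, 4) array of (x0, y0, x1, y1) in pixel coordinates."""
    masks = np.zeros((T, H, W), dtype=np.float32)
    for f, (x0, y0, x1, y1) in enumerate(boxes.astype(int)):
        masks[f, y0:y1, x0:x1] = 1.0
    return masks

# A box sliding rightward: interpolate its corners linearly across frames.
start, end = np.array([4, 20, 24, 44]), np.array([36, 20, 56, 44])
boxes = np.linspace(start, end, 16)
print(boxes_to_masks(boxes).sum(axis=(1, 2))[:3])  # per-frame box areas
```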

## Video Control

Customizing Motion in Text-to-Video Diffusion Models

[📄 Paper](https://arxiv.org/pdf/2312.04966.pdf) | [🌐 Project Page](https://joaanna.github.io/customizing_motion/)

MotionClone: Training-Free Motion Cloning for Controllable Video Generation

[📄 Paper](https://arxiv.org/abs/2406.05338) | [🌐 Project Page](https://bujiazi.github.io/motionclone.github.io/) | [💻 Code](https://github.com/Bujiazi/MotionClone)

VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models

[📄 Paper](https://arxiv.org/abs/2312.00845) | [🌐 Project Page](https://video-motion-customization.github.io/) | [💻 Code](https://github.com/HyeonHo99/Video-Motion-Customization)

Motion Inversion for Video Customization

[📄 Paper](https://arxiv.org/abs/2403.20193) | [🌐 Project Page](https://wileewang.github.io/MotionInversion/) | [💻 Code](https://github.com/EnVision-Research/MotionInversion)

## Brain Control

NeuroCine: Decoding Vivid Video Sequences from Human Brain Activities

[📄 Paper](https://arxiv.org/abs/2402.01590)

## ID Control

Identity-Preserving Text-to-Video Generation by Frequency Decomposition

[📄 Paper](https://arxiv.org/abs/2411.17440) | [🌐 Project Page](https://pku-yuangroup.github.io/ConsisID/) | [💻 Code](https://github.com/PKU-YuanGroup/ConsisID)

Movie Gen: A Cast of Media Foundation Models

[📄 Paper](https://ai.meta.com/static-resource/movie-gen-research-paper)

ID-Animator: Zero-Shot Identity-Preserving Human Video Generation

[📄 Paper](https://arxiv.org/abs/2404.15275) | [🌐 Project Page](https://id-animator.github.io/) | [💻 Code](https://github.com/ID-Animator/ID-Animator)

VideoBooth: Diffusion-based Video Generation with Image Prompts

[📄 Paper](https://arxiv.org/abs/2312.00777) | [🌐 Project Page](https://vchitect.github.io/VideoBooth-project/) | [💻 Code](https://github.com/Vchitect/VideoBooth)

Magic-Me: Identity-Specific Video Customized Diffusion

[📄 Paper](https://arxiv.org/abs/2402.09368) | [🌐 Project Page](https://magic-me-webpage.github.io/) | [💻 Code](https://github.com/Zhen-Dong/Magic-Me)
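
The common ingredient across these ID-preserving methods is a compact face identity embedding injected alongside the text conditioning. A minimal sketch of extracting one with `insightface` (a plausible stand-in, not the exact encoder any specific paper uses; the model pack and paths are placeholders):

```python
# Hedged sketch: extract a face identity embedding of the kind that
# ID-preserving adapters (e.g., ID-Animator-style) inject as a conditioning
# token. Assumes insightface and opencv-python are installed.
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")       # stock detection + recognition pack
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("reference_face.jpg")     # BGR, as insightface expects
faces = app.get(img)
identity = faces[0].normed_embedding       # (512,) unit-norm identity vector
# Project `identity` through an adapter and fuse it with the text tokens.
```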