Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-conditional-content-generation
Up-to-date resources for conditional content generation, including human motion generation, image and video generation, and editing.
https://github.com/haofanwang/awesome-conditional-content-generation
- Tracking Papers on Diffusion Models
- Taming Diffusion Models for Music-driven Conducting Motion Generation [Code]
- Music-Driven Group Choreography
- Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation [Code]
- Magic: Multi Art Genre Intelligent Choreography Dataset and Network for 3D Dance Generation
- Pretrained Diffusion Models for Unified Human Motion Synthesis
- EDGE: Editable Dance Generation From Music
- You Never Stop Dancing: Non-freezing Dance Generation via Bank-constrained Manifold Projection
- GroupDancer: Music to Multi-People Dance Synthesis with Style Collaboration
- A Brand New Dance Partner: Music-Conditioned Pluralistic Dancing Controlled by Multiple Dance Genres [Code]
- Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory [Code]
- Dance Style Transfer with Cross-modal Transformer [Code upcoming]
- Music-driven Dance Regeneration with Controllable Key Pose Constraints
- AI Choreographer: Music Conditioned 3D Dance Generation with AIST++ [Code]
- ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model [Code]
- GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents
- Human Motion Diffusion as a Generative Prior [Code]
- T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations [Code]
- Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models
- Executing your Commands via Motion Diffusion in Latent Space [Code]
- MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels [Code]
- MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis
- UDE: A Unified Driving Engine for Human Motion Generation [Code upcoming]
- MotionBERT: Unified Pretraining for Human Motion Analysis [Code]
- Human Motion Diffusion Model [Code]
- FLAME: Free-form Language-based Motion Synthesis & Editing
- MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model [Code]
- TEMOS: Generating diverse human motions from textual descriptions [Code]
- GIMO: Gaze-Informed Human Motion Prediction in Context [Code]
- MotionCLIP: Exposing Human Motion Generation to CLIP Space [Code]
- Generating Diverse and Natural 3D Human Motions from Text [Code]
- AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars [Code]
- Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents [Code]
- Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation [Code]
- GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis [Code]
- DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model
- DiffTalk: Crafting Diffusion Models for Generalized Talking Head Synthesis
- Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation [Code upcoming]
- Generating Holistic 3D Human Motion from Speech
- Audio-Driven Co-Speech Gesture Video Generation
- Listen, denoise, action! Audio-driven motion synthesis with diffusion models
- ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech [Code]
- BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis [Code]
- EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model [Code]
- Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation [Code]
- SEEG: Semantic Energized Co-speech Gesture Generation [Code]
- FaceFormer: Speech-Driven 3D Facial Animation with Transformers [Code]
- Freeform Body Motion Generation from Speech [Code]
- Audio2Gestures: Generating Diverse Gestures from Speech Audio with Conditional Variational Autoencoders [Code]
- Learning Speech-driven 3D Conversational Gestures from Video [Code]
- Learning Individual Styles of Conversational Gesture [Code]
- InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion [Code]
- Stochastic Multi-Person 3D Motion Forecasting [Code]
- HumanMAC: Masked Motion Completion for Human Motion Prediction [Code]
- BeLFusion: Latent Diffusion for Behavior-Driven Human Motion Prediction [Code upcoming]
- Diverse Human Motion Prediction Guided by Multi-Level Spatial-Temporal Anchors [Code]
- PoseGPT: Quantization-based 3D Human Motion Generation and Forecasting [Code]
- NeMF: Neural Motion Fields for Kinematic Animation [Code]
- Multi-Person Extreme Motion Prediction [Code]
- MotionMixer: MLP-based 3D Human Body Pose Forecasting [Code]
- Multi-Person 3D Motion Prediction with Multi-Range Transformers
- MIME: Human-Aware 3D Scene Generation
- Scene Synthesis from Human Motion [Code]
- TEACH: Temporal Action Compositions for 3D Humans [Code]
- Motion In-betweening via Two-stage Transformers
- Skeleton2Humanoid: Animating Simulated Characters for Physically-plausible Motion In-betweening [Code upcoming]
- Conditional Motion In-betweening [Code]
- SkeletonMAE: Spatial-Temporal Masked Autoencoders for Self-supervised Skeleton Action Recognition
- A Unified Framework for Real Time Motion Completion
- Transformer based Motion In-betweening [Code]
- Generative Tweening: Long-term Inbetweening of 3D Human Motions
- Adding Conditional Control to Text-to-Image Diffusion Models
- SpaText: Spatio-Textual Representation for Controllable Image Generation
- Sketch-Guided Text-to-Image Diffusion Models
- Make-A-Story: Visual Memory Conditioned Consistent Story Generation
- Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models [Code upcoming]
- InstructPix2Pix: Learning to Follow Image Editing Instructions
- Null-text Inversion for Editing Real Images using Guided Diffusion Models
- HumanDiffusion: a Coarse-to-Fine Alignment Diffusion Framework for Controllable Text-Driven Person Image Generation
- Imagic: Text-Based Real Image Editing with Diffusion Models
- Self-Guided Diffusion Models
- On Distillation of Guided Diffusion Models
- DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation [Code]
- Prompt-to-Prompt Image Editing with Cross Attention Control [Code]
- Improved Vector Quantized Diffusion Models [Code]
- Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors
- Diffusion Autoencoders: Toward a Meaningful and Decodable Representation [Code]
- Vector Quantized Diffusion Model for Text-to-Image Synthesis [Code]
- High-Resolution Image Synthesis with Latent Diffusion Models [Code]
- Text-To-4D Dynamic Scene Generation [Code]
- Structure and Content-Guided Video Synthesis with Diffusion Models
- Latent Video Diffusion Models for High-Fidelity Video Generation with Arbitrary Lengths [Code upcoming]
- MagicVideo: Efficient Video Generation With Latent Diffusion Models
- Text2LIVE: Text-Driven Layered Image and Video Editing [Code]
- Point-E: A System for Generating 3D Point Clouds from Complex Prompts
- DreamFusion: Text-to-3D using 2D Diffusion
Keywords
- motion-generation (10)
- deep-learning (9)
- diffusion-models (7)
- motion-prediction (7)
- human-motion-prediction (6)
- generative-model (6)
- pytorch (6)
- diffusion (5)
- motion-forecasting (5)
- motion (3)
- pose-forecasting (3)
- pose-prediction (3)
- eccv2022 (3)
- pytorch-implementation (3)
- 3d-generation (3)
- iccv2023 (3)
- object-pose (2)
- human-scene-interaction (2)
- belfusion (2)
- ddim (2)
- ddpm (2)
- generative-models (2)
- human-motion-generation (2)
- human-object-interaction (2)
- generative-ai (2)
- 6d (2)
- 3d-human-pose (2)
- cvpr2022 (2)
- co-speech-gesture (2)
- gesture-generation (2)
- nerf (2)
- text-driven (2)
- diffusion-model (2)
- vq-vae (2)
- gpt (2)
- latent-diffusion (2)
- ldm (2)
- disentangled-representations (2)
- graph-neural-networks (2)
- stable-diffusion (2)
- cvpr2023 (2)
- aigc (1)
- affective-computing (1)
- text2image (1)
- intelligent-agent (1)
- text-processing (1)
- virtual-agent (1)
- music-generation (1)
- talking-face-generation (1)
- audio-visual-learning (1)