Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-conditional-content-generation
Up-to-date resources for conditional content generation, including human motion generation, image or video generation and editing.
https://github.com/haofanwang/awesome-conditional-content-generation
Last synced: 2 days ago
Papers
Music-Driven motion generation
- Music-Driven Group Choreography
- Pretrained Diffusion Models for Unified Human Motion Synthesis
- AI Choreographer: Music Conditioned 3D Dance Generation with AIST++
- Taming Diffusion Models for Music-driven Conducting Motion Generation
- Magic: Multi Art Genre Intelligent Choreography Dataset and Network for 3D Dance Generation
- EDGE: Editable Dance Generation From Music
- You Never Stop Dancing: Non-freezing Dance Generation via Bank-constrained Manifold Projection
- GroupDancer: Music to Multi-People Dance Synthesis with Style Collaboration
- A Brand New Dance Partner: Music-Conditioned Pluralistic Dancing Controlled by Multiple Dance Genres
- Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory
- Dance Style Transfer with Cross-modal Transformer
- Music-driven Dance Regeneration with Controllable Key Pose Constraints
Text-Driven motion generation
- T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations
- UDE: A Unified Driving Engine for Human Motion Generation
- MotionCLIP: Exposing Human Motion Generation to CLIP Space
- Generating Diverse and Natural 3D Human Motions from Text
- ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model
- GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents
- Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models
- Executing your Commands via Motion Diffusion in Latent Space
- MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels
- MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis
- MotionBERT: Unified Pretraining for Human Motion Analysis
- FLAME: Free-form Language-based Motion Synthesis & Editing
- MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
- TEMOS: Generating diverse human motions from textual descriptions
- GIMO: Gaze-Informed Human Motion Prediction in Context
- AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
- Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents
Audio-Driven motion generation
- Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation
- BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis
- FaceFormer: Speech-Driven 3D Facial Animation with Transformers
- Audio2Gestures: Generating Diverse Gestures from Speech Audio with Conditional Variational Autoencoders
- GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis
- DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model
- DiffTalk: Crafting Diffusion Models for Generalized Talking Head Synthesis
- Generating Holistic 3D Human Motion from Speech
- Audio-Driven Co-Speech Gesture Video Generation
- Listen, denoise, action! Audio-driven motion synthesis with diffusion models
- ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech
- EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model
- Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation
- SEEG: Semantic Energized Co-speech Gesture Generation
- Freeform Body Motion Generation from Speech
- Learning Speech-driven 3D Conversational Gestures from Video
- Learning Individual Styles of Conversational Gesture
Human motion prediction
- InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion
- Stochastic Multi-Person 3D Motion Forecasting
- BeLFusion: Latent Diffusion for Behavior-Driven Human Motion Prediction
- Diverse Human Motion Prediction Guided by Multi-Level Spatial-Temporal Anchors
- HumanMAC: Masked Motion Completion for Human Motion Prediction
- PoseGPT: Quantization-based 3D Human Motion Generation and Forecasting
- NeMF: Neural Motion Fields for Kinematic Animation
- Multi-Person Extreme Motion Prediction
- MotionMixer: MLP-based 3D Human Body Pose Forecasting
- Multi-Person 3D Motion Prediction with Multi-Range Transformers
Motion Applications
- MIME: Human-Aware 3D Scene Generation
- Scene Synthesis from Human Motion
- TEACH: Temporal Action Compositions for 3D Humans
- Motion In-betweening via Two-stage Transformers
- Conditional Motion In-betweening
- SkeletonMAE: Spatial-Temporal Masked Autoencoders for Self-supervised Skeleton Action Recognition
- A Unified Framework for Real Time Motion Completion
- Transformer based Motion In-betweening
- Generative Tweening: Long-term Inbetweening of 3D Human Motions
Text-Image Generation
- SpaText: Spatio-Textual Representation for Controllable Image Generation
- Sketch-Guided Text-to-Image Diffusion Models
- Make-A-Story: Visual Memory Conditioned Consistent Story Generation
- Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models
- InstructPix2Pix: Learning to Follow Image Editing Instructions
- Null-text Inversion for Editing Real Images using Guided Diffusion Models
- HumanDiffusion: a Coarse-to-Fine Alignment Diffusion Framework for Controllable Text-Driven Person Image Generation
- Imagic: Text-Based Real Image Editing with Diffusion Models
- Self-Guided Diffusion Models
- On Distillation of Guided Diffusion Models
- DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
- Prompt-to-Prompt Image Editing with Cross Attention Control
- Improved Vector Quantized Diffusion Models
- Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors
- Diffusion Autoencoders: Toward a Meaningful and Decodable Representation
- Vector Quantized Diffusion Model for Text-to-Image Synthesis
- High-Resolution Image Synthesis with Latent Diffusion Models
Text-Video Generation
Text-3D Image Generation
Keywords
motion-generation (9), motion-prediction (7), deep-learning (7), human-motion-prediction (6), diffusion (5), pytorch (5), motion-forecasting (5), generative-model (4), diffusion-models (4), pose-prediction (3), pose-forecasting (3), 3d-generation (3), pytorch-implementation (3), iccv2023 (3), cvpr2023 (2), diffusion-model (2), motion (2), vq-vae (2), text-driven (2), gpt (2), co-speech-gesture (2), nerf (2), gesture-generation (2), graph-neural-networks (2), eccv2022 (2), disentangled-representations (2), ldm (2), latent-diffusion (2), human-motion-generation (2), generative-models (2), ddpm (2), ddim (2), belfusion (2), object-pose (2), human-scene-interaction (2), human-object-interaction (2), generative-ai (2), 6d (2), 3d-human-pose (2), affective-computing (1), text-analysis (1), human-motion (1), skeleton-based-action-recognition (1), mesh-recovery (1), 3d-pose-estimation (1), text-to-motion (1), text2image (1), music-generation (1), diversity (1), talking-head (1)