Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image
(ෆ`꒳´ෆ) A Survey on Text-to-Image Generation/Synthesis.
List: Awesome-Text-to-Image
awseome-list generative-adversarial-network image-generation image-manipulation image-synthesis multimodal multimodal-deep-learning survey text-to-face text-to-image
Last synced: 8 days ago
- Host: GitHub
- URL: https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image
- Owner: Yutong-Zhou-cv
- License: mit
- Created: 2020-10-13T06:39:22.000Z (about 4 years ago)
- Default Branch: 2024-Version-2.0
- Last Pushed: 2024-04-08T10:27:04.000Z (7 months ago)
- Last Synced: 2024-04-14T12:06:08.622Z (7 months ago)
- Topics: awseome-list, generative-adversarial-network, image-generation, image-manipulation, image-synthesis, multimodal, multimodal-deep-learning, survey, text-to-face, text-to-image
- Homepage:
- Size: 69.2 MB
- Stars: 1,853
- Watchers: 67
- Forks: 172
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-few-shot-generation - Code
- project-awesome - Yutong-Zhou-cv/Awesome-Text-to-Image - (ෆ`꒳´ෆ) A Survey on Text-to-Image Generation/Synthesis. (Others)
- ultimate-awesome - Awesome-Text-to-Image - (ෆ`꒳´ෆ) A Survey on Text-to-Image Generation/Synthesis. (Other Lists / PowerShell Lists)
- awesome-llm-and-aigc - Yutong-Zhou-cv/Awesome-Text-to-Image - (ෆ`꒳´ෆ) A Survey on Text-to-Image Generation/Synthesis. (Summary)
README
# 𝓐𝔀𝓮𝓼𝓸𝓶𝓮 𝓣𝓮𝔁𝓽📝-𝓽𝓸-𝓘𝓶𝓪𝓰𝓮🌇
![GitHub stars](https://img.shields.io/github/stars/Yutong-Zhou-cv/Awesome-Text-to-Image.svg?color=red&style=for-the-badge)
![GitHub forks](https://img.shields.io/github/forks/Yutong-Zhou-cv/Awesome-Text-to-Image.svg?style=for-the-badge)
![GitHub activity](https://img.shields.io/github/last-commit/Yutong-Zhou-cv/Awesome-Text-to-Image?color=yellow&style=for-the-badge)
![GitHub issues](https://img.shields.io/github/issues/Yutong-Zhou-cv/Awesome-Text-to-Image?style=for-the-badge)
![GitHub closed issues](https://img.shields.io/github/issues-closed/Yutong-Zhou-cv/Awesome-Text-to-Image?color=inactive&style=for-the-badge)
[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FYutong-Zhou-cv%2Fawesome-Text-to-Image&count_bg=%23DD4B78&title_bg=%23555555&icon=jabber.svg&icon_color=%23E7E7E7&title=Hits(2023.05~)&edge_flat=false)](https://hits.seeyoufarm.com)

𝓐 𝓬𝓸𝓵𝓵𝓮𝓬𝓽𝓲𝓸𝓷 𝓸𝓯 𝓻𝓮𝓼𝓸𝓾𝓻𝓬𝓮𝓼 𝓸𝓷 𝓽𝓮𝔁𝓽-𝓽𝓸-𝓲𝓶𝓪𝓰𝓮 𝓼𝔂𝓷𝓽𝓱𝓮𝓼𝓲𝓼/𝓶𝓪𝓷𝓲𝓹𝓾𝓵𝓪𝓽𝓲𝓸𝓷 𝓽𝓪𝓼𝓴𝓼.
## ⭐ Citation
If you find this paper and repo helpful for your research, please cite it with the BibTeX entry below:
```bibtex
@inproceedings{zhou2023vision+,
  title={Vision+ Language Applications: A Survey},
  author={Zhou, Yutong and Shimada, Nobutaka},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={826--842},
  year={2023}
}
```
## 🎑 News
> [!TIP]
> **Version 1.0** (All-in-one version) can be found [here](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/tree/2024-Version-1.0) and **stopped being updated on 24/02/29**.
* [24/02/29] Update **"Awesome Text to Image" Version 2.0**! *Paper With Code* and *Other Related Works* will also be gradually updated in March.
* [23/05/26] 🔥 Add our survey paper "[**Vision + Language Applications: A Survey**](https://openaccess.thecvf.com/content/CVPR2023W/GCV/html/Zhou_Vision__Language_Applications_A_Survey_CVPRW_2023_paper.html)" and a special [**Best Collection**](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/blob/main/%5BCVPRW%202023%F0%9F%8E%88%5D%20%20Best%20Collection.md) list!
* [23/04/04] "**Vision + Language Applications: A Survey**" was accepted by CVPRW2023.
* [20/10/13] **Awesome-Text-to-Image** repo is created.
## *To Do*
* - [ ] Add **Topic Order** list and **Chronological Order** list
* - [x] Add [**Best Collection**](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/blob/main/%5BCVPRW%202023%F0%9F%8E%88%5D%20%20Best%20Collection.md)
* - [x] Create [**⏳Recently Focused Papers**](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/blob/main/%E2%8F%B3Recently%20Focused%20Papers.md)
## *Content*
* - [ ] [**1. Description**](#head1)
* - [ ] [**2. Quantitative Evaluation Metrics**](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/blob/2024-Version-2.0/Lists/2-Quantitative%20Evaluation%20Metrics.md)
* - [ ] [**3. Datasets**](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/blob/2024-Version-2.0/Lists/3-Datasets.md)
* - [ ] [**4. Project**](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/blob/2024-Version-2.0/Lists/4-Project.md)
* - [ ] [5. Paper With Code](#head5)
* - [ ] [Text to Face👨🏻🧒👧🏼🧓🏽](#head-t2f)
* - [ ] [Specific Issues🤔](#head-si)
* - [ ] [**Survey**](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/blob/2024-Version-2.0/Lists/5.0-Survey.md)
* - [ ] [2024](#head-2024)
* - [ ] [2023](#head-2023)
* - [x] [**2022**](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/blob/2024-Version-2.0/Lists/5.3-2022.md)
* - [x] [**2021**](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/blob/2024-Version-2.0/Lists/5.2-2021.md)
* - [x] [**2016~2020**](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image/blob/2024-Version-2.0/Lists/5.1-2016~2020.md)
* - [ ] [6. Other Related Works](#head6)
* - [ ] [📝Prompt Engineering📝](#head-pe)
* - [ ] [⭐Multimodality⭐](#head-mm)
* - [ ] [🛫Applications🛫](#head-app)
* - [ ] [Text+Image/Video → Image/Video](#head-ti2i)
* - [ ] [Text+Layout → Image](#head-tl2i)
* - [ ] [Others+Text+Image/Video → Image/Video](#head-oti2i)
* - [ ] [Layout/Mask → Image](#head-l2i)
* - [ ] [Label-set → Semantic maps](#head-l2s)
* - [ ] [Speech → Image](#head-s2i)
* - [ ] [Scene Graph → Image](#head-sg2i)
* - [ ] [Text → Visual Retrieval](#head-t2vr)
* - [ ] [Text → 3D/Motion/Shape/Mesh/Object...](#head-t2m)
* - [ ] [Text → Video](#head-t2v)
* - [ ] [Text → Music](#head-t2music)
* [Contact Me](#head7)
* [Contributors](#head8)
## *Description*
* In the last few decades, the fields of Computer Vision (CV) and Natural Language Processing (NLP) have seen several major technological breakthroughs in deep learning research. Recently, researchers have become interested in combining semantic and visual information across these traditionally independent fields. A number of studies have been conducted on text-to-image synthesis techniques, which translate input textual descriptions (keywords or sentences) into realistic images.
* Papers, codes, and datasets for the text-to-image task are available here (a minimal generation sketch follows below).
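A minimal sketch of the task described above, assuming the Hugging Face `diffusers` library and an openly released Stable Diffusion checkpoint (a latent diffusion model of the kind linked under ⭐Multimodality⭐); the model ID, prompt, and sampling settings are illustrative placeholders, not the setup of any specific paper in this list.

```python
# Minimal text-to-image sketch (illustrative; assumes `pip install torch diffusers transformers`
# and a CUDA GPU). The checkpoint, prompt, and settings below are placeholders.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image diffusion pipeline (text encoder + U-Net + VAE decoder).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any openly released checkpoint can be substituted
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Turn a textual description into an image: the prompt is encoded, a latent is iteratively
# denoised under classifier-free guidance, and the result is decoded back to pixels.
prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("astronaut_horse.png")
```

The same load-prompt-decode pattern applies to most open checkpoints referenced in the lists below.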
>🐌 Markdown Format:
> * (Conference/Journal Year) **Title**, First Author et al. [[Paper](URL)] [[Code](URL)] [[Project](URL)]
## *Paper With Code*
* **Text to Face👨🏻🧒👧🏼🧓🏽**
* (arXiv preprint 2024) [💬 Dataset] **15M Multimodal Facial Image-Text Dataset**, Dawei Dai et al. [[Paper](https://arxiv.org/abs/2407.08515)]
* (arXiv preprint 2024) [💬 3D] **Portrait3D: Text-Guided High-Quality 3D Portrait Generation Using Pyramid Representation and GANs Prior**, Yiqian Wu et al. [[Paper](https://arxiv.org/abs/2404.10394v1)]
* (CVPR 2024) **CosmicMan: A Text-to-Image Foundation Model for Humans**, Shikai Li et al. [[Paper](https://arxiv.org/abs/2404.01294)] [[Project](https://cosmicman-cvpr2024.github.io/)]
* (ICML 2024) **Fast Text-to-3D-Aware Face Generation and Manipulation via Direct Cross-modal Mapping and Geometric Regularization**, Jinlu Zhang et al. [[Paper](https://arxiv.org/abs/2403.06702)] [[Code](https://github.com/Aria-Zhangjl/E3-FaceNet)]
* (IJACSA 2023) **Mukh-Oboyob: Stable Diffusion and BanglaBERT enhanced Bangla Text-to-Face Synthesis**, Aloke Kumar Saha et al. [[Paper](https://thesai.org/Publications/ViewPaper?Volume=14&Issue=11&Code=IJACSA&SerialNo=142)] [[Code](https://github.com/Codernob/Mukh-Oboyob)]
* (SIGGRAPH 2023) [💬 3D] **DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance**, Longwen Zhang et al. [[Paper](https://arxiv.org/abs/2304.03117)] [[Project](https://sites.google.com/view/dreamface)] [[HuggingFace](https://huggingface.co/spaces/DEEMOSTECH/ChatAvatar)]
* (CVPR 2023) [💬 3D] **High-Fidelity 3D Face Generation from Natural Language Descriptions**, Menghua Wu et al. [[Paper](https://arxiv.org/abs/2305.03302)] [[Code](https://github.com/zhuhao-nju/describe3d)] [[Project](https://mhwu2017.github.io/)]
* (CVPR 2023) **Collaborative Diffusion for Multi-Modal Face Generation and Editing**, Ziqi Huang et al. [[Paper](https://arxiv.org/abs/2304.10530v1)] [[Code](https://github.com/ziqihuangg/Collaborative-Diffusion)] [[Project](https://ziqihuangg.github.io/projects/collaborative-diffusion.html)]
* (Pattern Recognition 2023) **Where you edit is what you get: Text-guided image editing with region-based attention**, Changming Xiao et al. [[Paper](https://www.sciencedirect.com/science/article/pii/S0031320323001589)] [[Code](https://github.com/Big-Brother-Pikachu/Where2edit)]
* (arXiv preprint 2022) **Bridging CLIP and StyleGAN through Latent Alignment for Image Editing**, Wanfeng Zheng et al. [[Paper](https://arxiv.org/abs/2210.04506)]
* (ACMMM 2022) **Learning Dynamic Prior Knowledge for Text-to-Face Pixel Synthesis**, Jun Peng et al. [[Paper](https://dl.acm.org/doi/10.1145/3503161.3547818)]
* (ACMMM 2022) **Towards Open-Ended Text-to-Face Generation, Combination and Manipulation**, Jun Peng et al. [[Paper](https://dl.acm.org/doi/abs/10.1145/3503161.3547758)]
* (BMVC 2022) **clip2latent: Text driven sampling of a pre-trained StyleGAN using denoising diffusion and CLIP**, Justin N. M. Pinkney et al. [[Paper](https://arxiv.org/abs/2210.02347v1)] [[Code](https://github.com/justinpinkney/clip2latent)]
* (arXiv preprint 2022) **ManiCLIP: Multi-Attribute Face Manipulation from Text**, Hao Wang et al. [[Paper](https://arxiv.org/abs/2210.00445)]
* (arXiv preprint 2022) **Generated Faces in the Wild: Quantitative Comparison of Stable Diffusion, Midjourney and DALL-E 2**, Ali Borji. [[Paper](https://arxiv.org/abs/2210.00586)] [[Code](https://github.com/aliborji/GFW)] [[Data](https://drive.google.com/file/d/1EhbUK64J3d0_chmD2mpBuWB-Ic7LeFlP/view)]
* (arXiv preprint 2022) **Text-Free Learning of a Natural Language Interface for Pretrained Face Generators**, Xiaodan Du et al. [[Paper](https://arxiv.org/abs/2209.03953)] [[Code](https://github.com/duxiaodan/Fast_text2StyleGAN)]
* (Knowledge-Based Systems-2022) **CMAFGAN: A Cross-Modal Attention Fusion based Generative Adversarial Network for attribute word-to-face synthesis**, Xiaodong Luo et al. [[Paper](https://www.sciencedirect.com/science/article/pii/S0950705122008863)]
* (Neural Networks-2022) **DualG-GAN, a Dual-channel Generator based Generative Adversarial Network for text-to-face synthesis**, Xiaodong Luo et al. [[Paper](https://www.sciencedirect.com/science/article/pii/S0893608022003161)]
* (arXiv preprint 2022) **Text-to-Face Generation with StyleGAN2**, D. M. A. Ayanthi et al. [[Paper](https://arxiv.org/abs/2205.12512)]
* (CVPR 2022) **StyleT2I: Toward Compositional and High-Fidelity Text-to-Image Synthesis**, Zhiheng Li et al. [[Paper](https://arxiv.org/abs/2203.15799)] [[Code](https://github.com/zhihengli-UR/StyleT2I)]
* (arXiv preprint 2022) **StyleT2F: Generating Human Faces from Textual Description Using StyleGAN2**, Mohamed Shawky Sabae et al. [[Paper](https://arxiv.org/abs/2204.07924)] [[Code](https://github.com/DarkGeekMS/Retratista)]
* (CVPR 2022) **AnyFace: Free-style Text-to-Face Synthesis and Manipulation**, Jianxin Sun et al. [[Paper](https://arxiv.org/abs/2203.15334)]
* (IEEE Transactions on Network Science and Engineering-2022) **TextFace: Text-to-Style Mapping based Face Generation and Manipulation**, Xianxu Hou et al. [[Paper](https://ieeexplore.ieee.org/abstract/document/9737433)]
* (CVPR 2021) **TediGAN: Text-Guided Diverse Image Generation and Manipulation**, Weihao Xia et al. [[Paper](https://arxiv.org/pdf/2012.03308.pdf)] [[Extended Version](https://arxiv.org/pdf/2104.08910.pdf)][[Code](https://github.com/IIGROUP/TediGAN)] [[Dataset](https://github.com/IIGROUP/Multi-Modal-CelebA-HQ-Dataset)] [[Colab](https://colab.research.google.com/github/weihaox/TediGAN/blob/main/playground.ipynb)] [[Video](https://www.youtube.com/watch?v=L8Na2f5viAM)]
* (FG 2021) **Generative Adversarial Network for Text-to-Face Synthesis and Manipulation with Pretrained BERT Model**, Yutong Zhou et al. [[Paper](https://ieeexplore.ieee.org/document/9666791)]
* (ACMMM 2021) **Multi-caption Text-to-Face Synthesis: Dataset and Algorithm**, Jianxin Sun et al. [[Paper](https://dl.acm.org/doi/10.1145/3474085.3475391)] [[Code](https://github.com/cripac-sjx/SEA-T2F)]
* (ACMMM 2021) **Generative Adversarial Network for Text-to-Face Synthesis and Manipulation**, Yutong Zhou. [[Paper](https://dl.acm.org/doi/abs/10.1145/3474085.3481026)]
* (WACV 2021) **Faces a la Carte: Text-to-Face Generation via Attribute Disentanglement**, Tianren Wang et al. [[Paper](https://openaccess.thecvf.com/content/WACV2021/papers/Wang_Faces_a_la_Carte_Text-to-Face_Generation_via_Attribute_Disentanglement_WACV_2021_paper.pdf)]
* (arXiv preprint 2019) **FTGAN: A Fully-trained Generative Adversarial Networks for Text to Face Generation**, Xiang Chen et al. [[Paper](https://arxiv.org/abs/1904.05729)]

[<🎯Back to Top>](#head-content)
* **Specific Issues🤔**
* (arXiv preprint 2024) [💬 Gender Bias Alignment] **PopAlign: Population-Level Alignment for Fair Text-to-Image Generation**, Shufan Li et al. [[Paper](https://arxiv.org/abs/2406.19668)] [[Code](https://github.com/jacklishufan/PopAlignSDXL)]
* (arXiv preprint 2024) [💬 Fine-Grained Feedback] **Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation**, Katherine M. Collins et al. [[Paper](https://arxiv.org/abs/2406.16807)]
* (CVPR 2024-Best Paper) [💬 Human Feedback] **Rich Human Feedback for Text-to-Image Generation**, Youwei Liang et al. [[Paper](https://arxiv.org/abs/2312.10240)]
* (ICLR 2024) [💬 Unauthorized Data] **DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models**, Zhenting Wang et al. [[Paper](https://openreview.net/pdf?id=f8S3aLm0Vp)] [[Code](https://github.com/ZhentingWang/DIAGNOSIS)]
* (CVPR 2024) [💬 Open-set Bias Detection] **OpenBias: Open-set Bias Detection in Text-to-Image Generative Models**, Moreno D'Incà et al. [[Paper](https://arxiv.org/abs/2404.07990)]
* (arXiv preprint 2024) [💬 Spatial Consistency] **Getting it Right: Improving Spatial Consistency in Text-to-Image Models**, Agneet Chatterjee et al. [[Paper](https://arxiv.org/abs/2404.01197)] [[Project](https://spright-t2i.github.io/)] [[Code](https://github.com/SPRIGHT-T2I/SPRIGHT)] [[Dataset](https://huggingface.co/datasets/SPRIGHT-T2I/spright)]
* (arXiv preprint 2024) [💬 Safety] **SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models**, Xinfeng Li et al. [[Paper](https://arxiv.org/abs/2404.06666)] [[Code](https://github.com/LetterLiGo/text-agnostic-governance)]
* (arXiv preprint 2024) [💬 Aesthetic] **Playground v2.5: Three Insights towards Enhancing Aesthetic Quality in Text-to-Image Generation**, Daiqing Li et al. [[Paper](https://arxiv.org/abs/2402.17245)] [[Project](https://blog.playgroundai.com/playground-v2-5/)] [[HuggingFace](https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic)]
* (EMNLP 2023) [💬 Text Visualness] **Learning the Visualness of Text Using Large Vision-Language Models**, Gaurav Verma et al. [[Paper](https://arxiv.org/abs/2305.10434)] [[Project](https://gaurav22verma.github.io/text-visualness/)]
* (arXiv preprint 2023) [💬 Against Malicious Adaptation] **IMMA: Immunizing text-to-image Models against Malicious Adaptation**, Yijia Zheng et al. [[Paper](https://arxiv.org/abs/2311.18815)] [[Project](https://zhengyjzoe.github.io/imma/)]
* (arXiv preprint 2023) [💬 Principled Recaptioning] **A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image Generation**, Eyal Segalis et al. [[Paper](https://arxiv.org/abs/2310.16656)]
* ⭐⭐(NeurIPS 2023) [💬 Holistic Evaluation] **Holistic Evaluation of Text-To-Image Models**, Tony Lee et al. [[Paper](https://arxiv.org/abs/2311.04287)] [[Code](https://github.com/stanford-crfm/helm)] [[Project](https://crfm.stanford.edu/heim/v1.1.0/)]
* (ICCV 2023) [💬 Safety] **Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis**, Lukas Struppek et al. [[Paper](https://arxiv.org/abs/2211.02408)] [[Code](https://github.com/LukasStruppek/Rickrolling-the-Artist)]
* (arXiv preprint 2023) [💬 Natural Attack Capability] **Intriguing Properties of Diffusion Models: A Large-Scale Dataset for Evaluating Natural Attack Capability in Text-to-Image Generative Models**, Takami Sato et al. [[Paper](https://arxiv.org/abs/2308.15692)]
* (ACL 2023) [💬 Bias] **A Multi-dimensional study on Bias in Vision-Language models**, Gabriele Ruggeri et al. [[Paper](https://aclanthology.org/2023.findings-acl.403/)]
* (FAccT 2023) [💬 Demographic Stereotypes] **Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale**, Federico Bianchi et al. [[Paper](https://dl.acm.org/doi/abs/10.1145/3593013.3594095)]
* (arXiv preprint 2023) [💬 Robustness] **Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks**, Hongcheng Gao et al. [[Paper](https://arxiv.org/abs/2306.13103)]
* (CVPR 2023) [💬 Adversarial Robustness Analysis] **RIATIG: Reliable and Imperceptible Adversarial Text-to-Image Generation With Natural Prompts**, Han Liu et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2023/html/Liu_RIATIG_Reliable_and_Imperceptible_Adversarial_Text-to-Image_Generation_With_Natural_Prompts_CVPR_2023_paper.html)]
* (arXiv preprint 2023) [💬 Textual Inversion] **Is This Loss Informative? Speeding Up Textual Inversion with Deterministic Objective Evaluation**, Anton Voronov et al. [[Paper](https://arxiv.org/abs/2302.04841)] [[Code](https://github.com/yandex-research/DVAR)]
* (arXiv preprint 2022) [💬 Interpretable Intervention] **Not Just Pretty Pictures: Text-to-Image Generators Enable Interpretable Interventions for Robust Representations**, Jianhao Yuan et al. [[Paper](https://arxiv.org/abs/2212.11237)]
* (arXiv preprint 2022) [💬 Ethical Image Manipulation] **Judge, Localize, and Edit: Ensuring Visual Commonsense Morality for Text-to-Image Generation**, Seongbeom Park et al. [[Paper](https://arxiv.org/abs/2212.03507)]
* (arXiv preprint 2022) [💬 Creativity Transfer] **Inversion-Based Creativity Transfer with Diffusion Models**, Yuxin Zhang et al. [[Paper](https://arxiv.org/abs/2211.13203)]
* (arXiv preprint 2022) [💬 Ambiguity] **Is the Elephant Flying? Resolving Ambiguities in Text-to-Image Generative Models**, Ninareh Mehrabi et al. [[Paper](https://arxiv.org/abs/2211.12503)]
* (arXiv preprint 2022) [💬 Racial Politics] **A Sign That Spells: DALL-E 2, Invisual Images and The Racial Politics of Feature Space**, Fabian Offert et al. [[Paper](https://arxiv.org/abs/2211.06323)]
* (arXiv preprint 2022) [💬 Privacy Analysis] **Membership Inference Attacks Against Text-to-image Generation Models**, Yixin Wu et al. [[Paper](https://arxiv.org/abs/2210.00968)]
* (arXiv preprint 2022) [💬 Authenticity Evaluation for Fake Images] **DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Diffusion Models**, Zeyang Sha et al. [[Paper](https://arxiv.org/abs/2210.06998v1)]
* (arXiv preprint 2022) [💬 Cultural Bias] **The Biased Artist: Exploiting Cultural Biases via Homoglyphs in Text-Guided Image Generation Models**, Lukas Struppek et al. [[Paper](https://arxiv.org/abs/2209.08891)]

[<🎯Back to Top>](#head-content)
* **2024**
* (arXiv preprint 2024) **MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis**, Wanggui He et al. [[Paper](https://arxiv.org/abs/2407.07614)]
* (Kuaishou) **Kolors: Effective Training of Diffusion Model for Photorealistic Text-to-Image Synthesis**, Sixian Zhang et al. [[Paper](https://github.com/Kwai-Kolors/Kolors/blob/master/imgs/Kolors_paper.pdf)] [[Code](https://github.com/Kwai-Kolors/Kolors)] [[Project](https://kwai-kolors.github.io/post/post-2/)]
* (CVPR 2024) [💬Human Preferences] **Learning Multi-dimensional Human Preference for Text-to-Image Generation**, Sixian Zhang et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Learning_Multi-Dimensional_Human_Preference_for_Text-to-Image_Generation_CVPR_2024_paper.pdf)] [[Code](https://github.com/wangbohan97/Kolors-MPS)] [[Project](https://kwai-kolors.github.io/post/post-1/)]
* (CVPR 2024) [💬 Text-to-layout → Text+Layout-to-Image] **Grounded Text-to-Image Synthesis with Attention Refocusing**, Quynh Phung et al. [[Paper](https://arxiv.org/abs/2306.05427)] [[Project](https://attention-refocusing.github.io/)] [[Code](https://github.com/Attention-Refocusing/attention-refocusing)]
* (arXiv preprint 2024) **Dimba: Transformer-Mamba Diffusion Models**, Zhengcong Fei et al. [[Paper](https://arxiv.org/abs/2406.01159)]
* (arXiv preprint 2024) [💬 Generation and Editing] **MultiEdits: Simultaneous Multi-Aspect Editing with Text-to-Image Diffusion Models**, Mingzhen Huang et al. [[Paper](https://arxiv.org/abs/2406.00985)] [[Project](https://mingzhenhuang.com/projects/MultiEdits.html)]
* (arXiv preprint 2024) **AutoStudio: Crafting Consistent Subjects in Multi-turn Interactive Image Generation**, Junhao Cheng et al. [[Paper](https://arxiv.org/abs/2406.01388)] [[Project](https://howe183.github.io/AutoStudio.io/)] [[Code](https://github.com/donahowe/AutoStudio)]
* (arXiv preprint 2024) **TheaterGen: Character Management with LLM for Consistent Multi-turn Image Generation**, Junhao Cheng et al. [[Paper](https://arxiv.org/abs/2404.18919)] [[Project](https://howe140.github.io/theatergen.io/)] [[Code](https://github.com/donahowe/Theatergen)]
* (CVPR 2024) **Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following**, Yutong Feng et al. [[Paper](https://arxiv.org/abs/2311.17002)] [[Project](https://ranni-t2i.github.io/Ranni/)] [[Code](https://github.com/ali-vilab/Ranni)]
* (arXiv preprint 2024) **CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching**, Dongzhi Jiang et al. [[Paper](https://arxiv.org/abs/2404.03653)] [[Project](https://caraj7.github.io/comat/)] [[Code](https://github.com/CaraJ7/CoMat)]
* (arXiv preprint 2024) **TextCraftor: Your Text Encoder Can be Image Quality Controller**, Yanyu Li et al. [[Paper](https://arxiv.org/abs/2403.18978)]
* (CVPR 2024) **ECLIPSE: A Resource-Efficient Text-to-Image Prior for Image Generations**, Maitreya Patel et al. [[Paper](https://arxiv.org/abs/2312.04655)] [[Project](https://eclipse-t2i.vercel.app/)] [[Code](https://github.com/eclipse-t2i/eclipse-inference)] [[Hugging Face](https://huggingface.co/spaces/ECLIPSE-Community/ECLIPSE-Kandinsky-v2.2)]
* (arXiv preprint 2024) **SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data**, Jialu Li et al. [[Paper](https://arxiv.org/abs/2403.06952)] [[Project](https://selma-t2i.github.io/)] [[Code](https://github.com/jialuli-luka/SELMA)]
* (ICLR 2024) **PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis**, Junsong Chen et al. [[Paper](https://arxiv.org/abs/2310.00426)] [[Project](https://pixart-alpha.github.io/)] [[Code](https://github.com/PixArt-alpha/PixArt-alpha)] [[Hugging Face](https://huggingface.co/spaces/PixArt-alpha/PixArt-LCM)]
* (arXiv preprint 2024) **PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation**, Junsong Chen et al. [[Paper](https://arxiv.org/abs/2403.04692)]
* (arXiv preprint 2024) **PIXART-δ: Fast and Controllable Image Generation with Latent Consistency Models**, Junsong Chen et al. [[Paper](https://arxiv.org/abs/2401.05252)]
* (CVPR 2024) **Discriminative Probing and Tuning for Text-to-Image Generation**, Leigang Qu et al. [[Paper](https://arxiv.org/abs/2403.04321)] [[Project](https://dpt-t2i.github.io/)]
* (CVPR 2024) **RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization**, Mengqi Huang et al. [[Paper](https://arxiv.org/abs/2403.00483)] [[Project](https://corleone-huang.github.io/realcustom/)]
* ⭐(arXiv preprint 2024) **SDXL-Lightning: Progressive Adversarial Diffusion Distillation**, Shanchuan Lin et al. [[Paper](https://arxiv.org/abs/2402.13929)] [[HuggingFace](https://huggingface.co/ByteDance/SDXL-Lightning)] [[Demo](https://fastsdxl.ai/)]
* ⭐(arXiv preprint 2024) **RealCompo: Dynamic Equilibrium between Realism and Compositionality Improves Text-to-Image Diffusion Models**, Xinchen Zhang et al. [[Paper](https://arxiv.org/abs/2402.12908)] [[Code](https://github.com/YangLing0818/RealCompo)]
* (arXiv preprint 2024) **Learning Continuous 3D Words for Text-to-Image Generation**, Ta-Ying Cheng et al. [[Paper](https://arxiv.org/abs/2402.08654)] [[Project](https://ttchengab.github.io/continuous_3d_words/)] [[Code](https://github.com/ttchengab/continuous_3d_words_code/)]
* (arXiv preprint 2024) **DiffusionGPT: LLM-Driven Text-to-Image Generation System**, Jie Qin et al. [[Paper](https://arxiv.org/abs/2401.10061)] [[Project](https://diffusiongpt.github.io/)] [[Code](https://github.com/DiffusionGPT/DiffusionGPT)]
* (arXiv preprint 2024) **DressCode: Autoregressively Sewing and Generating Garments from Text Guidance**, Kai He et al. [[Paper](https://arxiv.org/abs/2401.16465)] [[Project](https://sites.google.com/view/projectpage-dresscode)]

[<🎯Back to Top>](#head-content)
* **2023**
* (arXiv preprint 2023) **CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation**, Zineng Tang et al. [[Paper](https://arxiv.org/abs/2311.18775)] [[Project](https://codi-2.github.io/)] [[Code](https://github.com/microsoft/i-Code/tree/main/CoDi-2)]
* (arXiv preprint 2023) **DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models**, Sungnyun Kim et al. [[Paper](https://arxiv.org/abs/2305.15194)] [[Code](https://github.com/sungnyun/diffblender)] [[Project](https://sungnyun.github.io/diffblender/)]
* (arXiv preprint 2023) **ElasticDiffusion: Training-free Arbitrary Size Image Generation**, Moayed Haji-Ali et al. [[Paper](https://arxiv.org/abs/2311.18822)] [[Project](https://elasticdiffusion.github.io/)] [[Code](https://github.com/moayedhajiali/elasticdiffusion-official)] [[Demo](https://replicate.com/moayedhajiali/elasticdiffusion)]
* (ICCV 2023) **BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion**, Jinheng Xie et al. [[Paper](https://arxiv.org/abs/2307.10816)] [[Code](https://github.com/showlab/BoxDiff)]
* (arXiv preprint 2023) **Late-Constraint Diffusion Guidance for Controllable Image Synthesis**, Chang Liu et al. [[Paper](https://arxiv.org/abs/2305.11520)] [[Code](https://github.com/AlonzoLeeeooo/LCDG)]
* (arXiv preprint 2023) **An Image is Worth Multiple Words: Multi-attribute Inversion for Constrained Text-to-Image Synthesis**, Aishwarya Agarwal et al. [[Paper](https://arxiv.org/abs/2311.11919)]
* ⭐(arXiv preprint 2023) **UFOGen: You Forward Once Large Scale Text-to-Image Generation via Diffusion GANs**, Yanwu Xu et al. [[Paper](https://arxiv.org/abs/2311.09257)]
* (ICCV 2023) **ITI-GEN: Inclusive Text-to-Image Generation**, Cheng Zhang et al. [[Paper](https://openaccess.thecvf.com/content/ICCV2023/html/Zhang_ITI-GEN_Inclusive_Text-to-Image_Generation_ICCV_2023_paper.html)] [[Code](https://github.com/humansensinglab/ITI-GEN)] [[Project](https://czhang0528.github.io/iti-gen)]
* (arXiv preprint 2023) **Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models**, Zeqiang Lai et al. [[Paper](https://arxiv.org/abs/2310.07653)] [[Code](https://github.com/Zeqiang-Lai/Mini-DALLE3)] [[Demo](http://139.224.23.16:10085/)] [[Project](https://minidalle3.github.io/)]
* (arXiv preprint 2023) [💬Evaluation] **GenEval: An Object-Focused Framework for Evaluating Text-to-Image Alignment**, Dhruba Ghosh et al. [[Paper](https://arxiv.org/abs/2310.11513v1)] [[Code](https://github.com/djghosh13/geneval)]
* ⭐(arXiv preprint 2023) **Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion**, Anton Razzhigaev et al. [[Paper](https://arxiv.org/abs/2310.03502)] [[Code](https://github.com/ai-forever/Kandinsky-2)] [[Demo](https://fusionbrain.ai/en/editor/)] [[Demo Video](https://www.youtube.com/watch?v=c7zHPc59cWU)] [[Hugging Face](https://huggingface.co/kandinsky-community)]
* ⭐⭐(ICCV 2023) **Adding Conditional Control to Text-to-Image Diffusion Models**, Lvmin Zhang et al. [[Paper](https://arxiv.org/abs/2302.05543)] [[Code](https://github.com/lllyasviel/ControlNet)]
* (ICCV 2023) **DiffCloth: Diffusion Based Garment Synthesis and Manipulation via Structural Cross-modal Semantic Alignment**, Xujie Zhang et al. [[Paper](https://arxiv.org/abs/2308.11206)]
* (ICCV 2023) **Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models**, Nan Liu et al. [[Paper](https://arxiv.org/abs/2306.05357)] [[Code](https://github.com/nanlliu/Unsupervised-Compositional-Concepts-Discovery)] [[Project](https://energy-based-model.github.io/unsupervised-concept-discovery/)]
* (arXiv preprint 2023) **Text-to-Image Generation for Abstract Concepts**, Jiayi Liao et al. [[Paper](https://arxiv.org/abs/2309.14623)]
* (arXiv preprint 2023) **T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation**, Kaiyi Huang et al. [[Paper](https://arxiv.org/abs/2307.06350)] [[Code](https://github.com/Karine-Huang/T2I-CompBench)] [[Project](https://karine-h.github.io/T2I-CompBench/)]
* (arXiv preprint 2023) [💬 Evaluation] **Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis**, Xiaoshi Wu et al. [[Paper](https://arxiv.org/abs/2306.09341)] [[Code](https://github.com/tgxs002/HPSv2)]
* (arXiv preprint 2023) **Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark**, Shuyu Yang et al. [[Paper](https://arxiv.org/abs/2306.02898)] [[Code](https://github.com/Shuyu-XJTU/APTM)] [[Project](https://www.zdzheng.xyz/publication/Towards-2023)]
* (arXiv preprint 2023) **Synthesizing Artistic Cinemagraphs from Text**, Aniruddha Mahapatra et al. [[Paper](https://arxiv.org/abs/2306.02236)] [[Code](https://github.com/text2cinemagraph/artistic-cinemagraph)] [[Project](https://text2cinemagraph.github.io/website/)]
* (arXiv preprint 2023) **Detector Guidance for Multi-Object Text-to-Image Generation**, Luping Liu et al. [[Paper](https://arxiv.org/abs/2306.02236)]
* (arXiv preprint 2023) **A-STAR: Test-time Attention Segregation and Retention for Text-to-image Synthesis**, Aishwarya Agarwal et al. [[Paper](https://arxiv.org/abs/2306.14544)]
* (arXiv preprint 2023) [💬Evaluation] **ConceptBed: Evaluating Concept Learning Abilities of Text-to-Image Diffusion Models**, Maitreya Patel et al. [[Paper](https://arxiv.org/abs/2306.04695)] [[Code](https://github.com/ConceptBed/evaluations)] [[Project](https://conceptbed.github.io/)]
* ⭐(arXiv preprint 2023) **StyleDrop: Text-to-Image Generation in Any Style**, Kihyuk Sohn et al. [[Paper](https://arxiv.org/abs/2306.00983)] [[Project](https://styledrop.github.io/)]
* ⭐⭐(arXiv preprint 2023) **Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models**, Xingqian Xu et al. [[Paper](https://arxiv.org/abs/2305.16223)] [[Code](https://github.com/SHI-Labs/Prompt-Free-Diffusion)] [[Hugging Face](https://huggingface.co/spaces/shi-labs/Prompt-Free-Diffusion)]
* ⭐⭐ (SIGGRAPH 2023) **Blended Latent Diffusion**, Omri Avrahami et al. [[Paper](https://arxiv.org/abs/2206.02779)] [[Code](https://github.com/omriav/blended-latent-diffusion)] [[Project](https://omriavrahami.com/blended-latent-diffusion-page/)]
* (CVPR 2023) [💬Controllable] **SpaText: Spatio-Textual Representation for Controllable Image Generation**, Omri Avrahami et al. [[Paper](https://arxiv.org/abs/2211.14305)] [[Project](https://omriavrahami.com/spatext/)]
* ⭐⭐ (arXiv 2023) **The Chosen One: Consistent Characters in Text-to-Image Diffusion Models**, Omri Avrahami et al. [[Paper](https://arxiv.org/abs/2311.10093)] [[Code](https://github.com/ZichengDuan/TheChosenOne)] [[Project](https://omriavrahami.com/the-chosen-one/)]
* (CVPR 2023) [💬Stable Diffusion with Brain] **High-resolution image reconstruction with latent diffusion models from human brain activity**, Yu Takagi et al. [[Paper](https://www.biorxiv.org/content/10.1101/2022.11.18.517004v1)]
* (arXiv preprint 2023) **BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing**, Dongxu Li et al. [[Paper](https://arxiv.org/abs/2305.14720)]
* (arXiv preprint 2023) [💬Evaluation] **LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation**, Yujie Lu et al. [[Paper](https://arxiv.org/abs/2305.11116)] [[Code](https://github.com/YujieLu10/LLMScore)]
* (arXiv preprint 2023) **P+ : Extended Textual Conditioning in Text-to-Image Generation**, Andrey Voynov et al. [[Paper](https://arxiv.org/abs/2303.09522)] [[Project](https://prompt-plus.github.io/)]
* (arXiv preprint 2023) **Taming Encoder for Zero Fine-tuning Image Customization with Text-to-Image Diffusion Models**, Xuhui Jia et al. [[Paper](https://arxiv.org/abs/2304.02642)]
* (ICML 2023) **TR0N: Translator Networks for 0-Shot Plug-and-Play Conditional Generation**, Zhaoyan Liu et al. [[Paper](https://arxiv.org/abs/2304.13742)] [[Code](https://github.com/layer6ai-labs/tr0n)] [[Hugging Face](https://huggingface.co/spaces/Layer6/TR0N)]
* (ICLR 2023) [💬3D]**DreamFusion: Text-to-3D using 2D Diffusion**, Ben Poole et al. [[Paper (arXiv)](https://arxiv.org/abs/2209.14988)] [[Paper (OpenReview)](https://openreview.net/forum?id=FjNys5c7VyY)] [[Project](https://dreamfusion3d.github.io/)] [[Short Read](https://www.louisbouchard.ai/dreamfusion/)]
* (ICLR 2023) **Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis**, Weixi Feng et al. [[Paper (arXiv)](https://arxiv.org/abs/2212.05032)] [[Paper (OpenReview)](https://openreview.net/forum?id=PUIqjT4rzq7)] [[Code](https://github.com/shunk031/training-free-structured-diffusion-guidance)]
* ⭐⭐(arXiv preprint 2023) **Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation**, Yuval Kirstain et al. [[Paper](https://arxiv.org/abs/2305.01569)] [[Code](https://github.com/yuvalkirstain/PickScore)] [[Dataset](https://huggingface.co/datasets/yuvalkirstain/pickapic_v1)] [[Online Application](https://pickapic.io/)] [[PickScore](https://huggingface.co/yuvalkirstain/PickScore_v1)]
* (arXiv preprint 2023) **TTIDA: Controllable Generative Data Augmentation via Text-to-Text and Text-to-Image Models**, Yuwei Yin et al. [[Paper](https://arxiv.org/abs/2304.08821)]
* (arXiv preprint 2023) [💬 Textual Inversion] **Controllable Textual Inversion for Personalized Text-to-Image Generation**, Jianan Yang et al. [[Paper](https://arxiv.org/abs/2304.05265)]
* (arXiv preprint 2023) **Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion**, Seongmin Lee et al. [[Paper](https://arxiv.org/abs/2305.03509)] [[Project](https://poloclub.github.io/diffusion-explainer/)]
* ⭐⭐(Findings of ACL 2023) [💬 Multi-language-to-Image] **AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities**, Zhongzhi Chen et al. [[Paper](https://arxiv.org/abs/2211.06679)] [[Code-AltDiffusion](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltDiffusion-m18)] [[Code-AltCLIP](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP-m18)] [[Hugging Face](https://huggingface.co/BAAI/AltDiffusion-m18)]
* (arXiv preprint 2023) [💬 Seed selection] **It is all about where you start: Text-to-image generation with seed selection**, Dvir Samuel et al. [[Paper](https://arxiv.org/abs/2304.14530)]
* (arXiv preprint 2023) [💬 Audio/Sound/Multi-language-to-Image] **GlueGen: Plug and Play Multi-modal Encoders for X-to-image Generation**, Can Qin et al. [[Paper](https://arxiv.org/abs/2303.10056)]
* (arXiv preprint 2023) [💬Faithfulness Evaluation] **TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering**, Yushi Hu et al. [[Paper](https://arxiv.org/abs/2303.11897)] [[Project](https://tifa-benchmark.github.io/)] [[Code](https://github.com/Yushi-Hu/tifa)]
* (arXiv preprint 2023) **InstantBooth: Personalized Text-to-Image Generation without Test-Time Finetuning**, Jing Shi et al. [[Paper](https://arxiv.org/abs/2304.03411)] [[Project](https://jshi31.github.io/InstantBooth/)]
* (TOMM 2023) **LFR-GAN: Local Feature Refinement based Generative Adversarial Network for Text-to-Image Generation**, Zijun Deng et al. [[Paper](https://dl.acm.org/doi/10.1145/3589002)] [[Code](https://github.com/PKU-ICST-MIPL/LFR-GAN_TOMM2023)]
* (ICCV 2023) **Expressive Text-to-Image Generation with Rich Text**, Songwei Ge et al. [[Paper](https://arxiv.org/abs/2304.06720)] [[Code](https://github.com/SongweiGe/rich-text-to-image)] [[Project](https://rich-text-to-image.github.io/)] [[Demo](https://huggingface.co/spaces/songweig/rich-text-to-image/discussions)]
* (arXiv preprint 2023) [💬Human Preferences] **ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation**, Jiazheng Xu et al. [[Paper](https://arxiv.org/abs/2304.05977)] [[Code](https://github.com/THUDM/ImageReward)]
* (arXiv preprint 2023) **eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers**, Yogesh Balaji et al. [[Paper](https://arxiv.org/abs/2211.01324)] [[Project](https://research.nvidia.com/labs/dir/eDiff-I/)]
* (CVPR 2023) **GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis**, Ming Tao et al. [[Paper](https://arxiv.org/abs/2301.12959)] [[Code](https://github.com/tobran/GALIP)]
* (CVPR 2023) [💬Human Evaluation] **Toward Verifiable and Reproducible Human Evaluation for Text-to-Image Generation**, Mayu Otani et al. [[Paper](https://arxiv.org/abs/2304.01816)]
* (arXiv preprint 2023) **Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models**, Lukas Höllein et al. [[Paper](https://arxiv.org/abs/2303.11989)] [[Project](https://lukashoel.github.io/text-to-room/)] [[Code](https://github.com/lukasHoel/text2room)] [[Video](https://www.youtube.com/watch?v=fjRnFL91EZc)]
* (arXiv preprint 2023) **Editing Implicit Assumptions in Text-to-Image Diffusion Models**, Hadas Orgad et al. [[Paper](https://arxiv.org/abs/2303.08084)] [[Project](https://time-diffusion.github.io/)] [[Code](https://github.com/bahjat-kawar/time-diffusion)]
* ⭐⭐(arXiv preprint 2023) **Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models**, Chenfei Wu et al. [[Paper](https://arxiv.org/abs/2303.04671)] [[Code](https://github.com/microsoft/visual-chatgpt)]
* (arXiv preprint 2023) **X&Fuse: Fusing Visual Information in Text-to-Image Generation**, Yuval Kirstain et al. [[Paper](https://arxiv.org/abs/2303.01000v1)]
* (CVPR 2023) [💬Stable Diffusion with Brain] **High-resolution image reconstruction with latent diffusion models from human brain activity**, Yu Takagi et al. [[Paper](https://www.biorxiv.org/content/10.1101/2022.11.18.517004v1)] [[Project](https://sites.google.com/view/stablediffusion-with-brain/)] [[Code](https://github.com/yu-takagi/StableDiffusionReconstruction)]
* ⭐⭐(arXiv preprint 2023) **Universal Guidance for Diffusion Models**, Arpit Bansal et al. [[Paper](https://arxiv.org/abs/2302.07121)] [[Code](https://github.com/arpitbansal297/Universal-Guided-Diffusion)]
* ⭐(arXiv preprint 2023) **Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models**, Hila Chefer et al. [[Paper](https://arxiv.org/abs/2301.13826)] [[Project](https://attendandexcite.github.io/Attend-and-Excite/)] [[Code](https://github.com/AttendAndExcite/Attend-and-Excite)]
* (BMVC 2023) **Divide & Bind Your Attention for Improved Generative Semantic Nursing**, Yumeng Li et al. [[Paper](https://arxiv.org/abs/2307.10864)] [[Project](https://sites.google.com/view/divide-and-bind)] [[Code](https://github.com/boschresearch/Divide-and-Bind)]
* (IEEE Transactions on Multimedia) **ALR-GAN: Adaptive Layout Refinement for Text-to-Image Synthesis**, Hongchen Tan et al. [[Paper](https://ieeexplore.ieee.org/abstract/document/10023990)]
* ⭐(CVPR 2023) **Multi-Concept Customization of Text-to-Image Diffusion**, Nupur Kumari et al. [[Paper](https://arxiv.org/abs/2212.04488)] [[Project](https://www.cs.cmu.edu/~custom-diffusion/)] [[Code](https://github.com/adobe-research/custom-diffusion)] [[Hugging Face](https://huggingface.co/spaces/nupurkmr9/custom-diffusion)]
* (CVPR 2023) **GLIGEN: Open-Set Grounded Text-to-Image Generation**, Yuheng Li et al. [[Paper](https://arxiv.org/abs/2301.07093)] [[Code](https://github.com/gligen/GLIGEN)] [[Project](https://gligen.github.io/)] [[Hugging Face Demo](https://huggingface.co/spaces/gligen/demo)]
* (arXiv preprint 2023) **Attribute-Centric Compositional Text-to-Image Generation**, Yuren Cong et al. [[Paper](https://arxiv.org/abs/2301.01413)] [[Project](https://github.com/yrcong/ACTIG)]
* (arXiv preprint 2023) **Muse: Text-To-Image Generation via Masked Generative Transformers**, Huiwen Chang et al. [[Paper](https://arxiv.org/abs/2301.00704v1)] [[Project](https://muse-model.github.io/)]

[<🎯Back to Top>](#head-content)
## *6. Other Related Works*
* **📝Prompt Engineering📝**
* (CHI 2024) **PromptCharm: Text-to-Image Generation through Multi-modal Prompting and Refinement**, Zhijie Wang et al. [[Paper](https://arxiv.org/abs/2403.04014)]
* (arXiv preprint 2024) **Automated Black-box Prompt Engineering for Personalized Text-to-Image Generation**, Yutong He et al. [[Paper](https://arxiv.org/abs/2403.19103)]
* (EMNLP 2023) **BeautifulPrompt: Towards Automatic Prompt Engineering for Text-to-Image Synthesis**, Tingfeng Cao et al. [[Paper](https://arxiv.org/abs/2311.06752)]
* (arXiv preprint 2023) [💬Optimizing Prompts] **NeuroPrompts: An Adaptive Framework to Optimize Prompts for Text-to-Image Generation**, Shachar Rosenman et al. [[Paper](https://arxiv.org/abs/2311.12229)] [[Video Demo](https://www.youtube.com/watch?v=Cmca_RWYn2g)]
* (arXiv preprint 2022) [💬Optimizing Prompts] **Optimizing Prompts for Text-to-Image Generation**, Yaru Hao et al. [[Paper](https://arxiv.org/abs/2212.09611)] [[Code](https://github.com/microsoft/LMOps)] [[Hugging Face](https://huggingface.co/spaces/microsoft/Promptist)]
* (arXiv preprint 2022) [💬Aesthetic Image Generation] **Best Prompts for Text-to-Image Models and How to Find Them**, Nikita Pavlichenko et al. [[Paper](https://arxiv.org/abs/2209.11711)]
* (arXiv preprint 2022) **A Taxonomy of Prompt Modifiers for Text-To-Image Generation**, Jonas Oppenlaender [[Paper](https://arxiv.org/abs/2204.13988)]
* (CHI 2022) **Design Guidelines for Prompt Engineering Text-to-Image Generative Models**, Vivian Liu et al. [[Paper](https://dl.acm.org/doi/abs/10.1145/3491102.3501825)]

[<🎯Back to Top>](#head-content)
* **⭐Multimodality⭐**
* (arXiv preprint 2024) **4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities**, Roman Bachmann et al. [[Paper](https://arxiv.org/abs/2406.09406)] [[4M-Paper](https://arxiv.org/abs/2312.06647)] [[Project](https://4m.epfl.ch/)] [[Code](https://github.com/apple/ml-4m/)]
* 📚 Any-to-any, RGB-to-all(Caption, BBox, Semantic segmentation, depth, ...), Fine-grained generation & editing, Multimodal guidance, Any-to-RGB Retrieval, RGB-to-any retrieval,
* (arXiv preprint 2024) **Ctrl-X: Controlling Structure and Appearance for Text-To-Image Generation Without Guidance**, Kuan Heng Lin et al. [[Paper](https://arxiv.org/abs/2406.07540)] [[Project](https://genforce.github.io/ctrl-x/)]
* 📚 Structure (natural images, canny maps, normal maps, wireframes, 3D meshes, etc.) + Image → Image, Structure (mask, 3D mesh, canny maps, depth maps, etc.) + Text → Image
* (arXiv preprint 2024) **Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers**, Peng Gao et al. [[Paper](https://arxiv.org/abs/2405.05945)] [[Code](https://github.com/Alpha-VLLM/Lumina-T2X)]
* 📚 Text → Image/Video/Audio/3D/Music
* (ICLR 2024) **Cross-Modal Contextualized Diffusion Models for Text-Guided Visual Generation and Editing**, Ling Yang et al. [[Paper](https://arxiv.org/abs/2402.16627v1)] [[Code](https://github.com/YangLing0818/ContextDiff?tab=readme-ov-file)]
* 📚 Text → Image, Text → Video
* (arXiv preprint 2024) **TMT: Tri-Modal Translation between Speech, Image, and Text by Processing Different Modalities as Different Languages**, Minsu Kim et al. [[Paper](https://arxiv.org/abs/2402.16021v1)]
* 📚 Image → Text, Image → Speech, Text → Image, Speech → Image, Speech → Text, Text → Speech
* ⭐⭐(NeurIPS 2023) **CoDi: Any-to-Any Generation via Composable Diffusion**, Zineng Tang et al. [[Paper](https://arxiv.org/abs/2305.11846)] [[Project](https://codi-gen.github.io/)] [[Code](https://github.com/microsoft/i-Code/tree/main/i-Code-V3)]
* 📚[Single-to-Single Generation] Text → Image, Audio → Image, Image → Video, Image → Audio, Audio → Text, Image → Text
* 📚[Multi-Outputs Joint Generation] Text → Video + Audio, Text → Text + Audio + Image, Text + Image → Text + Image
* 📚[Multiple Conditioning] Text + Audio → Image, Text + Image → Image, Text + Audio + Image → Image, Text + Audio → Video, Text + Image → Video, Video + Audio → Text, Image + Audio → Audio, Text + Image → Audio
* ⭐⭐(CVPR 2023) **ImageBind: One Embedding Space To Bind Them All**, Rohit Girdhar et al. [[Paper](https://arxiv.org/abs/2305.05665)] [[Project](https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/)] [[Code](https://github.com/facebookresearch/ImageBind)]
* 📚Image-to-Audio retrieval, Audio-to-Image retrieval, Text-to-Image+Audio, Audio+Image-to-Image, Audio-to-Image generation, Zero-shot text to audio retrieval and classification...
* ⭐(CVPR 2023) **Scaling up GANs for Text-to-Image Synthesis**, Minguk Kang et al. [[Paper](https://arxiv.org/abs/2303.05511)] [[Project](https://mingukkang.github.io/GigaGAN/)]
* 📚Text-to-Image, Controllable image synthesis (Style Mixing, Prompt Interpolation, Prompt Mixing), Super Resolution (Text-conditioned, Unconditional)
* (arXiv preprint 2023) **DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models**, Sungnyun Kim et al. [[Paper](https://arxiv.org/abs/2305.15194)] [[Code](https://github.com/sungnyun/diffblender)] [[Project](https://sungnyun.github.io/diffblender/)]
* 📚Text-to-Image, Multimodal controllable image synthesis, Text + Image + Spatial/Non-spatial Tokens → Image
* (arXiv preprint 2023) **TextIR: A Simple Framework for Text-based Editable Image Restoration**, Yunpeng Bai et al. [[Paper](https://arxiv.org/abs/2302.14736)] [[Code](https://github.com/haha-lisa/RDM-Region-Aware-Diffusion-Model)]
* 📚Image Inpainting, Image Colorization, Image Super-resolution, Image Editing via Degradation
* (arXiv preprint 2023) **Modulating Pretrained Diffusion Models for Multimodal Image Synthesis**, Cusuh Ham et al. [[Paper](https://arxiv.org/abs/2302.12764)]
* 📚Sketch-to-Image, Segmentation-to-Image, Text+Sketch-to-Image, Text+Segmentation-to-Image, Text+Sketch+Segmentation-to-Image
* (arXiv preprint 2023) **Muse: Text-To-Image Generation via Masked Generative Transformers**, Huiwen Chang et al. [[Paper](https://arxiv.org/abs/2301.00704v1)] [[Project](https://muse-model.github.io/)]
* 📚Text-to-Image, Zero-shot+Mask-free editing, Zero-shot Inpainting/Outpainting
* (arXiv preprint 2022) **Versatile Diffusion: Text, Images and Variations All in One Diffusion Model**, Xingqian Xu et al. [[Paper](https://arxiv.org/abs/2211.08332)] [[Code](https://github.com/SHI-Labs/Versatile-Diffusion)] [[Hugging Face](https://huggingface.co/spaces/shi-labs/Versatile-Diffusion)]
* 📚Text-to-Image, Image-Variation, Image-to-Text, Disentanglement, Text+Image-Guided Generation, Editable I2T2I
* (arXiv preprint 2022) **Frido: Feature Pyramid Diffusion for Complex Scene Image Synthesis**, Wan-Cyuan Fan et al. [[Paper](https://arxiv.org/abs/2208.13753)] [[Code](https://github.com/davidhalladay/Frido)]
* 📚Text-to-Image, Scene Graph to Image, Layout-to-Image, Unconditional Image Generation
* (arXiv preprint 2022) **NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis**, Chenfei Wu et al. [[Paper](https://arxiv.org/abs/2207.09814)] [[Code](https://github.com/microsoft/NUWA)] [[Project](https://nuwa-infinity.microsoft.com/#/)]
* 📚Unconditional Image Generation(HD), Text-to-Image(HD), Image Animation(HD), Image Outpainting(HD), Text-to-Video(HD)
* (ECCV 2022) **NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion**, Chenfei Wu et al. [[Paper](https://arxiv.org/abs/2111.12417)] [[Code](https://github.com/microsoft/NUWA)]
* **Multimodal Pretrained Model for Multi-tasks🎄**: Text-To-Image, Sketch-to-Image, Image Completion, Text-Guided Image Manipulation, Text-to-Video, Video Prediction, Sketch-to-Video, Text-Guided Video Manipulation
* (ACMMM 2022) **Rethinking Super-Resolution as Text-Guided Details Generation**, Chenxi Ma et al. [[Paper](https://arxiv.org/abs/2207.06604)]
* 📚Text-to-Image, High-resolution, Text-guided High-resolution
* (arXiv preprint 2022) **Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation**, Ye Zhu et al. [[Paper](https://arxiv.org/abs/2206.07771)] [[Code](https://github.com/L-YeZhu/CDCD)]
* 📚Text-to-Image, Dance-to-Music, Class-to-Image
* (arXiv preprint 2022) **M6-Fashion: High-Fidelity Multi-modal Image Generation and Editing**, Zhikang Li et al. [[Paper](https://arxiv.org/abs/2205.11705)]
* 📚Text-to-Image, Unconditional Image Generation, Local-editing, Text-guided Local-editing, In/Out-painting, Style-mixing
* (CVPR 2022) **Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning**, Yogesh Balaji et al. [[Paper](https://arxiv.org/abs/2203.02573)] [[Code](https://github.com/snap-research/MMVID)] [[Project](https://snap-research.github.io/MMVID/)]
* 📚Text-to-Video, Independent Multimodal Controls, Dependent Multimodal Controls
* ⭐⭐(CVPR 2022) **High-Resolution Image Synthesis with Latent Diffusion Models**, Robin Rombach et al. [[Paper](https://arxiv.org/abs/2112.10752)] [[Code](https://github.com/CompVis/latent-diffusion)] [[Stable Diffusion Code](https://github.com/CompVis/stable-diffusion)]
* 📚Text-to-Image, Conditional Latent Diffusion, Super-Resolution, Inpainting
* ⭐⭐(arXiv preprint 2022) **Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework**, Peng Wang et al. [[Paper](https://arxiv.org/abs/2202.03052v1)] [[Code](https://github.com/ofa-sys/ofa)] [[Hugging Face](https://huggingface.co/OFA-Sys)]
* 📚Text-to-Image Generation, Image Captioning, Text Summarization, Self-Supervised Image Classification, **[SOTA]** Referring Expression Comprehension, Visual Entailment, Visual Question Answering
* (arXiv preprint 2021) **Multimodal Conditional Image Synthesis with Product-of-Experts GANs**, Xun Huang et al. [[Paper](https://arxiv.org/abs/2112.05130)] [[Project](https://deepimagination.cc/PoE-GAN/)]
* 📚Text-to-Image, Segmentation-to-Image, Text+Segmentation/Sketch/Image→Image, Sketch+Segmentation/Image→Image, Segmentation+Image→Image
* (NeurIPS 2021) **M6-UFC: Unifying Multi-Modal Controls for Conditional Image Synthesis via Non-Autoregressive Generative Transformers**, Zhu Zhang et al. [[Paper](https://arxiv.org/abs/2105.14211)]
* 📚Text-to-Image, Sketch-to-Image, Style Transfer, Image Inpainting, Multi-Modal Control to Image
* (arXiv preprint 2021) **ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation**, Han Zhang et al. [[Paper](https://arxiv.org/abs/2112.15283)]
* A pre-trained **10-billion** parameter model: ERNIE-ViLG.
* A large-scale dataset of **145 million** high-quality Chinese image-text pairs.
* 📚Text-to-Image, Image Captioning, Generative Visual Question Answering
* (arXiv preprint 2021) **Multimodal Conditional Image Synthesis with Product-of-Experts GANs**, Xun Huang et al. [[Paper](https://arxiv.org/abs/2112.05130)] [[Project](https://deepimagination.cc/PoE-GAN/)]
* 📚Text-to-Image, Segmentation-to-Image, Text+Segmentation/Sketch/Image → Image, Sketch+Segmentation/Image → Image, Segmentation+Image → Image
* (arXiv preprint 2021) **L-Verse: Bidirectional Generation Between Image and Text**, Taehoon Kim et al. [[Paper](https://arxiv.org/abs/2111.11133)] [[Code](https://github.com/tgisaturday/L-Verse)]
* 📚Text-To-Image, Image-To-Text, Image Reconstruction
* (arXiv preprint 2021) [💬Semantic Diffusion Guidance] **More Control for Free! Image Synthesis with Semantic Diffusion Guidance**, Xihui Liu et al. [[Paper](https://arxiv.org/abs/2112.05744)] [[Project](https://xh-liu.github.io/sdg/)]
* 📚Text-To-Image, Image-To-Image, Text+Image → Image

[<🎯Back to Top>](#head-content)
* **🛫Applications🛫**
* (arXiv preprint 2024) [💬Multi-Concept Composition] **Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition**, Chun-Hsiao Yeh et al. [[Paper](https://arxiv.org/abs/2402.15504)] [[Project](https://danielchyeh.github.io/Gen4Gen/)] [[Code](https://github.com/louisYen/Gen4Gen)]
* (arXiv preprint 2023) [💬3D Hairstyle Generation] **HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles**, Vanessa Sklyarova et al. [[Paper](https://arxiv.org/abs/2312.11666)] [[Project](https://haar.is.tue.mpg.de/)]
* (arXiv preprint 2023) [💬Image Super-Resolution] **Image Super-Resolution with Text Prompt Diffusion**, Zheng Chen et al. [[Paper](https://arxiv.org/abs/2311.14282)] [[Code](https://github.com/zhengchen1999/PromptSR)]
* (2023) [💬Image Editing] **Generative Fill**. [[Project](https://www.adobe.com/products/photoshop/generative-fill.html)]
* (arXiv preprint 2023) [💬LLMs] **LLM as an Art Director (LaDi): Using LLMs to improve Text-to-Media Generators**, Allen Roush et al. [[Paper](https://arxiv.org/abs/2311.03716v1)]
* (arXiv preprint 2023) [💬Segmentation] **SegGen: Supercharging Segmentation Models with Text2Mask and Mask2Img Synthesis**, Hanrong Ye et al. [[Paper](https://arxiv.org/abs/2311.03355)] [[Project](https://seggenerator.github.io/)]
* (arXiv preprint 2023) [💬Text Editing] **DiffUTE: Universal Text Editing Diffusion Model**, Haoxing Chen et al. [[Paper](https://arxiv.org/abs/2305.10825)]
* (arXiv preprint 2023) [💬Text Character Generation] **TextDiffuser: Diffusion Models as Text Painters**, Jingye Chen et al. [[Paper](https://arxiv.org/abs/2305.10855)]
* (CVPR 2023) [💬Open-Vocabulary Panoptic Segmentation] **Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models**, Jiarui Xu et al. [[Paper](https://arxiv.org/abs/2303.04803)] [[Code](https://github.com/NVlabs/ODISE)] [[Project](https://jerryxu.net/ODISE/)] [[HuggingFace](https://huggingface.co/spaces/xvjiarui/ODISE)]
* (arXiv preprint 2023) [💬Chinese Text Character Generation] **GlyphDraw: Learning to Draw Chinese Characters in Image Synthesis Models Coherently**, Jian Ma et al. [[Paper](https://arxiv.org/abs/2303.17870)] [[Project](https://1073521013.github.io/glyph-draw.github.io/)]
* (arXiv preprint 2023) [💬Grounded Generation] **Guiding Text-to-Image Diffusion Model Towards Grounded Generation**, Ziyi Li et al. [[Paper](https://arxiv.org/abs/2301.05221)] [[Code](https://github.com/Lipurple/Grounded-Diffusion)] [[Project](https://lipurple.github.io/Grounded_Diffusion/)]
* (arXiv preprint 2022) [💬Semantic segmentation] **CLIP is Also an Efficient Segmenter: A Text-Driven Approach for Weakly Supervised Semantic Segmentation**, Yuqi Lin et al. [[Paper](https://arxiv.org/abs/2212.09506)] [[Code](https://github.com/linyq2117/CLIP-ES)]
* (arXiv preprint 2022) [💬Unsupervised semantic segmentation] **Peekaboo: Text to Image Diffusion Models are Zero-Shot Segmentors**, Ryan Burgert et al. [[Paper](https://arxiv.org/abs/2211.13224)]
* (SIGGRAPH Asia 2022) [💬Text+Speech → Gesture] **Rhythmic Gesticulator: Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings**, Tenglong Ao et al. [[Paper](https://arxiv.org/abs/2210.01448)] [[Code](https://github.com/Aubrey-ao/HumanBehaviorAnimation)]
* (arXiv preprint 2022) [💬Text+Image+Shape → Image] **Shape-Guided Diffusion with Inside-Outside Attention**, Dong Huk Park et al. [[Paper](https://arxiv.org/abs/2212.00210v1)] [[Project](https://shape-guided-diffusion.github.io/)]

[<🎯Back to Top>](#head-content)
* **Text+Image/Video → Image/Video**
* (CVPR 2024) **SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models**, Yuzhou Huang et al. [[Paper](https://arxiv.org/abs/2312.06739)] [[Project](https://yuzhou914.github.io/SmartEdit/)] [[Code](https://github.com/TencentARC/SmartEdit)]
* (arXiv preprint 2024) **MM-Diff: High-Fidelity Image Personalization via Multi-Modal Condition Integration**, Zhichao Wei et al. [[Paper](https://arxiv.org/abs/2403.15059)]
* (CVPR 2024) **Instruct-Imagen: Image Generation with Multi-modal Instruction**, Hexiang Hu et al. [[Paper](https://arxiv.org/abs/2401.01952)] [[Project](https://instruct-imagen.github.io/)]
* (arXiv preprint 2024) [💬NERF] **InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes**, Mohamad Shahbazi et al. [[Paper](https://arxiv.org/abs/2401.05335)] [[Project](https://mohamad-shahbazi.github.io/inserf/)]
* (arXiv preprint 2023) **ViCo: Plug-and-play Visual Condition for Personalized Text-to-image Generation**, Shaozhe Hao et al. [[Paper](https://arxiv.org/abs/2306.00971)] [[Code](https://github.com/haoosz/ViCo)]
* (arXiv preprint 2023) [💬Video Editing] **MagicStick: Controllable Video Editing via Control Handle Transformations**, Yue Ma et al. [[Paper](https://arxiv.org/abs/2312.03047v1)] [[Project](https://magic-stick-edit.github.io/)] [[Code](https://github.com/mayuelala/MagicStick)]
* (arXiv preprint 2023) **Lego: Learning to Disentangle and Invert Concepts Beyond Object Appearance in Text-to-Image Diffusion Models**, Saman Motamed et al. [[Paper](https://arxiv.org/abs/2311.13833)]
* (ACMMM 2023) [💬Style Transfer] **ControlStyle: Text-Driven Stylized Image Generation Using Diffusion Priors**, Jingwen Chen et al. [[Paper](https://arxiv.org/abs/2311.05463)]
* (ICCV 2023) **A Latent Space of Stochastic Diffusion Models for Zero-Shot Image Editing and Guidance**, Chen Henry Wu et al. [[Paper](https://openaccess.thecvf.com/content/ICCV2023/papers/Wu_A_Latent_Space_of_Stochastic_Diffusion_Models_for_Zero-Shot_Image_ICCV_2023_paper.pdf)] [[Arxiv](https://arxiv.org/abs/2210.05559)] [[Code](https://github.com/chenwu98/cycle-diffusion)]
* (arXiv preprint 2023) [💬Multi-Subject Generation] **VideoDreamer: Customized Multi-Subject Text-to-Video Generation with Disen-Mix Finetuning**, Hong Chen et al. [[Paper](https://arxiv.org/abs/2311.00990v1)] [[Project](https://videodreamer23.github.io/)] [[Code](https://github.com/videodreamer23/videodreamer23.github.io)]
* (arXiv preprint 2023) [💬Video Editing] **CCEdit: Creative and Controllable Video Editing via Diffusion Models**, Ruoyu Feng et al. [[Paper](https://arxiv.org/abs/2309.16496)] [[Demo video](https://www.youtube.com/watch?v=UQw4jq-igN4)]
* ⭐⭐ (SIGGRAPH Asia 2023) **Break-A-Scene: Extracting Multiple Concepts from a Single Image**, Omri Avrahami et al. [[Paper](https://arxiv.org/abs/2305.16311)] [[Project](https://omriavrahami.com/break-a-scene/)] [[Code](https://github.com/google/break-a-scene)]
* (arXiv preprint 2023) **Visual Instruction Inversion: Image Editing via Visual Prompting**, Thao Nguyen et al. [[Paper](https://arxiv.org/abs/2307.14331)] [[Project](https://thaoshibe.github.io/visii/)]
* (CVPR 2023) [💬3D Shape Editing] **ShapeTalk: A Language Dataset and Framework for 3D Shape Edits and Deformations**, Panos Achlioptas et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Achlioptas_ShapeTalk_A_Language_Dataset_and_Framework_for_3D_Shape_Edits_CVPR_2023_paper.pdf)] [[Code](https://github.com/optas/changeit3d)] [[Project](https://changeit3d.github.io/)]
* (arXiv preprint 2023) [💬Colorization] **DiffColor: Toward High Fidelity Text-Guided Image Colorization with Diffusion Models**, Jianxin Lin et al. [[Paper](https://arxiv.org/abs/2308.01655)]
* (ICCV 2023) [💬Video Editing] **FateZero: Fusing Attentions for Zero-shot Text-based Video Editing**, Chenyang Qi et al. [[Paper](https://arxiv.org/abs/2303.09535)] [[Code](https://github.com/ChenyangQiQi/FateZero)] [[Project](https://fate-zero-edit.github.io/)] [[Hugging Face](https://huggingface.co/spaces/chenyangqi/FateZero)]
* (arXiv preprint 2023) [💬3D] **AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose**, Huichao Zhang et al. [[Paper](https://arxiv.org/abs/2308.03610)] [[Project](https://avatarverse3d.github.io/)]
* (ACM Transactions on Graphics 2023) **CLIP-Guided StyleGAN Inversion for Text-Driven Real Image Editing**, Ahmet Canberk Baykal et al. [[Paper](https://arxiv.org/abs/2307.08397)]
* ⭐⭐(arXiv preprint 2023) **AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning**, Yuwei Guo et al. [[Paper](https://arxiv.org/abs/2307.04725)] [[Project](https://animatediff.github.io/)] [[Code](https://github.com/guoyww/animatediff/)]
* (ICLR 2023) **DiffEdit: Diffusion-based semantic image editing with mask guidance**, Guillaume Couairon et al. [[Paper](https://arxiv.org/abs/2210.11427v1)]
* (arXiv preprint 2023) **Controlling Text-to-Image Diffusion by Orthogonal Finetuning**, Zeju Qiu et al. [[Paper](https://arxiv.org/abs/2306.07280)] [[Project](https://oft.wyliu.com/)] [[Code](https://github.com/Zeju1997/oft)]
* (arXiv preprint 2023) [💬Reject Human Instructions] **Accountable Textual-Visual Chat Learns to Reject Human Instructions in Image Re-creation**, Zhiwei Zhang et al. [[Paper](https://arxiv.org/abs/2303.05983)] [[Project](https://matrix-alpha.github.io/)] [[Code](https://github.com/matrix-alpha/Accountable-Textual-Visual-Chat)]
* (arXiv preprint 2023) **MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation**, Marco Bellagente et al. [[Paper](https://arxiv.org/abs/2305.15296)]
* (CVPR 2023) **Text-Guided Unsupervised Latent Transformation for Multi-Attribute Image Manipulation**, Xiwen Wei et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2023/html/Wei_Text-Guided_Unsupervised_Latent_Transformation_for_Multi-Attribute_Image_Manipulation_CVPR_2023_paper.html)]
* (arXiv preprint 2023) **Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models**, Shihao Zhao et al. [[Paper](https://arxiv.org/abs/2305.16322v1)] [[Project](https://shihaozhaozsh.github.io/unicontrolnet/)]
* (arXiv preprint 2023) **Unified Multi-Modal Latent Diffusion for Joint Subject and Text Conditional Image Generation**, Yiyang Ma et al. [[Paper](https://arxiv.org/abs/2303.09319)]
* (arXiv preprint 2023) **DisenBooth: Disentangled Parameter-Efficient Tuning for Subject-Driven Text-to-Image Generation**, Hong Chen et al. [[Paper](https://arxiv.org/abs/2305.03374)]
* (arXiv preprint 2023) [💬Image Editing] **Guided Image Synthesis via Initial Image Editing in Diffusion Model**, Jiafeng Mao et al. [[Paper](https://arxiv.org/abs/2305.03382)]
* (arXiv preprint 2023) [💬Image Editing] **Prompt Tuning Inversion for Text-Driven Image Editing Using Diffusion Models**, Wenkai Dong et al. [[Paper](https://arxiv.org/abs/2305.04441)]
* (CVPR 2023) **DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation**, Nataniel Ruiz et al. [[Paper](https://arxiv.org/abs/2208.12242)] [[Project](https://dreambooth.github.io/)]
* (arXiv preprint 2023) **Shape-Guided Diffusion with Inside-Outside Attention**, Dong Huk Park et al. [[Paper](https://arxiv.org/abs/2212.00210)] [[Code](https://github.com/shape-guided-diffusion/shape-guided-diffusion)] [[Project](https://shape-guided-diffusion.github.io/)] [[Hugging Face](https://huggingface.co/spaces/shape-guided-diffusion/shape-guided-diffusion)]
* (arXiv preprint 2023) [💬Image Editing] **iEdit: Localised Text-guided Image Editing with Weak Supervision**, Rumeysa Bodur et al. [[Paper](https://arxiv.org/abs/2305.05947)]
* (PR 2023) [💬Person Re-identification] **BDNet: A BERT-based Dual-path Network for Text-to-Image Cross-modal Person Re-identification**, Qiang Liu et al. [[Paper](https://www.sciencedirect.com/science/article/pii/S0031320323003370)]
* (arXiv preprint 2023) **MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models**, Jing Zhao et al. [[Paper](https://arxiv.org/abs/2303.13126)] [[Code](https://github.com/MagicFusion/MagicFusion.github.io)] [[Project](https://magicfusion.github.io/)]
* (CVPR 2023) [💬3D] **TAPS3D: Text-Guided 3D Textured Shape Generation from Pseudo Supervision**, Jiacheng Wei et al. [[Paper](https://arxiv.org/abs/2303.13273)]
* ⭐⭐(arXiv preprint 2023) [💬Image Editing] **MasaCtrl: Tuning-free Mutual Self-Attention Control for Consistent Image Synthesis and Editing**, Mingdeng Cao et al. [[Paper](https://arxiv.org/abs/2304.08465)] [[Code](https://github.com/TencentARC/MasaCtrl)] [[Project](https://ljzycmd.github.io/projects/MasaCtrl/)]
* (arXiv preprint 2023) **Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos**, Yue Ma et al. [[Paper](https://arxiv.org/abs/2304.01186)] [[Code](https://github.com/mayuelala/FollowYourPose)] [[Hugging Face](https://huggingface.co/spaces/YueMafighting/FollowYourPose)]
* ⭐⭐(arXiv preprint 2023) [💬Image Editing] **Delta Denoising Score**, Amir Hertz et al. [[Paper](https://arxiv.org/abs/2304.07090)] [[Project](https://delta-denoising-score.github.io/)]
* (arXiv preprint 2023) **Subject-driven Text-to-Image Generation via Apprenticeship Learning**, Wenhu Chen et al. [[Paper](https://arxiv.org/abs/2304.00186)]
* (arXiv preprint 2023) [💬Image Editing] **Region-Aware Diffusion for Zero-shot Text-driven Image Editing**, Nisha Huang et al. [[Paper](https://arxiv.org/abs/2302.11797)] [[Code](https://github.com/haha-lisa/RDM-Region-Aware-Diffusion-Model)]
* ⭐⭐(arXiv preprint 2023) [💬Text+Video → Video] **Structure and Content-Guided Video Synthesis with Diffusion Models**, Patrick Esser et al. [[Paper](https://arxiv.org/abs/2302.03011)] [[Project](https://research.runwayml.com/gen1)]
* (arXiv preprint 2023) **ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation**, Yuxiang Wei et al. [[Paper](https://arxiv.org/abs/2302.13848)]
* (arXiv preprint 2023) [💬Fashion Image Editing] **FICE: Text-Conditioned Fashion Image Editing With Guided GAN Inversion**, Martin Pernuš et al. [[Paper](https://arxiv.org/abs/2301.02110)] [[Code](https://github.com/MartinPernus/FICE)]
* (AAAI 2023) **CLIPVG: Text-Guided Image Manipulation Using Differentiable Vector Graphics**, Yiren Song et al. [[Paper](https://arxiv.org/abs/2212.02122v1)]
* (AAAI 2023) **DE-Net: Dynamic Text-guided Image Editing Adversarial Networks**, Ming Tao et al. [[Paper](https://arxiv.org/abs/2206.01160)] [[Code](https://github.com/tobran/DE-Net)]
* (arXiv preprint 2022) **Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation**, Narek Tumanyan et al. [[Paper](https://arxiv.org/abs/2211.12572)] [[Project](https://pnp-diffusion.github.io/)]
* (arXiv preprint 2022) [💬Text+Image → Video] **Tell Me What Happened: Unifying Text-guided Video Completion via Multimodal Masked Video Generation**, Tsu-Jui Fu et al. [[Paper](https://arxiv.org/abs/2211.12824)]
* (arXiv preprint 2022) [💬Image Stylization] **DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization**, Nisha Huang et al. [[Paper](https://arxiv.org/abs/2211.10682)] [[Code](https://github.com/haha-lisa/Diffstyler)]
* (arXiv preprint 2022) **Null-text Inversion for Editing Real Images using Guided Diffusion Models**, Ron Mokady et al. [[Paper](https://arxiv.org/abs/2211.09794)] [[Project](https://null-text-inversion.github.io/)]
* (arXiv preprint 2022) **InstructPix2Pix: Learning to Follow Image Editing Instructions**, Tim Brooks et al. [[Paper](https://arxiv.org/abs/2211.09800)] [[Project](https://www.timothybrooks.com/instruct-pix2pix)]
* (ECCV 2022) [💬Style Transfer] **Language-Driven Artistic Style Transfer**, Tsu-Jui Fu et al. [[Paper](https://link.springer.com/chapter/10.1007/978-3-031-20059-5_41)] [[Code](https://github.com/tsujuifu/pytorch_ldast)]
* (arXiv preprint 2022) **Bridging CLIP and StyleGAN through Latent Alignment for Image Editing**, Wanfeng Zheng et al. [[Paper](https://arxiv.org/abs/2210.04506)]
* (NeurIPS 2022) **One Model to Edit Them All: Free-Form Text-Driven Image Manipulation with Semantic Modulations**, Yiming Zhu et al. [[Paper](https://arxiv.org/abs/2210.07883)] [[Code](https://github.com/KumapowerLIU/FFCLIP)]
* (BMVC 2022) **LDEdit: Towards Generalized Text Guided Image Manipulation via Latent Diffusion Models**, Paramanand Chandramouli et al. [[Paper](https://arxiv.org/abs/2210.02249v1)]
* (ACMMM 2022) [💬Iterative Language-based Image Manipulation] **LS-GAN: Iterative Language-based Image Manipulation via Long and Short Term Consistency Reasoning**, Gaoxiang Cong et al. [[Paper](https://dl.acm.org/doi/abs/10.1145/3503161.3548206)]
* (ACMMM 2022) [💬Digital Art Synthesis] **Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal Guided Diffusion**, Nisha Huang et al. [[Paper](https://arxiv.org/abs/2209.13360)] [[Code](https://github.com/haha-lisa/MGAD-multimodal-guided-artwork-diffusion)]
* (SIGGRAPH Asia 2022) [💬HDR Panorama Generation] **Text2Light: Zero-Shot Text-Driven HDR Panorama Generation**, Zhaoxi Chen et al. [[Paper](https://arxiv.org/abs/2209.09898)] [[Project](https://frozenburning.github.io/projects/text2light/)] [[Code](https://github.com/FrozenBurning/Text2Light)]
* (arXiv preprint 2022) **LANIT: Language-Driven Image-to-Image Translation for Unlabeled Data**, Jihye Park et al. [[Paper](https://arxiv.org/abs/2208.14889)] [[Project](https://ku-cvlab.github.io/LANIT/)] [[Code](https://github.com/KU-CVLAB/LANIT)]
* (ACMMM PIES-ME 2022) [💬3D Semantic Style Transfer] **Language-guided Semantic Style Transfer of 3D Indoor Scenes**, Bu Jin et al. [[Paper](https://arxiv.org/abs/2208.07870)] [[Code](https://github.com/AIR-DISCOVER/LASST)]
* (arXiv preprint 2022) [💬Face Animation] **Language-Guided Face Animation by Recurrent StyleGAN-based Generator**, Tiankai Hang et al. [[Paper](https://arxiv.org/abs/2208.05617)] [[Code](https://github.com/TiankaiHang/language-guided-animation)]
* (arXiv preprint 2022) [💬Fashion Design] **ARMANI: Part-level Garment-Text Alignment for Unified Cross-Modal Fashion Design**, Xujie Zhang et al. [[Paper](https://arxiv.org/abs/2208.05621)] [[Code](https://github.com/Harvey594/ARMANI)]
* (arXiv preprint 2022) [💬Image Colorization] **TIC: Text-Guided Image Colorization**, Subhankar Ghosh et al. [[Paper](https://arxiv.org/abs/2208.02843)]
* (ECCV 2022) [💬Animating Human Meshes] **CLIP-Actor: Text-Driven Recommendation and Stylization for Animating Human Meshes**, Kim Youwang et al. [[Paper](https://arxiv.org/abs/2206.04382)] [[Code](https://github.com/Youwang-Kim/CLIP-Actor)]
* (ECCV 2022) [💬Pose Synthesis] **TIPS: Text-Induced Pose Synthesis**, Prasun Roy et al. [[Paper](https://arxiv.org/abs/2207.11718)] [[Code](https://github.com/prasunroy/tips)] [[Project](https://prasunroy.github.io/tips/)]
* (ACMMM 2022) [💬Person Re-identification] **Learning Granularity-Unified Representations for Text-to-Image Person Re-identification**, Zhiyin Shao et al. [[Paper](https://arxiv.org/abs/2207.07802)] [[Code](https://github.com/ZhiyinShao-H/LGUR)]
* (ACMMM 2022) **Towards Counterfactual Image Manipulation via CLIP**, Yingchen Yu et al. [[Paper](https://arxiv.org/abs/2207.02812)] [[Code](https://github.com/yingchen001/CF-CLIP)]
* (ACMMM 2022) [💬Monocular Depth Estimation] **Can Language Understand Depth?**, Wangbo Zhao et al. [[Paper](https://arxiv.org/abs/2207.01077)] [[Code](https://github.com/Adonis-galaxy/DepthCLIP)]
* (arXiv preprint 2022) [💬Image Style Transfer] **Language-Driven Image Style Transfer**, Tsu-Jui Fu et al. [[Paper](https://arxiv.org/abs/2106.00178)]
* (CVPR 2022) [💬Image Segmentation] **Image Segmentation Using Text and Image Prompts**, Timo Lüddecke et al. [[Paper](https://arxiv.org/abs/2112.10003)] [[Code](https://github.com/timojl/clipseg)]
* (CVPR 2022) [💬Video Segmentation] **Modeling Motion with Multi-Modal Features for Text-Based Video Segmentation**, Wangbo Zhao et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhao_Modeling_Motion_With_Multi-Modal_Features_for_Text-Based_Video_Segmentation_CVPR_2022_paper.pdf)] [[Code](https://github.com/wangbo-zhao/2022cvpr-mmmmtbvs)]
* (arXiv preprint 2022) [💬Image Matting] **Referring Image Matting**, Jizhizi Li et al. [[Paper](https://arxiv.org/abs/2206.05149)] [[Dataset](https://github.com/JizhiziLi/RIM)]
* (arXiv preprint 2022) [💬Stylizing Video Objects] **Text-Driven Stylization of Video Objects**, Sebastian Loeschcke et al. [[Paper](https://arxiv.org/abs/2206.12396)] [[Project](https://sloeschcke.github.io/Text-Driven-Stylization-of-Video-Objects/)]
* (arXiv preprint 2022) **DALL-E for Detection: Language-driven Context Image Synthesis for Object Detection**, Yunhao Ge et al. [[Paper](https://arxiv.org/abs/2206.09592)]
* (IEEE Transactions on Neural Networks and Learning Systems 2022) [💬Pose-Guided Person Generation] **Verbal-Person Nets: Pose-Guided Multi-Granularity Language-to-Person Generation**, Deyin Liu et al. [[Paper](https://ieeexplore.ieee.org/document/9732175)]
* (SIGGRAPH 2022) [💬3D Avatar Generation] **AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars**, Fangzhou Hong et al. [[Paper](https://arxiv.org/abs/2205.08535)] [[Code](https://github.com/hongfz16/AvatarCLIP)] [[Project](https://hongfz16.github.io/projects/AvatarCLIP.html)]
* ⭐⭐(arXiv preprint 2022) [💬Image & Video Editing] **Text2LIVE: Text-Driven Layered Image and Video Editing**, Omer Bar-Tal et al. [[Paper](https://arxiv.org/abs/2204.02491)] [[Project](https://text2live.github.io/)]
* (Machine Vision and Applications 2022) **Paired-D++ GAN for image manipulation with text**, Duc Minh Vo et al. [[Paper](https://link.springer.com/article/10.1007/s00138-022-01298-7)]
* (CVPR 2022) [💬Hairstyle Transfer] **HairCLIP: Design Your Hair by Text and Reference Image**, Tianyi Wei et al. [[Paper](https://arxiv.org/abs/2112.05142)] [[Code](https://github.com/wty-ustc/HairCLIP)]
* (CVPR 2022) [💬NeRF] **CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields**, Can Wang et al. [[Paper](https://arxiv.org/abs/2112.05139)] [[Code](https://github.com/cassiePython/CLIPNeRF)] [[Project](https://cassiepython.github.io/clipnerf/)]
* (CVPR 2022) **DiffusionCLIP: Text-Guided Diffusion Models for Robust Image Manipulation**, Gwanghyun Kim et al. [[Paper](https://arxiv.org/abs/2110.02711)]
* (CVPR 2022) **ManiTrans: Entity-Level Text-Guided Image Manipulation via Token-wise Semantic Alignment and Generation**, Jianan Wang et al. [[Paper](https://arxiv.org/abs/2204.04428)] [[Project](https://jawang19.github.io/manitrans/)]
* ⭐⭐ (CVPR 2022) **Blended Diffusion for Text-driven Editing of Natural Images**, Omri Avrahami et al. [[Paper](https://arxiv.org/abs/2111.14818)] [[Code](https://github.com/omriav/blended-diffusion)] [[Project](https://omriavrahami.com/blended-diffusion-page/)]
* (CVPR 2022) **Predict, Prevent, and Evaluate: Disentangled Text-Driven Image Manipulation Empowered by Pre-Trained Vision-Language Model**, Zipeng Xu et al. [[Paper](https://arxiv.org/abs/2111.13333)] [[Code](https://github.com/zipengxuc/PPE-Pytorch)]
* (CVPR 2022) [💬Style Transfer] **CLIPstyler: Image Style Transfer with a Single Text Condition**, Gihyun Kwon et al. [[Paper](https://arxiv.org/abs/2112.00374)] [[Code](https://github.com/paper11667/CLIPstyler)]
* (arXiv preprint 2022) [💬Multi-person Image Generation] **Pose Guided Multi-person Image Generation From Text**, Soon Yau Cheong et al. [[Paper](https://arxiv.org/abs/2203.04907)]
* (arXiv preprint 2022) [💬Image Style Transfer] **StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Translation**, Peter Schaldenbrand et al. [[Paper](https://arxiv.org/abs/2202.12362)] [[Dataset](https://www.kaggle.com/pittsburghskeet/drawings-with-style-evaluation-styleclipdraw)] [[Code](https://github.com/pschaldenbrand/StyleCLIPDraw)] [[Demo](https://replicate.com/pschaldenbrand/style-clip-draw)]
* (arXiv preprint 2022) [💬Image Style Transfer] **Name Your Style: An Arbitrary Artist-aware Image Style Transfer**, Zhi-Song Liu et al. [[Paper](https://arxiv.org/abs/2202.13562)]
* (arXiv preprint 2022) [💬3D Avatar Generation] **Text and Image Guided 3D Avatar Generation and Manipulation**, Zehranaz Canfes et al. [[Paper](https://arxiv.org/abs/2202.06079)] [[Project](https://catlab-team.github.io/latent3D/)]
* (arXiv preprint 2022) [💬Image Inpainting] **NÜWA-LIP: Language Guided Image Inpainting with Defect-free VQGAN**, Minheng Ni et al. [[Paper](https://arxiv.org/abs/2202.05009)]
* ⭐(arXiv preprint 2021) [💬Text+Image → Video] **Make It Move: Controllable Image-to-Video Generation with Text Descriptions**, Yaosi Hu et al. [[Paper](https://arxiv.org/abs/2112.02815)]
* (arXiv preprint 2021) [💬NeRF] **Zero-Shot Text-Guided Object Generation with Dream Fields**, Ajay Jain et al. [[Paper](https://arxiv.org/abs/2112.01455)] [[Project](https://ajayj.com/dreamfields)]
* (NeurIPS 2021) **Instance-Conditioned GAN**, Arantxa Casanova et al. [[Paper](https://arxiv.org/abs/2109.05070)] [[Code](https://github.com/facebookresearch/ic_gan)]
* (ICCV 2021) **Language-Guided Global Image Editing via Cross-Modal Cyclic Mechanism**, Wentao Jiang et al. [[Paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Jiang_Language-Guided_Global_Image_Editing_via_Cross-Modal_Cyclic_Mechanism_ICCV_2021_paper.pdf)]
* (ICCV 2021) **Talk-to-Edit: Fine-Grained Facial Editing via Dialog**, Yuming Jiang et al. [[Paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Jiang_Talk-To-Edit_Fine-Grained_Facial_Editing_via_Dialog_ICCV_2021_paper.pdf)] [[Project](https://www.mmlab-ntu.com/project/talkedit/)] [[Code](https://github.com/yumingj/Talk-to-Edit)]
* (ICCVW 2021) **CIGLI: Conditional Image Generation from Language & Image**, Xiaopeng Lu et al. [[Paper](https://openaccess.thecvf.com/content/ICCV2021W/CLVL/papers/Lu_CIGLI_Conditional_Image_Generation_From_Language__Image_ICCVW_2021_paper.pdf)] [[Code](https://github.com/vincentlux/CIGLI?utm_source=catalyzex.com)]
* (ICCV 2021) **StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery**, Or Patashnik et al. [[Paper](https://arxiv.org/abs/2103.17249)] [[Code](https://github.com/orpatashnik/StyleCLIP)]
* (arXiv preprint 2021) **Paint by Word**, David Bau et al. [[Paper](https://arxiv.org/pdf/2103.10951.pdf)]
* ⭐(arXiv preprint 2021) **Zero-Shot Text-to-Image Generation**, Aditya Ramesh et al. [[Paper](https://arxiv.org/pdf/2102.12092.pdf)] [[Code](https://github.com/openai/DALL-E)] [[Blog](https://openai.com/blog/dall-e/)] [[Model Card](https://github.com/openai/DALL-E/blob/master/model_card.md)] [[Colab](https://colab.research.google.com/drive/1KA2w8bA9Q1HDiZf5Ow_VNOrTaWW4lXXG?usp=sharing)]
* (NeurIPS 2020) **Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation**, Bowen Li et al. [[Paper](https://arxiv.org/pdf/2010.12136.pdf)]
* (CVPR 2020) **ManiGAN: Text-Guided Image Manipulation**, Bowen Li et al. [[Paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_ManiGAN_Text-Guided_Image_Manipulation_CVPR_2020_paper.pdf)] [[Code](https://github.com/mrlibw/ManiGAN)]
* (ACMMM 2020) **Text-Guided Neural Image Inpainting**, Lisai Zhang et al. [[Paper](https://arxiv.org/pdf/2004.03212.pdf)] [[Code](https://github.com/idealwhite/TDANet)]
* (ACMMM 2020) **Describe What to Change: A Text-guided Unsupervised Image-to-Image Translation Approach**, Yahui Liu et al. [[Paper](https://arxiv.org/pdf/2008.04200.pdf)]
* (NeurIPS 2018) **Text-adaptive generative adversarial networks: Manipulating images with natural language**, Seonghyeon Nam et al. [[Paper](http://papers.nips.cc/paper/7290-text-adaptive-generative-adversarial-networks-manipulating-images-with-natural-language.pdf)] [[Code](https://github.com/woozzu/tagan)][<🎯Back to Top>](#head-content)
* **Text+Layout → Image**
* (ECCV 2024) **Training-free Composite Scene Generation for Layout-to-Image Synthesis**, Jiaqi Liu et al. [[Paper](https://arxiv.org/abs/2407.13609)]
* (CVPR 2024) **Zero-Painter: Training-Free Layout Control for Text-to-Image Synthesis**, Marianna Ohanyan et al. [[Paper](https://arxiv.org/abs/2406.04032)] [[Code](https://github.com/Picsart-AI-Research/Zero-Painter)]
* (CVPR 2024) **MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis**, Dewei Zhou et al. [[Paper](https://arxiv.org/abs/2402.05408)] [[Project](https://migcproject.github.io/)] [[Code](https://github.com/limuloo/MIGC)]
* (ICLR 2024) **Adversarial Supervision Makes Layout-to-Image Diffusion Models Thrive**, Yumeng Li et al. [[Paper](https://arxiv.org/abs/2401.08815)] [[Project](https://yumengli007.github.io/ALDM/)] [[Code](https://github.com/boschresearch/ALDM)]
* (ICCV 2023) **Dense Text-to-Image Generation with Attention Modulation**, Yunji Kim et al. [[Paper](https://arxiv.org/abs/2308.12964)] [[Code](https://github.com/naver-ai/DenseDiffusion)]
* (arXiv preprint 2023) **Training-Free Layout Control with Cross-Attention Guidance**, Minghao Chen et al. [[Paper](https://arxiv.org/abs/2304.03373)] [[Code](https://github.com/silent-chen/layout-guidance)] [[Project](https://silent-chen.github.io/layout-guidance/)][<🎯Back to Top>](#head-content)
* **Others+Text+Image/Video → Image/Video**
* (arXiv preprint 2024) [💬Skeleton/Sketch] **ECNet: Effective Controllable Text-to-Image Diffusion Models**, Sicheng Li et al. [[Paper](https://arxiv.org/abs/2403.18417)]
* (ICCV 2023) [💬Skeleton] **HumanSD: A Native Skeleton-Guided Diffusion Model for Human Image Generation**, Xuan Ju et al. [[Paper](https://arxiv.org/abs/2304.04269)] [[Project](https://idea-research.github.io/HumanSD/)] [[Code](https://github.com/IDEA-Research/HumanSD)] [[Video](https://drive.google.com/file/d/1Djc2uJS5fmKnKeBnL34FnAAm3YSH20Bb/view)]
* (arXiv preprint 2023) [💬Sound+Speech→Robotic Painting] **Robot Synesthesia: A Sound and Emotion Guided AI Painter**, Vihaan Misra et al. [[Paper](https://arxiv.org/abs/2302.04850)]
* (arXiv preprint 2022) [💬Sound] **Robust Sound-Guided Image Manipulation**, Seung Hyun Lee et al. [[Paper](https://arxiv.org/abs/2208.14114)][<🎯Back to Top>](#head-content)
* **Layout/Mask → Image**
* (CVPR 2024) [💬Instance information +Text→Image] **InstanceDiffusion: Instance-level Control for Image Generation**, XuDong Wang et al. [[Paper](https://arxiv.org/abs/2402.03290)] [[Project](https://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/)] [[Code](https://github.com/frank-xwang/InstanceDiffusion)]
* (arXiv preprint 2023) [💬Text→Layout→Image] **LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation**, Leigang Qu et al. [[Paper](https://arxiv.org/abs/2308.05095)]
* (CVPR 2023) [💬Mask+Text→Image] **SceneComposer: Any-Level Semantic Image Synthesis**, Yu Zeng et al. [[Paper](https://arxiv.org/abs/2211.11742)] [[Demo](https://forms.microsoft.com/pages/responsepage.aspx?id=Wht7-jR7h0OUrtLBeN7O4fEq8XkaWWJBhiLWWMELo2NUMjJYS0FDS0RISUVBUllMV0FRSzNCOTFTQy4u)]
* (CVPR 2023) **Freestyle Layout-to-Image Synthesis**, Han Xue et al. [[Paper](https://arxiv.org/abs/2303.14412)] [[Code](https://github.com/essunny310/FreestyleNet)]
* (CVPR 2023) **LayoutDiffusion: Controllable Diffusion Model for Layout-to-image Generation**, Guangcong Zheng et al. [[Paper](https://arxiv.org/abs/2303.17189)] [[Code](https://github.com/ZGCTroy/LayoutDiffusion)]
* (Journal of King Saud University - Computer and Information Sciences 2023) [Survey] **Image Generation Models from Scene Graphs and Layouts: A Comparative Analysis**, Muhammad Umair Hassan et al. [[Paper](https://www.sciencedirect.com/science/article/pii/S1319157823000897)]
* (CVPR 2022) **Modeling Image Composition for Complex Scene Generation**, Zuopeng Yang et al. [[Paper](https://arxiv.org/abs/2206.00923)] [[Code](https://github.com/JohnDreamer/TwFA)]
* (CVPR 2022) **Interactive Image Synthesis with Panoptic Layout Generation**, Bo Wang et al. [[Paper](https://arxiv.org/abs/2203.02104)]
* (CVPR 2021 [AI for Content Creation Workshop](http://visual.cs.brown.edu/workshops/aicc2021/)) **High-Resolution Complex Scene Synthesis with Transformers**, Manuel Jahn et al. [[Paper](https://arxiv.org/pdf/2105.06458.pdf)]
* (CVPR 2021) **Context-Aware Layout to Image Generation with Enhanced Object Appearance**, Sen He et al. [[Paper](https://arxiv.org/pdf/2103.11897.pdf)] [[Code](https://github.com/wtliao/layout2img)][<🎯Back to Top>](#head-content)
* **Label-set → Semantic maps**
* (ECCV 2020) **Controllable image synthesis via SegVAE**, Yen-Chi Cheng et al. [[Paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123520154.pdf)] [[Code](https://github.com/yccyenchicheng/SegVAE)][<🎯Back to Top>](#head-content)
* **Speech → Image**
* (IEEE/ACM Transactions on Audio, Speech and Language Processing-2021) **Generating Images From Spoken Descriptions**, Xinsheng Wang et al. [[Paper](https://dl.acm.org/doi/10.1109/TASLP.2021.3053391)] [[Code](https://github.com/xinshengwang/S2IGAN)] [[Project](https://xinshengwang.github.io/project/s2igan/)]
* (INTERSPEECH 2020) **[Extended Version👆] S2IGAN: Speech-to-Image Generation via Adversarial Learning**, Xinsheng Wang et al. [[Paper](https://arxiv.org/abs/2005.06968)]
* (IEEE Journal of Selected Topics in Signal Processing-2020) **Direct Speech-to-Image Translation**, Jiguo Li et al. [[Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9067083)] [[Code](https://github.com/smallflyingpig/speech-to-image-translation-without-text)] [[Project](https://smallflyingpig.github.io/speech-to-image/main)][<🎯Back to Top>](#head-content)
* **Scene Graph → Image**
* (arXiv preprint 2023) **Diffusion-Based Scene Graph to Image Generation with Masked Contrastive Pre-Training**, Ling Yang et al. [[Paper](https://arxiv.org/abs/2211.11138)]
* (CVPR 2018) **Image Generation from Scene Graphs**, Justin Johnson et al. [[Paper](https://openaccess.thecvf.com/content_cvpr_2018/CameraReady/0764.pdf)] [[Code](https://github.com/google/sg2im)][<🎯Back to Top>](#head-content)
* **Text → Visual Retrieval**
* (ECIR 2023) **Scene-Centric vs. Object-Centric Image-Text Cross-Modal Retrieval: A Reproducibility Study**, Mariya Hendriksen et al. [[Paper](https://arxiv.org/abs/2301.05174)] [[Code](https://github.com/mariyahendriksen/ecir23-object-centric-vs-scene-centric-CMR)]
* (ECIR 2022) **Extending CLIP for Category-to-image Retrieval in E-commerce**, Mariya Hendriksen et al. [[Paper](https://arxiv.org/abs/2112.11294)] [[Code](https://github.com/mariyahendriksen/ecir2022_category_to_image_retrieval)]
* (ACMMM 2022) **CAIBC: Capturing All-round Information Beyond Color for Text-based Person Retrieval**, Zijie Wang et al. [[Paper](https://arxiv.org/abs/2209.05773)]
* (AAAI 2022) **Cross-Modal Coherence for Text-to-Image Retrieval**, Malihe Alikhani et al. [[Paper](https://arxiv.org/abs/2109.11047)]
* (ECCV [RWS 2022](https://vap.aau.dk/rws-eccv2022/)) [💬Person Retrieval] **See Finer, See More: Implicit Modality Alignment for Text-based Person Retrieval**, Xiujun Shu et al. [[Paper](https://arxiv.org/abs/2208.08608)] [[Code](https://github.com/TencentYoutuResearch/PersonRetrieval-IVT)]
* (ECCV 2022) [💬Text+Sketch→Visual Retrieval] **A Sketch Is Worth a Thousand Words: Image Retrieval with Text and Sketch**, Patsorn Sangkloy et al. [[Paper](https://arxiv.org/abs/2208.03354)] [[Project](https://patsorn.me/projects/tsbir/)]
* (Neurocomputing-2022) **TIPCB: A simple but effective part-based convolutional baseline for text-based person search**, Yuhao Chen et al. [[Paper](https://www.sciencedirect.com/science/article/pii/S0925231222004726)] [[Code](https://github.com/OrangeYHChen/TIPCB?utm_source=catalyzex.com)]
* (arXiv preprint 2021) [💬Dataset] **FooDI-ML: a large multi-language dataset of food, drinks and groceries images and descriptions**, David Amat Olóndriz et al. [[Paper](https://arxiv.org/abs/2110.02035)] [[Code](https://github.com/glovo/foodi-ml-dataset)]
* (CVPRW 2021) **TIED: A Cycle Consistent Encoder-Decoder Model for Text-to-Image Retrieval**, Clint Sebastian et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2021W/AICity/papers/Sebastian_TIED_A_Cycle_Consistent_Encoder-Decoder_Model_for_Text-to-Image_Retrieval_CVPRW_2021_paper.pdf)]
* (CVPR 2021) **T2VLAD: Global-Local Sequence Alignment for Text-Video Retrieval**, Xiaohan Wang et al. [[Paper](https://arxiv.org/pdf/2104.10054.pdf)]
* (CVPR 2021) **Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers**, Antoine Miech et al. [[Paper](https://arxiv.org/pdf/2103.16553.pdf)]
* (IEEE Access 2019) **Query is GAN: Scene Retrieval With Attentional Text-to-Image Generative Adversarial Network**, Rintaro Yanagi et al. [[Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8868179)][<🎯Back to Top>](#head-content)
* **Text → 3D/Motion/Shape/Mesh/Object...**
* (ACMMM 2024) [💬Text → 3D] **PlacidDreamer: Advancing Harmony in Text-to-3D Generation**, Shuo Huang et al. [[Paper](https://arxiv.org/abs/2407.13976)] [[Code](https://github.com/HansenHuang0823/PlacidDreamer)]
* (Meta) [💬Text → 3D] **Meta 3D Gen**, Raphael Bensadoun et al. [[Paper](https://scontent-dus1-1.xx.fbcdn.net/v/t39.2365-6/449707112_509645168082163_2193712134508658234_n.pdf?_nc_cat=111&ccb=1-7&_nc_sid=3c67a6&_nc_ohc=TdfUsn5eGzgQ7kNvgEir1_g&_nc_ht=scontent-dus1-1.xx&oh=00_AYCH-Fbi8CL2l3Yc3ehAr-Itl5B6Wbo7KtXeONb8KCJ_mg&oe=668C1291)]
* (arXiv preprint 2024) [💬Text → 3D] **Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects**, Raphael Bensadoun et al. [[Paper](https://arxiv.org/abs/2407.02430v1)]
* (arXiv preprint 2024) [💬Text → 3D] **Meta 3D AssetGen: Text-to-Mesh Generation with High-Quality Geometry, Texture, and PBR Materials**, Yawar Siddiqui et al. [[Paper](https://arxiv.org/abs/2407.02445v1)] [[Project](https://assetgen.github.io/)]
* (arXiv preprint 2024) [💬Text → 3D] **3DStyleGLIP: Part-Tailored Text-Guided 3D Neural Stylization**, SeungJeh Chung et al. [[Paper](https://arxiv.org/abs/2404.02634v1)]
* (arXiv preprint 2024) [💬Text → 3D] **LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis**, Kevin Xie et al. [[Paper](https://arxiv.org/abs/2403.15385)] [[Project](https://research.nvidia.com/labs/toronto-ai/LATTE3D/)]
* (IEEE Transactions on Visualization and Computer Graphics) [💬Text → Motion] **GUESS: GradUally Enriching SyntheSis for Text-Driven Human Motion Generation**, Xuehao Gao et al. [[Paper](https://arxiv.org/abs/2401.02142v1)]
* (arXiv preprint 2023) [💬Text → 4D] **4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling**, Sherwin Bahmani et al. [[Paper](https://arxiv.org/abs/2311.17984)] [[Project](https://sherwinbahmani.github.io/4dfy/)] [[Code](https://github.com/sherwinbahmani/4dfy)]
* (arXiv preprint 2023) [💬Text → 3D] **MetaDreamer: Efficient Text-to-3D Creation With Disentangling Geometry and Texture**, Lincong Feng et al. [[Paper](https://arxiv.org/abs/2311.10123)] [[Project](https://metadreamer3d.github.io/)]
* (arXiv preprint 2023) [💬Text → 3D] **One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion**, Minghua Liu et al. [[Paper](https://arxiv.org/abs/2311.07885)] [[Project](https://sudo-ai-3d.github.io/One2345plus_page/)]
* (NeurIPS 2023) [💬Text → 3D] **One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization**, Minghua Liu et al. [[Paper](https://arxiv.org/abs/2306.16928)] [[Project](https://one-2-3-45.github.io/)] [[Code](https://github.com/One-2-3-45/One-2-3-45)]
* (ACMMM 2023) [💬Text+Sketch → 3D] **Control3D: Towards Controllable Text-to-3D Generation**, Yang Chen et al. [[Paper](https://arxiv.org/abs/2311.05461)]
* (SIGGRAPH Asia 2023 & TOG) [💬Text → 3D] **EXIM: A Hybrid Explicit-Implicit Representation for Text-Guided 3D Shape Generation**, Zhengzhe Liu et al. [[Paper](https://arxiv.org/abs/2311.01714v1)] [[Code](https://github.com/liuzhengzhe/EXIM)]
* (arXiv preprint 2023) [💬Text → 3D] **PaintHuman: Towards High-fidelity Text-to-3D Human Texturing via Denoised Score Distillation**, Jianhui Yu et al. [[Paper](https://arxiv.org/abs/2310.09458v1)]
* (arXiv preprint 2023) [💬Text → Motion] **Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model**, Yin Wang et al. [[Paper](https://arxiv.org/abs/2309.06284)]
* (arXiv preprint 2023) [💬Text → 3D] **IT3D: Improved Text-to-3D Generation with Explicit View Synthesis**, Yiwen Chen et al. [[Paper](https://arxiv.org/abs/2308.11473)] [[Code](https://github.com/buaacyw/IT3D-text-to-3D)]
* (arXiv preprint 2023) [💬Text → 3D] **HD-Fusion: Detailed Text-to-3D Generation Leveraging Multiple Noise Estimation**, Jinbo Wu et al. [[Paper](https://arxiv.org/abs/2307.16183)]
* (arXiv preprint 2023) [💬Text → 3D] **T2TD: Text-3D Generation Model based on Prior Knowledge Guidance**, Weizhi Nie et al. [[Paper](https://arxiv.org/abs/2305.15753)]
* (arXiv preprint 2023) [💬Text → 3D] **ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation**, Zhengyi Wang et al. [[Paper](https://arxiv.org/abs/2305.16213)] [[Project](https://ml.cs.tsinghua.edu.cn/prolificdreamer/)]
* (arXiv preprint 2023) [💬Text+Mesh → Mesh] **X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic Textual Guidance**, Yiwei Ma et al. [[Paper](https://arxiv.org/abs/2303.15764)] [[Project](https://xmu-xiaoma666.github.io/Projects/X-Mesh/)] [[Code](https://github.com/xmu-xiaoma666/X-Mesh)]
* (arXiv preprint 2023) [💬Text → Motion] **T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations**, Jianrong Zhang et al. [[Paper](https://arxiv.org/abs/2301.06052)] [[Project](https://mael-zys.github.io/T2M-GPT/)] [[Code](https://github.com/Mael-zys/T2M-GPT)] [[Hugging Face](https://huggingface.co/vumichien/T2M-GPT)]
* (arXiv preprint 2023) [💬Text → 3D] **DreamHuman: Animatable 3D Avatars from Text**, Nikos Kolotouros et al. [[Paper](https://arxiv.org/abs/2306.09329)] [[Project](https://dream-human.github.io/)]
* (arXiv preprint 2023) [💬Text → 3D] **ATT3D: Amortized Text-to-3D Object Synthesis**, Jonathan Lorraine et al. [[Paper](https://arxiv.org/abs/2306.07349)] [[Project](https://research.nvidia.com/labs/toronto-ai/ATT3D/)]
* (arXiv preprint 2022) [💬Text → 3D] **Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models**, Jiale Xu et al. [[Paper](https://arxiv.org/abs/2212.14704)] [[Project](https://bluestyle97.github.io/dream3d/)]
* (arXiv preprint 2022) [💬3D Generative Model] **DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model**, Gwanghyun Kim et al. [[Paper](https://arxiv.org/abs/2211.16374)] [[Code](https://github.com/gwang-kim/DATID-3D)] [[Project](https://datid-3d.github.io/)]
* (arXiv preprint 2022) [💬Point Clouds] **Point-E: A System for Generating 3D Point Clouds from Complex Prompts**, Alex Nichol et al. [[Paper](https://arxiv.org/abs/2212.08751)] [[Code](https://github.com/openai/point-e)]
* (arXiv preprint 2022) [💬Text → 3D] **Magic3D: High-Resolution Text-to-3D Content Creation**, Chen-Hsuan Lin et al. [[Paper](https://arxiv.org/abs/2211.10440)] [[Project](https://deepimagination.cc/Magic3D/)]
* (arXiv preprint 2022) [💬Text → Shape] **Diffusion-SDF: Text-to-Shape via Voxelized Diffusion**, Muheng Li et al. [[Paper](https://arxiv.org/abs/2212.03293)] [[Code](https://github.com/ttlmh/Diffusion-SDF)]
* (NeurIPS 2022) [💬Mesh] **TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition**, Yongwei Chen et al. [[Paper](https://arxiv.org/abs/2210.11277)] [[Project](https://cyw-3d.github.io/tango/)] [[Code](https://github.com/Gorilla-Lab-SCUT/tango)]
* (arXiv preprint 2022) [💬Human Motion Generation] **Human Motion Diffusion Model**, Guy Tevet et al. [[Paper](https://arxiv.org/abs/2209.14916)] [[Project](https://guytevet.github.io/mdm-page/)] [[Code](https://github.com/GuyTevet/motion-diffusion-model)]
* (arXiv preprint 2022) [💬Human Motion Generation] **MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model**, Mingyuan Zhang et al. [[Paper](https://arxiv.org/abs/2208.15001)] [[Project](https://mingyuan-zhang.github.io/projects/MotionDiffuse.html#)]
* (arXiv preprint 2022) [💬3D Shape] **ISS: Image as Stepping Stone for Text-Guided 3D Shape Generation**, Zhengzhe Liu et al. [[Paper](https://arxiv.org/abs/2209.04145)]
* (ECCV 2022) [💬Virtual Humans] **Compositional Human-Scene Interaction Synthesis with Semantic Control**, Kaifeng Zhao et al. [[Paper](https://arxiv.org/abs/2207.12824)] [[Project](https://zkf1997.github.io/COINS/index.html)] [[Code](https://github.com/zkf1997/COINS)]
* (CVPR 2022) [💬3D Shape] **Towards Implicit Text-Guided 3D Shape Generation**, Zhengzhe Liu et al. [[Paper](https://arxiv.org/abs/2203.14622)] [[Code](https://github.com/liuzhengzhe/Towards-Implicit-Text-Guided-Shape-Generation)]
* (CVPR 2022) [💬Object] **Zero-Shot Text-Guided Object Generation with Dream Fields**, Ajay Jain et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Jain_Zero-Shot_Text-Guided_Object_Generation_With_Dream_Fields_CVPR_2022_paper.pdf)] [[Project](https://ajayj.com/dreamfields)] [[Code](https://github.com/google-research/google-research/tree/master/dreamfields)]
* (CVPR 2022) [💬Mesh] **Text2Mesh: Text-Driven Neural Stylization for Meshes**, Oscar Michel et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Michel_Text2Mesh_Text-Driven_Neural_Stylization_for_Meshes_CVPR_2022_paper.pdf)] [[Project](https://threedle.github.io/text2mesh/)] [[Code](https://github.com/threedle/text2mesh)]
* (CVPR 2022) [💬Motion] **Generating Diverse and Natural 3D Human Motions from Text**, Chuan Guo et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Guo_Generating_Diverse_and_Natural_3D_Human_Motions_From_Text_CVPR_2022_paper.pdf)] [[Project](https://ericguo5513.github.io/text-to-motion/)] [[Code](https://github.com/EricGuo5513/text-to-motion)]
* (CVPR 2022) [💬Shape] **CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation**, Aditya Sanghi et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Sanghi_CLIP-Forge_Towards_Zero-Shot_Text-To-Shape_Generation_CVPR_2022_paper.pdf)] [[Code](https://github.com/AutodeskAILab/Clip-Forge)]
* (arXiv preprint 2022) [💬Motion] **TEMOS: Generating diverse human motions from textual descriptions**, Mathis Petrovich et al. [[Paper](https://arxiv.org/abs/2204.14109)] [[Project](https://mathis.petrovich.fr/temos/)] [[Code](https://github.com/Mathux/TEMOS)][<🎯Back to Top>](#head-content)
* **Text → Video**
* (arXiv preprint 2024) **MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequence**, Canyu Zhao et al. [[Paper](https://arxiv.org/abs/2407.16655)] [[Project](https://aim-uofa.github.io/MovieDreamer/)] [[Code](https://github.com/aim-uofa/MovieDreamer)] [[Demo Video](https://www.youtube.com/watch?v=aubRVOGrKLU)]
* 💥💥(OpenAI 2024) **Sora** [[Homepage](https://openai.com/sora)] [[Technical Report](https://openai.com/research/video-generation-models-as-world-simulators)] [[Sora with Audio](https://x.com/elevenlabsio/status/1759240084342059260?s=20)]
* (ICLR 2024) **ControlVideo: Training-free Controllable Text-to-Video Generation**, Yabo Zhang et al. [[Paper](https://arxiv.org/abs/2305.13077)] [[Code](https://github.com/YBYBZhang/ControlVideo)]
* (arXiv preprint 2024) **MagicVideo-V2: Multi-Stage High-Aesthetic Video Generation**, Weimin Wang et al. [[Paper](https://arxiv.org/abs/2401.04468)] [[Project](https://magicvideov2.github.io/)]
* (arXiv preprint 2023) **LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models**, Yaohui Wang et al. [[Paper](https://arxiv.org/abs/2309.15103)] [[Project](https://vchitect.github.io/LaVie-project/)] [[Code](https://github.com/Vchitect/LaVie)]
* (arXiv preprint 2023) **Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning**, Rohit Girdhar et al. [[Paper](https://arxiv.org/abs/2311.10709)] [[Project](https://emu-video.metademolab.com/)]
* (ICCV 2023) **Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators**, Levon Khachatryan et al. [[Paper](https://arxiv.org/abs/2303.13439)] [[Project](https://text2video-zero.github.io/)] [[Video](https://www.dropbox.com/s/uv90mi2z598olsq/Text2Video-Zero.MP4?dl=0)] [[Code](https://github.com/Picsart-AI-Research/Text2Video-Zero)] [[Hugging Face](https://huggingface.co/spaces/PAIR/Text2Video-Zero)]
* (NeurIPS 2023 Datasets and Benchmarks) **FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation**, Yuanxin Liu et al. [[Paper](https://arxiv.org/abs/2311.01813v1)] [[Project](https://github.com/llyx97/FETV)]
* (arXiv preprint 2023) **Optimal Noise pursuit for Augmenting Text-to-Video Generation**, Shijie Ma et al. [[Paper](https://arxiv.org/abs/2311.00949v1)]
* (arXiv preprint 2023) **Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation**, Jiaxi Gu et al. [[Paper](https://arxiv.org/abs/2309.03549)] [[Project](https://anonymous0x233.github.io/ReuseAndDiffuse/)]
* (arXiv preprint 2023) **Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts**, Yuyang Zhao et al. [[Paper](https://arxiv.org/abs/2305.08850)] [[Code](https://github.com/Make-A-Protagonist/Make-A-Protagonist)] [[Project](https://make-a-protagonist.github.io/)]
* 📚Image Editing, Background Editing, Text-to-Video Editing with Protagonist
* ⭐⭐(CVPR 2023) **Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models**, Andreas Blattmann et al. [[Paper](https://arxiv.org/abs/2304.08818)] [[Project](https://research.nvidia.com/labs/toronto-ai/VideoLDM/)]
* (arXiv preprint 2023) [💬Music Visualization] **Generative Disco: Text-to-Video Generation for Music Visualization**, Vivian Liu et al. [[Paper](https://arxiv.org/abs/2304.08551)]
* (arXiv preprint 2023) **Text-To-4D Dynamic Scene Generation**, Uriel Singer et al. [[Paper](https://arxiv.org/abs/2301.11280)] [[Project](https://make-a-video3d.github.io/)]
* (arXiv preprint 2022) **Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation**, Jay Zhangjie Wu et al. [[Paper](https://arxiv.org/abs/2212.11565)] [[Project](https://tuneavideo.github.io/)] [[Code](https://github.com/showlab/Tune-A-Video)]
* (arXiv preprint 2022) **MagicVideo: Efficient Video Generation With Latent Diffusion Models**, Daquan Zhou et al. [[Paper](https://arxiv.org/abs/2211.11018)] [[Project](https://magicvideo.github.io/#)]
* (arXiv preprint 2022) **Phenaki: Variable Length Video Generation From Open Domain Textual Description**, Ruben Villegas et al. [[Paper](https://arxiv.org/abs/2210.02399)]
* (arXiv preprint 2022) **Imagen Video: High Definition Video Generation with Diffusion Models**, Jonathan Ho et al. [[Paper](https://arxiv.org/abs/2210.02303v1)] [[Project](https://imagen.research.google/video/)]
* (arXiv preprint 2022) **Text-driven Video Prediction**, Xue Song et al. [[Paper](https://arxiv.org/abs/2210.02872)]
* (arXiv preprint 2022) **Make-A-Video: Text-to-Video Generation without Text-Video Data**, Uriel Singer et al. [[Paper](https://arxiv.org/abs/2209.14792)] [[Project](https://makeavideo.studio/)] [[Short read](https://www.louisbouchard.ai/make-a-video/)] [[Code](https://github.com/lucidrains/make-a-video-pytorch)]
* (ECCV 2022) [💬Story Continuation] **StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation**, Adyasha Maharana et al. [[Paper](https://arxiv.org/abs/2209.06192)] [[Code](https://github.com/adymaharana/storydalle)]
* (arXiv preprint 2022) [💬Story → Video] **Word-Level Fine-Grained Story Visualization**, Bowen Li et al. [[Paper](https://arxiv.org/abs/2208.02341)] [[Code](https://github.com/mrlibw/Word-Level-Story-Visualization)]
* (arXiv preprint 2022) **CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers**, Wenyi Hong et al. [[Paper](https://arxiv.org/abs/2205.15868)] [[Code](https://github.com/THUDM/CogVideo)]
* (CVPR 2022) **Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning**, Yogesh Balaji et al. [[Paper](https://arxiv.org/abs/2203.02573)] [[Code](https://github.com/snap-research/MMVID)] [[Project](https://snap-research.github.io/MMVID/)]
* (arXiv preprint 2022) **Video Diffusion Models**, Jonathan Ho et al. [[Paper](https://arxiv.org/abs/2204.03458)] [[Project](https://video-diffusion.github.io/)]
* (arXiv preprint 2021) [❌Generation Task] **Transcript to Video: Efficient Clip Sequencing from Texts**, Ligong Han et al. [[Paper](https://arxiv.org/pdf/2107.11851.pdf)] [[Project](http://www.xiongyu.me/projects/transcript2video/)]
* (arXiv preprint 2021) **GODIVA: Generating Open-DomaIn Videos from nAtural Descriptions**, Chenfei Wu et al. [[Paper](https://arxiv.org/pdf/2104.14806.pdf)]
* (arXiv preprint 2021) **Text2Video: Text-driven Talking-head Video Synthesis with Phonetic Dictionary**, Sibo Zhang et al. [[Paper](https://arxiv.org/pdf/2104.14631.pdf)]
* (IEEE Access 2020) **TiVGAN: Text to Image to Video Generation With Step-by-Step Evolutionary Generator**, Doyeon Kim et al. [[Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9171240)]
* (IJCAI 2019) **Conditional GAN with Discriminative Filter Generation for Text-to-Video Synthesis**, Yogesh Balaji et al. [[Paper](https://www.ijcai.org/Proceedings/2019/0276.pdf)] [[Code](https://github.com/minrq/CGAN_Text2Video)]
* (IJCAI 2019) **IRC-GAN: Introspective Recurrent Convolutional GAN for Text-to-video Generation**, Kangle Deng et al. [[Paper](https://www.ijcai.org/Proceedings/2019/0307.pdf)]
* (CVPR 2019) [💬Story → Video] **StoryGAN: A Sequential Conditional GAN for Story Visualization**, Yitong Li et al. [[Paper](https://openaccess.thecvf.com/content_CVPR_2019/html/Li_StoryGAN_A_Sequential_Conditional_GAN_for_Story_Visualization_CVPR_2019_paper.html)] [[Code](https://github.com/yitong91/StoryGAN?utm_source=catalyzex.com)]
* (AAAI 2018) **Video Generation From Text**, Yitong Li et al. [[Paper](https://ojs.aaai.org/index.php/AAAI/article/view/12233)]
* (ACMMM 2017) **To create what you tell: Generating videos from captions**, Yingwei Pan et al. [[Paper](https://dl.acm.org/doi/pdf/10.1145/3123266.3127905)][<🎯Back to Top>](#head-content)
* **Text → Music**
* ⭐(arXiv preprint 2023) **MusicLM: Generating Music From Text**, Andrea Agostinelli et al. [[Paper](https://arxiv.org/abs/2301.11325)] [[Project](https://google-research.github.io/seanet/musiclm/examples/)] [[MusicCaps](https://www.kaggle.com/datasets/googleai/musiccaps)][<🎯Back to Top>](#head-content)
## Contact Me
[![Star History Chart](https://api.star-history.com/svg?repos=Yutong-Zhou-cv/Awesome-Text-to-Image&type=Date)](https://star-history.com/#Yutong-Zhou-cv/Awesome-Text-to-Image&Date)
If you have any questions or comments, please feel free to contact [**Yutong**](https://elizazhou96.github.io/) ლ(╹◡╹ლ)
## Contributors
![Alt](https://repobeats.axiom.co/api/embed/2a1ae2aebaa287bfbf50a9aafdfde0406c1b0cfe.svg "Repobeats analytics image")
> Made with [contrib.rocks](https://contrib.rocks).