Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/yyeboah/Awesome-Text-to-3D
A growing curation of Text-to-3D, Diffusion-to-3D works.
List: Awesome-Text-to-3D
JSON representation
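The `JSON representation` link above exposes this indexed metadata through the ecosyste.ms API. A minimal sketch of fetching it with `requests`; the endpoint shown is an assumption for illustration, not taken from this page (check the ecosyste.ms API documentation for the actual route):

```python
# Sketch: fetch the JSON metadata ecosyste.ms has indexed for this list entry.
# NOTE: the API_URL below is an assumption for illustration only; consult the
# ecosyste.ms API documentation for the real endpoint.
import requests

API_URL = "https://awesome.ecosyste.ms/api/v1/projects/lookup"  # hypothetical route

def fetch_project_json(repo_url: str) -> dict:
    """Return the indexed metadata (stars, topics, last-synced time, ...)."""
    resp = requests.get(API_URL, params={"url": repo_url}, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_project_json("https://github.com/yyeboah/Awesome-Text-to-3D")
    print(data)
```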
- Host: GitHub
- URL: https://github.com/yyeboah/Awesome-Text-to-3D
- Owner: yyeboah
- Created: 2023-07-06T11:23:30.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-05-21T05:15:47.000Z (7 months ago)
- Last Synced: 2024-05-21T06:24:56.475Z (7 months ago)
- Topics: 3dgs, diffusion-to-3d, image-to-3d, nerf, neural-rendering, t23d, text-to-3d, tt3d
- Language: TeX
- Homepage:
- Size: 416 KB
- Stars: 408
- Watchers: 24
- Forks: 20
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- ultimate-awesome - Awesome-Text-to-3D - A growing curation of Text-to-3D, Diffusion-to-3D works. (Other Lists / Monkey C Lists)
README
# Awesome Text-to-3D [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome) [![Maintained](https://img.shields.io/badge/Maintained%3F-yes-green.svg)]() [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](https://makeapullrequest.com)
The first curation of Text-to-3D and Diffusion-to-3D works, heavily inspired by [awesome-NeRF](https://github.com/awesome-NeRF/awesome-NeRF).
## Recent Updates :newspaper:
* `02.04.2024` - Begin linking to project pages and codes
* `09.02.2024` - Level One Categorization
* `11.11.2023` - Added Tutorial Videos
* `05.08.2023` - Provided citations in BibTeX
* `06.07.2023` - Created initial list

## Papers :scroll:
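Each entry below links its `citation` to a line range inside `./references/citations.bib` (e.g. `#L57-L62`). A small sketch, assuming a local checkout of this repository, for resolving such a fragment to the BibTeX block it points at (the helper name is illustrative):

```python
# Sketch: resolve a "citations.bib#L57-L62"-style fragment to the BibTeX lines
# it references. Assumes the repository is checked out locally so that
# ./references/citations.bib exists.
import re
from pathlib import Path

def bibtex_from_fragment(link: str, repo_root: str = ".") -> str:
    """Return the lines of citations.bib referenced by e.g. '...#L57-L62'."""
    match = re.search(r"#L(\d+)-L(\d+)$", link)
    if not match:
        raise ValueError(f"no line-range fragment in {link!r}")
    start, end = int(match.group(1)), int(match.group(2))
    bib_path = Path(repo_root) / "references" / "citations.bib"
    lines = bib_path.read_text(encoding="utf-8").splitlines()
    return "\n".join(lines[start - 1:end])  # fragments use 1-based line numbers

if __name__ == "__main__":
    print(bibtex_from_fragment("./references/citations.bib#L57-L62"))
```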
### X-to-3D
- [Zero-Shot Text-Guided Object Generation with Dream Fields](https://arxiv.org/abs/2112.01455), Ajay Jain et al., CVPR 2022 | [citation](./references/citations.bib#L1-L6) | [site](https://ajayj.com/dreamfields) | [code](https://github.com/google-research/google-research/tree/master/dreamfields)
- [CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation](https://arxiv.org/abs/2110.02624), Aditya Sanghi et al., Arxiv 2021 | [citation](./references/citations.bib#L8-L13) | [site]() | [code](https://github.com/AutodeskAILab/Clip-Forge)
- [PureCLIPNERF: Understanding Pure CLIP Guidance for Voxel Grid NeRF Models](https://arxiv.org/abs/2209.15172), Han-Hung Lee et al., Arxiv 2022 | [citation](./references/citations.bib#L29-L34) | [site](https://hanhung.github.io/PureCLIPNeRF/) | [code](https://github.com/hanhung/PureCLIPNeRF)
- [SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation](https://arxiv.org/abs/2212.04493), Yen-Chi Cheng et al., CVPR 2023 | [citation](./references/citations.bib#L43-L48) | [site](https://yccyenchicheng.github.io/SDFusion/) | [code](https://github.com/yccyenchicheng/SDFusion)
- [DreamFusion: Text-to-3D using 2D Diffusion](https://dreamfusion3d.github.io/), Ben Poole et al., ICLR 2023 | [citation](./references/citations.bib#L57-L62) | [site](https://dreamfusion3d.github.io/) | [code]()
- [Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models](https://arxiv.org/abs/2212.14704), Jiale Xu et al., Arxiv 2022 | [citation](./references/citations.bib#L64-L69) | [site](https://bluestyle97.github.io/dream3d/) | [code]()
- [Novel View Synthesis with Diffusion Models](https://arxiv.org/abs/2210.04628), Daniel Watson et al., Arxiv 2022 | [citation](./references/citations.bib#L78-L83) | [site](https://3d-diffusion.github.io/) | [code]()
- [NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views](https://arxiv.org/abs/2211.16431), Dejia Xu et al., Arxiv 2022 | [citation](./references/citations.bib#L85-L90) | [site](https://vita-group.github.io/NeuralLift-360/) | [code](https://github.com/VITA-Group/NeuralLift-360)
- [Point-E: A System for Generating 3D Point Clouds from Complex Prompts](https://arxiv.org/abs/2212.08751), Alex Nichol et al., Arxiv 2022 | [citation](./references/citations.bib#L92-L97) | [site]() | [code](https://github.com/openai/point-e)
- [Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures](https://arxiv.org/abs/2211.07600), Gal Metzer et al., Arxiv 2023 | [citation](./references/citations.bib#L99-L104) | [site]() | [code](https://github.com/eladrich/latent-nerf)
- [Magic3D: High-Resolution Text-to-3D Content Creation](https://research.nvidia.com/labs/dir/magic3d/), Chen-Hsuan Lin et al., CVPR 2023 | [citation](./references/citations.bib#L106-L111) | [site](https://research.nvidia.com/labs/dir/magic3d/) | [code]()
- [RealFusion: 360° Reconstruction of Any Object from a Single Image](https://arxiv.org/abs/2302.10663), Luke Melas-Kyriazi et al., CVPR 2023 | [citation](./references/citations.bib#L113-L118) | [site](https://lukemelas.github.io/realfusion/) | [code](https://github.com/lukemelas/realfusion)
- [Monocular Depth Estimation using Diffusion Models](https://arxiv.org/abs/2302.14816), Saurabh Saxena et al., Arxiv 2023 | [citation](./references/citations.bib#L120-L125) | [site](https://depth-gen.github.io/) | [code]()
- [SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction](https://arxiv.org/abs/2212.00792), Zhizhuo Zhou et al., CVPR 2023 | [citation](./references/citations.bib#L127-L132) | [site](https://sparsefusion.github.io/) | [code](https://github.com/zhizdev/sparsefusion)
- [NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion](https://arxiv.org/abs/2302.10109), Jiatao Gu et al., ICML 2023 | [citation](./references/citations.bib#L134-L139) | [site](https://jiataogu.me/nerfdiff/) | [code]()
- [Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation](https://arxiv.org/abs/2212.00774), Haochen Wang et al., CVPR 2023 | [citation](./references/citations.bib#L141-L146) | [site](https://pals.ttic.edu/p/score-jacobian-chaining) | [code](https://github.com/pals-ttic/sjc/)
- [High-fidelity 3D Face Generation from Natural Language Descriptions](https://arxiv.org/abs/2305.03302), Menghua Wu et al., CVPR 2023 | [citation](./references/citations.bib#L148-L153) | [site](https://mhwu2017.github.io/) | [code](https://github.com/zhuhao-nju/describe3d)
- [TEXTure: Text-Guided Texturing of 3D Shapes](https://texturepaper.github.io/TEXTurePaper/), Elad Richardson et al., SIGGRAPH 2023 | [citation](./references/citations.bib#L155-L160) | [site](https://texturepaper.github.io/TEXTurePaper/) | [code](https://github.com/TEXTurePaper/TEXTurePaper)
- [NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors](https://arxiv.org/abs/2212.03267), Congyue Deng et al., CVPR 2023 | [citation](./references/citations.bib#L162-L167) | [site]() | [code]()
- [DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models](https://arxiv.org/abs/2302.12231), Jamie Wynn et al., CVPR 2023 | [citation](./references/citations.bib#L169-L174) | [site]() | [code](https://github.com/nianticlabs/diffusionerf)
- [3DQD: Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process](https://arxiv.org/abs/2303.10406), Yuhan Li et al., CVPR 2023 | [citation](./references/citations.bib#L540-L545) | [site]() | [code](https://github.com/colorful-liyu/3DQD)
- [DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model](https://gwang-kim.github.io/datid_3d/), Gwanghyun Kim et al., CVPR 2023 | [citation](./references/citations.bib#L176-L181) | [site](https://gwang-kim.github.io/datid_3d/) | [code](https://github.com/gwang-kim/DATID-3D)
- [Novel View Synthesis with Diffusion Models](https://arxiv.org/abs/2210.04628), Daniel Watson et al., ICLR 2023 | [citation](./references/citations.bib#L183-L188) | [site]() | [code]()
- [ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation](https://ml.cs.tsinghua.edu.cn/prolificdreamer/), Zhengyi Wang et al., Arxiv 2023 | [citation](./references/citations.bib#L190-L195) | [site]() | [code]()
- [3D-aware Image Generation using 2D Diffusion Models](https://arxiv.org/abs/2303.17905), Jianfeng Xiang et al., Arxiv 2023 | [citation](./references/citations.bib#L204-L209) | [site]() | [code]()
- [Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior](https://make-it-3d.github.io/), Junshu Tang et al., ICCV 2023 | [citation](./references/citations.bib#L211-L216) | [site]() | [code]()
- [GECCO: Geometrically-Conditioned Point Diffusion Models](https://arxiv.org/abs/2303.05916), Michał J. Tyszkiewicz et al., ICCV 2023 | [citation](./references/citations.bib#L694-L699) | [site]() | [code]()
- [Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, alleviate Janus problem and Beyond](https://arxiv.org/abs/2304.04968), Mohammadreza Armandpour et al., Arxiv 2023 | [citation](./references/citations.bib#L218-L223) | [site]() | [code]()
- [Generative Novel View Synthesis with 3D-Aware Diffusion Models](https://arxiv.org/abs/2304.02602), Eric R. Chan et al., Arxiv 2023 | [citation](./references/citations.bib#L232-L237) | [site]() | [code]()
- [Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields](https://arxiv.org/abs/2305.11588), Jingbo Zhang et al., Arxiv 2023 | [citation](./references/citations.bib#L239-L244) | [site]() | [code]()
- [Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors](https://guochengqian.github.io/project/magic123/), Guocheng Qian et al., Arxiv 2023 | [citation](./references/citations.bib#L246-L251) | [site]() | [code]()
- [DreamBooth3D: Subject-Driven Text-to-3D Generation](https://arxiv.org/abs/2303.13508/), Amit Raj et al., ICCV 2023 | [citation](./references/citations.bib#L253-L258) | [site]() | [code]()
- [Zero-1-to-3: Zero-shot One Image to 3D Object](https://zero123.cs.columbia.edu/), Ruoshi Liu et al., Arxiv 2023 | [citation](./references/citations.bib#L260-L265) | [site]() | [code]()
- [ATT3D: Amortized Text-to-3D Object Synthesis](https://research.nvidia.com/labs/toronto-ai/ATT3D/), Jonathan Lorraine et al., ICCV 2023 | [citation](./references/citations.bib#L288-L293) | [site]() | [code]()
- [Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation](https://neuralcarver.github.io/michelangelo/), Zibo Zhao et al., Arxiv 2023 | [citation](./references/citations.bib#L295-L300) | [site]() | [code]()
- [Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions](https://light.princeton.edu/publication/diffusion-sdf/), Gene Chou et al., Arxiv 2023 | [citation](./references/citations.bib#L302-L307) | [site]() | [code]()
- [HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance](https://hifa-team.github.io/HiFA-site/), Junzhe Zhu et al., Arxiv 2023 | [citation](./references/citations.bib#L309-L314) | [site]() | [code]()
- [LERF: Language Embedded Radiance Fields](https://www.lerf.io/), Justin Kerr et al., Arxiv 2023 | [citation](./references/citations.bib#L316-L321) | [site]() | [code]()
- [3DFuse: Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation](https://ku-cvlab.github.io/3DFuse/), Junyoung Seo et al., Arxiv 2023 | [citation](./references/citations.bib#L330-L335) | [site]() | [code]()
- [MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion](https://mvdiffusion.github.io/), Shitao Tang et al., Arxiv 2023 | [citation](./references/citations.bib#L337-L342) | [site]() | [code]()
- [One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization](https://one-2-3-45.github.io/), Minghua Liu et al., Arxiv 2023 | [citation](./references/citations.bib#L344-L349) | [site]() | [code]()
- [TextMesh: Generation of Realistic 3D Meshes From Text Prompts](https://arxiv.org/abs/2304.12439), Christina Tsalicoglou et al., Arxiv 2023 | [citation](./references/citations.bib#L351-L356) | [site]() | [code]()
- [Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models](https://arxiv.org/abs/2305.16223), Xingqian Xu et al., Arxiv 2023 | [citation](./references/citations.bib#L358-L363) | [site]() | [code]()
- [SceneScape: Text-Driven Consistent Scene Generation](https://scenescape.github.io/), Rafail Fridman et al., Arxiv 2023 | [citation](./references/citations.bib#L365-L370) | [site]() | [code]()
- [CLIP-Mesh: Generating textured meshes from text using pretrained image-text models](https://www.nasir.lol/clipmesh), Nasir Khalid et al., Arxiv 2023 | [citation](./references/citations.bib#L379-L384) | [site]() | [code]()
- [Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models](https://lukashoel.github.io/text-to-room/), Lukas Höllein et al., Arxiv 2023 | [citation](./references/citations.bib#L386-L391) | [site]() | [code]()
- [Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction](https://arxiv.org/abs/2304.06714), Hansheng Chen et al., Arxiv 2023 | [citation](./references/citations.bib#L393-L398) | [site]() | [code]()
- [PODIA-3D: Domain Adaptation of 3D Generative Model Across Large Domain Gap Using Pose-Preserved Text-to-Image Diffusion](https://arxiv.org/abs/2304.01900), Gwanghyun Kim et al., ICCV 2023 | [citation](./references/citations.bib#L561-L566) | [site]() | [code]()
- [Shap-E: Generating Conditional 3D Implicit Functions](https://arxiv.org/abs/2305.02463), Heewoo Jun et al., Arxiv 2023 | [citation](./references/citations.bib#L400-L405) | [site]() | [code]()
- [Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation](https://arxiv.org/abs/2307.03869), Aditya Sanghi et al., Arxiv 2023 | [citation](./references/citations.bib#L407-L412) | [site]() | [code]()
- [3D VADER - AutoDecoding Latent 3D Diffusion Models](https://snap-research.github.io/3DVADER/), Evangelos Ntavelis et al., Arxiv 2023 | [citation](./references/citations.bib#L428-L433) | [site]() | [code]()
- [DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views](https://arxiv.org/abs/2306.03414), Paul Yoo et al., Arxiv 2023 | [citation](./references/citations.bib#L442-L447) | [site]() | [code]()
- [Cap3D: Scalable 3D Captioning with Pretrained Models](https://arxiv.org/abs/2306.07279), Tiange Luo et al., Arxiv 2023 | [citation](./references/citations.bib#L477-L482) | [site]() | [code]()
- [InstructP2P: Learning to Edit 3D Point Clouds with Text Instructions](https://arxiv.org/abs/2306.07154), Jiale Xu et al., Arxiv 2023 | [citation](./references/citations.bib#L484-L489) | [site]() | [code]()
- [3D-LLM: Injecting the 3D World into Large Language Models](https://arxiv.org/abs/2307.12981), Yining Hong et al., Arxiv 2023 | [citation](./references/citations.bib#L498-L503) | [site]() | [code]()
- [Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation](https://arxiv.org/abs/2307.13908), Chaohui Yu et al., Arxiv 2023 | [citation](./references/citations.bib#L505-L510) | [site]() | [code]()
- [RGB-D-Fusion: Image Conditioned Depth Diffusion of Humanoid Subjects](https://arxiv.org/abs/2307.15988), Sascha Kirch et al., Arxiv 2023 | [citation](./references/citations.bib#L512-L517) | [site]() | [code]()
- [IT3D: Improved Text-to-3D Generation with Explicit View Synthesis](https://arxiv.org/abs/2308.11473), Yiwen Chen et al., Arxiv 2023 | [citation](./references/citations.bib#L603-L608) | [site]() | [code]()
- [MVDream: Multi-view Diffusion for 3D Generation](https://arxiv.org/abs/2308.16512), Yichun Shi et al., Arxiv 2023 | [citation](./references/citations.bib#L624-L629) | [site]() | [code]()
- [PointLLM: Empowering Large Language Models to Understand Point Clouds](https://arxiv.org/abs/2308.16911), Runsen Xu et al., Arxiv 2023 | [citation](./references/citations.bib#L631-L636) | [site]() | [code]()
- [SyncDreamer: Generating Multiview-consistent Images from a Single-view Image](https://arxiv.org/abs/2309.03453), Yuan Liu et al., Arxiv 2023 | [citation](./references/citations.bib#L645-L650) | [site]() | [code]()
- [Large-Vocabulary 3D Diffusion Model with Transformer](https://arxiv.org/abs/2309.07920), Ziang Cao et al., Arxiv 2023 | [citation](./references/citations.bib#L673-L678) | [site]() | [code]()
- [Progressive Text-to-3D Generation for Automatic 3D Prototyping](https://arxiv.org/abs/2309.14600), Han Yi et al., Arxiv 2023 | [citation](./references/citations.bib#L680-L685) | [site]() | [code]()
- [DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation](https://arxiv.org/abs/2309.16653), Jiaxiang Tang et al., Arxiv 2023 | [citation](./references/citations.bib#L687-L692) | [site]() | [code]()
- [SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D](https://arxiv.org/abs/2310.02596), Weiyu Li et al., Arxiv 2023 | [citation](./references/citations.bib#L701-L706) | [site]() | [code]()
- [Consistent123: One Image to Highly Consistent 3D Asset Using Case-Aware Diffusion Priors](https://arxiv.org/abs/2309.17261), Yukang Lin et al., Arxiv 2023 | [citation](./references/citations.bib#L715-L720) | [site]() | [code]()
- [GaussianDreamer: Fast Generation from Text to 3D Gaussian Splatting with Point Cloud Priors](https://arxiv.org/abs/2310.08529), Taoran Yi et al., Arxiv 2023 | [citation](./references/citations.bib#L722-L727) | [site]() | [code]()
- [Text-to-3D using Gaussian Splatting](https://arxiv.org/abs/2309.16585), Zilong Chen et al., Arxiv 2023 | [citation](./references/citations.bib#L729-L734) | [site]() | [code]()
- [Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model](https://arxiv.org/abs/2310.15110), Ruoxi Shi et al., Arxiv 2023 | [citation](./references/citations.bib#L750-L755) | [site]() | [code]()
- [DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior](https://arxiv.org/abs/2310.16818), Jingxiang Sun et al., Arxiv 2023 | [citation](./references/citations.bib#L757-L762) | [site]() | [code]()
- [HyperFields: Towards Zero-Shot Generation of NeRFs from Text](https://arxiv.org/abs/2310.17075), Sudarshan Babu et al., Arxiv 2023 | [citation](./references/citations.bib#L764-L769) | [site]() | [code]()
- [Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping](https://arxiv.org/abs/2310.12474), Zijie Pan et al., Arxiv 2023 | [citation](./references/citations.bib#L771-L776) | [site]() | [code]()
- [Text-to-3D with classifier score distillation](https://arxiv.org/abs/2310.19415), Xin Yu et al., Arxiv 2023 | [citation](./references/citations.bib#L778-L783) | [site]() | [code]()
- [Noise-Free Score Distillation](https://arxiv.org/abs/2310.17590), Oren Katzir et al., Arxiv 2023 | [citation](./references/citations.bib#L785-L790) | [site]() | [code]()
- [LRM: Large Reconstruction Model for Single Image to 3D](https://arxiv.org/abs/2311.04400), Yicong Hong et al., Arxiv 2023 | [citation](./references/citations.bib#L806-L811) | [site]() | [code]()
- [One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion](https://arxiv.org/abs/2311.07885), Minghua Liu et al., Arxiv 2023 | [citation](./references/citations.bib#L813-L818) | [site]() | [code]()
- [LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching](https://arxiv.org/abs/2311.11284), Yixun Liang et al., Arxiv 2023 | [citation](./references/citations.bib#L820-L825) | [site]() | [code]()
- [MetaDreamer: Efficient Text-to-3D Creation With Disentangling Geometry and Texture](https://arxiv.org/abs/2311.10123), Lincong Feng et al., Arxiv 2023 | [citation](./references/citations.bib#L827-L832) | [site]() | [code]()
- [Adversarial Diffusion Distillation](https://arxiv.org/abs/2311.17042), Axel Sauer et al., Arxiv 2023 | [citation](./references/citations.bib#L849-L854) | [site]() | [code]()
- [MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers](https://arxiv.org/abs/2311.15475), Yawar Siddiqui et al., Arxiv 2023 | [citation](./references/citations.bib#L863-L868) | [site]() | [code]()
- [DreamPropeller: Supercharge Text-to-3D Generation with Parallel Sampling](https://arxiv.org/abs/2311.17082), Linqi Zhou et al., Arxiv 2023 | [citation](./references/citations.bib#L870-L875) | [site]() | [code]()
- [X-Dreamer: Creating High-quality 3D Content by Bridging the Domain Gap Between Text-to-2D and Text-to-3D Generation](https://arxiv.org/abs/2312.00085), Yiwei Ma et al., Arxiv 2023 | [citation](./references/citations.bib#L884-L889) | [site]() | [code]()
- [StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D](https://arxiv.org/abs/2312.02189), Pengsheng Guo et al., Arxiv 2023 | [citation](./references/citations.bib#L898-L903) | [site]() | [code]()
- [CAD: Photorealistic 3D Generation via Adversarial Distillation](https://arxiv.org/abs/2312.06663), Ziyu Wan et al., Arxiv 2023 | [citation](./references/citations.bib#L905-L910) | [site]() | [code]()
- [RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D](https://arxiv.org/abs/2311.16918), Lingteng Qiu et al., Arxiv 2023 | [citation](./references/citations.bib#L912-L917) | [site]() | [code]()
- [Inpaint3D: 3D Scene Content Generation using 2D Inpainting Diffusion](https://arxiv.org/abs/2312.03869), Kira Prabhu et al., Arxiv 2023 | [citation](./references/citations.bib#L919-L924) | [site]() | [code]()
- [Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors](https://arxiv.org/abs/2312.04963), Lihe Ding et al., Arxiv 2023 | [citation](./references/citations.bib#L926-L931) | [site]() | [code]()
- [Text2Immersion: Generative Immersive Scene with 3D Gaussians](https://arxiv.org/abs/2312.09242), Hao Ouyang et al., Arxiv 2023 | [citation](./references/citations.bib#L940-L945) | [site]() | [code]()
- [Stable Score Distillation for High-Quality 3D Generation](https://arxiv.org/abs/2312.09305), Boshi Tang et al., Arxiv 2023 | [citation](./references/citations.bib#L975-L980) | [site]() | [code]()
- [Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks](https://arxiv.org/abs/2312.16218), Christian Simon et al., Arxiv 2023 | [citation](./references/citations.bib#L982-L987) | [site]() | [code]()
- [HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D](https://arxiv.org/abs/2312.15980), Sangmin Woo et al., Arxiv 2023 | [citation](./references/citations.bib#L989-L994) | [site]() | [code]()
- [SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity](https://arxiv.org/abs/2401.00604), Peihao Wang et al., Arxiv 2024 | [citation](./references/citations.bib#L1017-L1022) | [site]() | [code]()
- [AGG: Amortized Generative 3D Gaussians for Single Image to 3D](https://arxiv.org/abs/2401.04099), Dejia Xu et al., Arxiv 2024 | [citation](./references/citations.bib#L1031-L1036) | [site]() | [code]()
- [Topology-Aware Latent Diffusion for 3D Shape Generation](https://arxiv.org/abs/2401.17603), Jiangbei Hu et al., Arxiv 2024 | [citation](./references/citations.bib#L1045-L1050) | [site]() | [code]()
- [AToM: Amortized Text-to-Mesh using 2D Diffusion](https://arxiv.org/abs/2402.00867), Guocheng Qian et al., Arxiv 2024 | [citation](./references/citations.bib#L1080-L1085)
- [LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation](https://arxiv.org/abs/2402.05054), Jiaxiang Tang et al., Arxiv 2024 | [citation](./references/citations.bib#L1087-L1092) | [site]() | [code]()
- [IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation](https://arxiv.org/abs/2402.08682), Luke Melas-Kyriazi et al., Arxiv 2024 | [citation](./references/citations.bib#L1108-L1113) | [site]() | [code]()
- [L3GO: Language Agents with Chain-of-3D-Thoughts for Generating Unconventional Objects](https://arxiv.org/abs/2402.09052), Yutaro Yamada et al., Arxiv 2024 | [citation](./references/citations.bib#L1115-L1120) | [site]() | [code]()
- [MVD2: Efficient Multiview 3D Reconstruction for Multiview Diffusion](https://arxiv.org/abs/2402.14253), Xin-Yang Zheng et al., Arxiv 2024 | [citation](./references/citations.bib#L1122-L1127) | [site]() | [code]()
- [Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability](https://arxiv.org/abs/2402.12225), Xuelin Qian et al., Arxiv 2024 | [citation](./references/citations.bib#L1129-L1134) | [site]() | [code]()
- [SceneWiz3D: Towards Text-guided 3D Scene Composition](https://arxiv.org/abs/2312.08885), Qihang Zhang et al., CVPR 2024 | [citation](./references/citations.bib#L1136-L1141) | [site]() | [code]()
- [TripoSR: Fast 3D Object Reconstruction from a Single Image](https://arxiv.org/abs/2403.02151), Dmitry Tochilkin et al., Arxiv 2024 | [citation](./references/citations.bib#L1150-L1155) | [site]() | [code]()
- [V3D: Video Diffusion Models are Effective 3D Generators](https://arxiv.org/abs/2403.06738), Zilong Chen et al., Arxiv 2024 | [citation](./references/citations.bib#L1164-L1169) | [site]() | [code]()
- [CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model](https://arxiv.org/abs/2403.05034), Zhengyi Wang et al., Arxiv 2024 | [citation](./references/citations.bib#L1174-L1176) | [site]() | [code]()
- [Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation](https://arxiv.org/abs/2403.09625), Fangfu Liu et al., Arxiv 2024 | [citation](./references/citations.bib#L1178-L1183) | [site]() | [code]()
- [Isotropic3D: Image-to-3D Generation Based on a Single CLIP Embedding](https://arxiv.org/abs/2403.10395), Pengkun Liu et al., Arxiv 2024 | [citation](./references/citations.bib#L1185-L1190) | [site]() | [code]()
- [SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion](https://arxiv.org/abs/2403.12008), Vikram Voleti et al., Arxiv 2024 | [citation](./references/citations.bib#L1192-L1197) | [site]() | [code]()
- [Generic 3D Diffusion Adapter Using Controlled Multi-View Editing](https://arxiv.org/abs/2403.12032), Hansheng Chen et al., Arxiv 2024 | [citation](./references/citations.bib#L1199-L1204) | [site]() | [code]()
- [GVGEN: Text-to-3D Generation with Volumetric Representation](https://arxiv.org/abs/2403.12957), Xianglong He et al., Arxiv 2024 | [citation](./references/citations.bib#L1206-L1211) | [site]() | [code]()
- [BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis](https://arxiv.org/abs/2403.11273), Lutao Jiang et al., Arxiv 2024 | [citation](./references/citations.bib#L1213-L1218) | [site]() | [code]()
- [LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis](https://research.nvidia.com/labs/toronto-ai/LATTE3D), Kevin Xie et al., Arxiv 2024 | [citation](./references/citations.bib#L1234-L1239) | [site]() | [code]()
- [Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation](https://arxiv.org/abs/2403.09625), Fangfu Liu et al., Arxiv 2024 | [citation](./references/citations.bib#L1241-L1246) | [site]() | [code]()
- [GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation](https://arxiv.org/abs/2403.14621), Yinghao Xu et al., Arxiv 2024 | [citation](./references/citations.bib#L1248-L1253) | [site]() | [code]()
- [VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation](https://arxiv.org/abs/2403.17001), Yang Chen et al., Arxiv 2024 | [citation](./references/citations.bib#L1255-L1260) | [site]() | [code]()
- [DreamPolisher: Towards High-Quality Text-to-3D Generation via Geometric Diffusion](https://arxiv.org/abs/2403.17237), Yuanze Lin et al., Arxiv 2024 | [citation](./references/citations.bib#L1262-L1267) | [site]() | [code]()
- [PointInfinity: Resolution-Invariant Point Diffusion Models](https://arxiv.org/abs/2404.03566), Zixuan Huang et al., Arxiv 2024 | [citation](./references/citations.bib#L1269-L1274) | [site](https://zixuanh.com/projects/pointinfinity.html) | [code]()
- [The More You See in 2D, the More You Perceive in 3D](https://arxiv.org/abs/2404.03652), Xinyang Han et al., Arxiv 2024 | [citation](./references/citations.bib#L1290-L1295) | [site](https://sap3d.github.io/) | [code]()
- [Hash3D: Training-free Acceleration for 3D Generation](https://arxiv.org/abs/2404.06091), Xingyi Yang et al., Arxiv 2024 | [citation](./references/citations.bib#L1297-L1302) | [site](https://adamdad.github.io/hash3D/) | [code](https://github.com/Adamdad/hash3D)
- [RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion](https://arxiv.org/abs/2404.07199), Jaidev Shriram et al., Arxiv 2024 | [citation](./references/citations.bib#L1304-L1309) | [site](https://realmdreamer.github.io/) | [code]()
- [TC4D: Trajectory-Conditioned Text-to-4D Generation](https://arxiv.org/abs/2403.17920), Sherwin Bahmani et al., Arxiv 2024 | [citation](./references/citations.bib#L1311-L1316) | [site](https://sherwinbahmani.github.io/tc4d/) | [code]()
- [Zero-shot Point Cloud Completion Via 2D Priors](https://arxiv.org/abs/2404.06814), Tianxin Huang et al., Arxiv 2024 | [citation](./references/citations.bib#L1318-L1323) | [site]() | [code]()
- [InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models](https://arxiv.org/abs/2404.07191), Jiale Xu et al., Arxiv 2024 | [citation](./references/citations.bib#L1325-L1330) | [site]() | [code](https://github.com/TencentARC/InstantMesh)
- [Zero-shot Point Cloud Completion Via 2D Priors](https://arxiv.org/abs/2404.06814), Tianxin Huang et al., Arxiv 2024 | [citation](./references/citations.bib#L1332-L1337) | [site]() | [code]()
- [CLIP-GS: CLIP-Informed Gaussian Splatting for Real-time and View-consistent 3D Semantic Understanding](https://arxiv.org/abs/2404.14249), Guibiao Liao et al., Arxiv 2024 | [citation](./references/citations.bib#L1360-L1365) | [site]() | [code]()
- [CAT3D: Create Anything in 3D with Multi-View Diffusion Models](https://arxiv.org/abs/2405.10314), Ruiqi Gao et al., Arxiv 2024 | [citation](./references/citations.bib#L1367-L1372) | [site](https://cat3d.github.io) | [code]()
- [Portrait3D: Text-Guided High-Quality 3D Portrait Generation Using Pyramid Representation and GANs Prior](https://arxiv.org/abs/2404.10394), Yiqian Wu et al., Arxiv 2024 | [citation](./references/citations.bib#L1402-L1407) | [site]() | [code]()
- [CraftsMan: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner](https://arxiv.org/abs/2405.14979), Weiyu Li et al., Arxiv 2024 | [citation](./references/citations.bib#L1409-L1414) | [site](https://craftsman3d.github.io/) | [code](https://github.com/wyysf-98/CraftsMan)
- [LDM: Large Tensorial SDF Model for Textured Mesh Generation](https://arxiv.org/abs/2405.14580), Rengan Xie et al., Arxiv 2024 | [citation](./references/citations.bib#L1416-L1421) | [site]() | [code]()
- [Dreamer XL: Towards High-Resolution Text-to-3D Generation via Trajectory Score Matching](https://arxiv.org/abs/2405.11252), Xingyu Miao et al., Arxiv 2024 | [citation](./references/citations.bib#L1423-L1428) | [site]() | [code]()
- [Era3D: High-Resolution Multiview Diffusion using Efficient Row-wise Attention](https://arxiv.org/abs/2405.11616), Peng Li et al., Arxiv 2024 | [citation](./references/citations.bib#L1430-L1435) | [site](https://penghtyx.github.io/Era3D/) | [code](https://github.com/pengHTYX/Era3D)
- [GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling](https://arxiv.org/abs/2403.19655), Bowen Zhang et al., Arxiv 2024 | [citation](./references/citations.bib#L1451-L1456) | [site](https://gaussiancube.github.io/) | [code](https://github.com/GaussianCube/)
- [Tetrahedron Splatting for 3D Generation](https://arxiv.org/abs/2406.01579), Chun Gu et al., Arxiv 2024 | [citation](./references/citations.bib#L1458-L1463) | [site]() | [code](https://github.com/fudan-zvg/tet-splatting)
- [L4GM: Large 4D Gaussian Reconstruction Model](https://arxiv.org/abs/2406.10324), Jiawei Ren et al., Arxiv 2024 | [citation](./references/citations.bib#L1472-L1477) | [site](https://research.nvidia.com/labs/toronto-ai/l4gm/) | [code]()
- [Gamba: Marry Gaussian Splatting with Mamba for single view 3D reconstruction](https://arxiv.org/abs/2403.18795), Taoran Yi et al., Arxiv 2024 | [citation](./references/citations.bib#L1486-L1491) | [site](https://florinshen.github.io/gamba-project/) | [code](https://github.com/SkyworkAI/Gamba)
- [HouseCrafter: Lifting Floorplans to 3D Scenes with 2D Diffusion Model](https://arxiv.org/abs/2406.20077), Hieu T. Nguyen et al., Arxiv 2024 | [citation](./references/citations.bib#L1493-L1493) | [site](https://neu-vi.github.io/houseCrafter/) | [code]()
- [Meta 3D Gen](https://arxiv.org/abs/2407.02599), Raphael Bensadoun et al., Arxiv 2024 | [citation](./references/citations.bib#L1500-L1505) | [site](https://ai.meta.com/research/publications/meta-3d-gen/) | [code]()
- [ScaleDreamer](https://arxiv.org/abs/2407.02040), Zhiyuan Ma et al., ECCV 2024 | [citation](./references/citations.bib#L1507-L1512) | [site](https://sites.google.com/view/scaledreamer-release/) | [code](https://github.com/theEricMa/ScaleDreamer)
- [YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals](https://arxiv.org/abs/2406.16273v1), Sandeep Mishra et al., Arxiv 2024 | [citation](./references/citations.bib#L1514-L1519) | [site](https://youdream3d.github.io/) | [code](https://github.com/YouDream3D/YouDream/)
- [RodinHD: High-Fidelity 3D Avatar Generation with Diffusion Models](https://arxiv.org/abs/2407.06938), Bowen Zhang et al., Arxiv 2024 | [citation](./references/citations.bib#L1521-L1526) | [site]() | [code]()
- [HoloDreamer: Holistic 3D Panoramic World Generation from Text Descriptions](https://arxiv.org/abs/2407.15187), Haiyang Zhou et al., Arxiv 2024 | [citation](./references/citations.bib#L1535-L1540) | [site](https://zhouhyocean.github.io/holodreamer/) | [code](https://github.com/zhouhyOcean/HoloDreamer)
- [PlacidDreamer: Advancing Harmony in Text-to-3D Generation](https://arxiv.org/abs/2407.13976), Shuo Huang et al., Arxiv 2024 | [citation](./references/citations.bib#L1543-L1548) | [site]() | [code]()
- [EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion](https://arxiv.org/abs/2312.06725), Zehuan Huang et al., CVPR 2024 | [citation](./references/citations.bib#L1550-L1555) | [site](https://huanngzh.github.io/EpiDiff/) | [code](https://github.com/huanngzh/EpiDiff)
- [Ouroboros3D: Image-to-3D Generation via 3D-aware Recursive Diffusion](https://arxiv.org/abs/2406.03184), Hao Wen et al., Arxiv 2024 | [citation](./references/citations.bib#L1557-L1562) | [site](https://costwen.github.io/Ouroboros3D/) | [code](https://github.com/Costwen/Ouroboros3D)
- [DreamReward: Text-to-3D Generation with Human Preference](https://arxiv.org/abs/2403.14613), Junliang Ye et al., ECCV 2024 | [citation](./references/citations.bib#L1564-L1569) | [site](https://jamesyjl.github.io/DreamReward/) | [code](https://github.com/liuff19/DreamReward)
- [Cycle3D: High-quality and Consistent Image-to-3D Generation via Generation-Reconstruction Cycle](https://arxiv.org/abs/2407.19548), Zhenyu Tang et al., Arxiv 2024 | [citation](./references/citations.bib#L1571-L1576) | [site](https://pku-yuangroup.github.io/Cycle3D/) | [code](https://github.com/PKU-YuanGroup/Cycle3D)
- [DreamInit: A General Framework to Boost 3D GS Initialization for Text-to-3D Generation by Lexical Richness](https://arxiv.org/abs/2408.01269), Lutao Jiang et al., Arxiv 2024 | [citation](./references/citations.bib#L1585-L1590) | [site](https://vlislab22.github.io/DreamInit/) | [code]()
- [TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling](https://arxiv.org/abs/2408.01291), Dong Huo et al., Arxiv 2024 | [citation](./references/citations.bib#L1599-L1604) | [site](https://dong-huo.github.io/TexGen/) | [code]()
- [DreamCouple: Exploring High Quality Text-to-3D Generation Via Rectified Flow](https://arxiv.org/abs/2408.05008), Hangyu Li et al., Arxiv 2024 | [citation](./references/citations.bib#L1606-L1611) | [site]() | [code]()
- [MeshFormer: High-Quality Mesh Generation with 3D-Guided Reconstruction Model](https://arxiv.org/abs/2408.10198), Minghua Liu et al., Arxiv 2024 | [citation](./references/citations.bib#L1613-L1618) | [site](https://meshformer3d.github.io/) | [code]()
- [Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models](https://arxiv.org/abs/2409.07452), Haibo Yang et al., Arxiv 2024 | [citation](./references/citations.bib#L1641-L1646) | [site]() | [code]()
- [MVGaussian: High-Fidelity text-to-3D Content Generation with Multi-View Guidance and Surface Densification](https://arxiv.org/abs/2409.06620), Phu Pham et al., Arxiv 2024 | [citation](./references/citations.bib#L1648-L1653) | [site](https://mvgaussian.github.io/) | [code](https://github.com/mvgaussian/mvgaussian)
- [Geometry Image Diffusion: Fast and Data-Efficient Text-to-3D with Image-Based Surface Representation](https://arxiv.org/abs/2409.03718), Slava Elizarov et al., Arxiv 2024 | [citation](./references/citations.bib#L1655-L1660) | [site](https://unity-research.github.io/Geometry-Image-Diffusion.github.io/) | [code]()
- [Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion](https://arxiv.org/abs/2409.11406), Zhenwei Wang et al., Arxiv 2024 | [citation](./references/citations.bib#L1662-L1667) | [site](https://rag-3d.github.io/) | [code](https://github.com/3DTopia/Phidias-Diffusion)
- [3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion](https://arxiv.org/abs/2409.12957), Zhaoxi Chen et al., Arxiv 2024 | [citation](./references/citations.bib#L1669-L1674) | [site](https://3dtopia.github.io/3DTopia-XL/) | [code](https://github.com/3DTopia/3DTopia-XL)
- [LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with 3D-awareness](https://arxiv.org/abs/2409.18125), Chenming Zhu et al., Arxiv 2024 | [citation](./references/citations.bib#L1683-L1688) | [site](https://zcmax.github.io/projects/LLaVA-3D/) | [code](https://github.com/ZCMax/LLaVA-3D)
- [SceneCraft: Layout-Guided 3D Scene Generation](https://arxiv.org/abs/2410.09049), Xiuyu Yang et al., Arxiv 2024 | [citation](./references/citations.bib#L1697-L1702) | [site](https://orangesodahub.github.io/SceneCraft/) | [code](https://github.com/OrangeSodahub/SceneCraft/)
- [DreamCraft3D++: Efficient Hierarchical 3D Generation with Multi-Plane Reconstruction Model](https://arxiv.org/abs/2410.12928), Jingxiang Sun et al., Arxiv 2024 | [citation](./references/citations.bib#L1704-L1709) | [site](https://dreamcraft3dplus.github.io) | [code](https://github.com/MrTornado24/DreamCraft3D_Plus)
- [Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models](https://arxiv.org/abs/2410.10821), Jiangzhi Bao et al., Arxiv 2024 | [citation](./references/citations.bib#L1711-L1716) | [site](https://tex4d.github.io) | [code](https://github.com/ZqlwMatt/Tex4D)
### 3D Editing, Decomposition & Stylization
- [CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields](https://arxiv.org/abs/2112.05139), Can Wang et al., Arxiv 2021 | [citation](./references/citations.bib#L15-L20) | [site](https://cassiepython.github.io/clipnerf/) | [code](https://github.com/cassiePython/CLIPNeRF)
- [CG-NeRF: Conditional Generative Neural Radiance Fields](https://arxiv.org/abs/2112.03517), Kyungmin Jo et al., Arxiv 2021 | [citation](./references/citations.bib#L22-L27) | [site]() | [code]()
- [TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition](https://arxiv.org/abs/2210.11277), Yongwei Chen et al., NeurIPS 2022 | [citation](./references/citations.bib#L36-L41) | [site](https://cyw-3d.github.io/tango/) | [code](https://github.com/Gorilla-Lab-SCUT/tango)
- [3DDesigner: Towards Photorealistic 3D Object Generation and Editing with Text-guided Diffusion Models](https://arxiv.org/abs/2211.14108), Gang Li et al., Arxiv 2022 | [citation](./references/citations.bib#L50-L55) | [site](https://3ddesigner-diffusion.github.io/) | [code]()
- [NeRF-Art: Text-Driven Neural Radiance Fields Stylization](https://arxiv.org/abs/2212.08070), Can Wang et al., Arxiv 2022 | [citation](./references/citations.bib#L71-L76) | [site](https://cassiepython.github.io/nerfart/) | [code](https://github.com/cassiePython/NeRF-Art)
- [Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions](https://instruct-nerf2nerf.github.io/), Ayaan Haque et al., Arxiv 2023 | [citation](./references/citations.bib#L323-L328) | [site](https://instruct-nerf2nerf.github.io/) | [code](https://github.com/ayaanzhaque/instruct-nerf2nerf)
- [Local 3D Editing via 3D Distillation of CLIP Knowledge](https://arxiv.org/abs/2306.12570), Junha Hyung et al., Arxiv 2023 | [citation](./references/citations.bib#L372-L377) | [site]() | [code]()
- [RePaint-NeRF: NeRF Editing via Semantic Masks and Diffusion Models](https://arxiv.org/abs/2306.05668), Xingchen Zhou et al., Arxiv 2023 | [citation](./references/citations.bib#L414-L419) | [site](https://starstesla.github.io/repaintnerf/) | [code](https://github.com/StarsTesla/RePaint-NeRF)
- [Text2Tex: Text-driven Texture Synthesis via Diffusion Models](https://daveredrum.github.io/Text2Tex/), Dave Zhenyu Chen et al., Arxiv 2023 | [citation](./references/citations.bib#L421-L426) | [site](https://daveredrum.github.io/Text2Tex/) | [code](https://github.com/daveredrum/Text2Tex)
- [Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor](https://control4darxiv.github.io/), Ruizhi Shao et al., Arxiv 2023 | [citation](./references/citations.bib#L435-L440) | [site](https://control4darxiv.github.io/) | [code]()
- [Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation](https://fantasia3d.github.io/), Rui Chen et al., Arxiv 2023 | [citation](./references/citations.bib#L449-L454) | [site](https://fantasia3d.github.io/) | [code](https://github.com/Gorilla-Lab-SCUT/Fantasia3D)
- [Set-the-Scene: Global-Local Training for Generating Controllable NeRF Scenes](https://arxiv.org/abs/2303.13450), Dana Cohen-Bar et al., Arxiv 2023 | [citation](./references/citations.bib#L463-L468) | [site]() | [code]()
- [MATLABER: Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR](https://arxiv.org/abs/2308.09278), Xudong Xu et al., Arxiv 2023 | [citation](./references/citations.bib#L596-L601) | [site](https://sheldontsui.github.io/projects/Matlaber) | [code](https://github.com/SheldonTsui/Matlaber)
- [SATR: Zero-Shot Semantic Segmentation of 3D Shapes](https://arxiv.org/abs/2304.04909), Ahmed Abdelreheem et al., ICCV 2023 | [citation](./references/citations.bib#L610-L615) | [site](https://samir55.github.io/SATR/) | [code](https://github.com/Samir55/SATR)
- [Texture Generation on 3D Meshes with Point-UV Diffusion](https://arxiv.org/abs/2308.10490), Xin Yu et al., ICCV 2023 | [citation](./references/citations.bib#L638-L643) | [site](https://cvmi-lab.github.io/Point-UV-Diffusion/) | [code](https://github.com/CVMI-Lab/Point-UV-Diffusion)
- [Progressive3D: Progressively Local Editing for Text-to-3D Content Creation with Complex Semantic Prompts](https://arxiv.org/abs/2310.11784), Xinhua Cheng et al., Arxiv 2023 | [citation](./references/citations.bib#L736-L741) | [site](https://cxh0519.github.io/projects/Progressive3D/) | [code](https://github.com/cxh0519/Progressive3D)
- [3D-GPT: Procedural 3D Modeling with Large Language Models](https://arxiv.org/abs/2310.12945), Chunyi Sun et al., Arxiv 2023 | [citation](./references/citations.bib#L743-L748) | [site](https://chuny1.github.io/3DGPT/3dgpt.html) | [code]()
- [CustomNet: Zero-shot Object Customization with Variable-Viewpoints in Text-to-Image Diffusion Models](https://arxiv.org/abs/2310.19784), Ziyang Yuan et al., Arxiv 2023 | [citation](./references/citations.bib#L792-L797) | [site]() | [code]()
- [Decorate3D: Text-Driven High-Quality Texture Generation for Mesh Decoration in the Wild](https://openreview.net/pdf?id=1recIOnzOF), Yanhui Guo et al., NeurIPS 2023 | [citation](./references/citations.bib#L834-L840) | [site](https://decorate3d.github.io/Decorate3D/) | [code](https://github.com/Decorate3D/Decorate3D)
- [HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image](https://arxiv.org/abs/2312.04543), Tong Wu et al., Arxiv 2023 | [citation](./references/citations.bib#L877-L882) | [site]() | [code]()
- [InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes](https://arxiv.org/abs/2401.05335), Mohamad Shahbazi et al., Arxiv 2024 | [citation](./references/citations.bib#L1024-L1029) | [site](https://mohamad-shahbazi.github.io/inserf/) | [code]()
- [ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields](https://arxiv.org/abs/2401.17895), Edward Bartrum et al., Arxiv 2024 | [citation](./references/citations.bib#L1052-L1057) | [site](https://replaceanything3d.github.io/) | [code]()
- [Sketch2NeRF: Multi-view Sketch-guided Text-to-3D Generation](https://arxiv.org/abs/2401.14257), Minglin Chen et al., Arxiv 2024| [citation](./references/citations.bib#L1073-L1078) | [site]() | [code]()
- [BoostDream: Efficient Refining for High-Quality Text-to-3D Generation from Multi-View Diffusion](https://arxiv.org/abs/2401.16764), Yonghao Yu et al., Arxiv 2024 | [citation](./references/citations.bib#L1059-L1064) | [site]() | [code]()
- [2L3: Lifting Imperfect Generated 2D Images into Accurate 3D](https://arxiv.org/abs/2401.15841), Yizheng Chen et al., Arxiv 2024 | [citation](./references/citations.bib#L1066-L1071) | [site]() | [code]()
- [GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting](https://arxiv.org/abs/2402.07207), Xiaoyu Zhou et al., Arxiv 2024 | [citation](./references/citations.bib#L1101-L1106) | [site](https://gala3d.github.io/) | [code](https://github.com/VDIGPKU/GALA3D)
- [Disentangled 3D Scene Generation with Layout Learning](https://arxiv.org/abs/2402.16936), Dave Epstein et al., Arxiv 2024 | [citation](./references/citations.bib#L1143-L1148) | [site](https://dave.ml/layoutlearning/) | [code]()
- [MagicClay: Sculpting Meshes With Generative Neural Fields](https://arxiv.org/abs/2403.02460), Amir Barda et al., Arxiv 2024 | [citation](./references/citations.bib#L1157-L1162) | [site]() | [code]()
- [TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation](https://arxiv.org/abs/2403.12906), Yufei Liu et al., Arxiv 2024 | [citation](./references/citations.bib#L1220-L1225) | [site](https://ggxxii.github.io/texdreamer/) | [code]()
- [InTeX: Interactive Text-to-texture Synthesis via Unified Depth-aware Inpainting](https://arxiv.org/abs/2403.11878), Jiaxiang Tang et al., Arxiv 2024 | [citation](./references/citations.bib#L1227-L1232) | [site](https://me.kiui.moe/intex/) | [code](https://github.com/ashawkey/InTeX)
- [SC4D: Sparse-Controlled Video-to-4D Generation and Motion Transfer](https://arxiv.org/abs/2404.03736), Zijie Wu et al., Arxiv 2024 | [citation](./references/citations.bib#L1283-L1288) | [site](https://sc4d.github.io/) | [code]()
- [TELA: Text to Layer-wise 3D Clothed Human Generation](https://arxiv.org/abs/2404.16748), Junting Dong et al., Arxiv 2024 | [citation](./references/citations.bib#L1339-L1344) | [site](http://jtdong.com/tela_layer/) | [code]()
- [Interactive3D: Create What You Want by Interactive 3D Generation](https://arxiv.org/abs/2404.16510), Shaocong Dong et al., Arxiv 2024 | [citation](./references/citations.bib#L1346-L1351) | [site](https://interactive-3d.github.io) | [code](https://github.com/interactive-3d/interactive3d)
- [TIP-Editor: An Accurate 3D Editor Following Both Text-Prompts And Image-Prompts](https://arxiv.org/abs/2401.14828), Jingyu Zhuang et al., Arxiv 2024 | [citation](./references/citations.bib#L1353-L1358) | [site](https://zjy526223908.github.io/TIP-Editor) | [code](https://github.com/zjy526223908/TIP-Editor)
- [Coin3D: Controllable and Interactive 3D Assets Generation with Proxy-Guided Conditioning](https://arxiv.org/abs/2405.08054), Wenqi Dong et al., Arxiv 2024 | [citation](./references/citations.bib#L1374-L1379) | [site]() | [code]()
- [Part123: Part-aware 3D Reconstruction from a Single-view Image](https://arxiv.org/abs/2405.16888), Anran Liu et al., Arxiv 2024 | [citation](./references/citations.bib#L1395-L1400) | [site]() | [code]()
- [DreamMat: High-quality PBR Material Generation with Geometry- and Light-aware Diffusion Models](https://arxiv.org/abs/2405.17176), Yuqing Zhang et al., Arxiv 2024 | [citation](./references/citations.bib#L1437-L1442) | [site](https://zzzyuqing.github.io/dreammat.github.io/) | [code](https://github.com/zzzyuqing/DreamMat)
- [DreamVTON: Customizing 3D Virtual Try-on with Personalized Diffusion Models](https://arxiv.org/abs/2407.16511), Zhenyu Xie et al., Arxiv 2024 | [citation](./references/citations.bib#L1528-L1533) | [site]() | [code]()
- [SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement](https://arxiv.org/abs/2408.00653), Mark Boss et al., Arxiv 2024 | [citation](./references/citations.bib#L1578-L1583) | [site](https://stable-fast-3d.github.io/) | [code](https://github.com/Stability-AI/stable-fast-3d)
- [DreamHOI: Subject-Driven Generation of 3D Human-Object Interactions with Diffusion Priors](https://arxiv.org/abs/2409.08278), Thomas Hanwen Zhu et al., Arxiv 2024 | [citation](./references/citations.bib#L1627-L1632) | [site](https://dreamhoi.github.io/) | [code](https://github.com/hanwenzhu/dreamhoi)
- [DreamMesh: Jointly Manipulating and Texturing Triangle Meshes for Text-to-3D Generation](https://arxiv.org/abs/2409.07454), Haibo Yang et al., Arxiv 2024 | [citation](./references/citations.bib#L1634-L1639) | [site](https://dreammesh.github.io/) | [code]()
- [FlashTex: Fast Relightable Mesh Texturing with LightControlNet](https://arxiv.org/abs/2409.07454), Kangle Deng et al., Arxiv 2024 | [citation](./references/citations.bib#L1676-L1681) | [site](https://flashtex.github.io) | [code](https://github.com/Roblox/FlashTex)
- [MeshUp: Multi-Target Mesh Deformation via Blended Score Distillation](https://arxiv.org/abs/2408.14899), Hyunwoo Kim et al., Arxiv 2024 | [citation](./references/citations.bib#L1690-L1695) | [site](https://threedle.github.io/MeshUp/) | [code](https://github.com/threedle/MeshUp)
- [MvDrag3D: Drag-based Creative 3D Editing via Multi-view Generation-Reconstruction Priors](https://arxiv.org/abs/2410.16272), Honghua Chen et al., Arxiv 2024 | [citation](./references/citations.bib#L1718-L1723) | [site](https://chenhonghua.github.io/MyProjects/MvDrag3D) | [code](https://github.com/chenhonghua/MvDrag3D)
### Avatar Generation and Manipulation
- [Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion](https://3d-avatar-diffusion.microsoft.com/), Tengfei Wang et al., Arxiv 2022 | [citation](./references/citations.bib#L197-L202) | [site](https://3d-avatar-diffusion.microsoft.com/) | [code]()
- [DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars](https://arxiv.org/abs/2303.09375), David Svitov et al., Arxiv 2023 | [citation](./references/citations.bib#L666-L671) | [site]() | [code]()
- [ZeroAvatar: Zero-shot 3D Avatar Generation from a Single Image](https://zero123.cs.columbia.edu/), Zhenzhen Weng et al., Arxiv 2023 | [citation](./references/citations.bib#L267-L272)
- [AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control](https://arxiv.org/abs/2303.17606), Ruixiang Jiang et al., ICCV 2023 | [citation](./references/citations.bib#L274-L279) | [site](https://avatar-craft.github.io/) | [code](https://github.com/songrise/avatarcraft)
- [Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D Diffusion Probabilistic Models](https://arxiv.org/abs/2305.11870), Byungjun Kim et al., ICCV 2023 | [citation](./references/citations.bib#L568-L573) | [site](https://snuvclab.github.io/chupa/) | [code](https://github.com/snuvclab/chupa)
- [DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance](https://arxiv.org/abs/2304.03117), Longwen Zhang et al., Arxiv 2023 | [citation](./references/citations.bib#L456-L461) | [site](https://sites.google.com/view/dreamface) | [code](https://huggingface.co/spaces/DEEMOSTECH/3D-Avatar-Generator)
- [HeadSculpt: Crafting 3D Head Avatars with Text](https://arxiv.org/abs/2306.03038), Xiao Han et al., Arxiv 2023 | [citation](./references/citations.bib#L470-L475) | [site](https://brandonhan.uk/HeadSculpt/) | [code]()
- [DreamHuman: Animatable 3D Avatars from Text](https://arxiv.org/abs/2306.09329), Nikos Kolotouros et al., Arxiv 2023 | [citation](./references/citations.bib#L554-L559) | [site](https://dream-human.github.io/) | [code]()
- [FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields](https://arxiv.org/abs/2307.11418), Sungwon Hwang et al., Arxiv 2023 | [citation](./references/citations.bib#L491-L496) | [site]() | [code]()
- [AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose](https://arxiv.org/abs/2308.03610), Huichao Zhang et al., Arxiv 2023 | [citation](./references/citations.bib#L547-L552) | [site](https://avatarverse3d.github.io/) | [code](https://github.com/bytedance/AvatarVerse)
- [TeCH: Text-guided Reconstruction of Lifelike Clothed Humans](https://arxiv.org/abs/2308.08545), Yangyi Huang et al., Arxiv 2023 | [citation](./references/citations.bib#L575-L580) | [site](https://huangyangyi.github.io/TeCH/) | [code](https://github.com/huangyangyi/TeCH)
- [HumanLiff: Layer-wise 3D Human Generation with Diffusion Model](https://skhu101.github.io/HumanLiff/), Shoukang Hu et al., Arxiv 2023 | [citation](./references/citations.bib#L582-L587) | [site](https://skhu101.github.io/HumanLiff/) | [code](https://github.com/skhu101/HumanLiff)
- [TADA! Text to Animatable Digital Avatars](https://tada.is.tue.mpg.de), Tingting Liao et al., Arxiv 2023 | [citation](./references/citations.bib#L589-L594) | [site](https://tada.is.tue.mpg.de/) | [code](https://github.com/TingtingLiao/TADA)
- [One-shot Implicit Animatable Avatars with Model-based Priors](https://arxiv.org/abs/2212.02469v2), Yangyi Huang et al., ICCV 2023 | [citation](./references/citations.bib#L617-L622) | [site](https://huangyangyi.github.io/ELICIT/) | [code](https://github.com/huangyangyi/ELICIT)
- [Text2Control3D: Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model](https://arxiv.org/abs/2309.03550), Sungwon Hwang et al., Arxiv 2023 | [citation](./references/citations.bib#L652-L657) | [site]() | [code]()
- [Text-Guided Generation and Editing of Compositional 3D Avatars](https://arxiv.org/abs/2309.07125), Hao Zhang et al., Arxiv 2023 | [citation](./references/citations.bib#L659-L664) | [site](https://yfeng95.github.io/teca/) | [code]()
- [HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation](https://arxiv.org/abs/2310.01406), Xin Huang et al., Arxiv 2023 | [citation](./references/citations.bib#L708-L713) | [site](https://humannorm.github.io/) | [code](https://github.com/xhuangcv/humannorm)
- [HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting](https://arxiv.org/abs/2311.17061), Xian Liu et al., Arxiv 2023 | [citation](./references/citations.bib#L856-L861) | [site](https://alvinliu0.github.io/projects/HumanGaussian) | [code](https://github.com/alvinliu0/HumanGaussian)
- [Text-Guided 3D Face Synthesis: From Generation to Editing](https://arxiv.org/abs/2312.00375), Yunjie Wu et al., Arxiv 2023 | [citation](./references/citations.bib#L891-L896) | [site](https://faceg2e.github.io/) | [code]()
- [SEEAvatar: Photorealistic Text-to-3D Avatar Generation with Constrained Geometry and Appearance](https://arxiv.org/abs/2312.08889), Yuanyou Xu et al., Arxiv 2023 | [citation](./references/citations.bib#L933-L938) | [site](https://yoxu515.github.io/SEEAvatar/) | [code](https://github.com/yoxu515/SEEAvatar)
- [GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning](https://arxiv.org/abs/2312.11461), Ye Yuan et al., Arxiv 2023 | [citation](./references/citations.bib#L968-L973) | [site](https://nvlabs.github.io/GAvatar/) | [code]()
- [Make-A-Character: High Quality Text-to-3D Character Generation within Minutes](https://arxiv.org/abs/2312.15430), Jianqiang Ren et al., Arxiv 2023 | [citation](./references/citations.bib#L1003-L1008) | [site](https://human3daigc.github.io/MACH/) | [code](https://github.com/Human3DAIGC/Make-A-Character)
- [En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data](https://arxiv.org/abs/2401.01173), Yifang Men et al., Arxiv 2024 | [citation](./references/citations.bib#L1010-L1015) | [site](https://menyifang.github.io/projects/En3D/index.html) | [code](https://github.com/menyifang/En3D)
- [HeadStudio: Text to Animatable Head Avatars with 3D Gaussian Splatting](https://zhenglinzhou.github.io/HeadStudio-ProjectPage/), Zhenglin Zhou et al., Arxiv 2024 | [citation](./references/citations.bib#L1094-L1099) | [site](https://zhenglinzhou.github.io/HeadStudio-ProjectPage/) | [code](https://github.com/ZhenglinZhou/HeadStudio/)
- [InstructHumans: Editing Animatable 3D Human Textures with Instructions](https://jyzhu.top/instruct-humans/), Jiayin Zhu et al., Arxiv 2024 | [citation](./references/citations.bib#L1276-L1281) | [site](https://jyzhu.top/instruct-humans/) | [code](https://github.com/viridityzhu/InstructHumans)
- [X-Oscar: A Progressive Framework for High-quality Text-guided 3D Animatable Avatar Generation](https://arxiv.org/abs/2405.00954), Yiwei Ma et al., Arxiv 2024 | [citation](./references/citations.bib#L1381-L1386) | [site](https://xmu-xiaoma666.github.io/Projects/X-Oscar/) | [code](https://github.com/LinZhekai/X-Oscar)
- [MagicPose4D: Crafting Articulated Models with Appearance and Motion Control](https://arxiv.org/abs/2405.14017), Hao Zhang et al., Arxiv 2024 | [citation](./references/citations.bib#L1444-L1449) | [site](https://boese0601.github.io/magicpose4d/) | [code](https://github.com/haoz19/MagicPose4D)
- [HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles](https://arxiv.org/abs/2312.11666), Vanessa Sklyarova et al., Arxiv 2024 | [citation](./references/citations.bib#L1465-L1470) | [site](https://haar.is.tue.mpg.de/) | [code](https://github.com/Vanessik/HAAR)
- [GaussianDreamerPro: Text to Manipulable 3D Gaussians with Highly Enhanced Quality](https://arxiv.org/abs/2406.18462), Taoran Yi et al., Arxiv 2024 | [citation](./references/citations.bib#L1479-L1484) | [site](https://taoranyi.com/gaussiandreamerpro/) | [code](https://github.com/hustvl/GaussianDreamerPro)
- [Barbie: Text to Barbie-Style 3D Avatars](https://arxiv.org/abs/2408.09126), Xiaokun Sun et al., Arxiv 2024 | [citation](./references/citations.bib#L1620-L1625) | [site]() | [code]()
### Dynamic Content Generation
- [Text-To-4D Dynamic Scene Generation](https://arxiv.org/abs/2301.11280), Uriel Singer et al., Arxiv 2023 | [citation](./references/citations.bib#L225-L230) | [site](https://make-a-video3d.github.io/) | [code]()
- [TextDeformer: Geometry Manipulation using Text Guidance](https://arxiv.org/abs/2304.13348), William Gao et al., Arxiv 2023 | [citation](./references/citations.bib#L281-L286) | [site]() | [code]()
- [Consistent4D: Consistent 360 Degree Dynamic Object Generation from Monocular Video](https://arxiv.org/abs/2311.02848), Yanqin Jiang et al., Arxiv 2023 | [citation](./references/citations.bib#L799-L809) | [site](https://consistent4d.github.io/) | [code](https://github.com/yanqinJiang/Consistent4D)
- [4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling](https://arxiv.org/abs/2311.17984), Sherwin Bahmani et al., Arxiv 2023 | [citation](./references/citations.bib#L842-L847) | [site](https://sherwinbahmani.github.io/4dfy/) | [code](https://github.com/sherwinbahmani/4dfy)
## Datasets :floppy_disk:
- [Objaverse: A Universe of Annotated 3D Objects](https://arxiv.org/abs/2212.08051), Matt Deitke et al., Arxiv 2022 | [citation](./references/citations.bib#L519-L524)
- [Objaverse-XL: A Universe of 10M+ 3D Objects](https://objaverse.allenai.org/), Matt Deitke et al., Preprint 2023 | [citation](./references/citations.bib#L526-L531)
- [Describe3D: High-Fidelity 3D Face Generation from Natural Language Descriptions](https://arxiv.org/abs/2305.03302), Menghua Wu et al., CVPR 2023 | [citation](./references/citations.bib#L533-L538)
- [Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting](https://arxiv.org/abs/2312.1327), Hao Ouyang et al., Arxiv 2023 | [citation](./references/citations.bib#L947-L952)
- [Customize-It-3D: High-Quality 3D Creation from A Single Image Using Subject-Specific Knowledge Prior](https://arxiv.org/abs/2312.11535), Nan Huang et al., Arxiv 2023 | [citation](./references/citations.bib#L954-L959)
- [Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering](https://arxiv.org/abs/2312.11360), Kim Youwang et al., Arxiv 2023 | [citation](./references/citations.bib#L961-L966)
- [SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding](https://arxiv.org/abs/2401.09340), Baoxiong Jia et al., Arxiv 2024 | [citation](./references/citations.bib#L1038-L1043)
- [Scalable 3D Captioning with Pretrained Models](https://arxiv.org/abs/2306.07279), Tiange Luo et al., Arxiv 2024 | [citation](./references/citations.bib#L1388-L1393)
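For quick experimentation with the assets indexed above, a minimal sketch for pulling a few Objaverse objects, assuming the `objaverse` pip package and its `load_uids` / `load_annotations` / `load_objects` helpers (verify against the Objaverse documentation, as the exact API may differ):

```python
# Sketch: download a handful of Objaverse assets for a text-to-3D experiment.
# Assumes `pip install objaverse`; function names follow the objaverse package
# but should be checked against its current documentation.
import objaverse

uids = objaverse.load_uids()                         # list of object identifiers
annotations = objaverse.load_annotations(uids[:5])   # per-object metadata dicts
objects = objaverse.load_objects(uids=uids[:5])      # {uid: local .glb path}

for uid, path in objects.items():
    print(uid, "->", path, "|", annotations[uid].get("name", ""))
```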
## Frameworks :desktop_computer:
- [threestudio: A unified framework for 3D content generation](https://github.com/threestudio-project/threestudio), Yuan-Chen Guo et al., Github 2023
- [Nerfstudio: A Modular Framework for Neural Radiance Field Development](https://docs.nerf.studio/), Matthew Tancik et al., SIGGRAPH 2023
- [Mirage3D: Open-Source Implementations of 3D Diffusion Models Optimized for GLB Output](https://github.com/MirageML/Mirage3D), Mirageml et al., Github 2023
## Tutorial Videos :tv:
- [AI 3D Generation, explained](https://www.youtube.com/watch?v=EoAm1yZR-ao)
## TODO
- [x] Initial list of the SOTA
- [x] Provide citations in BibTeX
- [x] Sub-categorize based on input conditioning
- [ ] Provide links to project pages and codes