Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.

Awesome-Text2X-Resources
An open collection of state-of-the-art (SOTA) and novel Text-to-X methods (where X can be anything): papers, code, and datasets.
https://github.com/ALEEEHU/Awesome-Text2X-Resources
Last synced: about 14 hours ago
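The source list pairs each paper with its arXiv, code, and project links in a markdown table. As a minimal sketch of how an indexing service could turn such a row into a structured record (this is not the ecosyste.ms implementation; the example row, its four-column layout, and every name and URL in it are hypothetical):

```python
import re

# Hypothetical table row in the list's assumed format:
# | Title | ArXiv | Code | Project |
ROW = ("| Example4D: A Hypothetical Paper | "
       "[Link](https://arxiv.org/abs/0000.00000) | "
       "[Link](https://github.com/example/example4d) | "
       "[Link](https://example.github.io/example4d/) |")

def parse_row(row: str) -> dict:
    """Split a markdown table row and pull the URL out of each [Link](...) cell."""
    # Drop the outer pipes, then split into the four cells.
    cells = [c.strip() for c in row.strip("|").split("|")]

    def url(cell: str):
        # Extract the URL from a markdown link; return None for empty cells ("--").
        m = re.search(r"\((https?://[^)]+)\)", cell)
        return m.group(1) if m else None

    return {
        "title": cells[0],
        "arxiv": url(cells[1]),
        "code": url(cells[2]),
        "project": url(cells[3]),
    }

record = parse_row(ROW)
```

Splitting on `|` before matching links keeps each URL attached to its column, so empty cells (`--`) simply yield `None` instead of shifting later columns.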
Text to 4D

💡 4D ArXiv Papers
- [Link](https://tau-vailab.github.io/4-LEGS/)
- [Link](https://vita-group.github.io/4DGen/)
- [Link](https://jiawei-ren.github.io/projects/dreamgaussian4d/)
- [Link](https://fudan-zvg.github.io/Efficient4D/)
- [Link](https://zerg-overmind.github.io/GaussianFlow.github.io/)
- [Link](https://vita-group.github.io/Comp4D/#)
- [Link](https://github.com/MiaoQiaowei/PLA4D.github.io)
- [Link](https://star-avatar.github.io/)
- [Link](https://4k4dgen.github.io/index.html)
- [Link](https://shape-of-motion.github.io/)
- [Link](https://disco-4d.github.io/)
- [Link](https://snap-research.github.io/4Real-Video/)
- [Link](https://cat-4d.github.io/)
🎉 4D Accepted Papers
- [Link](https://make-a-video3d.github.io/)
- [Link](https://control4darxiv.github.io/)
- [Link](https://nju-3dv.github.io/projects/STAG4D/)
- [Link](https://vita-group.github.io/Diffusion4D/)
- [Link](https://snap-research.github.io/4Real/)
- [Link](https://www.microsoft.com/en-us/research/project/compositional-3d-aware-video-generation/)
- [Link](https://research.nvidia.com/labs/nxp/dream-in-4d/)
- [Link](https://research.nvidia.com/labs/toronto-ai/AlignYourGaussians/)
- [Link](https://lizhiqi49.github.io/DreamMesh4D/)
- [Link](https://research.nvidia.com/labs/toronto-ai/l4gm/)
Other 4D Additional Info
- [Link](https://sv4d.github.io/)
Text to Human Motion

🎉 Motion Accepted Papers
- [Link](https://lingomotions.com/)
- [Link](https://zshyang.github.io/omgpt-website/)
- [Link](https://motionfix.is.tue.mpg.de/)
- [Link](https://lhchen.top/HumanTOMATO/)
- [Link](https://cs.brown.edu/people/ngillman//sc-sc.html)
- [Link](https://purvigoel.github.io/iterative-motion-editing/)
- [Link](https://dai-wenxun.github.io/MotionLCM-page/)
- [Link](https://neu-vi.github.io/SMooDi/)
- [Link](https://frank-zy-dou.github.io/projects/EMDM/index.html)
- [Link](https://moonsliu.github.io/Pro-Motion/)
- [Link](https://research.nvidia.com/labs/toronto-ai/tesmo/)
- [Link](https://sato-team.github.io/Stable-Text-to-Motion-Framework/)
- [Link](https://guytevet.github.io/mdm-page/)
- [Link](https://chenxin.tech/mld/)
- [Link](https://ericguo5513.github.io/momask/)
- [Link](https://diffusion-motion-transfer.github.io/)
- [Link](https://afford-motion.github.io/)
- [Link](https://mathis.petrovich.fr/stmc/)
- [Link](https://setarehc.github.io/CondMDI/)
💡 Motion ArXiv Papers
- [Link](https://motion-rag.github.io/)
- [Link](https://andypinxinliu.github.io/KinMo/)
- [Link](https://shuochengzhai.github.io/Infinite-motion.github.io/)
- [Link](https://story2motion.github.io/)
- [Link](https://mingyuan-zhang.github.io/projects/LMM.html)
- [Link](https://h-y1heng.github.io/StableMoFusion-page/)
- [Link](https://lhchen.top/MotionCLR/)
- [Link](https://steve-zeyu-zhang.github.io/KMM/)
- [Link](https://zkf1997.github.io/DART/)
Survey

Datasets
Text to Video

💡 Video ArXiv Papers
- [Link](https://motion-prompting.github.io/)
- [Link](https://streamingt2v.github.io/)
- [Link](https://laulampaul.github.io/text-animator.html)
- [Link](https://still-moving.github.io/)
- [Link](https://customcrafter.github.io/)
- [Link](https://mvideo-v1.github.io/)
- [Link](https://yu-shaonian.github.io/Animate_Anything/)
- [Link](https://pyramid-flow.github.io/)
- [Link](https://gamegen-x.github.io/)
- [Link](https://github.com/hmrishavbandy/FlipSketch)
🎉 Video Accepted Papers
- [Link](https://wangyanhui666.github.io/MicroCinema.github.io/)
- [Link](https://hi-zhengcheng.github.io/vividzoo/)
- [Link](https://videodirectorgpt.github.io/)
- [Link](https://pr-ryan.github.io/DEMO-project/)
Other Additional Info

Related Resources

Text to 'other tasks'
Survey and Awesome Repos
- [Page](https://ingra14m.github.io/Deformable-Gaussians/)
- [Page](https://yihua7.github.io/SC-GS-web/)
- [Page](https://fudan-zvg.github.io/4d-gaussian-splatting/)
- Gaussian Splatting: 3D Reconstruction and Novel View Synthesis, a Review
- Recent Advances in 3D Gaussian Splatting
- 3D Gaussian as a New Vision Era: A Survey
- A Survey on 3D Gaussian Splatting
- Awesome 3D Gaussian Splatting Resources
- 3D Gaussian Splatting Papers
- 3DGS and Beyond Docs
- Advances in 3D Generation: A Survey
- A Comprehensive Survey on 3D Content Generation
- A Survey On Text-to-3D Contents Generation In The Wild
- Awesome 3D AIGC
- Awesome Text 2 3D
- GPT-4V(ision) is a Human-Aligned Evaluator for Text-to-3D Generation
- Awesome LLM 3D
- A Survey on 3D Human Avatar Modeling -- From Reconstruction to Generation
- Awesome-Avatars
- Awesome 4D Generation
- Progress and Prospects in 3D Generative AI: A Technical Overview including 3D Human
- Awesome Digital Human
Text to 3D Human

💡 Human ArXiv Papers

🎉 Human Accepted Papers
- [Link](https://idea-research.github.io/DreamWaltz/)
- [Link](https://dream-human.github.io/)
- [Link](https://www.nikoskolot.com/avatarpopup/)
- [Link](https://shanemankiw.github.io/SO-SMPL/)
Additional Info

Update Logs
Text to Scene

💡 Scene ArXiv Papers
- [Link](https://ys-imtech.github.io/projects/LayerPano3D/)
- [Link](https://fangchuan.github.io/ctrl-room.github.io/)
- [Link](https://ken-ouyang.github.io/text2immersion/index.html)
- [Link](https://ai.stanford.edu/~yzzhang/projects/scene-language/)
🎉 Scene Accepted Papers
- [Link](https://dreamscene360.github.io/)
- [Link](https://dreamscene-project.github.io/)
- [Link](https://leo81005.github.io/Reality-and-Fantasy/)
- [Link](https://replaceanything3d.github.io/)
- [Link](https://jonasschult.github.io/ControlRoom3D/)
- [Link](https://dave.ml/layoutlearning/)
Programming Languages

Categories

Sub Categories
- 🎉 Motion Accepted Papers (28)
- Survey and Awesome Repos (27)
- 💡 4D ArXiv Papers (22)
- 🎉 4D Accepted Papers (18)
- 💡 Video ArXiv Papers (17)
- 💡 Motion ArXiv Papers (15)
- 🎉 Scene Accepted Papers (15)
- 🎉 Human Accepted Papers (14)
- 💡 Scene ArXiv Papers (13)
- 💡 Human ArXiv Papers (7)
- 🎉 Video Accepted Papers (7)
- Text to 'other tasks' (4)
- Other Additional Info (3)
- Additional Info (3)
- Datasets (3)
- Other 4D Additional Info (1)
- Survey (1)
Keywords
- neural-rendering (3)
- nerf (3)
- 3d-gaussian-splatting (3)
- aigc (2)
- smpl (2)
- gaussian-splatting (2)
- 3dgs (2)
- text-to-3d (2)
- avatar (2)
- 3d-aigc (1)
- novel-view-synthesis (1)
- vae (1)
- prior (1)
- pose-estimation (1)
- pose (1)
- motion (1)
- human (1)
- smpl-model (1)
- 3d-reconstruction (1)
- 3d-keypoints (1)
- virtual-try-on (1)
- digital-human (1)
- clothed-people-digitalization (1)
- tt3d (1)
- t23d (1)
- stable-diffusion (1)
- sdf (1)
- motion-generation (1)
- image-to-3d (1)
- humannerf (1)
- diffusion (1)
- 3dreconstruction (1)
- 3d-learning-from-2d (1)
- generation (1)
- computer-vision (1)
- computer-graphics (1)
- 3d-generation (1)
- 2d-keypoints (1)