Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.


Awesome-Text2X-Resources

An open collection of state-of-the-art (SOTA) and novel Text-to-X methods (where X can be anything), covering papers, code, and datasets.
https://github.com/ALEEEHU/Awesome-Text2X-Resources

Last synced: 2 days ago
JSON representation
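
The JSON representation above means each indexed list is also retrievable through the ecosyste.ms open API. As a rough sketch, the Python below fetches the list index and picks out this repository; the endpoint path and the `url` field name are assumptions based on the service's usual REST conventions, not a documented contract.

```python
import requests

# Minimal sketch: read this list's JSON record from the ecosyste.ms
# "awesome" API. The /api/v1/lists path and the "url" field name are
# assumptions, not a documented contract.
BASE = "https://awesome.ecosyste.ms/api/v1"

resp = requests.get(f"{BASE}/lists", timeout=30)
resp.raise_for_status()

for entry in resp.json():
    # Match this list by its GitHub URL.
    if "Awesome-Text2X-Resources" in str(entry.get("url", "")):
        print(entry)  # full JSON record for this awesome list
```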

  • Text to 4D

    • 💡 4D ArXiv Papers

      • 4-LEGS - [Project page](https://tau-vailab.github.io/4-LEGS/)
      • 4DGen - [Project page](https://vita-group.github.io/4DGen/)
      • DreamGaussian4D - [Project page](https://jiawei-ren.github.io/projects/dreamgaussian4d/)
      • Efficient4D - [Project page](https://fudan-zvg.github.io/Efficient4D/)
      • GaussianFlow - [Project page](https://zerg-overmind.github.io/GaussianFlow.github.io/)
      • Comp4D - [Project page](https://vita-group.github.io/Comp4D/#)
      • Diffusion4D - [Project page](https://vita-group.github.io/Diffusion4D/)
      • PLA4D - [Project page](https://github.com/MiaoQiaowei/PLA4D.github.io)
      • STAR - [Project page](https://star-avatar.github.io/)
      • L4GM - [Project page](https://research.nvidia.com/labs/toronto-ai/l4gm/)
      • 4K4DGen - [Project page](https://4k4dgen.github.io/index.html)
      • Shape of Motion - [Project page](https://shape-of-motion.github.io/)
      • Disco4D - [Project page](https://disco-4d.github.io/)
    • 🎉 4D Accepted Papers

      • Make-A-Video3D - [Project page](https://make-a-video3d.github.io/)
      • Control4D - [Project page](https://control4darxiv.github.io./)
      • STAG4D - [Project page](https://nju-3dv.github.io/projects/STAG4D/)
      • 4Real - [Project page](https://snap-research.github.io/4Real/)
      • Compositional 3D-Aware Video Generation - [Project page](https://www.microsoft.com/en-us/research/project/compositional-3d-aware-video-generation/)
      • Dream-in-4D - [Project page](https://research.nvidia.com/labs/nxp/dream-in-4d/)
      • Align Your Gaussians - [Project page](https://research.nvidia.com/labs/toronto-ai/AlignYourGaussians/)
      • DreamMesh4D - [Project page](https://lizhiqi49.github.io/DreamMesh4D/)
    • Other 4D Additional Info

      • SV4D - [Project page](https://sv4d.github.io/)
  • Text to Human Motion

    • 🎉 Motion Accepted Papers

      • LingoMotions - [Project page](https://lingomotions.com/)
      • OmniMotionGPT - [Project page](https://zshyang.github.io/omgpt-website/)
      • MotionFix - [Project page](https://motionfix.is.tue.mpg.de/)
      • HumanTOMATO - [Project page](https://lhchen.top/HumanTOMATO/)
      • Self-Correcting Self-Consuming - [Project page](https://cs.brown.edu/people/ngillman//sc-sc.html)
      • Iterative Motion Editing - [Project page](https://purvigoel.github.io/iterative-motion-editing/)
      • MotionLCM - [Project page](https://dai-wenxun.github.io/MotionLCM-page/)
      • SMooDi - [Project page](https://neu-vi.github.io/SMooDi/)
      • EMDM - [Project page](https://frank-zy-dou.github.io/projects/EMDM/index.html)
      • Pro-Motion - [Project page](https://moonsliu.github.io/Pro-Motion/)
      • TeSMo - [Project page](https://research.nvidia.com/labs/toronto-ai/tesmo/)
      • SATO - [Project page](https://sato-team.github.io/Stable-Text-to-Motion-Framework/)
      • MDM - [Project page](https://guytevet.github.io/mdm-page/)
      • MLD - [Project page](https://chenxin.tech/mld/)
      • MoMask - [Project page](https://ericguo5513.github.io/momask/)
      • Diffusion Motion Transfer - [Project page](https://diffusion-motion-transfer.github.io/)
      • Afford-Motion - [Project page](https://afford-motion.github.io/)
      • STMC - [Project page](https://mathis.petrovich.fr/stmc/)
      • CondMDI - [Project page](https://setarehc.github.io/CondMDI/)
    • 💡 Motion ArXiv Papers

      • MoRAG - [Project page](https://motion-rag.github.io/)
      • Infinite Motion - [Project page](https://shuochengzhai.github.io/Infinite-motion.github.io/)
      • Story-to-Motion - [Project page](https://story2motion.github.io/)
      • LMM - [Project page](https://mingyuan-zhang.github.io/projects/LMM.html)
      • StableMoFusion - [Project page](https://h-y1heng.github.io/StableMoFusion-page/)
      • MotionCLR - [Project page](https://lhchen.top/MotionCLR/)
      • DART - [Project page](https://zkf1997.github.io/DART/)
    • Survey

    • Datasets

  • Text to Video

    • 💡 Video ArXiv Papers

      • StreamingT2V - [Project page](https://streamingt2v.github.io/)
      • Text-Animator - [Project page](https://laulampaul.github.io/text-animator.html)
      • Still-Moving - [Project page](https://still-moving.github.io/)
      • CustomCrafter - [Project page](https://customcrafter.github.io/)
      • Pyramid Flow - [Project page](https://pyramid-flow.github.io/)
      • GameGen-X - [Project page](https://gamegen-x.github.io/)
    • 🎉 Video Accepted Papers

      • MicroCinema - [Project page](https://wangyanhui666.github.io/MicroCinema.github.io/)
      • Vivid-ZOO - [Project page](https://hi-zhengcheng.github.io/vividzoo/)
      • VideoDirectorGPT - [Project page](https://videodirectorgpt.github.io/)
      • DEMO - [Project page](https://pr-ryan.github.io/DEMO-project/)
    • Other Additional Info

      • VidGen-1M - [Project page](https://sais-fuxi.github.io/projects/vidgen-1m/)
      • Mochi 1 - a state-of-the-art video generation model with high-fidelity motion and strong prompt adherence.
      • VideoGen-Eval - [Project page](https://ailab-cvc.github.io/VideoGen-Eval/), [GitHub Repo](https://github.com/AILab-CVC/VideoGen-Eval)
  • Text to CAD

    • 🎉 CAD Accepted Papers

      • Text2CAD - [Project page](https://sadilkhan.github.io/text2cad-project/)
  • Text to 3D Human

    • 🎉 Human Accepted Papers

      • DreamWaltz - [Project page](https://idea-research.github.io/DreamWaltz/)
      • DreamHuman - [Project page](https://dream-human.github.io/)
      • AvatarPopUp - [Project page](https://www.nikoskolot.com/avatarpopup/)
      • SO-SMPL - [Project page](https://shanemankiw.github.io/SO-SMPL/)
    • 💡 Human ArXiv Papers

      • Make-A-Character (MACH) - [Project page](https://human3daigc.github.io/MACH/)
      • MagicMirror - [Project page](https://syntec-research.github.io/MagicMirror/)
    • Additional Info

  • Update Logs

  • Text to Scene

    • 💡 Scene ArXiv Papers

      • LayerPano3D - [Project page](https://ys-imtech.github.io/projects/LayerPano3D/)
      • Ctrl-Room - [Project page](https://fangchuan.github.io/ctrl-room.github.io/)
      • Text2Immersion - [Project page](https://ken-ouyang.github.io/text2immersion/index.html)
      • ReplaceAnything3D - [Project page](https://replaceanything3d.github.io/)
      • Scene Language - [Project page](https://ai.stanford.edu/~yzzhang/projects/scene-language/)
    • 🎉 Scene Accepted Papers

      • DreamScene360 - [Project page](https://dreamscene360.github.io/)
      • DreamScene - [Project page](https://dreamscene-project.github.io/)
      • Reality and Fantasy - [Project page](https://leo81005.github.io/Reality-and-Fantasy/)
      • ControlRoom3D - [Project page](https://jonasschult.github.io/ControlRoom3D/)
      • Layout Learning - [Project page](https://dave.ml/layoutlearning/)
  • Text to Model

    • 💡 Model ArXiv Papers

  • Text to Music 🎶

    • 💡 Music ArXiv Papers