awesome-robotics-3d

A curated list of 3D Vision papers relating to the Robotics domain in the era of large models (LLMs/VLMs), inspired by awesome-computer-vision. Includes papers, code, and related websites.
https://github.com/zubair-irshad/awesome-robotics-3d

  • ✨ About

  • Pretraining

  • VLM and LLM

    • **RoboPoint**: [[Paper](https://robo-point.github.io/)] [[Demo](https://robo-point.github.io/)]
    • [Paper - vlm.github.io/)]
    • [Paper/PDF
    • **3D-VLA**: [[Paper](https://vis-www.cs.umass.edu/3dvla/)] [[Code](https://github.com/UMass-Foundation-Model/3D-VLA)]
    • [Paper
    • **Open6DOR**: [[Paper](https://pku-epic.github.io/Open6DOR/)] [[Code](https://007e03d34429a2517b.gradio.live/)]
    • **SpatialVLM**: [[Paper](https://spatial-vlm.github.io/)] [[Code](https://spatial-vlm.github.io/#community-implementation)]
    • [Paper
    • [Paper
    • [Paper
    • [Paper - ma.github.io/)]
    • **MOKA**: [[Paper](https://moka-manipulation.github.io/)] [[Code](https://github.com/moka-manipulation/moka)]
    • **Agent3D-Zero**: [[Paper](https://zhangsha1024.github.io/Agent3D-Zero/)] [[Code](https://github.com/zhangsha1024/Agent3d-zero-code)]
    • **MultiPLY**: [[Paper](https://vis-www.cs.umass.edu/multiply/)] [[Code](https://github.com/UMass-Foundation-Model/MultiPLY)]
    • **ThinkGrasp**: [[Paper](https://h-freax.github.io/thinkgrasp_page/)]
    • **Dream2Real**: [[Paper](https://www.robot-learning.uk/dream2real)] [[Code](https://github.com/FlyCole/Dream2Real)]
    • **LEO**: [[Paper](https://embodied-generalist.github.io/)] [[Code](https://github.com/embodied-generalist/embodied-generalist)]
    • [Paper
    • **SpatialBot**: [[Paper](https://github.com/BAAI-DCAI/SpatialBot)]
    • [Paper - robot.github.io/)]
    • **3D-LLM**: [[Paper](https://vis-www.cs.umass.edu/3dllm/)] [[Code](https://github.com/UMass-Foundation-Model/3D-LLM)]
    • [Paper
    • **MoMa-LLM**: [[Paper](https://moma-llm.cs.uni-freiburg.de/)] [[Code](https://github.com/robot-learning-freiburg/MoMa-LLM)]
    • [Paper - anything/)]
    • **Open-Vocabulary Affordance Detection in 3D Point Clouds**: [[Paper](https://github.com/Fsoft-AIC/Open-Vocabulary-Affordance-Detection-in-3D-Point-Clouds)]
    • **Language-Conditioned Affordance-Pose Detection in 3D Point Clouds**: [[Paper](https://github.com/Fsoft-AIC/Language-Conditioned-Affordance-Pose-Detection-in-3D-Point-Clouds)]
    • **Open-Vocabulary Affordance Detection using Knowledge Distillation and Text-Point Correlation**: [[Paper](https://github.com/Fsoft-AIC/Open-Vocabulary-Affordance-Detection-using-Knowledge-Distillation-and-Text-Point-Correlation)]
    • [Paper
    • [Paper
  • Policy Learning

    • **3D Diffuser Actor**: [[Paper](https://3d-diffuser-actor.github.io/)] [[Code](https://github.com/nickgkan/3d_diffuser_actor)]
    • **3D Diffusion Policy**: [[Paper](https://github.com/YanjieZe/3D-Diffusion-Policy)]
    • [Paper
    • **ManiCM**: [[Paper](https://manicm-fast.github.io/)] [[Code](https://github.com/ManiCM-fast/ManiCM)]
    • **HDP**: [[Paper](https://github.com/dyson-ai/hdp)]
    • [Paper
    • [Paper
    • **RVT**: [[Paper](https://robotic-view-transformer.github.io/)] [[Code](https://github.com/nvlabs/rvt)]
    • **PolarNet**: [[Paper](https://github.com/vlc-robot/polarnet/)]
    • [Paper
    • [Paper - 3d.github.io/)] [[Code](https://github.com/doublelei/VIHE.git)]
    • [Paper - robot.github.io/)]
    • [Paper
    • **RVT-2**: [[Paper](https://robotic-view-transformer-2.github.io/)] [[Code](https://github.com/nvlabs/rvt)]
    • [Paper - E)]
    • [Paper - policy/rise)] [[Code](https://github.com/rise-policy/RISE)]
    • **ChainedDiffuser**: [[Paper](https://chained-diffuser.github.io/)] [[Code](https://github.com/zhouxian/chained-diffuser)]
    • [Paper
  • Representation