Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-6d-object
Awesome work on 6-DoF object pose estimation
https://github.com/ZhongqunZHANG/awesome-6d-object
Last synced: 2 days ago
Conference Papers
2021 ICRA
2021 IROS
2022 CVPR
- 2022 CVPR - nguyen/template-pose)
- 2022 CVPR - vl/Coupled-Iterative-Refinement)
- 2022 CVPR Best Student Paper - cprg/EPro-PnP)
2022 ECCV
- 2022 ECCV - pal/Gen6D) [\[Project\]](https://github.com/liuyuan-pal/Gen6D) [\[Dataset\]](https://connecthkuhk-my.sharepoint.com/:f:/g/personal/yuanly_connect_hku_hk/EkWESLayIVdEov4YlVrRShQBkOVTJwgK0bjF7chFg2GrBg?e=Y8UpXu)
- 2022 ECCV - code) [\[Project\]](https://linhuang17.github.io/NCF/)
- 2022 ECCV - z.github.io/projects/Unseen_Object_Pose.html) [\[Code\]](https://github.com/sailor-z/Unseen_Object_Pose)
- 2022 ECCV - dpdn)
- 2022 ECCV - DA-6D-Pose-Group/CATRE)
- 2022 ECCV - 6D) [\[Project\]](https://fylwen.github.io/disp6d.html)
- 2022 ECCV - 6D](https://github.com/Gorilla-Lab-SCUT/DCL-Net))
2021 Others
- 2021 WACV
- 2021 ICLR Oral
- 2021 BMVC - bmvc/) [\[Code\]](https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2Fxiaoxiaoxh%2FOMAD&sa=D&sntz=1&usg=AFQjCNHRMrL0ldm-wi5n-VBfPVYkmuWrhg)
- 2021 GCPR
- 2021 AAAI
2020 ECCV
2020 CVPR
- \[PDF\ - epfl/single-stage-pose)
2020 Others
- 2020 IROS - 6d-pose-tracking)
- 2020 RA-L/IROS - AAE) [\[Project\]](https://haopan.github.io/6dpose.html)
- 2020 3DV
- 2020 WACV
- 2020 BMVC
- 2020 ACCV
- 2020 ICRA
- 2020 BMVC - supervised-3d-pose-generator)
- 2020 BMVC - conference.com/programme/accepted-papers/Code%20will%20be%20made%20publicly%20available%20later)
- 2020 BMVC - 3DHumanShapePose)
2019 CVPR
2019 ICCV
2018 CVPR
2018 ECCV
2017 CVPR
2017 ICCV
2017 Others
2016 CVPR
2015 CVPR
2015 ICCV
2014 CVPR
2014 Others & Before
2023 CVPR
2023 ICRA
2023 IROS
2022 ICRA
- 2022 ICRA - irshad.github.io/projects/CenterSnap.html) [\[Code\]](https://github.com/zubair-irshad/CenterSnap) [\[Video\]](https://www.youtube.com/watch?v=Bg5vi6DSMdM)
- 2022 ICRA - supervision-public) [\[Project\]](https://yenchenlin.me/nerf-supervision/)
2022 IROS
2022 Others
2021 ICCV
2021 CVPR
- 2021 CVPR Oral - Net) [\[Supp\]](https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_FS-Net_Fast_Shape-Based_Network_for_Category-Level_6D_Object_Pose_Estimation_CVPR_2021_paper.pdf)
- 2021 CVPR - ycb-toolkit) [\[Project\]](https://dex-ycb.github.io/)
- 2021 CVPR Oral
- 2021 CVPR - DA-6D-Pose-Group/GDR-Net) [\[Supp\]](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_GDR-Net_Geometry-Guided_Direct_CVPR_2021_supplemental.pdf)
- 2021 CVPR - PoseNet_Learning_6DoF_CVPR_2021_supplemental.pdf)
- 2021 CVPR Oral - Body_Segmentation_CVPR_2021_supplemental.pdf)
- 2021 CVPR - Graph-Driven_Learning_Framework_CVPR_2021_supplemental.pdf)
2024 CVPR
- 2024 CVPR - nguyen/gigaPose)
- 2024 CVPR - 6D)
- 2024 CVPR - Pose)
- 2024 CVPR - feats-pose)
2023 ICCV
- 2023 ICCV - 1218/PseudoFlow)
Journal Papers
Other Journals
- 2022 Transactions on MultiMedia
- 2022 TIP
- 2022 RAL
- 2021 IEEE Transactions on Robotics
- 2020 TIP
TPAMI / IJCV
- 2021 TPAMI - DA-6D-Pose-Group/self6dpp)
- 2021 TPAMI
- 2020 TPAMI
- 2020 IJCV
- 2019 TPAMI
- 2018 TPAMI
- 2018 IJCV
- 2017 TPAMI
- 2017 IJCV
- 2016 TPAMI
- 2016 IJCV
Workshops
*Workshops on Recovering 6D Object Pose:*

• [R6D 2019](http://cmp.felk.cvut.cz/sixd/workshop_2019/), In conjunction with ICCV 2019

• [R6D 2018](http://cmp.felk.cvut.cz/sixd/workshop_2018/), In conjunction with ECCV 2018
- Talk 1
- Talk 2 - based Reasoning and Data Efficiency, [\[Slide\]](http://cmp.felk.cvut.cz/sixd/workshop_2018/data/bekris_r6d_eccv18_talk.pptx)
- Talk 3
- Talk 4
- 2018 ECCVW
- Summary

*Workshops on 3D Vision and Robotics:*

• [3DReps 2020](https://geometry.stanford.edu/3DReps/index.html), In conjunction with ECCV 2020
Thesis
-
2014 Others & Before
- 2020
- Bo Yang
- Sahin Caner
- 2018 - consistency-shape-gen)
- 2016
- Yu Xiang
- 2014
- Alexandre Boulch
Datasets
Benchmark-6D-Object-Pose-Estimation
Researchers
U.S./Canada
- Dieter Fox - lab.cs.washington.edu/), University of Washington.
- Junsong Yuan
- Fei-Fei Li
- Kostas Bekris
- Qixing Huang
- Leonidas Guibas
- Tomáš Hodaň
- Shuran Song
Europe
- Niloy J. Mitra
- Tae-Kyun (T-K) Kim
- Ales Leonardis - science/artificial-intelligence/robotics-and-computer-vision/index.aspx), University of Birmingham.
- Pascal Fua
- Andrew Davison - robotics-lab), Imperial College London.
- Andrew Zisserman
- Jiri Matas
- Otmar Hilliges
- Renaud MARLET - lab.enpc.fr/), École des Ponts ParisTech (ENPC).
- Matthias Nießner
- Eric Brachmann - heidelberg.de/vislearn/), Heidelberg University.
Asia
Australia
- Hongdong Li
- Yi Yang - homepage/Home), [ReLER Lab](http://reler.net/), University of Technology Sydney (UTS).
arXiv Papers
- \[arXiv:2303.06753\ - Wise Network Quantization for 6D Object Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2303.06753)
- \[arXiv:2303.11516\ - Covariance Loss for End-to-End Learning of 6D Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2303.11516)
- \[arXiv:2303.13479\ - free Category-level Pose Estimation with Implicit Space Transformation. [\[PDF\]](https://arxiv.org/pdf/2303.13479)
- \[arXiv:2210.03437\
- \[arXiv:2210.13540\
- \[arXiv:2210.11973\ - Time Constrained 6D Object-Pose Tracking of An In-Hand Suture Needle for Minimally Invasive Robotic Surgery. [\[PDF\]](https://arxiv.org/pdf/2210.11973)
- \[arXiv:2210.11545\ - View Optimization. [\[PDF\]](https://arxiv.org/pdf/2210.11545)
- \[arXiv:2210.10959\
- \[arXiv:2210.07199\ - Supervised Geometric Correspondence for Category-Level 6D Object Pose Estimation in the Wild. [\[PDF\]](https://arxiv.org/pdf/2210.07199)
- \[arXiv:2210.05138\ - Adaptive and Semantic-Aware Multi-Object Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2210.05138)
- \[arXiv:2205.02536\ - based Multi-Object 6D Pose Estimation using Keypoint Regression. [\[PDF\]](https://arxiv.org/pdf/2205.02536)
- \[arXiv:2204.09429\ - Time High-Resolution 6D Pose Estimation Network Using Knowledge Distillation. [\[PDF\]](https://arxiv.org/pdf/2204.09429)
- \[arXiv:2204.07049\ - to-Real 6D Object Pose Estimation via Iterative Self-training for Robotic Bin-picking. [\[PDF\]](https://arxiv.org/pdf/2204.07049)
- \[arXiv:2204.01586\
- \[arXiv:2203.15309\ - based Point Cloud Registration for 6D Object Pose Estimation in the Real World. [\[PDF\]](https://arxiv.org/pdf/2203.15309)
- \[arXiv:2203.04802\ - Pose: A First-Reconstruct-Then-Regress Approach for Weakly-supervised 6D Object Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2203.04802)
- \[arXiv:2203.03498\
- \[arXiv:2203.02069\ - Level Style Transfer for 6D Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2203.02069)
- \[arXiv:2203.01051\ - pose estimation from 2D shape for robotic grasping of objects. [\[PDF\]](https://arxiv.org/pdf/2203.01051)
- \[arXiv:2203.00945\
- \[arXiv:2203.00302\
- \[arXiv:2203.00283\ - Centric 3D Perception. [\[PDF\]](https://arxiv.org/pdf/2203.00283)
- \[arXiv:2202.12555\
- \[arXiv:2202.10346\ - D-based Categorical Pose and Shape Estimation. [\[PDF\]](https://arxiv.org/pdf/2202.10346)
- \[arXiv:2202.03574\
- \[arXiv:2201.13065\
- \[arXiv:2201.00059\ - level Object Pose and Shape Estimation. [\[PDF\]](https://arxiv.org/pdf/2201.00059)
- \[arXiv:2112.15075\
- \[arXiv:2112.03810\
- \[arXiv:2111.10677\
- \[arXiv:2111.07383\ - Equivariant Features for Estimation and Tracking of Object Poses in 3D Space. [\[PDF\]](https://arxiv.org/pdf/2111.07383)
- \[arXiv:2111.06276\
- \[arXiv:2111.03821\ - Time Optical Flow-Aided 6D Object Pose and Velocity Tracking. [\[PDF\]](https://arxiv.org/pdf/2111.03821)
- \[arXiv:2111.00190\ - Supervised Category-Level Object Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2111.00190)
- \[arXiv:2110.04792\ - ViT: Category-Level 6D Object Pose Estimation via Transformer-based Instance Representation Learning. [\[PDF\]](https://arxiv.org/pdf/2110.04792)
- \[arXiv:2110.00992\
- \[arXiv:2107.02057\
- \[arXiv:2106.14193\ - Level 6D Object Pose and Size Estimation from Depth Observation. [\[PDF\]](https://arxiv.org/pdf/2106.14193)
- \[arXiv:2106.08045\ - based 6D pose estimation for highly cluttered Bin Picking. [\[PDF\]](https://arxiv.org/pdf/2106.08045)
- \[arXiv:2106.06684\
- \[arXiv:2105.04112\ - View Dataset for Reflective Objects in Robotic Bin-Picking. [\[PDF\]](https://arxiv.org/pdf/2105.04112)
- \[arXiv:2103.09696\
- \[arXiv:2101.08895\
- \[arXiv:2012.11938\ - to-Keypoint Voting Network for 6D Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2012.11938)
- \[arXiv:2012.11260\ - Consistent Self-Training for 3D Hand-Object Joint Reconstruction. [\[PDF\]](https://arxiv.org/pdf/2012.11260)
- \[arXiv:2012.01788\ - Driven Active Mapping for More Accurate Object Pose Estimation and Robotic Grasping. [\[PDF\]](https://arxiv.org/pdf/2012.01788) [\[Project Page\]](https://yanmin-wu.github.io/project/active-mapping/)
- \[arXiv:2012.00088\ - Free Method for Articulated Object Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2012.00088)
- \[arXiv:2011.13669\ - time object recognition and pose estimation in point clouds. [\[PDF\]](https://arxiv.org/pdf/2011.13669)
- \[arXiv:2011.11078\ - to-End Differentiable 6DoF Object Pose Estimation with Local and Global Constraints. [\[PDF\]](https://arxiv.org/pdf/2011.11078)
- \[arXiv:2011.08771\ - D Dataset for 6D Pose Estimation Task. [\[PDF\]](https://arxiv.org/pdf/2011.08771)
- \[arXiv:2011.04307\ - to-end 6D multi object pose estimation approach. [\[PDF\]](https://arxiv.org/pdf/2011.04307) [\[Code\]](https://github.com/ybkscht/EfficientPose)
- \[arXiv:2011.00372\
- \[arXiv:2010.16117\
- \[arXiv:2010.09355\
- \[arXiv:2010.00829\ - range 3D object pose estimation. [\[PDF\]](https://arxiv.org/pdf/2010.00829)
- \[arXiv:2009.12678\
- \[arXiv:2009.06887\ - level 3D Hough Voting Network for 6D Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2009.06887)
- \[arXiv:2008.08391\ - based 6-DoF Pose Estimation without Real Pose Annotations. [\[PDF\]](https://arxiv.org/pdf/2008.08391)
- \[arXiv:2008.05242\ - wise Attention Module for 6D Object Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2008.05242)
- \[arXiv:2002.03923\
- \[arXiv:1912.07333\
- \[arXiv:1910.10653\
- \[arXiv:1809.06893\
- \[arXiv:1812.01387\
- \[arXiv:2203.04424\ - Supported Self-Training for 6D Object Pose Estimation. [\[PDF\]](https://arxiv.org/pdf/2203.04424)
- \[arXiv:2111.09378\
Challenges
Categories / Sub Categories

| Sub category | Entries |
| --- | --- |
| TPAMI / IJCV | 50 |
| 2020 CVPR | 47 |
| 2021 ICRA | 46 |
| 2022 CVPR | 34 |
| 2014 Others & Before | 31 |
| 2020 Others | 31 |
| 2020 ECCV | 29 |
| [3DReps 2020](https://geometry.stanford.edu/3DReps/index.html), In conjunction with ECCV 2020 | 29 |
| 2019 CVPR | 26 |
| 2019 ICCV | 26 |
| 2024 CVPR | 25 |
| 2021 IROS | 25 |
| Other Journals | 23 |
| 2017 ICCV | 18 |
| 2022 ECCV | 17 |
| 2018 CVPR | 16 |
| 2018 ECCV | 15 |
| 2014 CVPR | 15 |
| 2021 CVPR | 14 |
| 2015 CVPR | 14 |
| [R6D 2018](http://cmp.felk.cvut.cz/sixd/workshop_2018/), In conjunction with ECCV 2018 | 12 |
| Europe | 11 |
| 2017 CVPR | 11 |
| 2023 CVPR | 10 |
| Workshops on Recovering 6D Object Pose | 9 |
| U.S./Canada | 8 |
| Asia | 8 |
| 2021 Others | 8 |
| 2021 ICCV | 8 |
| [R6D 2019](http://cmp.felk.cvut.cz/sixd/workshop_2019/), In conjunction with ICCV 2019 | 8 |
| 2016 CVPR | 7 |
| 2022 Others | 7 |
| 2022 ICRA | 6 |
| 2015 ICCV | 6 |
| 2023 IROS | 4 |
| Workshops on 3D Vision and Robotics | 3 |
| 2023 ICRA | 3 |
| Australia | 2 |
| 2017 Others | 2 |
| 2023 ICCV | 1 |
| 2022 IROS | 1 |
| Benchmark-6D-Object-Pose-Estimation | 1 |