# SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation
#### [Jiehong Lin](https://jiehonglin.github.io/), [Lihua Liu](https://github.com/foollh), [Dekun Lu](https://github.com/WuTanKun), [Kui Jia](http://kuijia.site/)
#### CVPR 2024
#### [[Paper]](https://arxiv.org/abs/2311.15707)
## News
- [2024/03/07] We publish an updated version of our paper on [ArXiv](https://arxiv.org/abs/2311.15707).
- [2024/02/29] Our paper is accepted by CVPR 2024!

## Update Log
- [2024/03/05] We update the demo to support [FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM), you can do this by specifying `SEGMENTOR_MODEL=fastsam` in demo.sh.
- [2024/03/03] We upload a [docker image](https://hub.docker.com/r/lihualiu/sam-6d/tags) for running custom data.
- [2024/03/01] We update the released [model](https://drive.google.com/file/d/1joW9IvwsaRJYxoUmGo68dBVg-HcFNyI7/view?usp=sharing) of PEM. The new model is trained with a larger batch size of 32, compared to 12 for the old one.

## Overview
In this work, we employ the Segment Anything Model as an advanced starting point for **zero-shot 6D object pose estimation** from RGB-D images, and propose a novel framework, named **SAM-6D**, which tackles the task with two dedicated sub-networks:
- [x] [Instance Segmentation Model](https://github.com/JiehongLin/SAM-6D/tree/main/SAM-6D/Instance_Segmentation_Model)
- [x] [Pose Estimation Model](https://github.com/JiehongLin/SAM-6D/tree/main/SAM-6D/Pose_Estimation_Model)
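At a high level, the two sub-networks run in sequence: the Instance Segmentation Model (ISM) proposes and scores object masks against the target CAD model, and the Pose Estimation Model (PEM) predicts a 6D pose for each accepted proposal. The sketch below illustrates only this data flow; `segment_instances` and `estimate_pose` are hypothetical placeholders, not the repository's actual API.

```python
import numpy as np

def segment_instances(rgb, depth, cad_model):
    # Hypothetical stand-in for the Instance Segmentation Model (ISM):
    # it would propose object masks and score them against the CAD model.
    # Here we simply return one dummy full-image mask with a score.
    mask = np.ones(rgb.shape[:2], dtype=bool)
    return [{"mask": mask, "score": 1.0}]

def estimate_pose(rgb, depth, cad_model, proposal):
    # Hypothetical stand-in for the Pose Estimation Model (PEM):
    # it would predict a 6D pose (3x3 rotation R, translation t) per proposal.
    return np.eye(3), np.zeros(3)

# Dummy inputs: a tiny 4x4 RGB-D frame and a CAD placeholder.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
depth = np.zeros((4, 4), dtype=np.float32)
cad = None

proposals = segment_instances(rgb, depth, cad)
poses = [estimate_pose(rgb, depth, cad, p) for p in proposals]
print(len(poses))  # one pose per accepted proposal
```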
## Getting Started
### 1. Preparation
Please clone the repository locally:
```
git clone https://github.com/JiehongLin/SAM-6D.git
```
Install the environment and download the model checkpoints:
```
cd SAM-6D
sh prepare.sh
```
We also provide a [docker image](https://hub.docker.com/r/lihualiu/sam-6d/tags) for convenience.

### 2. Evaluation on custom data
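The intrinsics file passed via `CAMERA_PATH` is typically a BOP-style JSON holding a flattened 3x3 camera matrix and a depth scale. The exact keys and the sample values below (LineMOD-like intrinsics) are assumptions for illustration; compare with the shipped `Data/Example/camera.json`. A minimal sketch of writing such a file:

```python
import json
import os
import tempfile

# Assumed BOP-style layout: "cam_K" is the row-major flattened 3x3
# intrinsic matrix [fx, 0, cx, 0, fy, cy, 0, 0, 1]; "depth_scale"
# converts stored depth values to millimeters.
camera = {
    "cam_K": [572.4114, 0.0, 325.2611,
              0.0, 573.5704, 242.0489,
              0.0, 0.0, 1.0],
    "depth_scale": 1.0,
}

path = os.path.join(tempfile.mkdtemp(), "camera.json")
with open(path, "w") as f:
    json.dump(camera, f, indent=2)

with open(path) as f:
    loaded = json.load(f)
print(loaded["cam_K"][0])  # fx
```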
```
# set the paths
export CAD_PATH=Data/Example/obj_000005.ply    # path to a given CAD model (mm)
export RGB_PATH=Data/Example/rgb.png           # path to a given RGB image
export DEPTH_PATH=Data/Example/depth.png       # path to a given depth map (mm)
export CAMERA_PATH=Data/Example/camera.json    # path to the given camera intrinsics
export OUTPUT_DIR=Data/Example/outputs         # path to a pre-defined directory for saving results

# run inference
cd SAM-6D
sh demo.sh
```

## Citation
If you find our work useful in your research, please consider citing:

```
@article{lin2023sam,
  title={SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation},
  author={Lin, Jiehong and Liu, Lihua and Lu, Dekun and Jia, Kui},
  journal={arXiv preprint arXiv:2311.15707},
  year={2023}
}
```

## Contact
If you have any questions, please feel free to contact the authors.
Jiehong Lin: [[email protected]](mailto:[email protected])
Lihua Liu: [[email protected]](mailto:[email protected])
Dekun Lu: [[email protected]](mailto:[email protected])
Kui Jia: [[email protected]](mailto:[email protected])