Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Vision-Intelligence-and-Robots-Group/Awesome-Segment-Anything
A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies.
List: Awesome-Segment-Anything
awesome sam segment-anything segment-anything-model
- Host: GitHub
- URL: https://github.com/Vision-Intelligence-and-Robots-Group/Awesome-Segment-Anything
- Owner: Vision-Intelligence-and-Robots-Group
- License: MIT
- Created: 2023-04-14T01:46:28.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-11-22T02:30:09.000Z (30 days ago)
- Last Synced: 2024-11-22T13:33:15.099Z (29 days ago)
- Topics: awesome, sam, segment-anything, segment-anything-model
- Homepage:
- Size: 356 KB
- Stars: 328
- Watchers: 9
- Forks: 30
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- ultimate-awesome - Awesome-Segment-Anything - A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies. (Other Lists / Monkey C Lists)
- Awesome-Segment-Anything - Vision-Intelligence-and-Robots-Group/Awesome-Segment-Anything
README
# Awesome-Segment-Anything
Tribute to Meta AI's Segment Anything Model (SAM)
A collection of projects, papers, and source code for SAM and related studies.
**Keywords:** Segment Anything Model, Segment Anything, SAM, awesome
> **CATALOGUE**
>
>[Origin of the Study](#quick-start) :heartpulse: [Project & Toolbox](#tool) :heartpulse: [Lecture & Notes](#workshop) :heartpulse: [Papers](#papers-by-categories)

## 1 Origin of the Study
**Fundamental Models**
+ **[SAM]** Segment Anything (2023) [[paper]](https://arxiv.org/pdf/2304.02643.pdf) [[Project]](https://github.com/facebookresearch/segment-anything)![GitHub stars](https://img.shields.io/github/stars/facebookresearch/segment-anything.svg?logo=github&label=Stars) (usage sketch below)
+ **[SEEM]** Segment Everything Everywhere All at Once (2023)[[paper]](https://arxiv.org/pdf/2304.06718.pdf)[[Project]](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)![GitHub stars](https://img.shields.io/github/stars/UX-Decoder/Segment-Everything-Everywhere-All-At-Once.svg?logo=github&label=Stars)
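
As a quick taste of the SAM interface, here is a minimal sketch of point-prompted segmentation with the official `segment-anything` package; the image path and click coordinates are placeholders, and the ViT-H checkpoint must be downloaded separately from the project page.

```python
# Minimal point-prompt sketch with the official segment-anything package.
# Assumes the ViT-H checkpoint (sam_vit_h_4b8939.pth) has been downloaded
# from facebookresearch/segment-anything; the image path and the prompt
# coordinates below are placeholders.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # the image embedding is computed once; prompts are cheap

masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),  # one (x, y) click
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    multimask_output=True,                # 3 candidate masks at different granularities
)
best_mask = masks[scores.argmax()]        # keep the highest-scoring candidate
```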
## 2 Project & Toolbox
+ **[Awesome Segment-Anything Extensions]** Awesome Segment-Anything Extensions [paper][[code]](https://github.com/JerryX1110/awesome-segment-anything-extensions)![GitHub stars](https://img.shields.io/github/stars/JerryX1110/awesome-segment-anything-extensions.svg?logo=github&label=Stars)
+ **[Awesome Anything]** A curated list of general AI methods for Anything [paper][[code]](https://github.com/VainF/Awesome-Anything)![GitHub stars](https://img.shields.io/github/stars/VainF/Awesome-Anything.svg?logo=github&label=Stars)
+ **[Awesome Segment Anything]** Awesome Segment Anything [paper][[code]](https://github.com/Hedlen/awesome-segment-anything) ![GitHub Repo stars](https://img.shields.io/github/stars/Hedlen/awesome-segment-anything?logo=github&style=flat-square)
|Preview|Project|
|------|------|
|![preview](https://github.com/IDEA-Research/Grounded-Segment-Anything/raw/main/assets/acoustics/gsam_whisper_inpainting_demo.png)|**[Grounded-Segment-Anything]** Grounded-Segment-Anything (Grounding DINO + SAM) [[paper]](https://arxiv.org/abs/2303.05499)[[code]](https://github.com/IDEA-Research/Grounded-Segment-Anything)![GitHub stars](https://img.shields.io/github/stars/IDEA-Research/Grounded-Segment-Anything.svg?logo=github&label=Stars)|
|![preview](https://github.com/Cheems-Seminar/grounded-segment-any-parts/raw/main/assets/dog2zebra.jpg)|**[Grounded Segment Anything: From Objects to Parts]** Supports text-prompt input (GLIP/VLPart + SAM)[paper][[code]](https://github.com/Cheems-Seminar/grounded-segment-any-parts)![GitHub Repo stars](https://img.shields.io/github/stars/Cheems-Seminar/grounded-segment-any-parts?logo=github&style=flat-square)|
|![preview](https://github.com/fudan-zvg/Semantic-Segment-Anything/raw/main/figures/SSA_motivation.png)|**[Semantic-Segment-Anything]** A pipeline on top of SAM to predict semantic category for each mask [paper][[code]](https://github.com/fudan-zvg/Semantic-Segment-Anything)![GitHub stars](https://img.shields.io/github/stars/fudan-zvg/Semantic-Segment-Anything.svg?logo=github&label=Stars)|
|![preview](https://github.com/caoyunkang/GroundedSAM-zero-shot-anomaly-detection/raw/master/assets/demo_results.png)|**[GroundedSAM-zero-shot-anomaly-detection]** Segment any anomaly[paper][[code]](https://github.com/caoyunkang/GroundedSAM-zero-shot-anomaly-detection)![GitHub Repo stars](https://img.shields.io/github/stars/caoyunkang/GroundedSAM-zero-shot-anomaly-detection?logo=github&style=flat-square)|
|![preview](https://github.com/sail-sg/EditAnything/raw/main/images/sample_cat_eye.jpg)|**[EditAnything]** EditAnything [paper][[code]](https://github.com/sail-sg/EditAnything)![GitHub stars](https://img.shields.io/github/stars/sail-sg/EditAnything.svg?logo=github&label=Stars)|
|![preview](https://user-images.githubusercontent.com/14961526/230437084-79ef6e02-a254-421e-bd4c-32e87415c623.png)|**[sd-webui-segment-anything]** Extension helping Stable Diffusion WebUI with inpainting [paper][[code]](https://github.com/continue-revolution/sd-webui-segment-anything)![GitHub stars](https://img.shields.io/github/stars/continue-revolution/sd-webui-segment-anything.svg?logo=github&label=Stars)|
|![preview](https://github.com/anuragxel/salt/raw/main/assets/how-it-works.gif)|**[SALT]** Segment Anything Labelling Tool [paper][[code]](https://github.com/anuragxel/salt)![GitHub stars](https://img.shields.io/github/stars/anuragxel/salt.svg?logo=github&label=Stars)|
|![preview](https://github.com/PengtaoJiang/SAM-CLIP/raw/main/imgs/pipeline.png)|**[SAM-CLIP]** Segment Anything CLIP [paper][[code]](https://github.com/PengtaoJiang/Segment-Anything-CLIP)![GitHub stars](https://img.shields.io/github/stars/PengtaoJiang/Segment-Anything-CLIP.svg?logo=github&label=Stars)|
|![preview](https://github.com/RockeyCoss/Prompt-Segment-Anything/raw/master/assets/example1.jpg)|**[Prompt-Segment-Anything]** An implementation of SAM[paper][[code]](https://github.com/RockeyCoss/Prompt-Segment-Anything)![GitHub Repo stars](https://img.shields.io/github/stars/RockeyCoss/Prompt-Segment-Anything?logo=github&style=flat-square)|
|![preview](https://github.com/Vision-Intelligence-and-Robots-Group/count-anything/raw/main/example.png)|**[Count-Anything]** Few-shot SAM counting[paper][[code]](https://github.com/Vision-Intelligence-and-Robots-Group/count-anything)![GitHub Repo stars](https://img.shields.io/github/stars/Vision-Intelligence-and-Robots-Group/count-anything?logo=github&style=flat-square)|
|![preview](https://baai-seggpt.hf.space/file/rainbow2.gif)|**[SegGPT: Vision Foundation Models]** One touch for segmentation in all images (SAM+SegGPT)[[paper]](https://arxiv.org/abs/2304.03284)[[code]](https://github.com/baaivision/Painter)![GitHub Repo stars](https://img.shields.io/github/stars/baaivision/Painter?logo=github&style=flat-square)|
|![preview](https://github.com/QianXuna/Myimg/blob/main/image/ay0uq-jd22c.gif?raw=true)|**[napari-segment-anything]** Image viewer plugin of SAM[paper][[code]](https://github.com/JoOkuma/napari-segment-anything)![GitHub Repo stars](https://img.shields.io/github/stars/JoOkuma/napari-segment-anything?logo=github&style=flat-square)|
|![preview](https://github.com/yeungchenwa/OCR-SAM/raw/main/imgs/erase_vis.png) |**[OCR-SAM]** SAM is applied to OCR [paper][[code]](https://github.com/yeungchenwa/OCR-SAM#sam-for-text)![GitHub Repo stars](https://img.shields.io/github/stars/yeungchenwa/OCR-SAM?logo=github&style=flat-square)|
|![preview](https://github.com/lujiazho/SegDrawer/raw/main/example/demo.gif)|**[SegDrawer]** Simple static web-based mask drawer[paper][[code]](https://github.com/lujiazho/SegDrawer)![GitHub Repo stars](https://img.shields.io/github/stars/lujiazho/SegDrawer?logo=github&style=flat-square)|
|![preview](https://github.com/QianXuna/Myimg/blob/main/image/w3gjq-2pu51.gif?raw=true)|**[Magic Copy]** Extract and copy foreground objects using SAM[paper][[code]](https://github.com/kevmo314/magic-copy)![GitHub Repo stars](https://img.shields.io/github/stars/kevmo314/magic-copy?logo=github&style=flat-square)|
|![preview](https://github.com/QianXuna/Myimg/blob/main/image/tq67n-hjrc4.gif?raw=true)|**[Track-Anything]** Video object tracking and segmentation based on SAM[paper][[code]](https://github.com/gaomingqi/Track-Anything)![GitHub Repo stars](https://img.shields.io/github/stars/gaomingqi/Track-Anything?logo=github&style=flat-square)|
|![preview](https://github.com/z-x-yang/Segment-and-Track-Anything/raw/main/assets/demo_3x2.gif)|**[SAM-Track]** Video object tracking and segmentation based on SAM[paper][[code]](https://github.com/z-x-yang/Segment-and-Track-Anything)![GitHub Repo stars](https://img.shields.io/github/stars/z-x-yang/Segment-and-Track-Anything?logo=github&style=flat-square)|
|![preview](https://github.com/ylqi/Count-Anything/raw/main/figures/example_1.png)|**[Count Anything]** Count any object[paper][[code]](https://github.com/ylqi/Count-Anything)![GitHub Repo stars](https://img.shields.io/github/stars/ylqi/Count-Anything?logo=github&style=flat-square)|
|![preview](https://github.com/geekyutao/Inpaint-Anything/raw/main/example/MainFramework.png)|**[Inpaint Anything]** Inpaint Anything [[paper]](https://arxiv.org/abs/2304.06790)[[code]](https://github.com/geekyutao/Inpaint-Anything)![GitHub Repo stars](https://img.shields.io/github/stars/geekyutao/Inpaint-Anything?logo=github&style=flat-square)|
|![preview](https://github.com/Huage001/Transfer-Any-Style/raw/main/picture/demo3.gif)|**[Transfer-Any-Style]** Transfer-Any-Style[paper][[code]](https://github.com/Huage001/Transfer-Any-Style)![GitHub Repo stars](https://img.shields.io/github/stars/Huage001/Transfer-Any-Style?logo=github&style=flat-square)|
|![preview](https://github.com/Anything-of-anything/Anything-3D/raw/main/novel-view/assets/1.jpeg)|**[Anything-3D]** Anything-3D [paper][[code]](https://github.com/Anything-of-anything/Anything-3D) ![GitHub Repo stars](https://img.shields.io/github/stars/Anything-of-anything/Anything-3D?logo=github&style=flat-square)|
|![preview](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once/raw/main/assets/teaser_new.png?raw=true)|**[SEEM]** Segment Everything Everywhere All at Once [[paper]](https://arxiv.org/pdf/2304.06718.pdf)[[code]](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once) ![GitHub Repo stars](https://img.shields.io/github/stars/UX-Decoder/Segment-Everything-Everywhere-All-At-Once?logo=github&style=flat-square)|
|![preview](https://github.com/ttengwang/Caption-Anything/raw/main/assets/qingming.gif)|**[Caption-Anything]** Generate descriptive captions for any object [paper][[code]](https://github.com/ttengwang/Caption-Anything) ![GitHub Repo stars](https://img.shields.io/github/stars/ttengwang/Caption-Anything?logo=github&style=flat-square)|
|![preview](https://github.com/QianXuna/Myimg/blob/main/image/Ysq3u7E.png?raw=true)|**[segment-geospatial]** Segment Anything for geospatial data [paper][[code]](https://github.com/opengeos/segment-geospatial) ![GitHub Repo stars](https://img.shields.io/github/stars/opengeos/segment-geospatial?logo=github&style=flat-square)|
|![preview](https://github.com/showlab/Image2Paragraph/raw/main/examples/introduction.png)|**[Image.txt]** Transform Image Into Unique Paragraph [paper][[code]](https://github.com/showlab/Image2Paragraph) ![GitHub Repo stars](https://img.shields.io/github/stars/showlab/Image2Paragraph?logo=github&style=flat-square)|
|![preview](https://github.com/dvlab-research/3D-Box-Segment-Anything/raw/main/images/sam-voxelnext.png)|**[3D-Box via Segment Anything]** Extend the scope to 3D world [paper][[code]](https://github.com/dvlab-research/3D-Box-Segment-Anything) ![GitHub Repo stars](https://img.shields.io/github/stars/dvlab-research/3D-Box-Segment-Anything?logo=github&style=flat-square)|
|![preview](https://github.com/Jun-CEN/SegmentAnyRGBD/blob/main/resources/comparison.png)|**[SegmentAnyRGBD]** Segment rendered depth images based on SAM [paper] [[code]](https://github.com/Jun-CEN/SegmentAnyRGBD) ![GitHub Repo stars](https://img.shields.io/github/stars/Jun-CEN/SegmentAnyRGBD?logo=github&style=flat-square)|

## 3 Lecture & Notes
**How To | Roboflow** How to segment anything with SAM [[blog]](https://github.com/roboflow/notebooks/blob/main/notebooks/how-to-segment-anything-with-sam.ipynb) (see the sketch below)
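
In the same spirit as the notebook above, here is a minimal sketch of SAM's automatic mask generation mode, assuming the official `segment-anything` package; the image path is a placeholder and the ViT-B checkpoint (the smallest official one) must be downloaded separately.

```python
# Minimal automatic-mask-generation sketch with the official
# segment-anything package; "example.jpg" is a placeholder and the ViT-B
# checkpoint (sam_vit_b_01ec64.pth) must be downloaded separately.
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)

# Each result is a dict with keys like "segmentation" (boolean HxW mask),
# "area", "bbox" (XYWH), and "predicted_iou".
masks.sort(key=lambda m: m["area"], reverse=True)
print(f"{len(masks)} masks, largest covers {masks[0]['area']} pixels")
```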
## 4 Papers
### Image Segmentation & Medical Image Segmentation
+ **[Zero-shot Segmentation]** Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging[[paper]](https://arxiv.org/abs/2304.04155)[[code]](https://github.com/BingfengYan/VISAM)![GitHub stars](https://img.shields.io/github/stars/BingfengYan/VISAM.svg?logo=github&label=Stars)
+ **[Generic Segmentation]** Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-world Applications [[paper]](http://arxiv.org/abs/2304.05750v2)[code]
+ **[Medical Image Segmentation]** SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM [[paper]](http://arxiv.org/abs/2304.05622v1)[[code]](https://github.com/bingogome/samm)
+ **[Medical Image Segmentation]** SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model [[paper]](http://arxiv.org/abs/2304.05396v1) [code]
+ **[Camouflaged Object Segmentation]** SAM Struggles in Concealed Scenes -- Empirical Study on "Segment Anything"[[paper]](https://arxiv.org/abs/2304.06022)[code]
+ **[Brain Extraction]** Brain Extraction comparing Segment Anything Model (SAM) and FSL Brain Extraction Tool[[paper]](https://arxiv.org/abs/2304.04738)[code]
### Object Detection & Tracking
+ **[Camouflaged Object Detection]** Can SAM Segment Anything? When SAM Meets Camouflaged Object Detection[[paper]](https://arxiv.org/abs/2304.04709)[[code]](https://github.com/luckybird1994/SAMCOD)![GitHub stars](https://img.shields.io/github/stars/luckybird1994/SAMCOD.svg?logo=github&label=Stars)
+ **[Multi-Object Tracking]** CO-MOT[paper][[code]](https://github.com/BingfengYan/VISAM)![GitHub stars](https://img.shields.io/github/stars/BingfengYan/VISAM.svg?logo=github&label=Stars)
### Explaining and Supporting Basic Models
+ **[CLIP]** CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks [[paper]](http://arxiv.org/abs/2304.05653v1)[[code]](https://github.com/xmed-lab/clip_surgery)
### Arxiv-daily-update
[Click here](https://github.com/Vision-Intelligence-and-Robots-Group/Awesome-Segment-Anything-Model/blob/main/arxiv-daily-docs/README.md) to check the daily-updated paper list!