https://github.com/Vision-Intelligence-and-Robots-Group/Awesome-Segment-Anything
A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies.
- Host: GitHub
- URL: https://github.com/Vision-Intelligence-and-Robots-Group/Awesome-Segment-Anything
- Owner: Vision-Intelligence-and-Robots-Group
- License: mit
- Created: 2023-04-14T01:46:28.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2024-11-22T02:30:09.000Z (5 months ago)
- Last Synced: 2024-11-22T13:33:15.099Z (5 months ago)
- Topics: awesome, sam, segment-anything, segment-anything-model
- Size: 356 KB
- Stars: 328
- Watchers: 9
- Forks: 30
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- ultimate-awesome - Awesome-Segment-Anything - A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies. (Other Lists / Julia Lists)
README
# Awesome-Segment-Anything
Tribute to Meta AI's Segment Anything Model (SAM)
A collection of projects, papers, and source code for SAM and related studies.
**Keywords:** Segment Anything Model, Segment Anything, SAM, awesome
> **CATALOGUE**
>
>[Origin of the Study](#quick-start) :heartpulse: [Project & Toolbox](#tool) :heartpulse: [Lecture & Notes](#workshop) :heartpulse: [Papers](#papers-by-categories)

## 1 Origin of the Study
**Fundamental Models**
+ **[SAM]** Segment Anything (2023) [[paper]](https://arxiv.org/pdf/2304.02643.pdf) [[Project]](https://github.com/facebookresearch/segment-anything) (see the minimal usage sketch after this list)
+ **[SEEM]** Segment Everything Everywhere All at Once (2023) [[paper]](https://arxiv.org/pdf/2304.06718.pdf) [[Project]](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)
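For readers who want to try SAM immediately, here is a minimal sketch of point-prompt inference with the official `segment-anything` package, assuming `pip install segment-anything opencv-python` and a checkpoint downloaded from the SAM repo linked above; the checkpoint file name and `example.jpg` are placeholders.

```python
# Minimal point-prompt inference sketch for the official segment-anything
# package. Placeholder assumptions: a ViT-B checkpoint downloaded from the
# SAM repo and a local test image "example.jpg".
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # computes the image embedding once per image

# A single foreground click (label 1) at pixel (x=500, y=375).
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return 3 candidate masks for an ambiguous prompt
)
print(masks.shape, scores)  # (3, H, W) boolean masks and their quality scores
```

With `multimask_output=True` SAM returns three nested candidates (roughly whole, part, and subpart), so for single-point prompts pick the mask with the highest predicted score.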
## 2 Project & Toolbox
Many of the projects below build on SAM's automatic "segment everything" mode rather than interactive prompts; a minimal sketch of that mode follows this list.
+ **[Awesome Segment-Anything Extensions]** Awesome Segment-Anything Extensions [paper][[code]](https://github.com/JerryX1110/awesome-segment-anything-extensions)
+ **[Awesome Anything]** A curated list of general AI methods for Anything [paper][[code]](https://github.com/VainF/Awesome-Anything)
+ **[Awesome Segment Anything]** Awesome Segment Anything [paper][[code]](https://github.com/Hedlen/awesome-segment-anything)
+ **[Grounded-Segment-Anything]** Grounded-Segment-Anything (DINO + SAM) [[paper]](https://arxiv.org/abs/2303.05499)[[code]](https://github.com/IDEA-Research/Grounded-Segment-Anything)
+ **[Grounded Segment Anything: From Objects to Parts]** Supports text prompt input (GLIP/VLPart + SAM) [paper][[code]](https://github.com/Cheems-Seminar/grounded-segment-any-parts)
+ **[Semantic-Segment-Anything]** A pipeline on top of SAM that predicts a semantic category for each mask [paper][[code]](https://github.com/fudan-zvg/Semantic-Segment-Anything)
+ **[GroundedSAM-zero-shot-anomaly-detection]** Segment any anomaly [paper][[code]](https://github.com/caoyunkang/GroundedSAM-zero-shot-anomaly-detection)
+ **[EditAnything]** EditAnything [paper][[code]](https://github.com/sail-sg/EditAnything)
+ **[sd-webui-segment-anything]** Extension that helps the Stable Diffusion web UI with inpainting [paper][[code]](https://github.com/continue-revolution/sd-webui-segment-anything)
+ **[SALT]** Segment Anything Labelling Tool [paper][[code]](https://github.com/anuragxel/salt)
+ **[SAM-CLIP]** Segment Anything CLIP [paper][[code]](https://github.com/PengtaoJiang/Segment-Anything-CLIP)
+ **[Prompt-Segment-Anything]** An implementation of prompt-based segmentation with SAM [paper][[code]](https://github.com/RockeyCoss/Prompt-Segment-Anything)
+ **[Count-Anything]** Few-shot counting with SAM [paper][[code]](https://github.com/Vision-Intelligence-and-Robots-Group/count-anything)
+ **[SegGPT: Vision Foundation Models]** One touch for segmentation in all images (SAM + SegGPT) [[paper]](https://arxiv.org/abs/2304.03284)[[code]](https://github.com/baaivision/Painter)
+ **[napari-segment-anything]** Image-viewer plugin for SAM [paper][[code]](https://github.com/JoOkuma/napari-segment-anything)
+ **[OCR-SAM]** SAM applied to OCR [paper][[code]](https://github.com/yeungchenwa/OCR-SAM#sam-for-text)
+ **[SegDrawer]** Simple static web-based mask drawer [paper][[code]](https://github.com/lujiazho/SegDrawer)
+ **[Magic Copy]** Extract and copy foreground objects using SAM [paper][[code]](https://github.com/kevmo314/magic-copy)
+ **[Track-Anything]** Video object tracking and segmentation based on SAM [paper][[code]](https://github.com/gaomingqi/Track-Anything)
+ **[SAM-Track]** Video object tracking and segmentation based on SAM [paper][[code]](https://github.com/z-x-yang/Segment-and-Track-Anything)
+ **[Count Anything]** Count any object [paper][[code]](https://github.com/ylqi/Count-Anything)
+ **[Inpaint Anything]** Inpaint Anything [[paper]](https://arxiv.org/abs/2304.06790)[[code]](https://github.com/geekyutao/Inpaint-Anything)
+ **[Transfer-Any-Style]** Transfer-Any-Style [paper][[code]](https://github.com/Huage001/Transfer-Any-Style)
+ **[Anything-3D]** Anything-3D [paper][[code]](https://github.com/Anything-of-anything/Anything-3D)
+ **[SEEM]** Segment Everything Everywhere All at Once [[paper]](https://arxiv.org/pdf/2304.06718.pdf)[[code]](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)
+ **[Caption-Anything]** Generate descriptive captions for any object [paper][[code]](https://github.com/ttengwang/Caption-Anything)
+ **[segment-geospatial]** Segment geospatial data with SAM [paper][[code]](https://github.com/opengeos/segment-geospatial)
+ **[Image.txt]** Transform Image Into Unique Paragraph [paper][[code]](https://github.com/showlab/Image2Paragraph)
+ **[3D-Box via Segment Anything]** Extends the scope to the 3D world [paper][[code]](https://github.com/dvlab-research/3D-Box-Segment-Anything)
+ **[SegmentAnyRGBD]** Segment rendered depth images based on SAM [paper][[code]](https://github.com/Jun-CEN/SegmentAnyRGBD)
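As noted at the top of this section, many of these projects (e.g., Semantic-Segment-Anything, SegmentAnyRGBD, the labeling tools) start from SAM's automatic mode. A minimal sketch of that mode, under the same placeholder assumptions (checkpoint and image paths) as the sketch in section 1:

```python
# "Segment everything" sketch: generate masks for a whole image with no
# prompts. Checkpoint and image paths are placeholders as in section 1.
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)

# Each entry is a dict with keys including "segmentation" (HxW bool array),
# "area", "bbox" (XYWH), and "predicted_iou".
print(len(masks), masks[0]["bbox"])
```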
## 3 Lecture & Notes
**How to | Roboflow** How to segment anything with SAM [[blog]](https://github.com/roboflow/notebooks/blob/main/notebooks/how-to-segment-anything-with-sam.ipynb)
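The notebook above visualizes results with Roboflow's `supervision` library; as a dependency-light alternative, here is a sketch that overlays the mask dicts from `SamAutomaticMaskGenerator` (see the sketch at the end of section 2) using only NumPy and Matplotlib. The `image` and `masks` variables are assumed to come from that sketch.

```python
# Overlay SAM masks on the source image with random colors; a minimal
# stand-in for the richer annotators in the supervision library.
import numpy as np
import matplotlib.pyplot as plt

def show_masks(image, masks, alpha=0.6):
    """Blend each boolean mask into the image with a random color."""
    overlay = image.astype(np.float32) / 255.0
    # Draw large masks first so small ones stay visible on top.
    for m in sorted(masks, key=lambda m: m["area"], reverse=True):
        color = np.random.rand(3)
        seg = m["segmentation"]  # HxW boolean array
        overlay[seg] = (1 - alpha) * overlay[seg] + alpha * color
    plt.imshow(overlay)
    plt.axis("off")
    plt.show()

show_masks(image, masks)
```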
## 4 Papers
### Image Segmentation & Medical Image Segmentation
+ **[Zero-shot Segmentation]** Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging [[paper]](https://arxiv.org/abs/2304.04155)[code]
+ **[generic segmentation]** Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-world Applications [[paper]](http://arxiv.org/abs/2304.05750v2)[code]
+ **[Medical Image segmentation]** SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM [[paper]](http://arxiv.org/abs/2304.05622v1)[[code]](https://github.com/bingogome/samm)
+ **[Medical Image segmentation]** SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model [[paper]](http://arxiv.org/abs/2304.05396v1)[code]
+ **[Camouflaged Object Segmentation]** SAM Struggles in Concealed Scenes -- Empirical Study on "Segment Anything"[[paper]](https://arxiv.org/abs/2304.06022)[code]
+ **[Brain Extraction]** Brain Extraction comparing Segment Anything Model (SAM) and FSL Brain Extraction Tool[[paper]](https://arxiv.org/abs/2304.04738)[code]
### Object Detection & Tracking
+ **[Camouflaged Object Detection]** Can SAM Segment Anything? When SAM Meets Camouflaged Object Detection[[paper]](https://arxiv.org/abs/2304.04709)[[code]](https://github.com/luckybird1994/SAMCOD)
+ **[Multi-Object Tracking]** CO-MOT [paper][[code]](https://github.com/BingfengYan/VISAM)
### Explaining and Supporting Basic Models
+ **[CLIP]** CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks [[paper]](http://arxiv.org/abs/2304.05653v1)[[code]](https://github.com/xmed-lab/clip_surgery)
### Arxiv-daily-update
[Click here](https://github.com/Vision-Intelligence-and-Robots-Group/Awesome-Segment-Anything-Model/blob/main/arxiv-daily-docs/README.md) to check the daily-updated paper list!