awesome-segment-anything

Tracking and collecting papers, projects, and other resources related to Segment Anything.
https://github.com/hedlen/awesome-segment-anything


  • Papers/Projects

    • Derivative Projects

      • [yolo-world-with-efficientvit-sam](https://github.com/Curt-Park/yolo-world-with-efficientvit-sam) - Efficient open-vocabulary object detection and segmentation with YOLO-World + EfficientViT SAM.
      • [segment-anything-video](https://github.com/kadirnar/segment-anything-video) - SAM + Video.
      • [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything) ([Hugging Face demo](https://huggingface.co/spaces/yizhangliu/Grounded-Segment-Anything)) - Combining Grounding DINO and Segment Anything.
      • [GroundedSAM-zero-shot-anomaly-detection](https://github.com/caoyunkang/GroundedSAM-zero-shot-anomaly-detection) - Grounding DINO + SAM to segment any anomaly.
      • [Semantic-Segment-Anything](https://github.com/fudan-zvg/Semantic-Segment-Anything) - A dense category annotation engine. (Fudan)
      • [CLIP-SAM](https://github.com/maxi-w/CLIP-SAM) - SAM + CLIP.
      • [Prompt-Segment-Anything](https://github.com/RockeyCoss/Prompt-Segment-Anything) - SAM + zero-shot instance segmentation.
      • [sam-mmrotate](https://github.com/Li-Qingyun/sam-mmrotate) - An implementation of SAM for generating rotated bounding boxes with MMRotate.
      • [SegDrawer](https://github.com/lujiazho/SegDrawer) - A simple static web-based mask drawer supporting semantic drawing with SAM.
      • SAM + Labelme + LabelImg + auto-labeling.
      • [ISAT_with_segment_anything](https://github.com/yatengLG/ISAT_with_segment_anything) ([BiliBili demo](https://www.bilibili.com/video/BV1or4y1R7EJ/)) - A labeling tool based on SAM; supports SAM, SAM-HQ, MobileSAM, EdgeSAM, etc.
      • [Annotation-anything-pipeline](https://github.com/Yuqifan1117/Annotation-anything-pipeline) - GPT + SAM.
      • SAM-assisted labeling for training computer vision models. (Roboflow)
      • [salt](https://github.com/anuragxel/salt) - A tool that adds a basic interface for image labeling and saves the generated masks in COCO format.
      • [segment-anything-webui](https://github.com/Kingfish404/segment-anything-webui/) - A new web interface for SAM.
      • [finetune-anything](https://github.com/ziqi-jin/finetune-anything) - A class-aware, one-stage tool for training and fine-tuning models based on SAM.
      • [nanosam](https://github.com/NVIDIA-AI-IOT/nanosam) - A distilled Segment Anything (SAM) model capable of running in real time with NVIDIA TensorRT. (NVIDIA)
      • [Segment-and-Track-Anything](https://github.com/z-x-yang/Segment-and-Track-Anything) - Based on SAM and DeAOT; focuses on segmenting and tracking objects in videos. (Zhejiang University)
      • [napari-sam](https://github.com/MIC-DKFZ/napari-sam) - Segment anything with a Napari integration of SAM.
      • [SAM-Medical-Imaging](https://github.com/amine0110/SAM-Medical-Imaging) - SAM for medical imaging.
      • [Disappear](https://github.com/jinfagang/Disappear) - SAM + inpainting/replacing.
      • [3D-Box-Segment-Anything](https://github.com/dvlab-research/3D-Box-Segment-Anything) - SAM extended to 3D perception by combining it with VoxelNeXt.
      • [Anything-3D](https://github.com/Anything-of-anything/Anything-3D) - SAM + [Zero 1-to-3](https://github.com/cvlab-columbia/zero123).
      • [Anything-3D (AnyFace3D)](https://github.com/Anything-of-anything/Anything-3D) - SAM + [HRN](https://younglbw.github.io/HRN-homepage/).
      • [SegmentAnything3D](https://github.com/Pointcept/SegmentAnything3D) - Extending Segment Anything to 3D perception by transferring the segmentation information of 2D images to 3D space. (Pointcept)
      • [EditAnything](https://github.com/sail-sg/EditAnything) - Edit and generate anything in an image.
      • [IEA](https://github.com/feizc/IEA) - Stable Diffusion + SAM.
      • [segment-anything-eo](https://github.com/aliaksandr960/segment-anything-eo) - SAM + remote sensing.
      • [segment-any-moving](https://github.com/achalddave/segment-any-moving) - SAM + moving object detection.
      • Optical character recognition with SAM.
      • Evaluating the basic performance of SAM on the referring image segmentation task.
      • [segment-anything-with-clip](https://github.com/Curt-Park/segment-anything-with-clip) - SAM + CLIP (a rough sketch of this pattern follows this sub-list).
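Several of the entries above (CLIP-SAM, segment-anything-with-clip) combine SAM with CLIP: SAM proposes class-agnostic masks, and CLIP labels each masked region by comparing it against text prompts. The snippet below is a rough sketch of that general pattern only, not the code of any project listed here; it assumes the official segment-anything and CLIP packages, a locally downloaded SAM checkpoint, and placeholder image and prompt inputs.

```python
# Sketch of the "SAM + CLIP" pattern: SAM proposes masks, CLIP labels them.
# Checkpoint, image path, and text prompts below are placeholders.
import numpy as np
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

device = "cuda" if torch.cuda.is_available() else "cpu"
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)
clip_model, preprocess = clip.load("ViT-B/32", device=device)

image = np.array(Image.open("example.jpg").convert("RGB"))
masks = mask_generator.generate(image)  # dicts with "segmentation", "bbox", "area", ...

prompts = ["a dog", "a cat", "background"]
with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize(prompts).to(device))
    text_feat /= text_feat.norm(dim=-1, keepdim=True)

for m in masks:
    x, y, w, h = (int(v) for v in m["bbox"])        # bbox is in XYWH format
    crop = image[y:y + h, x:x + w].copy()
    crop[~m["segmentation"][y:y + h, x:x + w]] = 0  # zero out pixels outside the mask
    crop_t = preprocess(Image.fromarray(crop)).unsqueeze(0).to(device)
    with torch.no_grad():
        img_feat = clip_model.encode_image(crop_t)
        img_feat /= img_feat.norm(dim=-1, keepdim=True)
    m["label"] = prompts[int((img_feat @ text_feat.T).argmax())]
```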
    • Basemodel Papers

      • [OWL-ViT](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit) - An open-vocabulary object detector. (Google)
      • [OVSeg](https://github.com/facebookresearch/ov-seg) - Segments an image into semantic regions according to text descriptions. (Meta)
      • [Painter](https://github.com/baaivision/Painter) - A generalist painter for in-context visual learning. (BAAI)
      • [GroundingDINO](https://github.com/IDEA-Research/GroundingDINO) ([Hugging Face demo](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo)) - A strong open-set object detector. (IDEA)
      • [Segment Anything (SAM)](https://github.com/facebookresearch/segment-anything) ([arXiv](https://arxiv.org/abs/2304.02643), [project page](https://segment-anything.com/)) - A large segmentation model that can generate masks for all objects in an image (a minimal usage sketch follows this sub-list). (Meta)
      • Segment-Everything-Everywhere-All-At-Once - Semantic segmentation with various prompt types. (Microsoft)
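For reference, here is a minimal sketch of how the Segment Anything model above is used in practice via the official segment-anything package: automatic whole-image mask generation plus point-prompted prediction. The checkpoint filename and image path are placeholders.

```python
# Minimal usage sketch of the official segment-anything API (paths are placeholders).
import numpy as np
from PIL import Image
from segment_anything import SamAutomaticMaskGenerator, SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
image = np.array(Image.open("example.jpg").convert("RGB"))

# 1) "Segment everything": class-agnostic masks for all objects in the image.
masks = SamAutomaticMaskGenerator(sam).generate(image)
print(len(masks), "masks generated")

# 2) Promptable segmentation: a single foreground point as the prompt.
predictor = SamPredictor(sam)
predictor.set_image(image)
pred_masks, scores, low_res_logits = predictor.predict(
    point_coords=np.array([[500, 375]]),  # (x, y) pixel coordinates
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    multimask_output=True,                # return three candidate masks
)
```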
    • Derivative Papers

      • [CLIP_Surgery](https://github.com/xmed-lab/CLIP_Surgery) - A work on SAM that builds on CLIP's explainability to achieve text-to-mask without manual points. (HKUST)
      • - specific prompts in SAM.
      • Segment Anything with specific concepts.
      • [Matcher](https://github.com/aim-uofa/Matcher) - One-shot semantic segmentation by integrating an all-purpose feature extraction model and a class-agnostic segmentation model.
      • [FastSAM](https://github.com/casia-iva-lab/fastsam) - Reformulates the architecture and improves the speed of SAM.
      • - friendly by replacing the heavyweight image encoder with a lightweight one.
      • [SlimSAM](https://github.com/czg1225/SlimSAM) - 0.1% data makes Segment Anything slim. (NUS)
      • [SAM-Adapter-PyTorch](https://github.com/tianrun-chen/SAM-Adapter-PyTorch) - SAM-Adapter: adapting SAM in underperformed scenes (camouflage, shadow, medical image segmentation, and more).
      • [SAM-Med2D](https://github.com/OpenGVLab/SAM-Med2D) - The most comprehensive study to date on applying SAM to medical 2D images. (Sichuan University & Shanghai AI Laboratory)
      • [micro-sam](https://github.com/computational-cell-analytics/micro-sam) - Segment Anything for Microscopy: automatic and interactive annotation for microscopy and other bio-imaging data, built on top of Segment Anything. Its core components are the micro_sam tools for interactive data annotation with napari, the micro_sam library for applying Segment Anything to 2D and 3D data or fine-tuning it on your data, and the micro_sam models fine-tuned on publicly available microscopy data. The goal is fast, interactive annotation tools for microscopy. (University of Göttingen, Germany)
      • [Project](https://www.comet.com/examples/demo-text-to-inpainting-sam-stablediffusion/view/bRnI022tXQUdKGsVCFmjFRRtT/) | [Code (Colab)](https://colab.research.google.com/drive/1B7L4cork9UFTtIB02EntjiZRLYuqJS2b#scrollTo=LtZghyHoJabf) - Grounding DINO + SAM + Stable Diffusion. (Comet)
      • [SAMCOD](https://github.com/luckybird1994/SAMCOD) - SAM + the camouflaged object detection (COD) task.
      • [InterpAny-Clearer](https://github.com/zzh-tech/InterpAny-Clearer) ([interactive demo](http://ai4sports.opengvlab.com/interpany-clearer/)) - Editable video frame interpolation with SAM. (Shanghai AI Laboratory & Snap Inc.)
      • [Matte-Anything](https://github.com/hustvl/Matte-Anything) ([arXiv](https://arxiv.org/abs/2306.04121)) - An interactive natural image matting system with excellent performance for both opaque and transparent objects. (HUST Vision Lab)
      • [Matting-Anything](https://github.com/SHI-Labs/Matting-Anything) - Leverages feature maps from SAM and adopts a Mask-to-Matte module to predict the alpha matte. (SHI Labs)
      • [Instruct2Act](https://github.com/OpenGVLab/Instruct2Act) - A SAM application in the robotics field. (OpenGVLab)
      • [IAMSAM](https://github.com/portrai-io/IAMSAM) - A SAM application for the analysis of spatial transcriptomics. (Portrai Inc.)
      • SAMPro3D - A novel method to segment any 3D indoor scene by applying SAM to 2D frames, without the need for any training, tuning, distillation, or 3D pretrained networks. (CUHKSZ, MSRA)
      • A framework capable of leveraging 2D vision foundation models for self-supervised learning on large-scale 3D point clouds.
      • An extension of 3D Slicer that uses SAM to aid the segmentation of 3D data from tomography and other imaging techniques.
      • SA3D - A novel framework to segment anything in 3D.
      • [SAM-CD](https://github.com/ggsDing/SAM-CD) - A sample-efficient change detection framework that employs SAM as the visual encoder. (PLA Information Engineering University)
      • [Track-Anything](https://github.com/gaomingqi/Track-Anything) - An open-vocabulary, multimodal model that detects, tracks, and follows any objects in real time. (MIT, Harvard University)
      • [Segment-and-Track-Anything](https://github.com/z-x-yang/Segment-and-Track-Anything) - A framework called Segment And Track Anything (SAMTrack) that allows users to precisely and effectively segment and track any object in a video. (Zhejiang University)
      • [Medical-SAM-Adapter](https://github.com/KidsWithTokens/Medical-SAM-Adapter) - A project to fine-tune SAM using adaptation for medical imaging (a decoder-only fine-tuning sketch follows this sub-list).
      • [Point-SAM](https://github.com/zyc00/Point-SAM) - An open-world, 3D-native promptable point-cloud segmentation method. (UCSD)
      • Fine-tuned SAM on 65 biomedical imaging datasets with scribble, click, and bounding box inputs.
      • [sam_road](https://github.com/htcr/sam_road) - A simple and fast method applying SAM for vectorized large-scale road network graph extraction; reaches state-of-the-art accuracy while being 40 times faster. (Carnegie Mellon University)
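Many of the adaptation and fine-tuning works above (for example Medical-SAM-Adapter, SAM-Med2D, and the finetune-anything project listed earlier) start from the same basic recipe: freeze SAM's image encoder and train the (possibly modified) mask decoder on domain data. The sketch below illustrates only that generic recipe with toy tensors and a placeholder checkpoint; it is not the training code of any listed paper, and real projects add adapters, custom losses, and proper data loading.

```python
# Generic decoder-only fine-tuning sketch for SAM (toy data, placeholder checkpoint).
import torch
import torch.nn.functional as F
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
sam.train()
for p in sam.image_encoder.parameters():   # freeze the heavy image encoder
    p.requires_grad = False
for p in sam.prompt_encoder.parameters():  # freeze the prompt encoder as well
    p.requires_grad = False
optimizer = torch.optim.AdamW(sam.mask_decoder.parameters(), lr=1e-4)

# One toy optimization step; a real run would loop over a labeled dataset.
image = torch.randn(1, 3, 1024, 1024)               # already resized to SAM's input size
box = torch.tensor([[100.0, 100.0, 400.0, 400.0]])  # (B, 4) box prompt in pixel coords
gt_mask = torch.zeros(1, 1, 256, 256)               # ground truth at SAM's low-res mask size

with torch.no_grad():
    image_embedding = sam.image_encoder(sam.preprocess(image))
sparse_emb, dense_emb = sam.prompt_encoder(points=None, boxes=box, masks=None)
low_res_logits, iou_pred = sam.mask_decoder(
    image_embeddings=image_embedding,
    image_pe=sam.prompt_encoder.get_dense_pe(),
    sparse_prompt_embeddings=sparse_emb,
    dense_prompt_embeddings=dense_emb,
    multimask_output=False,
)
optimizer.zero_grad()
loss = F.binary_cross_entropy_with_logits(low_res_logits, gt_mask)
loss.backward()
optimizer.step()
```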
    • Front-end Framework

      • samjs - A JS SDK for SAM; supports remote sensing data segmentation and vectorization.