https://github.com/helblazer811/RefSAM
Referring Image Segmentation Benchmarking with Segment Anything Model (SAM)
- Host: GitHub
- URL: https://github.com/helblazer811/RefSAM
- Owner: helblazer811
- Created: 2023-04-06T03:42:35.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-04-07T16:42:26.000Z (over 1 year ago)
- Last Synced: 2024-06-17T04:36:06.552Z (5 months ago)
- Language: Python
- Size: 1.28 MB
- Stars: 33
- Watchers: 5
- Forks: 1
- Open Issues: 2
- Metadata Files:
  - Readme: README.md
Awesome Lists containing this project
- awesome-segment-anything - Code | Evaluating the basic performance of SAM on the Referring Image Segmentation task. (Papers/Projects / Derivative Projects)
- awesome-segment-anything-extensions - Repo
README
# RefSAM
This repository is for evaluating the basic performance of SAM on the Referring Image Segmentation task. Check out the SAM project [here](https://github.com/facebookresearch/segment-anything).
# The Naive Zero Shot Approach
The very basic approach we use is to:
1. Produce a referring expression representation using the CLIP language transformer.
2. Extract SAM masks from an image.
3. Embed the masked sections into a CLIP model to produce a representation of the section.
4. Compare the masked section representation to the representation of the referring expression.

The code for this approach can be found in ```model.py```.
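The mask-selection logic in steps 2–4 can be sketched as follows. This is a minimal sketch, not the repository's actual code: it assumes the CLIP embedding of the referring expression and the CLIP embeddings of each SAM-masked crop have already been computed, and only shows how the best-matching mask is picked by cosine similarity.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_best_mask(text_embedding, mask_embeddings):
    """Return the index of the SAM mask whose (hypothetical) CLIP
    embedding best matches the referring-expression embedding."""
    scores = [cosine_similarity(text_embedding, m) for m in mask_embeddings]
    return int(np.argmax(scores))
```

In practice the text embedding would come from CLIP's language transformer and each mask embedding from CLIP's image encoder applied to the masked region, as described in the steps above.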
# Setup
## Install SAM
```
pip install git+https://github.com/facebookresearch/segment-anything.git
```
## Download the SAM checkpoint into the ```pretrained/``` folder
I used the ```sam_vit_h_4b8939.pth``` model from the SAM repository. It can be found [here](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth).

## Load the data
Follow the directions in ```prepare_dataset.md``` to download and setup the evaluation dataset.
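With SAM installed and the checkpoint placed in ```pretrained/```, loading the model can be sketched as below. The checkpoint path matches the setup steps above; the ```load_sam``` helper is illustrative, not part of this repository.

```python
import os

# Checkpoint location from the setup instructions above.
CHECKPOINT = os.path.join("pretrained", "sam_vit_h_4b8939.pth")

def load_sam(checkpoint=CHECKPOINT):
    """Load the ViT-H SAM model from a local checkpoint."""
    # Deferred import so the sketch fails gracefully if
    # segment-anything is not installed.
    from segment_anything import sam_model_registry
    return sam_model_registry["vit_h"](checkpoint=checkpoint)
```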
# Run the evaluation
To evaluate the approach, run:
```
python evaluate_on_refcoco.py
```