https://github.com/autodistill/autodistill-grounded-sam-2
Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models.
- Host: GitHub
- URL: https://github.com/autodistill/autodistill-grounded-sam-2
- Owner: autodistill
- License: apache-2.0
- Created: 2024-07-30T10:28:44.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2024-08-07T09:11:49.000Z (11 months ago)
- Last Synced: 2025-04-02T08:58:10.072Z (3 months ago)
- Topics: autodistill, florence-2, grounded-sam, segment-anything-2
- Language: Python
- Homepage: https://docs.autodistill.com
- Size: 30.3 KB
- Stars: 116
- Watchers: 4
- Forks: 16
- Open Issues: 10
Metadata Files:
- Readme: README.md
- License: LICENSE
# Autodistill: Grounded SAM 2 Base Model
This repository contains the code implementing Grounded SAM 2 using Florence-2 as a grounding model and Segment Anything 2 as a segmentation model for use with [`autodistill`](https://github.com/autodistill/autodistill).
Florence-2 is a zero-shot multimodal model with open-vocabulary object detection capabilities. This project uses those detections to ground SAM 2: Florence-2 produces bounding boxes for each text prompt, and SAM 2 refines each box into a pixel-level segmentation mask.
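As a conceptual sketch of that two-stage pipeline (mock functions only; `detect` and `segment` below stand in for Florence-2 and SAM 2 and are not the real APIs in this package):

```python
# Conceptual mock of the grounding pipeline: an open-vocabulary detector
# turns a text prompt into boxes, and a promptable segmenter turns each
# box into a mask. Neither function reflects the real library interfaces.

def detect(prompt: str, image) -> list[tuple[int, int, int, int]]:
    # stand-in for Florence-2: returns (x1, y1, x2, y2) boxes for the prompt
    return [(10, 10, 50, 50)]

def segment(image, box: tuple[int, int, int, int]) -> dict:
    # stand-in for SAM 2: here we just record the box and its area
    # in place of a real pixel mask
    x1, y1, x2, y2 = box
    return {"box": box, "area": (x2 - x1) * (y2 - y1)}

image = "logistics.jpeg"  # placeholder; a real pipeline would load pixels
masks = [segment(image, box) for box in detect("shipping container", image)]
print(masks[0]["area"])  # → 1600
```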
Read the full [Autodistill documentation](https://autodistill.github.io/autodistill/).
Read the [Grounded SAM 2 Autodistill documentation](https://autodistill.github.io/autodistill/base_models/grounded-sam-2/).
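The key configuration concept is the ontology: a `{caption: class}` mapping where each caption is the prompt sent to the base model and each class is the label written to the generated annotations. A minimal sketch of that idea in plain Python (illustrative only; the real `CaptionOntology` class lives in `autodistill.detection`):

```python
# The {caption: class} idea behind CaptionOntology, as a plain dict.
# Captions are prompts for the base model; classes are the saved labels.
ontology = {
    "person": "person",                 # prompt and label happen to match
    "shipping container": "container",  # prompt differs from the saved label
}

prompts = list(ontology.keys())    # what gets sent to the base model
classes = list(ontology.values())  # what gets written to the annotations
print(prompts)  # → ['person', 'shipping container']
print(classes)  # → ['person', 'container']
```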
## Installation
To use the Grounded SAM 2 Base Model, install it along with a Target Model that supports the `detection` task:
```bash
pip3 install autodistill-grounded-sam-2 autodistill-yolov8
```

You can find a full list of `detection` Target Models on [the main autodistill repo](https://github.com/autodistill/autodistill).
## Quickstart
```python
from autodistill_grounded_sam_2 import GroundedSAM2
from autodistill.detection import CaptionOntology
from autodistill.utils import plot
import cv2

# define an ontology to map class names to our Grounded SAM 2 prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = GroundedSAM2(
    ontology=CaptionOntology(
        {
            "person": "person",
            "shipping container": "shipping container",
        }
    )
)

# run inference on a single image
results = base_model.predict("logistics.jpeg")

plot(
    image=cv2.imread("logistics.jpeg"),
    classes=base_model.ontology.classes(),
    detections=results
)
# label all images in a folder called `context_images`
base_model.label("./context_images", extension=".jpeg")
```

## License
The code in this repository is licensed under an [Apache 2.0 license](LICENSE).
## 🏆 Contributing
We love your input! Please see the core Autodistill [contributing guide](https://github.com/autodistill/autodistill/blob/main/CONTRIBUTING.md) to get started. Thank you 🙏 to all our contributors!