https://github.com/ylqi/Count-Anything
This method uses Segment Anything and CLIP to ground and count any object that matches a custom text prompt, without requiring any point or box annotation.
- Host: GitHub
- URL: https://github.com/ylqi/Count-Anything
- Owner: ylqi
- Created: 2023-04-12T13:16:53.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-04-22T06:26:56.000Z (over 2 years ago)
- Last Synced: 2024-12-21T15:34:30.968Z (10 months ago)
- Topics: clip, count-anything, segment-anything
- Language: Python
- Homepage:
- Size: 17 MB
- Stars: 137
- Watchers: 3
- Forks: 17
- Open Issues: 5
- Metadata Files:
  - Readme: README.md
Awesome Lists containing this project
- awesome-segment-anything-extensions - Repo
- Awesome-Segment-Anything - Code - A method that uses SAM and CLIP to ground and count any object that matches a custom text prompt, without requiring any point or box annotation. (Open Source Projects / Follow-up Papers)
README
# Count Anything
### [Official repo](https://github.com/ylqi/Count-Anything)
> **[Count Anything](https://github.com/ylqi/Count-Anything)**
> Liqi Yan
> ZJU-CV, Zhejiang University / Fudan University

The _**C**ount **A**nything (CA)_ project is a versatile image processing tool that combines the capabilities of [Segment Anything](https://segment-anything.com/), [Semantic-Segment-Anything](https://github.com/fudan-zvg/Semantic-Segment-Anything), and [CLIP](https://arxiv.org/abs/2103.00020). Our solution can count *any object* specified by the user within an image.

### Count Anything (CA) engine

The CA engine consists of three steps:
- **(I) Segment Anything.** Following [Semantic-Segment-Anything](https://github.com/fudan-zvg/Semantic-Segment-Anything), the CA engine crops a patch for each mask predicted by [Segment Anything](https://segment-anything.com/).
- **(II) Class Mixer.** To identify the masks that match the user's text prompt, we add the text prompt as an additional class to the class list from the closed-set datasets (COCO or ADE20K).
- **(III) CLIP Encoders.** The CA engine uses the CLIP image and text encoders to check whether the text prompt is the best match among all classes. If so, the mask is counted as an instance of the class given by the text prompt, and the count is incremented by 1 (see the sketch below).
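Putting the three steps together, here is a minimal sketch of such a counting loop. It is not the repository's `scripts/main.py`: it assumes the Hugging Face `transformers` CLIP API, assumes each record in the SAM JSON produced with `--convert-to-rle` carries a `bbox` field in `[x, y, w, h]` format, and uses placeholder class names and paths.

```python
# Minimal sketch of the CA loop (not scripts/main.py). Assumes Hugging Face
# CLIP and a SAM JSON whose records each contain a "bbox" [x, y, w, h] field.
import json

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor


def count_prompt(image_path, sam_json_path, text_prompt, base_classes):
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open(image_path).convert("RGB")
    with open(sam_json_path) as f:
        masks = json.load(f)

    # (II) Class Mixer: the user's prompt becomes one extra class.
    classes = list(base_classes) + [text_prompt]
    prompts = [f"a photo of a {c}" for c in classes]

    count = 0
    for m in masks:
        # (I) Crop a patch for each mask predicted by Segment Anything.
        x, y, w, h = (int(v) for v in m["bbox"])
        patch = image.crop((x, y, x + w, y + h))

        # (III) CLIP scores the patch against every class in the mixed list.
        inputs = processor(text=prompts, images=patch,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits_per_image[0]

        # Count the mask only if the user's prompt is the best-scoring class.
        if logits.argmax().item() == len(classes) - 1:
            count += 1
    return count


print(count_prompt("data/examples/AdobeStock_323574125.jpg",
                   "data/examples/AdobeStock_323574125.json",
                   "shirt", ["person", "background", "wall"]))
```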
## Examples
## Requirements
- Python 3.7+
- CUDA 11.1+

## Installation
```bash
conda env create -f environment.yaml
conda activate ca-env
```
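Assuming the environment file installs a CUDA-enabled PyTorch build (it is not listed explicitly above), a quick sanity check that the requirements are met:

```python
import torch

# Expect a CUDA 11.1+ build and True if a GPU is visible.
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```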
## Quick Start
### 1. Run [Segment Anything](https://segment-anything.com/) to get a segmentation JSON for each image:
Please use `--convert-to-rle` to save segmentation results as `.json` files.
```bash
python scripts/amg.py --checkpoint sam_vit_h_4b8939.pth --model-type vit_h --convert-to-rle --input examples/AdobeStock_323574125.jpg --output output --pred-iou-thresh 0.98 --crop-n-layers 0 --crop-nms-thresh 0.3 --box-nms-thresh 0.5 --stability-score-thresh 0.7
```
```bash
python scripts/amg.py --checkpoint sam_vit_h_4b8939.pth --model-type vit_h --convert-to-rle --input examples/crowd_img.jpg --output output --pred-iou-thresh 0 --min-mask-region-area 0 --stability-score-thresh 0.8
```
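It can help to look at what these JSON files contain before moving on. Here is a small inspection sketch, assuming each record stores its `segmentation` as a COCO run-length encoding (which is what `--convert-to-rle` produces) plus a `bbox`, and that `amg.py` names the output file after the input image:

```python
# Inspect the masks written by amg.py; requires pycocotools.
import json

from pycocotools import mask as mask_utils

with open("output/AdobeStock_323574125.json") as f:
    records = json.load(f)

print(len(records), "masks")
for rec in records[:3]:
    rle = dict(rec["segmentation"])
    if isinstance(rle["counts"], str):       # JSON stores the RLE counts as text
        rle["counts"] = rle["counts"].encode("utf-8")
    binary_mask = mask_utils.decode(rle)     # (H, W) uint8 array of 0/1
    print(rec["bbox"], int(binary_mask.sum()), "pixels")
```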
### 2. Save the `.jpg` and `.json` in our `data/examples` folder:
```none
├── Count-Anything
│   ├── data
│   │   ├── examples
│   │   │   ├── AdobeStock_323574125.jpg
│   │   │   ├── AdobeStock_323574125.json
│   │   │   └── ...
```

### 3. Run our Count Anything engine with 1 GPU:
Please use `--text_prompt [OBJ]` to specify the custom class to count.
```bash
python scripts/main.py --out_dir=output --world_size=1 --save_img --text_prompt="shirt" --data_dir=data/examples
```
```bash
python scripts/main.py --out_dir=output --world_size=1 --save_img --text_prompt="person" --data_dir=data/crowd_examples/
```
The results are saved in the `output` folder.

## Acknowledgement
- [Segment Anything](https://segment-anything.com/) provides the SA-1B dataset.
- [HuggingFace](https://huggingface.co/) provides code and pre-trained models.
- [Semantic-Segment-Anything](https://github.com/fudan-zvg/Semantic-Segment-Anything) provides code.
- [CLIP](https://arxiv.org/abs/2103.00020) provides powerful open-vocabulary image classification models.

## Citation
If you find this work useful for your research, please cite our GitHub repo:
```bibtex
@misc{yan2023count,
  title        = {Count Anything},
  author       = {Yan, Liqi},
  howpublished = {\url{https://github.com/ylqi/Count-Anything}},
  year         = {2023}
}
```