# Post Process for Segmentation Masks
## Usage
Fill in connected pixel regions whose area falls below a specified threshold using breadth-first search (BFS); a sketch of the approach follows the before/after images below.

![mask](./assets/mask.png)
![processed mask](./assets/mask_processed.png)
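
A minimal sketch of the idea, not the exact code in `process.py`: run BFS over each connected region of the label mask, then overwrite any region whose pixel count falls below the threshold. The function name, 4-connectivity, and fill value below are placeholders.

```python
from collections import deque

import numpy as np


def fill_small_regions(mask: np.ndarray, area_threshold: int, fill_value: int = 0) -> np.ndarray:
    """Overwrite connected regions smaller than `area_threshold` pixels.

    Assumes a single-channel (H, W) label mask.
    """
    h, w = mask.shape
    visited = np.zeros((h, w), dtype=bool)
    out = mask.copy()

    for sy in range(h):
        for sx in range(w):
            if visited[sy, sx]:
                continue
            # BFS over the 4-connected region that starts at (sy, sx).
            label = mask[sy, sx]
            queue = deque([(sy, sx)])
            visited[sy, sx] = True
            region = [(sy, sx)]
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx] and mask[ny, nx] == label:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
                        region.append((ny, nx))
            # Regions below the area threshold are filled in.
            if len(region) < area_threshold:
                for y, x in region:
                    out[y, x] = fill_value
    return out
```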

## Initialize
```shell
python -m venv .seg_venv
source .seg_venv/bin/activate
pip install -r requirements.txt
mkdir masks
```
Put the mask images to be processed inside the `masks` directory.

## (Optional) Normalize Mask
- Normalize mask values so the masks are visible when viewed as images
```shell
python norm.py
```
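
A hedged sketch of what a visualization-friendly normalization could look like; the glob pattern, the assumption that masks store small integer class labels, and the `_norm` output suffix are illustrative and not taken from `norm.py` itself.

```python
from pathlib import Path

import numpy as np
from PIL import Image

for path in sorted(Path("masks").glob("*.png")):
    mask = np.array(Image.open(path))
    if mask.max() > 0:
        # Stretch label values into the full 0-255 range so regions become visible.
        mask = (mask.astype(np.float32) / mask.max() * 255).astype(np.uint8)
    Image.fromarray(mask).save(path.with_name(path.stem + "_norm.png"))
```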

## Process
```shell
python process.py
```
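
A hypothetical batch driver illustrating what `python process.py` could do with the BFS fill sketched above; the `bfs_fill` module name, the area threshold, and the `_processed` output naming are assumptions.

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Hypothetical import: the BFS helper sketched in the Usage section above.
from bfs_fill import fill_small_regions

AREA_THRESHOLD = 100  # placeholder minimum region size, in pixels

for path in sorted(Path("masks").glob("*.png")):
    mask = np.array(Image.open(path))
    processed = fill_small_regions(mask, AREA_THRESHOLD)
    Image.fromarray(processed).save(path.with_name(path.stem + "_processed.png"))
```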

## (Optional) CLIPSeg
- Quick start: scoring text prompts against an image with CLIPSeg (adapted from the Hugging Face docs)
```python
from PIL import Image
import requests
from transformers import AutoProcessor, CLIPSegModel

processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegModel.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
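
The snippet above returns image-text similarity scores rather than masks. For per-prompt segmentation logits, the `CLIPSegForImageSegmentation` head from the same `transformers` library can be used; a minimal sketch following the Hugging Face docs, with a placeholder 0.5 binarization threshold:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, CLIPSegForImageSegmentation

processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The image is repeated once per prompt so each prompt gets its own logit map.
prompts = ["a photo of a cat", "a photo of a dog"]
inputs = processor(
    text=prompts, images=[image] * len(prompts), return_tensors="pt", padding="max_length"
)

with torch.no_grad():
    outputs = model(**inputs)

# One low-resolution logit map per prompt; sigmoid maps logits to [0, 1] scores.
masks = torch.sigmoid(outputs.logits)
binary_masks = (masks > 0.5).to(torch.uint8)  # placeholder binarization threshold
```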

## References
- https://github.com/switchablenorms/CelebAMask-HQ
- https://huggingface.co/docs/transformers/model_doc/clipseg
- https://huggingface.co/CIDAS/clipseg-rd64-refined