## Reviving Iterative Training with Mask Guidance for Interactive Segmentation

*(demo GIFs, "Open In Colab" badge, and MIT License badge)*

This repository provides the source code for training and testing state-of-the-art click-based interactive segmentation models with the official PyTorch implementation of the following paper:

> **Reviving Iterative Training with Mask Guidance for Interactive Segmentation**

> [Konstantin Sofiiuk](https://github.com/ksofiyuk), [Ilia Petrov](https://github.com/ptrvilya), [Anton Konushin](https://scholar.google.com/citations?user=ZT_k-wMAAAAJ)

> Samsung Research

> https://arxiv.org/abs/2102.06583
>
> **Abstract:** *Recent works on click-based interactive segmentation have demonstrated state-of-the-art results by
> using various inference-time optimization schemes. These methods are considerably more computationally expensive
> compared to feedforward approaches, as they require performing backward passes through a network during inference and
> are hard to deploy on mobile frameworks that usually support only forward passes. In this paper, we extensively
> evaluate various design choices for interactive segmentation and discover that new state-of-the-art results can be
> obtained without any additional optimization schemes. Thus, we propose a simple feedforward model for click-based
> interactive segmentation that employs the segmentation masks from previous steps. It allows not only to segment an
> entirely new object, but also to start with an external mask and correct it. When analyzing the performance of models
> trained on different datasets, we observe that the choice of a training dataset greatly impacts the quality of
> interactive segmentation. We find that the models trained on a combination of COCO and LVIS with diverse and
> high-quality annotations show performance superior to all existing models.*

## Setting up an environment

This framework is built using Python 3.6 and relies on PyTorch 1.4.0+. The following command installs all the
necessary packages:

```.bash
pip3 install -r requirements.txt
```

You can also use our [Dockerfile](./Dockerfile) to build a container with the configured environment.

If you want to run training or testing, you must configure the paths to the datasets in [config.yml](config.yml).
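
As an optional sanity check, the short sketch below (not part of the repository; it assumes PyYAML is installed) loads config.yml and reports which of its string-valued entries point to paths that do not exist yet:

```python
# Optional helper (assumption: PyYAML is available) that loads config.yml
# and flags configured paths that do not exist on this machine yet.
from pathlib import Path

import yaml

with open("config.yml") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    # Treat every top-level string value as a potential filesystem path.
    if isinstance(value, str):
        status = "ok" if Path(value).expanduser().exists() else "MISSING"
        print(f"{key:30s} {status:8s} {value}")
```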

## Interactive Segmentation Demo


drawing

The GUI is based on Tkinter, Python's standard interface to the Tk toolkit. You can try our interactive demo with any of the
[provided models](#pretrained-models). Our scripts automatically detect the architecture of the loaded model; just
specify the path to the corresponding checkpoint.

Examples of the script usage:

```.bash
# This command runs interactive demo with HRNet18 ITER-M model from cfg.INTERACTIVE_MODELS_PATH on GPU with id=0
# --checkpoint can be relative to cfg.INTERACTIVE_MODELS_PATH or absolute path to the checkpoint
python3 demo.py --checkpoint=hrnet18_cocolvis_itermask_3p --gpu=0

# This command runs interactive demo with HRNet18 ITER-M model from /home/demo/isegm/weights/
# If you do not have a lot of GPU memory, you can reduce --limit-longest-size (default=800)
python3 demo.py --checkpoint=/home/demo/isegm/weights/hrnet18_cocolvis_itermask_3p --limit-longest-size=400

# You can also run the demo in CPU-only mode
python3 demo.py --checkpoint=hrnet18_cocolvis_itermask_3p --cpu
```

**Running the demo in Docker**

```.bash
# Allow connections to the X server so the GUI can be displayed
xhost +

docker run -v "$PWD":/tmp/ \
           -v /tmp/.X11-unix:/tmp/.X11-unix \
           -e DISPLAY=$DISPLAY <id-or-tag-docker-built-image> \
           python3 demo.py --checkpoint resnet34_dh128_sbd --cpu
```

**Controls**:

| Key                             | Description                    |
| ------------------------------- | ------------------------------ |
| Left Mouse Button               | Place a positive click         |
| Right Mouse Button              | Place a negative click         |
| Scroll Wheel                    | Zoom the image in and out      |
| Right Mouse Button + Move Mouse | Move the image                 |
| Space                           | Finish the current object mask |

**Initializing the ITER-M models with an external segmentation mask**

According to our paper, ITER-M models take an image, encoded user input, and the mask from the previous step as their input. Moreover, a user can initialize the model with an external mask before placing any clicks and correct this mask using the same interface. Our models handle this situation successfully and make it possible to refine such a mask.

To initialize any ITER-M model with an external mask, use the "Load mask" button in the menu bar.
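
As a rough, hypothetical illustration of the idea (the tensor layout and function below are assumptions for clarity, not the repository's actual interface), a previous-step or external mask can be thought of as extra input channels stacked together with the encoded clicks:

```python
import torch

def build_model_input(image, prev_mask, pos_clicks_map, neg_clicks_map):
    """Stack an RGB image with click maps and a previous/external mask.

    All inputs are float tensors in [0, 1]:
      image:          (3, H, W)
      prev_mask:      (H, W)  previous prediction or an externally loaded mask
      pos_clicks_map: (H, W)  encoded positive clicks (e.g. disks)
      neg_clicks_map: (H, W)  encoded negative clicks
    """
    extra = torch.stack([prev_mask, pos_clicks_map, neg_clicks_map], dim=0)
    return torch.cat([image, extra], dim=0).unsqueeze(0)  # (1, 6, H, W)

# Example: start from an external binary mask before any clicks are placed.
h, w = 480, 640
image = torch.rand(3, h, w)
external_mask = (torch.rand(h, w) > 0.5).float()
empty = torch.zeros(h, w)
x = build_model_input(image, external_mask, empty, empty)
print(x.shape)  # torch.Size([1, 6, 480, 640])
```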

**Interactive segmentation options**

- **ZoomIn** (can be turned on/off using the checkbox; see the crop sketch after this list)
  - *Skip clicks* - the number of clicks to skip before using ZoomIn.
  - *Target size* - the ZoomIn crop is resized so that its longer side matches this value (increase it for large objects).
  - *Expand ratio* - the object bounding box is rescaled by this ratio before cropping.
  - *Fixed crop* - the ZoomIn crop is resized to (Target size, Target size).
- **BRS parameters** (the BRS type can be changed using the dropdown menu)
  - *Network clicks* - the number of initial clicks included in the network's input. Subsequent clicks are processed only with BRS (NoBRS ignores this option).
  - *L-BFGS-B max iterations* - the maximum number of function evaluations per optimization step in BRS (increase it for better accuracy at the cost of longer computation per click).
- **Visualisation parameters**
  - *Prediction threshold* slider adjusts the threshold for binarizing the probability map of the current object.
  - *Alpha blending coefficient* slider adjusts the intensity of all predicted masks.
  - *Visualisation click radius* slider adjusts the size of the red and green dots depicting clicks.
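
To make the ZoomIn parameters above concrete, here is a small, purely illustrative sketch (the function and its defaults are assumptions, not the demo's actual code) of how a crop could be derived from the object's bounding box using an expand ratio and a target size:

```python
def zoomin_crop(bbox, image_size, expand_ratio=1.4, target_size=480):
    """Compute an expanded crop around a bbox and the scale used to resize it.

    bbox:       (x0, y0, x1, y1) of the current object mask
    image_size: (width, height) of the full image
    Returns the clipped crop box and the factor that makes its longer side
    equal to target_size.
    """
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w = (x1 - x0) * expand_ratio
    h = (y1 - y0) * expand_ratio

    # Clip the expanded box to the image borders.
    crop = (max(0, cx - w / 2), max(0, cy - h / 2),
            min(image_size[0], cx + w / 2), min(image_size[1], cy + h / 2))

    longer_side = max(crop[2] - crop[0], crop[3] - crop[1])
    scale = target_size / longer_side
    return crop, scale

crop, scale = zoomin_crop((100, 150, 300, 400), image_size=(640, 480))
print(crop, round(scale, 2))
```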


## Datasets

We train all our models on SBD and COCO+LVIS and evaluate them on GrabCut, Berkeley, DAVIS, SBD, and Pascal VOC. We also provide links to two additional datasets, ADE20k and OpenImages, which are used in the ablation study.

| Dataset    | Description                                                                         | Download Link                                                                  |
|------------|-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------:|
| ADE20k     | 22k images with 434k instances (total)                                              | [official site][ADE20k]                                                         |
| OpenImages | 944k images with 2.6M instances (total)                                             | [official site][OpenImages]                                                     |
| MS COCO    | 118k images with 1.2M instances (train)                                             | [official site][MSCOCO]                                                         |
| LVIS v1.0  | 100k images with 1.2M instances (total)                                             | [official site][LVIS]                                                           |
| COCO+LVIS* | 99k images with 1.5M instances (train)                                              | [original LVIS images][LVIS] + [our combined annotations][COCOLVIS_annotation]  |
| SBD        | 8498 images with 20172 instances (train)<br>2857 images with 6671 instances (test)  | [official site][SBD]                                                            |
| GrabCut    | 50 images with one object each (test)                                               | [GrabCut.zip (11 MB)][GrabCut]                                                  |
| Berkeley   | 96 images with 100 instances (test)                                                 | [Berkeley.zip (7 MB)][Berkeley]                                                 |
| DAVIS      | 345 images with one object each (test)                                              | [DAVIS.zip (43 MB)][DAVIS]                                                      |
| Pascal VOC | 1449 images with 3417 instances (validation)                                        | [official site][PascalVOC]                                                      |
| COCO_MVal  | 800 images with 800 instances (test)                                                | [COCO_MVal.zip (127 MB)][COCO_MVal]                                             |

[ADE20k]: http://sceneparsing.csail.mit.edu/
[OpenImages]: https://storage.googleapis.com/openimages/web/download.html
[MSCOCO]: https://cocodataset.org/#download
[LVIS]: https://www.lvisdataset.org/dataset
[SBD]: http://home.bharathh.info/pubs/codes/SBD/download.html
[GrabCut]: https://github.com/saic-vul/fbrs_interactive_segmentation/releases/download/v1.0/GrabCut.zip
[Berkeley]: https://github.com/saic-vul/fbrs_interactive_segmentation/releases/download/v1.0/Berkeley.zip
[DAVIS]: https://github.com/saic-vul/fbrs_interactive_segmentation/releases/download/v1.0/DAVIS.zip
[PascalVOC]: http://host.robots.ox.ac.uk/pascal/VOC/
[COCOLVIS_annotation]: https://github.com/saic-vul/ritm_interactive_segmentation/releases/download/v1.0/cocolvis_annotation.tar.gz
[COCO_MVal]: https://github.com/saic-vul/fbrs_interactive_segmentation/releases/download/v1.0/COCO_MVal.zip

Don't forget to change the paths to the datasets in [config.yml](config.yml) after downloading and unpacking.

(*) To prepare COCO+LVIS, download the original LVIS v1.0, then download our pre-processed annotations
(obtained by combining the COCO and LVIS datasets) and unpack them into the LVIS v1.0 folder.
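
For illustration, assuming the archive was downloaded next to your LVIS v1.0 folder, the annotations could be unpacked with a few lines of Python (the paths are placeholders):

```python
# Illustrative only: unpack the combined COCO+LVIS annotations into the
# same folder that contains LVIS v1.0 (adjust both paths to your setup).
import tarfile

lvis_root = "/path/to/LVIS"                      # folder with LVIS v1.0
archive = "/path/to/cocolvis_annotation.tar.gz"  # downloaded annotations

with tarfile.open(archive) as tar:
    tar.extractall(lvis_root)
```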

## Testing

### Pretrained models
We provide pretrained models with different backbones for interactive segmentation.

You can find model weights and evaluation results in the table below:



| Train Dataset | Model                   | GrabCut<br>NoC@85 | GrabCut<br>NoC@90 | Berkeley<br>NoC@90 | SBD<br>NoC@85 | SBD<br>NoC@90 | DAVIS<br>NoC@85 | DAVIS<br>NoC@90 | Pascal VOC<br>NoC@85 | COCO MVal<br>NoC@90 |
|---------------|-------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| SBD           | HRNet18 IT-M (38.8 MB)  | 1.76 | 2.04 | 3.22 | 3.39 | 5.43 | 4.94 | 6.71 | 2.51 | 4.39 |
| COCO+LVIS     | HRNet18 (38.8 MB)       | 1.54 | 1.70 | 2.48 | 4.26 | 6.86 | 4.79 | 6.00 | 2.59 | 3.58 |
| COCO+LVIS     | HRNet18s IT-M (16.5 MB) | 1.54 | 1.68 | 2.60 | 4.04 | 6.48 | 4.70 | 5.98 | 2.57 | 3.33 |
| COCO+LVIS     | HRNet18 IT-M (38.8 MB)  | 1.42 | 1.54 | 2.26 | 3.80 | 6.06 | 4.36 | 5.74 | 2.28 | 2.98 |
| COCO+LVIS     | HRNet32 IT-M (119 MB)   | 1.46 | 1.56 | 2.10 | 3.59 | 5.71 | 4.11 | 5.34 | 2.57 | 2.97 |
### Evaluation

We provide a script to test all the presented models in all possible configurations on GrabCut, Berkeley, DAVIS,
Pascal VOC, and SBD. To test a model, download its weights and put them in the `./weights` folder (you can
change this path in [config.yml](config.yml); see the `INTERACTIVE_MODELS_PATH` variable). To test any of our models,
just specify the path to the corresponding checkpoint; our scripts automatically detect the architecture of the loaded model.

The following command runs the NoC evaluation on all test datasets (other options are displayed using '-h'):

```.bash
python3 scripts/evaluate_model.py <brs-mode> --checkpoint=<checkpoint-name>
```

Examples of the script usage:
```.bash
# This command evaluates the HRNetV2-W18-C+OCR ITER-M model in NoBRS mode on all datasets.
python3 scripts/evaluate_model.py NoBRS --checkpoint=hrnet18_cocolvis_itermask_3p

# This command evaluates the HRNet-W18-C-Small-v2+OCR ITER-M model in f-BRS-B mode on all datasets.
python3 scripts/evaluate_model.py f-BRS-B --checkpoint=hrnet18s_cocolvis_itermask_3p

# This command evaluates the HRNetV2-W18-C+OCR ITER-M model in NoBRS mode on the GrabCut and Berkeley datasets.
python3 scripts/evaluate_model.py NoBRS --checkpoint=hrnet18_cocolvis_itermask_3p --datasets=GrabCut,Berkeley
```

### Jupyter notebook

You can also interactively experiment with our models using the [test_any_model.ipynb](./notebooks/test_any_model.ipynb) Jupyter notebook.

## Training

We provide the scripts for training our models; the commands below use the COCO+LVIS model configs. You can start training with the following commands:
```.bash
# ResNet-34 non-iterative baseline model
python3 train.py models/noniterative_baselines/r34_dh128_cocolvis.py --gpus=0 --workers=4 --exp-name=first-try

# HRNet-W18-C-Small-v2+OCR ITER-M model
python3 train.py models/iter_mask/hrnet18s_cocolvis_itermask_3p.py --gpus=0 --workers=4 --exp-name=first-try

# HRNetV2-W18-C+OCR ITER-M model
python3 train.py models/iter_mask/hrnet18_cocolvis_itermask_3p.py --gpus=0,1 --workers=6 --exp-name=first-try

# HRNetV2-W32-C+OCR ITER-M model
python3 train.py models/iter_mask/hrnet32_cocolvis_itermask_3p.py --gpus=0,1,2,3 --workers=12 --exp-name=first-try
```

For each experiment, a separate folder is created in `./experiments` with TensorBoard logs, text logs,
visualizations, and checkpoints. You can specify another path in [config.yml](config.yml) (see the `EXPS_PATH`
variable).

Please note that we trained ResNet-34 and HRNet-18s on 1 GPU, HRNet-18 on 2 GPUs, and HRNet-32 on 4 GPUs
(we used Nvidia Tesla P40 GPUs for training). To train on a different GPU configuration, adjust the batch size using the
command-line argument `--batch-size` or change the default value in the model script.

We used the pre-trained HRNetV2 models from [the official repository](https://github.com/HRNet/HRNet-Image-Classification).
If you want to train interactive segmentation with these models, you need to download the weights and specify the paths to
them in [config.yml](config.yml).

## License

The code is released under the MIT License. It is a short, permissive software license: you can do almost anything with the code as long as you include the original copyright and license notice in any copy of the software/source.

## Citation

If you find this work useful for your research, please cite our papers:
```bibtex
@inproceedings{ritm2022,
  title={Reviving iterative training with mask guidance for interactive segmentation},
  author={Sofiiuk, Konstantin and Petrov, Ilya A and Konushin, Anton},
  booktitle={2022 IEEE International Conference on Image Processing (ICIP)},
  pages={3141--3145},
  year={2022},
  organization={IEEE}
}

@inproceedings{fbrs2020,
  title={f-brs: Rethinking backpropagating refinement for interactive segmentation},
  author={Sofiiuk, Konstantin and Petrov, Ilia and Barinova, Olga and Konushin, Anton},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={8623--8632},
  year={2020}
}
```