Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Official code for "CorrMatch: Label Propagation via Correlation Matching for Semi-Supervised Semantic Segmentation"
- Host: GitHub
- URL: https://github.com/BBBBchan/CorrMatch
- Owner: BBBBchan
- Created: 2023-06-07T07:41:03.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-05-30T12:07:47.000Z (7 months ago)
- Last Synced: 2024-08-02T12:21:59.291Z (5 months ago)
- Language: Python
- Homepage:
- Size: 3.91 MB
- Stars: 110
- Watchers: 2
- Forks: 8
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# CorrMatch
This repository contains the official implementation of the following paper:
> **[CorrMatch: Label Propagation via Correlation Matching for Semi-Supervised Semantic Segmentation](https://arxiv.org/abs/2306.04300)**
> [Boyuan Sun](https://github.com/BBBBchan/CorrMatch), [Yuqi Yang](https://github.com/BBBBchan/CorrMatch), [Le Zhang](http://zhangleuestc.cn/), [Ming-Ming Cheng](https://mmcheng.net/cmm/), [Qibin Hou](https://houqb.github.io/)

🔥 Our paper has been accepted by the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024!
## Overview
CorrMatch mines more high-quality regions from unlabeled images so that unlabeled data can be leveraged more efficiently for consistency regularization.
![avatar](./images/cvpr_pipeline.png "pipeline")

Previous approaches mostly employ complicated training strategies to leverage unlabeled data but overlook the role of correlation maps in modeling the relationships between pairs of locations. Thus, we introduce two label propagation strategies (Pixel Propagation and Region Propagation) with the help of correlation maps.
For technical details, please refer to our full paper on [arXiv](https://arxiv.org/abs/2306.04300).
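The core idea is that pairwise feature correlations can carry label information from confident locations to less confident ones. As a rough, illustrative sketch only (not the repository's implementation; the tensor shapes and the softmax normalization are assumptions), the snippet below computes a correlation map over all spatial locations of a feature map and uses it to propagate per-pixel class scores:

```python
import torch
import torch.nn.functional as F

def correlation_propagation(feats, scores):
    """Illustrative correlation-based propagation (not the official CorrMatch code).

    feats:  (B, C, H, W) backbone feature map (assumed shape)
    scores: (B, K, H, W) per-pixel class scores to propagate (assumed shape)
    """
    b, c, h, w = feats.shape
    k = scores.shape[1]

    # Flatten spatial dimensions and L2-normalize each location's feature vector.
    f = F.normalize(feats.flatten(2), dim=1)            # (B, C, H*W)

    # Correlation between every pair of locations, row-normalized into weights.
    corr = torch.bmm(f.transpose(1, 2), f)              # (B, H*W, H*W)
    corr = torch.softmax(corr, dim=-1)

    # Each location aggregates scores from the locations it correlates with.
    s = scores.flatten(2)                                # (B, K, H*W)
    propagated = torch.bmm(s, corr.transpose(1, 2))      # (B, K, H*W)
    return propagated.view(b, k, h, w)
```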
## Getting Started

### Installation
```bash
git clone git@github.com:BBBBchan/CorrMatch.git
cd CorrMatch
conda create -n corrmatch python=3.9
conda activate corrmatch
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install opencv-python tqdm einops pyyaml
```
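After installation, a quick check (a minimal sketch, not part of the repository) can confirm that the pinned PyTorch build sees your GPU:

```python
import torch
import torchvision

# Expect 1.13.1 / 0.14.1 to match the pinned install above.
print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```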
### Pretrained Backbone:

[ResNet-101](https://drive.google.com/file/d/1Rx0legsMolCWENpfvE2jUScT3ogalMO8/view?usp=sharing)
```bash
mkdir pretrained
```
Please put the pretrained model under the `pretrained` directory.
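To verify the backbone weights are in place, a quick load test can help; the filename below is an assumption, so adjust it to the actual name of the downloaded checkpoint:

```python
import torch

# Hypothetical filename; rename to match the downloaded ResNet-101 checkpoint.
state = torch.load("pretrained/resnet101.pth", map_location="cpu")
num_entries = len(state) if isinstance(state, dict) else "unknown"
print(f"Loaded checkpoint with {num_entries} entries")
```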
### Dataset:

- Pascal VOC 2012: [JPEGImages](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar) | [SegmentationClass](https://drive.google.com/file/d/1ikrDlsai5QSf2GiSUR3f8PZUzyTubcuF/view?usp=sharing)
- Cityscapes: [leftImg8bit](https://www.cityscapes-dataset.com/file-handling/?packageID=3) | [gtFine](https://drive.google.com/file/d/1E_27g9tuHm6baBqcA7jct_jqcGA89QPm/view?usp=sharing)

Please modify the dataset paths in the configuration files.

*The ground-truth mask IDs have already been pre-processed. You may use them directly.*
Your dataset path may look like:
```
├── [Your Pascal Path]
├── JPEGImages
└── SegmentationClass
├── [Your Cityscapes Path]
├── leftImg8bit
└── gtFine
```
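Before editing the configuration files, a small check (a sketch only; the roots are placeholders for your own paths) can confirm the expected folder layout:

```python
from pathlib import Path

# Placeholder roots; point these at your actual dataset locations.
datasets = {
    Path("/path/to/pascal"): ["JPEGImages", "SegmentationClass"],
    Path("/path/to/cityscapes"): ["leftImg8bit", "gtFine"],
}

for root, subdirs in datasets.items():
    for sub in subdirs:
        status = "OK" if (root / sub).is_dir() else "MISSING"
        print(f"{root / sub}: {status}")
```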
## Usage

### Training CorrMatch
```bash
sh tools/train.sh
```
To run on different labeled data partitions or datasets, please modify ``config``, ``labeled_id_path``, ``unlabeled_id_path``, and ``save_path`` in [train.sh](https://github.com/BBBBchan/CorrMatch/blob/main/tools/train.sh).
### Evaluation
```bash
sh tools/val.sh
```
To evaluate your checkpoint, please modify ``checkpoint_path`` in [val.sh](https://github.com/BBBBchan/CorrMatch/blob/main/tools/val.sh).

## Results
### Pascal VOC 2012
Labeled images are sampled from the **original high-quality** training set. Results are obtained with DeepLabv3+ based on ResNet-101, with training size 321 (513).
| Method | 1/16 (92) | 1/8 (183) | 1/4 (366) | 1/2 (732) | Full (1464) |
|:--------------------:|:---------:|:---------:|:--------------:|:---------:|:-----------:|
| SupOnly | 45.1 | 55.3 | 64.8 | 69.7 | 73.5 |
| ST++ | 65.2 | 71.0 | 74.6 | 77.3 | 79.1 |
| PS-MT | 65.8 | 69.6 | 76.6 | 78.4 | 80.0 |
| UniMatch | 75.2 | 77.2 | 78.8 | 79.9 | 81.2 |
| **CorrMatch (Ours)** | **76.4** | **78.5** | **79.4** | **80.6** | **81.8** |

### Cityscapes
Results are obtained with DeepLabv3+ based on ResNet-101.
| Method | 1/16 (186) | 1/8 (372) | 1/4 (744) | 1/2 (1488) |
|:--------------------:|:----------:|:---------:|:-----------:|:----------:|
| SupOnly | 65.7 | 72.5 | 74.4 | 77.8 |
| UniMatch | 76.6 | 77.9 | 79.2 | 79.5 |
| **CorrMatch (Ours)** | **77.3** | **78.5** | **79.4** | **80.4** |

## Citation
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@article{sun2023corrmatch,
title={CorrMatch: Label Propagation via Correlation Matching for Semi-Supervised Semantic Segmentation},
author={Sun, Boyuan and Yang, Yuqi and Zhang, Le and Cheng, Ming-Ming and Hou, Qibin},
journal={IEEE Computer Vision and Pattern Recognition (CVPR)},
year={2024}
}
```

## License
This code is licensed under the [Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/) for non-commercial use only.
Please note that any commercial use of this code requires formal permission prior to use.

## Contact
For technical questions, please contact `sbysbysby123[AT]gmail.com`.
For commercial licensing, please contact `cmm[AT]nankai.edu.cn` or `[email protected]`.
## Acknowledgement
We thank the authors of [UniMatch](https://github.com/LiheYoung/UniMatch), [CPS](https://github.com/charlesCXK/TorchSemiSeg), [CutMix-Seg](https://github.com/Britefury/cutmix-semisup-seg), [DeepLabv3Plus](https://github.com/YudeWang/deeplabv3plus-pytorch), [U2PL](https://github.com/Haochen-Wang409/U2PL), and other excellent works (see this [project](https://github.com/BBBBchan/Awesome-Semi-Supervised-Semantic-Segmentation)) for their amazing projects!