This is a PyTorch re-implementation of our CVPR 2020 paper "Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation" (https://arxiv.org/abs/1911.10194).
- Host: GitHub
- URL: https://github.com/bowenc0221/panoptic-deeplab
- Owner: bowenc0221
- License: apache-2.0
- Created: 2020-06-09T03:33:15.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2023-06-23T08:36:06.000Z (over 1 year ago)
- Last Synced: 2024-08-08T23:21:38.181Z (3 months ago)
- Topics: bottom-up, cityscapes, cvpr2020, deeplab, detectron2, instance-segmentation, panoptic-segmentation, pytorch, semantic-segmentation, sementation
- Language: Python
- Homepage:
- Size: 2.67 MB
- Stars: 585
- Watchers: 20
- Forks: 117
- Open Issues: 17
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Panoptic-DeepLab (CVPR 2020)
Panoptic-DeepLab is a state-of-the-art bottom-up method for panoptic segmentation,
where the goal is to assign semantic labels (e.g., person, dog, cat and so on) to
every pixel in the input image as well as instance labels (e.g. an id of 1, 2, 3,
etc.) to pixels belonging to thing classes.

![Illustration of Panoptic-DeepLab](/docs/panoptic_deeplab.png)
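To make the task definition above concrete, here is a minimal sketch of one common way to store such a labeling, with each pixel encoded as `semantic_id * label_divisor + instance_id`. The divisor and the toy class ids are illustrative assumptions, not necessarily the exact format used by this repo or by the panoptic datasets.

```python
import numpy as np

# Assumption for illustration: each pixel stores semantic_id * 1000 + instance_id.
LABEL_DIVISOR = 1000

def encode_panoptic(semantic, instance):
    """Combine a semantic-label map and an instance-id map into one panoptic map."""
    return semantic.astype(np.int64) * LABEL_DIVISOR + instance.astype(np.int64)

def decode_panoptic(panoptic):
    """Recover the (semantic, instance) maps from a panoptic map."""
    return panoptic // LABEL_DIVISOR, panoptic % LABEL_DIVISOR

# Toy 2x3 image with made-up class ids: class 11 (a "thing" class) has two
# instances, class 23 (a "stuff" class) has no instance id (0).
semantic = np.array([[11, 11, 23],
                     [11, 11, 23]])
instance = np.array([[1, 2, 0],
                     [1, 2, 0]])
panoptic = encode_panoptic(semantic, instance)
assert (decode_panoptic(panoptic)[0] == semantic).all()
```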
This is the **PyTorch re-implementation** of our CVPR2020 paper based on Detectron2:
[Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation](https://arxiv.org/abs/1911.10194). Segmentation models with DeepLabV3 and DeepLabV3+ are also supported in this repo now!

## News
* [2021/01/25] Found a bug in old config files for COCO experiments (need to change `MAX_SIZE_TRAIN` from 640 to 960 for COCO). Now we have also reproduced COCO results (35.5 PQ)!
* [2020/12/17] Support COCO dataset!
* [2020/12/11] Support DepthwiseSeparableConv2d in the Detectron2 version of Panoptic-DeepLab. Now the Panoptic-DeepLab in Detectron2 is exactly the same as the implementation in our paper, except the post-processing has not been optimized.
* [2020/09/24] I have implemented both [DeepLab](https://github.com/facebookresearch/detectron2/tree/master/projects/DeepLab) and [Panoptic-DeepLab](https://github.com/facebookresearch/detectron2/tree/master/projects/Panoptic-DeepLab) in the official [Detectron2](https://github.com/facebookresearch/detectron2). The implementation in this repo will be deprecated, and I will mainly maintain the Detectron2 version. However, this repo still supports different backbones for the Detectron2 Panoptic-DeepLab.
* [2020/07/21] Check this [Google AI Blog](https://ai.googleblog.com/2020/07/improving-holistic-scene-understanding.html) for Panoptic-DeepLab.
* [2020/07/01] More Cityscapes pre-trained backbones in model zoo (MobileNet and Xception are supported).
* [2020/06/30] Panoptic-DeepLab now supports [HRNet](https://github.com/HRNet); using the HRNet-w48 backbone achieves 63.4% PQ on Cityscapes. Thanks to @PkuRainBow.

## Disclaimer
* The implementation in this repo will be deprecated; please refer to my [Detectron2 implementation](https://github.com/facebookresearch/detectron2/tree/master/projects/Panoptic-DeepLab), which gives slightly better results.
* This is a **re-implementation** of Panoptic-DeepLab and is not guaranteed to reproduce all numbers in the paper; please refer to the original numbers in [Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation](https://arxiv.org/abs/1911.10194) when making comparisons.
* When comparing speed with Panoptic-DeepLab, please refer to the speed in **Table 9** of the [original paper](https://arxiv.org/abs/1911.10194).

## What's New
* We release a detailed [technical report](/docs/tech_report.pdf) with implementation details
and supplementary analysis on Panoptic-DeepLab. In particular, we find center prediction is almost perfect and the bottleneck of
bottom-up methods still lies in semantic segmentation.
* It is powered by the [PyTorch](https://pytorch.org) deep learning framework.
* Can be trained even on 4 1080 Ti GPUs (no need for 32 TPUs!).

## How to use
We suggest using the Detectron2 implementation. You can either use it directly from the [Detectron2 projects](https://github.com/facebookresearch/detectron2/tree/master/projects/Panoptic-DeepLab) or use it from this repo via [tools_d2/README.md](/tools_d2/README.md). The difference is that the official Detectron2 implementation only supports ResNet or ResNeXt as the backbone, while this repo gives you an example of how to use a custom backbone within Detectron2 (see the sketch after the notes below).
Note:
* Please check the usage of this code in [tools_d2/README.md](/tools_d2/README.md).
* If you are still interested in the old code, please check [tools/README.md](/tools/README.md).
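As a rough illustration of what plugging a custom backbone into Detectron2 involves, here is a minimal sketch built on Detectron2's backbone registry. The class name, channel sizes, and feature names are placeholders for illustration, not the backbones shipped in `tools_d2`.

```python
import torch.nn as nn
from detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec

@BACKBONE_REGISTRY.register()
class ToyBackbone(Backbone):
    """Placeholder backbone: a single stride-4 stem producing one feature map."""

    def __init__(self, cfg, input_shape: ShapeSpec):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(input_shape.channels, 64, kernel_size=7, stride=4, padding=3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Detectron2 backbones return a dict mapping feature names to feature maps.
        return {"res2": self.stem(x)}

    def output_shape(self):
        return {"res2": ShapeSpec(channels=64, stride=4)}
```

The backbone is then selected by name in the config, e.g. `cfg.MODEL.BACKBONE.NAME = "ToyBackbone"`, and the decoder config must reference feature names and strides consistent with `output_shape()`.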
## Model Zoo (Detectron2)

### Cityscapes panoptic segmentation

| Method | Backbone | Output resolution | PQ | SQ | RQ | mIoU | AP | download |
|---|---|---|---|---|---|---|---|---|
| Panoptic-DeepLab (DSConv) | R52-DC5 | 1024×2048 | 60.3 | 81.0 | 73.2 | 78.7 | 32.1 | model |
| Panoptic-DeepLab (DSConv) | X65-DC5 | 1024×2048 | 61.4 | 81.4 | 74.3 | 79.8 | 32.6 | model |
| Panoptic-DeepLab (DSConv) | HRNet-48 | 1024×2048 | 63.4 | 81.9 | 76.4 | 80.6 | 36.2 | model |

Note:
- This implementation uses DepthwiseSeparableConv2d (DSConv) in ASPP and decoder, which is the same as in the original paper (a minimal sketch of this block follows these notes).
- This implementation does not include optimized post-processing code needed for deployment. Post-processing the network outputs now takes more time than the network itself.
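As background on the DSConv block mentioned above: a depthwise separable convolution factors a standard convolution into a per-channel (depthwise) convolution followed by a 1×1 (pointwise) convolution, which cuts parameters and FLOPs. Below is a minimal PyTorch sketch of the idea, not the exact layer implementation used in Detectron2.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv2d(nn.Module):
    """Sketch of a depthwise separable conv: depthwise 3x3 followed by pointwise 1x1."""

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # groups=in_channels makes the first conv operate on each channel independently.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=padding, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 256, 64, 64)
print(DepthwiseSeparableConv2d(256, 256)(x).shape)  # torch.Size([1, 256, 64, 64])
```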
### COCO panoptic segmentation

| Method | Backbone | Output resolution | PQ | SQ | RQ | Box AP | Mask AP | download |
|---|---|---|---|---|---|---|---|---|
| Panoptic-DeepLab (DSConv) | R52-DC5 | 640×640 | 35.5 | 77.3 | 44.7 | 18.6 | 19.7 | model |
| Panoptic-DeepLab (DSConv) | X65-DC5 | 640×640 | - | - | - | - | - | model |
| Panoptic-DeepLab (DSConv) | HRNet-48 | 640×640 | - | - | - | - | - | model |

Note:
- This implementation uses DepthwiseSeparableConv2d (DSConv) in ASPP and decoder, which is the same as in the original paper.
- This implementation does not include optimized post-processing code needed for deployment. Post-processing the network outputs now takes more time than the network itself (a simplified sketch of the grouping step is shown below).
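As a rough illustration of what this post-processing does, the sketch below implements a toy version of the center-based grouping described in the paper: center peaks are kept with a max-pooling NMS, and every thing pixel is assigned to the peak closest to its location plus the predicted offset. It is a simplified illustration under those assumptions, not this repo's implementation, and it omits the final merge with the semantic prediction.

```python
import torch
import torch.nn.functional as F

def group_instances(center_heatmap, offsets, thing_mask, threshold=0.1, top_k=200):
    """Toy center-based grouping: assign each 'thing' pixel to the predicted
    instance center nearest to (pixel location + predicted offset).

    center_heatmap: (H, W) center confidence
    offsets:        (2, H, W) predicted (dy, dx) offsets to the instance center
    thing_mask:     (H, W) bool mask of pixels predicted as 'thing' classes
    Returns an (H, W) long tensor of instance ids (0 means stuff / no instance).
    """
    h, w = center_heatmap.shape

    # Keep local maxima of the center heatmap (cheap NMS via max pooling),
    # then retain at most top_k peaks above the confidence threshold.
    pooled = F.max_pool2d(center_heatmap[None, None], kernel_size=7, stride=1, padding=3)[0, 0]
    peaks = (center_heatmap == pooled) & (center_heatmap > threshold)
    coords = peaks.nonzero().float()                       # (N, 2) as (y, x)
    if coords.numel() == 0:
        return torch.zeros(h, w, dtype=torch.long)
    scores = center_heatmap[peaks]
    centers = coords[scores.topk(min(top_k, scores.numel())).indices]  # (K, 2)

    # Every pixel "votes" for the location pixel + offset and is grouped with
    # the closest surviving center; stuff pixels keep id 0.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    voted = torch.stack([ys + offsets[0], xs + offsets[1]], dim=-1)    # (H, W, 2)
    dists = torch.cdist(voted.reshape(-1, 2), centers)                 # (H*W, K)
    instance_id = dists.argmin(dim=1).reshape(h, w) + 1
    instance_id[~thing_mask] = 0
    return instance_id
```

In the full pipeline, each resulting instance additionally takes its semantic label by majority vote over the semantic prediction inside its mask.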
## Citing Panoptic-DeepLab

If you find this code helpful in your research or wish to refer to the baseline results, please use the following BibTeX entry.
```BibTeX
@inproceedings{cheng2020panoptic,
  title={Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation},
  author={Cheng, Bowen and Collins, Maxwell D and Zhu, Yukun and Liu, Ting and Huang, Thomas S and Adam, Hartwig and Chen, Liang-Chieh},
  booktitle={CVPR},
  year={2020}
}

@inproceedings{cheng2019panoptic,
  title={Panoptic-DeepLab},
  author={Cheng, Bowen and Collins, Maxwell D and Zhu, Yukun and Liu, Ting and Huang, Thomas S and Adam, Hartwig and Chen, Liang-Chieh},
  booktitle={ICCV COCO + Mapillary Joint Recognition Challenge Workshop},
  year={2019}
}
```

If you use the Xception backbone, please consider citing
```BibTeX
@inproceedings{deeplabv3plus2018,
  title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
  author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
  booktitle={ECCV},
  year={2018}
}

@inproceedings{qi2017deformable,
  title={Deformable convolutional networks--coco detection and segmentation challenge 2017 entry},
  author={Qi, Haozhi and Zhang, Zheng and Xiao, Bin and Hu, Han and Cheng, Bowen and Wei, Yichen and Dai, Jifeng},
  booktitle={ICCV COCO Challenge Workshop},
  year={2017}
}
```

If you use the HRNet backbone, please consider citing
```BibTeX
@article{WangSCJDZLMTWLX19,
  title={Deep High-Resolution Representation Learning for Visual Recognition},
  author={Jingdong Wang and Ke Sun and Tianheng Cheng and Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao},
  journal={TPAMI},
  year={2019}
}
```

## Acknowledgements
We have used utility functions from other wonderful open-source projects; we would especially like to thank the authors of:
- [DeepLab](https://github.com/tensorflow/models/tree/master/research/deeplab)
- [Detectron2](https://github.com/facebookresearch/detectron2)
- [TorchVision](https://github.com/pytorch/vision)

## Contact
[Bowen Cheng](https://bowenc0221.github.io/) (bcheng9 AT illinois DOT edu)