{"id":13637749,"url":"https://github.com/LiheYoung/ST-PlusPlus","last_synced_at":"2025-04-19T17:31:39.378Z","repository":{"id":42158858,"uuid":"322855528","full_name":"LiheYoung/ST-PlusPlus","owner":"LiheYoung","description":"[CVPR 2022] ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation","archived":false,"fork":false,"pushed_at":"2023-03-03T07:12:50.000Z","size":3960,"stargazers_count":229,"open_issues_count":4,"forks_count":33,"subscribers_count":6,"default_branch":"master","last_synced_at":"2024-08-03T01:11:49.383Z","etag":null,"topics":["cvpr2022","self-training","semi-supervised-learning","semi-supervised-segmentation"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2106.05095","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/LiheYoung.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2020-12-19T13:36:28.000Z","updated_at":"2024-07-29T03:34:02.000Z","dependencies_parsed_at":"2024-01-14T08:55:26.218Z","dependency_job_id":"c9afd1f3-cb94-46c9-875f-c36cd77bd24d","html_url":"https://github.com/LiheYoung/ST-PlusPlus","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LiheYoung%2FST-PlusPlus","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LiheYoung%2FST-PlusPlus/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LiheYoung%2FST-PlusPlus/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LiheYoung%2FST-PlusPlus/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/LiheYoung","download_url":"https://codeload.github.com/LiheYoung/ST-PlusPlus/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":223804937,"owners_count":17205824,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cvpr2022","self-training","semi-supervised-learning","semi-supervised-segmentation"],"created_at":"2024-08-02T01:00:28.959Z","updated_at":"2024-11-09T08:30:18.925Z","avatar_url":"https://github.com/LiheYoung.png","language":"Python","readme":"# ST++\n\nThis is the official PyTorch implementation of our CVPR 2022 paper:\n\n\u003e [**ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation**](https://arxiv.org/abs/2106.05095)       \n\u003e Lihe Yang, Wei Zhuo, Lei Qi, Yinghuan Shi, Yang Gao        \n\u003e *In Conference on Computer Vision and Pattern Recognition (CVPR), 2022*\n\nWe have another simple yet stronger end-to-end framework **UniMatch** accepted by CVPR 2023:\n\n\u003e **[Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic 
#### Dataset

[Pascal JPEGImages](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar) | [Pascal SegmentationClass](https://drive.google.com/file/d/1ikrDlsai5QSf2GiSUR3f8PZUzyTubcuF/view?usp=sharing) | [Cityscapes leftImg8bit](https://www.cityscapes-dataset.com/file-handling/?packageID=3) | [Cityscapes gtFine](https://drive.google.com/file/d/1E_27g9tuHm6baBqcA7jct_jqcGA89QPm/view?usp=sharing)

#### File Organization

```
├── ./pretrained
    ├── resnet50.pth
    ├── resnet101.pth
    └── deeplabv2_resnet101_coco_pretrained.pth

├── [Your Pascal Path]
    ├── JPEGImages
    └── SegmentationClass

├── [Your Cityscapes Path]
    ├── leftImg8bit
    └── gtFine
```

### Training and Testing

```
export semi_setting='pascal/1_8/split_0'

CUDA_VISIBLE_DEVICES=0,1 python -W ignore main.py \
  --dataset pascal --data-root [Your Pascal Path] \
  --batch-size 16 --backbone resnet50 --model deeplabv3plus \
  --labeled-id-path dataset/splits/$semi_setting/labeled.txt \
  --unlabeled-id-path dataset/splits/$semi_setting/unlabeled.txt \
  --pseudo-mask-path outdir/pseudo_masks/$semi_setting \
  --save-path outdir/models/$semi_setting
```

This script runs our ST framework. To run ST++, add `--plus --reliable-id-path outdir/reliable_ids/$semi_setting` to the same command, as shown below.
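For reference, the complete ST++ invocation for the same setting is the ST command above with the two extra flags appended:

```
export semi_setting='pascal/1_8/split_0'

CUDA_VISIBLE_DEVICES=0,1 python -W ignore main.py \
  --dataset pascal --data-root [Your Pascal Path] \
  --batch-size 16 --backbone resnet50 --model deeplabv3plus \
  --labeled-id-path dataset/splits/$semi_setting/labeled.txt \
  --unlabeled-id-path dataset/splits/$semi_setting/unlabeled.txt \
  --pseudo-mask-path outdir/pseudo_masks/$semi_setting \
  --save-path outdir/models/$semi_setting \
  --plus --reliable-id-path outdir/reliable_ids/$semi_setting
```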
## Acknowledgement

The DeepLabv2 MS COCO pre-trained model is borrowed and converted from **AdvSemiSeg**.
The image partitions are borrowed from **Context-Aware-Consistency** and **PseudoSeg**.
Some of the training hyper-parameters and network structures are adapted from **PyTorch-Encoding**.
The strong data augmentations are borrowed from **MoCo v2** and **PseudoSeg**.

+ AdvSemiSeg: [https://github.com/hfslyc/AdvSemiSeg](https://github.com/hfslyc/AdvSemiSeg)
+ Context-Aware-Consistency: [https://github.com/dvlab-research/Context-Aware-Consistency](https://github.com/dvlab-research/Context-Aware-Consistency)
+ PseudoSeg: [https://github.com/googleinterns/wss](https://github.com/googleinterns/wss)
+ PyTorch-Encoding: [https://github.com/zhanghang1989/PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding)
+ MoCo: [https://github.com/facebookresearch/moco](https://github.com/facebookresearch/moco)
+ OpenSelfSup: [https://github.com/open-mmlab/OpenSelfSup](https://github.com/open-mmlab/OpenSelfSup)

Thanks a lot for their great work!

## Citation

If you find this project useful, please consider citing:

```bibtex
@inproceedings{st++,
  title={ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation},
  author={Yang, Lihe and Zhuo, Wei and Qi, Lei and Shi, Yinghuan and Gao, Yang},
  booktitle={CVPR},
  year={2022}
}

@inproceedings{unimatch,
  title={Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation},
  author={Yang, Lihe and Qi, Lei and Feng, Litong and Zhang, Wayne and Shi, Yinghuan},
  booktitle={CVPR},
  year={2023}
}
```