https://github.com/luodian/madan
Pytorch Code release for our NeurIPS paper "Multi-source Domain Adaptation for Semantic Segmentation"
- Host: GitHub
- URL: https://github.com/luodian/madan
- Owner: Luodian
- License: MIT
- Created: 2019-06-23T10:07:58.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2020-08-29T08:36:06.000Z (over 5 years ago)
- Last Synced: 2025-02-27T11:36:07.538Z (9 months ago)
- Language: Python
- Homepage: https://arxiv.org/abs/1910.12181
- Size: 114 KB
- Stars: 172
- Watchers: 8
- Forks: 28
- Open Issues: 7
Metadata Files:
- Readme: README.md
- License: LICENSE
# MADAN
PyTorch code for [Multi-source Domain Adaptation for Semantic Segmentation](https://arxiv.org/abs/1910.12181)
If you use this code in your research, please consider citing:
```
@InProceedings{zhao2019madan,
title = {Multi-source Domain Adaptation for Semantic Segmentation},
author = {Zhao, Sicheng and Li, Bo and Yue, Xiangyu and Gu, Yang and Xu, Pengfei and Hu, Runbo and Chai, Hua and Keutzer, Kurt},
booktitle = {Advances in Neural Information Processing Systems},
year = {2019}
}
```
## Quick Look
Our multi-source domain adaptation builds on [CyCADA](https://github.com/jhoffman/cycada_release) and [CycleGAN](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix). Since we focus on the semantic segmentation task, we remove the digit classification part of CyCADA.
We add the following modules, which yield substantial improvements:
1. Dynamic Semantic Consistency Module
2. Adversarial Aggregation Module
1. Sub-domain Aggregation Discriminator
2. Cross-domain Cycle Discriminator
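To give a feel for the first module: the dynamic semantic consistency loss penalizes disagreement between the segmenter's predictions on a source image and on its translated ("source as target") counterpart, so the image translation cannot drift semantically. A minimal sketch in PyTorch (the function name and tensor shapes are illustrative assumptions, not the repo's actual API):

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(logits_src: torch.Tensor,
                              logits_translated: torch.Tensor) -> torch.Tensor:
    """KL divergence between the segmenter's soft predictions on a source
    image and on its translated counterpart.
    Both inputs are raw logits of shape (N, C, H, W)."""
    log_p = F.log_softmax(logits_translated, dim=1)  # predictions on translated image
    q = F.softmax(logits_src, dim=1)                 # predictions on original image
    return F.kl_div(log_p, q, reduction="batchmean")
```

In MADAN the segmenter producing these predictions is itself updated during training (hence "dynamic"), rather than kept frozen as in CyCADA.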
We also implement [MDAN](https://openreview.net/pdf?id=ryDNZZZAW) for semantic segmentation in PyTorch as a baseline for comparison.
## Overall Structure

## Setup
Check out this repo:
```bash
git clone https://github.com/pikachusocute/MADAN.git
```
Install the Python 3 requirements:
```bash
pip3 install -r requirements.txt
```
## Dynamic Adversarial Image Generation
Following CyCADA, the first step is to train the image adaptation module, which transfers source images (GTA5, SYNTHIA, or multi-source) into the "source as target" style.

In the following, we refer to the image adaptation module from GTA5 to Cityscapes as GTA->Cityscapes.
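The image adaptation module is CycleGAN-based, and its core constraint is cycle consistency: translating an image to the target style and back should reconstruct the original. A minimal sketch (function name is illustrative, not the repo's API):

```python
import torch

def cycle_consistency_loss(real: torch.Tensor,
                           reconstructed: torch.Tensor) -> torch.Tensor:
    """L1 penalty enforcing G_target2source(G_source2target(x)) ~ x,
    so the translation preserves image content while changing style."""
    return torch.mean(torch.abs(real - reconstructed))
```

In MADAN this constraint is combined with the semantic consistency and aggregation discriminator losses described above.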
#### GTA->Cityscapes
```bash
cd scripts/CycleGAN
bash cyclegan_gta2cityscapes.sh
```
During training, snapshot files are stored in `cyclegan/checkpoints/[EXP_NAME]`.
After about 20 epochs, there will be a file `20_net_G_A.pth` in that folder.
Then run the test process:
```bash
bash scripts/CycleGAN/test_templates.sh [EXP_NAME] 20 cycle_gan_semantic_fcn gta5_cityscapes
```
In the multi-source case, both `20_net_G_A_1.pth` and `20_net_G_A_2.pth` exist, so we use a different script for the test process.

```bash
bash scripts/CycleGAN/test_templates_cycle.sh [EXP_NAME] 20 test synthia_cityscapes gta5_cityscapes
```
The new dataset will be generated at `~/cyclegan/results/[EXP_NAME]/train_20`.
After obtaining this stylized source dataset, we train a segmenter on it.
## Pixel Level Adaptation
In this step, we train a segmenter on the stylized dataset.
```bash
ln -s ~/cyclegan/results/[EXP_NAME]/train_20 ~/data/cyclegta5/[EXP_NAME]_TRAIN_60
```
Then set `dataflag = [EXP_NAME]_TRAIN_60` so the dataset paths can be resolved, and follow the instructions to train the segmenter for pixel-level adaptation.
```bash
bash scripts/FCN/train_fcn8s_cyclesgta5_DSC.sh
```
## Feature Level Adaptation
For adaptation, we use
```bash
bash scripts/ADDA/adda_cyclegta2cs_score.sh
```
Make sure the desired `src`, `tgt`, and `datadir` are set beforehand. This step loads the `base_model` trained on the synthetic dataset and performs feature-level adaptation to the real-scene dataset.
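Feature-level adaptation here follows the ADDA recipe: a discriminator learns to tell source features from target features, while the target encoder is trained to fool it. A minimal sketch of the two losses (class and function names are illustrative assumptions, not the repo's API):

```python
import torch
import torch.nn as nn

class FeatureDiscriminator(nn.Module):
    """Small MLP that predicts whether a feature vector comes from the
    (stylized) source domain or the target domain."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # 2 classes: source vs. target
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)

def adversarial_feature_losses(disc, src_feats, tgt_feats):
    """Discriminator loss (tell the domains apart) and encoder loss
    (make target features indistinguishable from source features)."""
    ce = nn.CrossEntropyLoss()
    src_labels = torch.zeros(src_feats.size(0), dtype=torch.long)
    tgt_labels = torch.ones(tgt_feats.size(0), dtype=torch.long)
    d_loss = ce(disc(src_feats), src_labels) + ce(disc(tgt_feats), tgt_labels)
    # Encoder update: fool the discriminator into labeling target as source.
    g_loss = ce(disc(tgt_feats), torch.zeros(tgt_feats.size(0), dtype=torch.long))
    return d_loss, g_loss
```

In practice the two losses are minimized in alternation: the discriminator with `d_loss`, then the target encoder with `g_loss` while the discriminator is held fixed.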
### Our Model
We release our adaptation models in `./models`; you can use `scripts/eval_templates.sh` to evaluate them.
1. [CycleGTA5_Dynamic_Semantic_Consistency](https://drive.google.com/file/d/1moGF7L2hkTHUPUzqsSxPwKNlHCHQm4Ms/view?usp=sharing)
2. [CycleSYNTHIA_Dynamic_Semantic_Consistency](https://drive.google.com/file/d/19V5J1zyF3ct3247gSSr-u3WVkDJqQvUk/view?usp=sharing)
3. [Multi_Source_SAD_CCD](https://drive.google.com/file/d/1xgmLwhsbwv-isy7R5FkNevVSH9gcMxuq/view?usp=sharing)
### Transferred Dataset
We will soon release the transferred dataset on which our `CycleGTA5_Dynamic_Semantic_Consistency` model was trained to perform pixel-level adaptation.