https://github.com/naoto0804/cross-domain-detection
Cross-Domain Weakly-Supervised Object Detection through Progressive Domain Adaptation [Inoue+, CVPR2018].
- Host: GitHub
- URL: https://github.com/naoto0804/cross-domain-detection
- Owner: naoto0804
- Created: 2018-03-11T08:28:12.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2024-03-16T05:41:18.000Z (over 1 year ago)
- Last Synced: 2025-03-29T07:09:17.051Z (6 months ago)
- Topics: chainer, cross-domain, domain-adaptation, object-detection, weakly-supervised-learning
- Language: Python
- Homepage: https://naoto0804.github.io/cross_domain_detection/
- Size: 1.84 MB
- Stars: 427
- Watchers: 10
- Forks: 77
- Open Issues: 7
Metadata Files:
- Readme: README.md
README
# Cross-Domain Weakly-Supervised Object Detection through Progressive Domain Adaptation
This page is for the [paper](http://openaccess.thecvf.com/content_cvpr_2018/html/Inoue_Cross-Domain_Weakly-Supervised_Object_CVPR_2018_paper.html) that appeared at CVPR 2018.
You can also find the [project page](https://naoto0804.github.io/cross_domain_detection/) for the paper.

Here is an example of our results on watercolor images.

## Requirements
- Python 3.5+
- Chainer 3.0+
- ChainerCV 0.8
- CuPy 2.0+
- OpenCV 3+
- Matplotlib

Please install all of these libraries. We recommend `pip install -r requirements.txt`.
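After installing, you can quickly confirm that all of the libraries are importable and check their versions with a short snippet like the one below (a convenience check, not part of the repository):

```
# Sanity check: import each dependency and print its version.
import chainer, chainercv, cupy, cv2, matplotlib
print('chainer    ', chainer.__version__)
print('chainercv  ', chainercv.__version__)
print('cupy       ', cupy.__version__)
print('opencv     ', cv2.__version__)
print('matplotlib ', matplotlib.__version__)
```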
## Download models
Please go to both the `models` and the `datasets` directories and follow the instructions there.

## Usage
For more details about the arguments, please refer to the `-h` option or the code itself.

### Demo using trained models
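The demo loads an SSD300 detector from the given snapshot and runs it on a single image. The sketch below shows roughly equivalent inference with plain ChainerCV; the label list is an assumption (the watercolor/comic datasets use a subset of the VOC classes, and `demo.py` knows the correct set for each target dataset):

```
# A minimal inference sketch using ChainerCV's SSD300 interface.
# The label set below is an assumption; demo.py handles this itself.
import matplotlib.pyplot as plt
from chainercv.links import SSD300
from chainercv.utils import read_image
from chainercv.visualizations import vis_bbox

label_names = ('bicycle', 'bird', 'car', 'cat', 'dog', 'person')  # assumed watercolor classes
model = SSD300(n_fg_class=len(label_names),
               pretrained_model='models/watercolor_dt_pl_ssd300')
# model.to_gpu()  # corresponds to --gpu 0

img = read_image('input/watercolor_142090457.jpg')  # CHW float32 array
bboxes, labels, scores = model.predict([img])       # one entry per input image
vis_bbox(img, bboxes[0], labels[0], scores[0], label_names=label_names)
plt.savefig('output.jpg')
```

To run the packaged demo script directly: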
```
python demo.py input/watercolor_142090457.jpg output.jpg --gpu 0 --load models/watercolor_dt_pl_ssd300
```

### Evaluation of trained models
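Evaluation reports PASCAL-VOC-style average precision. For reference, the underlying metric can be computed with ChainerCV's `eval_detection_voc`; the tiny self-contained example below uses dummy boxes only to illustrate the expected input format (per-image lists of arrays, as returned by `model.predict()`):

```
# Dummy example of the VOC detection metric: one predicted box vs. one
# ground-truth box of the same class.
import numpy as np
from chainercv.evaluations import eval_detection_voc

pred_bboxes = [np.array([[10., 10., 100., 100.]], dtype=np.float32)]
pred_labels = [np.array([0], dtype=np.int32)]
pred_scores = [np.array([0.9], dtype=np.float32)]
gt_bboxes = [np.array([[12., 12., 98., 96.]], dtype=np.float32)]
gt_labels = [np.array([0], dtype=np.int32)]

result = eval_detection_voc(
    pred_bboxes, pred_labels, pred_scores, gt_bboxes, gt_labels,
    use_07_metric=True)
print('mAP:', result['map'])
```

To evaluate a provided snapshot on the target dataset: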
```
python eval_model.py --root datasets/clipart --data_type clipart --det_type ssd300 --gpu 0 --load models/clipart_dt_pl_ssd300
```

### Training using clean instance-level annotations (ideal case)
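For orientation, the core ingredients of this supervised fine-tuning can be sketched with ChainerCV building blocks: an SSD300 initialized from VOC-pretrained weights plus the standard multibox loss. This is only an illustrative sketch, not `train_model.py`, which also sets up the data pipeline, augmentation, and the training loop:

```
# Illustrative sketch of supervised SSD300 fine-tuning with ChainerCV pieces.
from chainer import optimizers
from chainercv.links import SSD300
from chainercv.links.model.ssd import multibox_loss

# Clipart shares the 20 VOC classes; start from weights pre-trained on VOC07+12.
model = SSD300(n_fg_class=20, pretrained_model='voc0712')
optimizer = optimizers.MomentumSGD(lr=1e-4)
optimizer.setup(model)

def loss(imgs, gt_mb_locs, gt_mb_labels):
    # gt_mb_* are targets encoded with model.coder for each training sample.
    mb_locs, mb_confs = model(imgs)
    loc_loss, conf_loss = multibox_loss(
        mb_locs, mb_confs, gt_mb_locs, gt_mb_labels, k=3)
    return loc_loss + conf_loss
```

The full training run is started with: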
```
python train_model.py --root datasets/clipart --subset train --result result --det_type ssd300 --data_type clipart --gpu 0
```

### Training using virtually created instance-level annotations
The rest of this section shows example commands for experiments on the `clipart` dataset.
1. Preprocessing: please follow the instructions in `./datasets/README.md` to create the required folders.
2. Domain transfer (DT) step
1. `python train_model.py --root datasets/dt_clipart/VOC2007 --root datasets/dt_clipart/VOC2012 --subset trainval --result result/dt_clipart --det_type ssd300 --data_type clipart --gpu 0 --max_iter 500 --eval_root datasets/clipart`
We provide the models obtained in this step at `./models`.
3. Pseudo labeling (PL) step (a conceptual sketch is given after this list)
1. `python pseudo_label.py --root datasets/clipart --data_type clipart --det_type ssd300 --gpu 0 --load models/clipart_dt_ssd300 --result datasets/dt_pl_clipart`
2. `python train_model.py --root datasets/dt_pl_clipart --subset train --result result/dt_pl_clipart --det_type ssd300 --data_type clipart --gpu 0 --load models/clipart_dt_ssd300 --eval_root datasets/clipart`
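Conceptually, the pseudo-labeling step keeps, for each target-domain image, only the top-scoring detection of every class that the image-level annotation marks as present, and then treats those boxes as instance-level ground truth for the second fine-tuning run. The sketch below illustrates this idea with plain NumPy on the outputs of `model.predict()`; it is not the repository's `pseudo_label.py`:

```
# Conceptual pseudo-labeling sketch: keep the best detection per present class.
import numpy as np

def pseudo_label(model, imgs, image_level_labels):
    pseudo_annotations = []
    bboxes, labels, scores = model.predict(imgs)
    for bbox, label, score, present in zip(bboxes, labels, scores, image_level_labels):
        keep_bboxes, keep_labels = [], []
        for cls in present:  # classes known to appear in this image
            mask = label == cls
            if mask.any():
                best = int(np.argmax(np.where(mask, score, -np.inf)))
                keep_bboxes.append(bbox[best])
                keep_labels.append(cls)
        pseudo_annotations.append(
            (np.array(keep_bboxes, dtype=np.float32),
             np.array(keep_labels, dtype=np.int32)))
    return pseudo_annotations
```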

### Citation

If you find this code or dataset useful for your research, please cite our paper:
```
@inproceedings{inoue2018cross,
  title={Cross-domain weakly-supervised object detection through progressive domain adaptation},
  author={Inoue, Naoto and Furuta, Ryosuke and Yamasaki, Toshihiko and Aizawa, Kiyoharu},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={5001--5009},
  year={2018}
}
```