# FDCNN

This repository contains code, network definitions and pre-trained models for change detection on remote sensing images using deep learning.

The implementation uses the [Caffe](https://github.com/BVLC/caffe) framework.

## Motivation

In this work, we use scene-level samples from remote sensing scene classification (the AID dataset), which are easier to obtain than pixel-level samples, to learn deep features from different remote sensing scenes at different scales. Features learned from specific scene types (cultivated land, lakes, vegetation, etc.) respond more strongly to changes in those scenes, and such changes are usually the more important ones. Based on this idea, a new CNN structure and training strategy are proposed for remote sensing image change detection; the method is supervised but requires very few pixel-level training samples. Advantageously, it has good generalization ability and multi-scale change detection capability.

## Content

### Networks

We provide a deep neural network based on the [VGG16 architecture](https://arxiv.org/abs/1409.1556). It was trained on the AID dataset to learn multi-scale deep features from remote sensing images. The pre-trained weights can be downloaded from [this link](https://drive.google.com/open?id=1mAH0Hj9qi2M4GzVaNKe9xJkyeYMf2TLO).

We propose a novel network, FDCNN, to produce change detection maps from high-resolution RS images. It is suitable for **multi-scale** remote sensing image change detection tasks.
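For orientation, the sketch below shows how a pre-trained change detection model of this kind can be loaded and run on a bi-temporal image pair with PyCaffe. The file names (`fdcnn_deploy.prototxt`, `fdcnn.caffemodel`) and blob names (`t1_input`, `t2_input`, `change_map`) are illustrative assumptions, not the exact names used by this repository's scripts.

```
# Minimal PyCaffe inference sketch. File names and blob names are assumed
# placeholders; consult the repository's scripts for the real interface.
import numpy as np
import caffe

caffe.set_mode_cpu()  # or caffe.set_mode_gpu()

# Hypothetical deploy definition and pre-trained weights.
net = caffe.Net('fdcnn_deploy.prototxt', 'fdcnn.caffemodel', caffe.TEST)

def to_blob(img):
    """Convert an HxWxC uint8 image into a 1xCxHxW float32 Caffe blob."""
    return img.astype(np.float32).transpose(2, 0, 1)[np.newaxis, ...]

# Co-registered images of the two acquisition dates (dummy data here;
# replace with real image arrays).
t1_img = np.zeros((256, 256, 3), dtype=np.uint8)
t2_img = np.zeros((256, 256, 3), dtype=np.uint8)

# Feed both dates and run a forward pass.
net.blobs['t1_input'].reshape(*to_blob(t1_img).shape)
net.blobs['t2_input'].reshape(*to_blob(t2_img).shape)
net.blobs['t1_input'].data[...] = to_blob(t1_img)
net.blobs['t2_input'].data[...] = to_blob(t2_img)
out = net.forward()

cmm = out['change_map'][0, 0]  # 2-D change magnitude map
```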

### Datasets

The available datasets can be downloaded from the table below:

Table 1. Experiment datasets.

| Datasets | Description | Download |
| --- | --- | --- |
| AID | 10,000 RS images (R, G and B) covering 30 different scene types (i.e. labeled with 30 classes at scene level), each type containing more than 220 images of 600×600 pixels with spatial resolutions from 8 meters to 0.5 meters, collected in different countries (China, USA, UK, France, etc.), at different times and under different imaging conditions. | [official] |
| Worldview 2 | 2 pilot sites; each site consists of a ground truth map (labeled changed and unchanged at pixel level) and two-period Worldview 2 satellite images (incorrectly written as Worldview 3 and WV3 in our paper), located in Shenzhen, China, with a size of 1431×1431 pixels and a spatial resolution of 2 meters, acquired in 2010 and 2015 respectively. | Site 1 (RGB) [drive]<br>Site 1 (4 bands) [drive]<br>Site 2 (RGB) [drive]<br>Site 2 (4 bands) [drive] |
| Zi-Yuan 3 | A ground truth map (labeled changed and unchanged at pixel level) and two-period Zi-Yuan 3 satellite images, located in Wuhan, Hubei, China, with a size of 458×559 pixels, three bands (R, G and B) and a spatial resolution of 5.8 meters, acquired in 2014 and 2016 respectively. | [drive] |
| Quickbird | A ground truth map (labeled changed and unchanged at pixel level) and two-period Quickbird satellite images, located in Wuhan, Hubei, China, with a size of 1154×740 pixels, three bands (R, G and B) and a spatial resolution of 2.4 meters, acquired in 2009 and 2014 respectively. | [drive] |
| OSCD | 10 test pairs of RS images with a spatial resolution of 10 meters, taken by the Sentinel-2 satellites between 2015 and 2018, with pixel-level change ground truth. The ground truth remains undisclosed and the results need to be uploaded to the IEEE GRSS DASE website for evaluation. | [drive] [official] |
| SZADA/1 | A pair of optical aerial images, labeled changed and unchanged at pixel level, taken several years apart, with a spatial resolution of 1.5 meters. | [drive] [official] |
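As a point of reference, the sketch below shows one way a two-period image pair and its pixel-level ground truth can be read into NumPy arrays with GDAL; the file paths are placeholders and the actual file layout inside each download may differ.

```
# Sketch of reading a bi-temporal pair and its ground truth with GDAL.
# Paths are placeholders; adapt them to the unpacked dataset layout.
import numpy as np
from osgeo import gdal

def read_raster(path):
    """Read a GeoTIFF as a (bands, height, width) float32 array."""
    arr = gdal.Open(path).ReadAsArray().astype(np.float32)
    return arr[np.newaxis, ...] if arr.ndim == 2 else arr

t1 = read_raster('datasets/WV2_site1/T1.tif')  # first acquisition date
t2 = read_raster('datasets/WV2_site1/T2.tif')  # second acquisition date
gt = read_raster('datasets/WV2_site1/GT.tif')  # changed / unchanged labels

assert t1.shape == t2.shape, 'the two dates must be co-registered'
```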


### How to start

1. Install Caffe with Python 2.7

   1. Follow the instructions in [Installation](http://caffe.berkeleyvision.org/installation.html), taking care to use the correct Python version, or use our [pre-built runtime](https://drive.google.com/open?id=1OLIgpx0Jy6LT0KCkgYLcb0d3FvAJXEA0) (built with CUDA 8.0, Windows only).
   2. Add the absolute path of the "caffe_layers" folder to the PYTHONPATH so that PyCaffe can find the custom layer implementation files (a sys.path sketch follows below).
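One way to achieve the same effect as setting the PYTHONPATH environment variable is to extend `sys.path` at the top of a script before `caffe` is imported; the path below is a placeholder.

```
# Make the custom Python layers in "caffe_layers" importable before caffe
# builds the network. The path is a placeholder; use your own absolute path.
import sys
sys.path.insert(0, '/absolute/path/to/FDCNN/caffe_layers')
import caffe
```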

2. Training VGG16 & FDCNN

   1. Train VGG16 using the AID dataset.
   2. Train FDCNN using the WV2 site 1 dataset (a generic training sketch follows below).
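For reference, Caffe training is normally driven by a solver definition; the sketch below shows how such a solver could be launched from PyCaffe, optionally warm-starting from the pre-trained VGG16 weights. The file names (`fdcnn_solver.prototxt`, `vgg16_aid.caffemodel`) are assumptions, not this repository's actual files.

```
# Generic PyCaffe training sketch driven by a solver prototxt. The solver and
# weight file names are placeholders, not this repository's actual files.
import caffe

caffe.set_mode_gpu()

solver = caffe.get_solver('fdcnn_solver.prototxt')  # assumed solver file
solver.net.copy_from('vgg16_aid.caffemodel')        # warm-start from VGG16 features
solver.solve()                                      # run until the solver's max_iter
```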

3. Testing FDCNN

   1. Download the test datasets and unzip them into the "datasets" subfolder.

   2. Use your own trained FDCNN model, or download our [pre-trained FDCNN model](https://drive.google.com/open?id=1v1Q9gOqgzk657aaPWfEirSR-aJafF7BS).

   3. Evaluation

- To test the accuracy of FDCNN on the test datasets, run the following commands:
```
python exp_test_custom.py \
--sensor=ZY3 \
--alpha=2.0
```

- To test the accuracy of FDCNN on the SZTAKI datasets, run the following commands:
```
python exp_test_SZTAKI.py \
--alpha=2.66
```

- To test the accuracy of FDCNN on the OSCD datasets, run the following commands:
```
python exp_test_OSCD.py \
--threshold=0.98
```
The ground truth of OSCD remains undisclosed, so the results need to be uploaded to [the IEEE GRSS DASE website](http://dase.grss-ieee.org/) for evaluation; see Figure 1.
![](/output/OSCD.png)
Figure 1. FDCNN accuracy evaluation on the OSCD dataset.

The change magnitude map (CMM.tif) and the binary change map (BM.tif) will be generated in the "output" subfolder; a small post-processing sketch follows below.
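As an illustration of how these outputs can be post-processed, the sketch below reads a change magnitude map and derives a binary change map with a simple global threshold. The thresholding rule shown here is only an example, not necessarily the exact rule used by the test scripts or controlled by the `alpha`/`threshold` options above.

```
# Read a generated change magnitude map and threshold it into a binary
# change map. The thresholding rule here is only an illustrative example.
import numpy as np
from osgeo import gdal
from PIL import Image

cmm = gdal.Open('output/CMM.tif').ReadAsArray().astype(np.float32)

# Example rule: mark pixels whose magnitude exceeds mean + 2 * std as changed.
threshold = cmm.mean() + 2.0 * cmm.std()
bm = (cmm > threshold).astype(np.uint8) * 255

Image.fromarray(bm).save('output/BM_example.png')
print('changed pixels: %.2f%%' % (100.0 * (bm > 0).mean()))
```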

## References

If you use this work for your projects, please take the time to cite our [paper](https://doi.org/10.1109/TGRS.2020.2981051).

```
@Article{9052762,
AUTHOR = {Zhang, Min and Shi, Wenzhong},
TITLE = {A Feature Difference Convolutional Neural Network-Based Change Detection Method},
JOURNAL = {IEEE Transactions on Geoscience and Remote Sensing},
VOLUME = {},
YEAR = {2020},
NUMBER = {},
URL = {https://ieeexplore.ieee.org/document/9052762},
DOI = {10.1109/TGRS.2020.2981051}
}
```

## License

Code and datasets are released under the GPLv3 license for non-commercial and research purposes **only**. For commercial purposes, please contact the authors.