# SegNet training and testing scripts

These scripts are for use in training and testing the [SegNet neural
network](http://mi.eng.cam.ac.uk/projects/segnet/), particularly with
OpenStreetMap + Satellite Imagery training data generated by
[skynet-data](https://github.com/developmentseed/skynet-data).

Contributions are very welcome!

# Quick start

The quickest and easiest way to use these scripts is via the
`developmentseed/skynet-train` docker image. Note that to make this work with a
GPU (necessary for reasonable training times), you will need a machine set up
to use [`nvidia-docker`](https://github.com/NVIDIA/nvidia-docker). (The
[start_instance](https://github.com/developmentseed/skynet-train/blob/master/start_instance)
script uses `docker-machine` to spin up an AWS EC2 g2 instance and set it up with
nvidia-docker. The [start_spot_instance](https://github.com/developmentseed/skynet-train/blob/master/start_spot_instance)
script does the same thing but creates a [spot](https://aws.amazon.com/ec2/spot/)
instance instead of an on-demand one.)
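
As a rough illustration of what `start_instance` automates (the exact options live in the script itself; the machine name and region below are placeholders):

```sh
# Create a GPU-backed EC2 machine with docker-machine, then point the local
# docker client at it. Flags shown are illustrative, not the script's exact ones.
docker-machine create --driver amazonec2 \
    --amazonec2-instance-type g2.2xlarge \
    --amazonec2-region us-east-1 \
    skynet-gpu
eval "$(docker-machine env skynet-gpu)"
```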

1. Create a training dataset with [skynet-data](https://github.com/developmentseed/skynet-data).
2. Run:

```sh
nvidia-docker run \
    -v /path/to/training/dataset:/data \
    -v /path/to/training/output:/output \
    -e AWS_ACCESS_KEY_ID=... \
    -e AWS_SECRET_ACCESS_KEY=... \
    developmentseed/skynet-train:gpu \
    --sync s3://your-bucket/training/blahbla
```

This will kick off a training run with the given data. Every 10000 iterations,
the model will be snapshotted and run on the test data, the training "loss"
will be plotted, and all of this will be uploaded to s3. (Omit the `--sync`
argument and the AWS credentials to skip the upload.)
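
For example, a local-only run (per the note above) drops the `--sync` flag and the credentials, and results stay in the mounted output directory:

```sh
# Local-only training run: assets are written to /path/to/training/output
nvidia-docker run \
    -v /path/to/training/dataset:/data \
    -v /path/to/training/output:/output \
    developmentseed/skynet-train:gpu
```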

Each batch of test results includes a `view.html` file that shows a bare-bones
viewer allowing you to browse the results on a map and compare model outputs to
the ground truth data. Use it like:
- http://your-bucket-url/...test-dir.../view.html?imagery_source=MAPID&access_token=MAPBOX_ACCESS_TOKEN where `MAPID` points to Mapbox-hosted raster tiles used for training. (Defaults to `mapbox.satellite`.)
- http://your-bucket-url/...test-dir.../view.html?imagery_source=http://yourtiles.com/{z}/{x}/{y} for non-Mapbox imagery tiles

Customize the training run with these params:

```
--model MODEL # segnet or segnet_basic, defaults to segnet
--output OUTPUT # directory in which to output training assets
--data DATA # training dataset
[--fetch-data FETCH_DATA] # s3 uri from which to download training data into DATA
[--snapshot SNAPSHOT] # snapshot frequency
[--cpu] # sets cpu mode
[--gpu [GPU [GPU ...]]] # set gpu devices to use
[--display-frequency DISPLAY_FREQUENCY] # frequency of logging output (affects granularity of plots)
[--iterations ITERATIONS] # total number of iterations to run
[--crop CROP] # crop training images to CROPxCROP pixels
[--batch-size BATCH_SIZE] # batch size (adjust this up or down based on GPU size; defaults to 6 for segnet and 16 for segnet_basic)
[--sync SYNC] # s3 uri to which training output is synced (omit to skip uploading)
```
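
For instance, a run that trains `segnet_basic` for fewer iterations with more frequent snapshots might look like this (the values are placeholders, not tuned recommendations):

```sh
nvidia-docker run \
    -v /path/to/training/dataset:/data \
    -v /path/to/training/output:/output \
    developmentseed/skynet-train:gpu \
    --model segnet_basic \
    --batch-size 16 \
    --iterations 40000 \
    --snapshot 5000
```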

## Monitoring

On an instance where training is happening, expose a simple monitoring page with:

```sh
docker run --rm -it -v /mnt/training:/output -p 80:8080 developmentseed/skynet-monitor
```

# Details

Prerequisites / Dependencies:
- Node and Python
- As of now, training SegNet requires building the [caffe-segnet](https://github.com/alexgkendall/caffe-segnet) fork of Caffe.
- Install node dependencies by running `npm install` in the root directory of this repo.
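
A minimal sketch of building that fork, assuming a standard Caffe build environment (see the Caffe installation docs for the platform-specific `Makefile.config` changes you will likely need):

```sh
git clone https://github.com/alexgkendall/caffe-segnet.git
cd caffe-segnet
cp Makefile.config.example Makefile.config   # edit for your CUDA/BLAS setup
make -j"$(nproc)" && make pycaffe
export CAFFE_ROOT="$(pwd)"                   # used by the training commands below
```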

## Set up model definition

After creating a dataset with the [skynet-data](https://github.com/developmentseed/skynet-data)
scripts, set up the model `prototxt` definition files by running:

```
segnet/setup-model --data /path/to/dataset/ --output /path/to/training/workdir
```

Also copy `segnet/templates/solver.prototxt` to the training work directory, and
edit it to (a) point to the right paths, and (b) set up the learning
"hyperparameters".

(NOTE: this is hard to get right at first; when we post links to a couple of
pre-trained models, we'll also include a copy of the solver.prototxt we used as
a reference / starting point.)
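
In the meantime, the sketch below shows the solver fields you will typically touch; every value is an illustrative placeholder, not a tuned recommendation:

```
net: "/path/to/training/workdir/train.prototxt"   # path written by setup-model (placeholder)
base_lr: 0.001                                    # initial learning rate
momentum: 0.9
weight_decay: 0.0005
lr_policy: "step"
gamma: 1.0
stepsize: 10000000
max_iter: 100000
snapshot: 10000                                   # matches the snapshot cadence mentioned above
snapshot_prefix: "/path/to/training/workdir/snapshots/segnet"
solver_mode: GPU
```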

## Train

Download the pre-trained VGG weights `VGG_ILSVRC_16_layers.caffemodel` from
http://www.robots.ox.ac.uk/~vgg/research/very_deep/

From your training work directory, run

```
$CAFFE_ROOT/build/tools/caffe train -gpu 0 -solver solver.prototxt \
-weights VGG_ILSVRC_16_layers.caffemodel \
2>&1 | tee train.log
```
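
If a run is interrupted, standard Caffe usage (not specific to these scripts) lets you resume from a solver state snapshot; the iteration number below is a placeholder:

```
$CAFFE_ROOT/build/tools/caffe train -gpu 0 -solver solver.prototxt \
    -snapshot snapshots/segnet_iter_20000.solverstate \
    2>&1 | tee -a train.log
```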

You can monitor the training with:

```
segnet/util/plot_training_log.py train.log --watch
```

This will generate and continually update a plot of the "loss" (i.e., training
error) which should gradually decrease as training progresses.

## Testing the Trained Network

```
segnet/run_test \
    --output /path/for/test/results/ \
    --train /path/to/segnet_train.prototxt \
    --weights /path/to/snapshots/segnet_blahblah_iter_XXXXX.caffemodel \
    --classes /path/to/dataset/classes.json
```

This script essentially carries out the instructions outlined here:
http://mi.eng.cam.ac.uk/projects/segnet/tutorial.html

## Inference

After you have a trained and tested network, you'll often want to use it to predict over a larger area. We've included scripts for running this process locally or on AWS.

### Local Inference

To run predictions locally you'll need:
- Raster imagery (as either a GeoTIFF or a VRT)
- A line-delimited list of [XYZ tile indices](https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames) to predict on (e.g. `49757-74085-17`); these can be generated with [geodex](https://github.com/developmentseed/geodex). See the example `tiles.txt` below.
- A skynet model, trained weights, and class definitions (`.prototxt`, `.caffemodel`, `.json`)
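
For reference, a `tiles.txt` might look like this (indices are illustrative, in the same `x-y-z` format as the example above):

```
49757-74085-17
49758-74085-17
49757-74086-17
```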

To run:

```sh
docker run -v /path/to/inputs:/inputs -v /path/to/model:/model -v /path/to/output/:/inference \
    developmentseed/skynet-run:local-gpu /inputs/raster.tif /inputs/tiles.txt \
    --model /model/segnet_deploy.prototxt \
    --weights /model/weights.caffemodel \
    --classes /model/classes.json \
    --output /inference
```

If you are running on a CPU, use the `:local-cpu` docker image and add `--cpu-only` as a final flag to the above command.

The predicted rasters and vectorized GeoJSON outputs will be located in `/inference` (and in the corresponding mounted volume).

### AWS Inference

TODO: for now, see command line instructions in `segnet/queue.py` and `segnet/batch_inference.py`

## GPU

These scripts were originally developed for use on an AWS `g2.2xlarge` instance. To support newer GPUs, you may need to:
- use a [newer NVIDIA driver](https://github.com/developmentseed/skynet-train/blob/master/user_data.sh#L22)
- use a newer version of CUDA. To support CUDA8+, you can use the docker images tagged with `:cuda8`. They are built off an updated [`caffe-segnet` fork](https://github.com/TimoSaemann/caffe-segnet-cudnn5) with support for `cuDNN5`.
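
Assuming the CUDA 8 images take the same arguments as the quick-start image (the exact tag name is an assumption; check the repository's published image tags), swapping them in would look like:

```sh
# Same quick-start invocation, with the cuda8-tagged image (tag name assumed)
nvidia-docker run \
    -v /path/to/training/dataset:/data \
    -v /path/to/training/output:/output \
    developmentseed/skynet-train:cuda8
```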