Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/marcbelmont/satellite-image-object-detection
YOLO/YOLOv2 inspired deep network for object detection on satellite images (Tensorflow, Numpy, Pandas).
- Host: GitHub
- URL: https://github.com/marcbelmont/satellite-image-object-detection
- Owner: marcbelmont
- Created: 2017-06-20T15:16:27.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2017-06-20T15:22:39.000Z (over 7 years ago)
- Last Synced: 2024-08-01T19:51:09.821Z (3 months ago)
- Topics: object-detection, satellite-imagery, tensorflow, yolo, yolo2
- Language: Python
- Homepage:
- Size: 488 KB
- Stars: 122
- Watchers: 14
- Forks: 36
- Open Issues: 5
Metadata Files:
- Readme: README.md
README
# Object detection on satellite images
YOLO/YOLOv2 inspired deep neural network for object detection on satellite images. Built using TensorFlow. Key features (a schematic sketch of the prediction head follows the list):
- The model uses an architecture similar to YOLOv2 (batch_norm after each layer, no fully connected layers at the end).
- We predict only one box per feature map cell instead of 2 as in YOLO.
- No passthrough layer is used. Predictions are based only on the last layer.
- There is no ImageNet classification pretraining. The full loss is used from the start.
- The input images are 250x250 instead of ~450x450. Results should improve with a bigger image size.
- The learning rate is kept constant throughout training.
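A minimal sketch of what such a prediction head could look like. This is not the repository's code: the backbone depth and filter counts are illustrative assumptions, and only the overall shape matters, i.e. stacked conv + batch-norm blocks ending in a 1x1 convolution that emits one 13-value prediction (x, y, w, h, objectness plus 8 class scores) per feature-map cell, with no fully connected or passthrough layers:

```python
# Illustrative sketch only -- layer counts and filter sizes are assumptions,
# not values taken from the repository.
import tensorflow as tf

NUM_CLASSES = 8                      # vehicle classes used in this project
OUTPUTS_PER_CELL = 5 + NUM_CLASSES   # x, y, w, h, objectness + class scores

def conv_block(x, filters):
    """Conv -> BatchNorm -> LeakyReLU, the repeating unit of the backbone."""
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.LeakyReLU(0.1)(x)

def build_model(input_size=250):
    inputs = tf.keras.Input(shape=(input_size, input_size, 3))
    x = inputs
    for filters in (32, 64, 128, 256, 512):   # illustrative depth
        x = conv_block(x, filters)
        x = tf.keras.layers.MaxPooling2D(2)(x)
    # One prediction vector per grid cell, taken directly from the last layer
    # (no passthrough / skip connection).
    outputs = tf.keras.layers.Conv2D(OUTPUTS_PER_CELL, 1, padding="same")(x)
    return tf.keras.Model(inputs, outputs)

model = build_model()
model.summary()  # with this sketch, a 7x7 grid of 13-channel predictions
```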
## Installation

Dependencies: `pip install -r requirements.txt`
The **dataset** is/was available at https://www.datasciencechallenge.org/challenges/1/safe-passage/. `preprocess.py` transforms the 2000x2000 images into 250x250 tiles and a CSV file with all the object annotations. The dataset contains only the position of the center of each object (**no bounding boxes**), so a bounding box is generated: a square centered on the provided position (x, y), whose size varies depending on the type of vehicle.
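As a rough illustration of that conversion (the per-class square sizes below are placeholder guesses, not the values used by `preprocess.py`):

```python
# Placeholder sizes -- the real per-class box sizes live in preprocess.py.
VEHICLE_BOX_SIZE = {
    "Motorcycle": 12,
    "Light short rear": 20,
    "Light long rear": 28,
    # ... one entry per vehicle type
}

def center_to_box(x, y, vehicle_type, image_size=250):
    """Return (xmin, ymin, xmax, ymax) for a square box centered on (x, y)."""
    half = VEHICLE_BOX_SIZE.get(vehicle_type, 20) // 2
    xmin, ymin = max(0, x - half), max(0, y - half)
    xmax, ymax = min(image_size, x + half), min(image_size, y + half)
    return xmin, ymin, xmax, ymax
```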
## How to use
Train the network with:
`python main.py --logdir=logs/ --batch_size=64 --ckptdir=checkpoints/`

Inference (it will randomly select 10 images and draw bounding boxes):
`python detect.py`
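As a rough sketch of the decoding step such a script performs (not `detect.py` itself; the offset and size conventions here are assumptions), the per-cell predictions are turned back into image-space boxes before drawing:

```python
# Sketch only: assumes (x, y) are offsets within the cell and (w, h) are
# pixel sizes; the repository's encoding may differ.
import numpy as np

def decode_predictions(grid, image_size=250, conf_threshold=0.5):
    """grid: (S, S, 5 + num_classes) array of raw per-cell predictions."""
    S = grid.shape[0]
    cell = image_size / S
    boxes = []
    for row in range(S):
        for col in range(S):
            x, y, w, h, conf = grid[row, col, :5]
            if conf < conf_threshold:
                continue
            cx, cy = (col + x) * cell, (row + y) * cell
            cls = int(np.argmax(grid[row, col, 5:]))
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2, cls, conf))
    return boxes
```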
## Example

We're using **8 object classes**: Motorcycle, Light short rear, Light long rear, Dark short rear, Dark long rear, Red short rear, Red long rear, Light van. Other types of vehicles are **ignored**.
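For reference, keeping the class names above as an ordered list makes it easy to map a predicted class index back to a readable label (the ordering here simply follows the list above and may not match the repository's):

```python
# Ordering assumed from the list above; the repository may index classes differently.
CLASS_NAMES = [
    "Motorcycle",
    "Light short rear", "Light long rear",
    "Dark short rear", "Dark long rear",
    "Red short rear", "Red long rear",
    "Light van",
]

def class_label(index):
    return CLASS_NAMES[index]
```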
After training for **100 epochs**, you will get the following. The results are generally OK; however, some cars are occasionally missed (for example the red car at the bottom) or detected in positions where there are no vehicles. mAP has not been calculated.
![example](media/example.png)