Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/mukel/epfml17-segmentation
Project 2: Road extraction from satellite images
- Host: GitHub
- URL: https://github.com/mukel/epfml17-segmentation
- Owner: mukel
- Created: 2017-11-10T15:04:02.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2017-12-21T21:53:40.000Z (about 7 years ago)
- Last Synced: 2024-12-18T00:31:52.773Z (16 days ago)
- Language: Jupyter Notebook
- Size: 146 MB
- Stars: 3
- Watchers: 4
- Forks: 3
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# EPFL Machine Learning Project 2: Road extraction from satellite images
![landing_image](https://user-images.githubusercontent.com/1896283/34275777-066951f2-e69f-11e7-80a2-1151fcf8b63b.png)
## Team: Chronic Machinelearnism
## Code architecture
The code consists of two Python 3 files:
* `run.py`: the ML pipeline. Fits the model and writes the predictions for the submission dataset to `submission_test.csv`.
* `helpers.py`: definitions of all the auxiliary methods (e.g. image manipulation).

## External Dependencies
Keras (>= 2.0.9) with the TensorFlow backend, OpenCV, and imutils.
Install dependencies using pip:
`pip install imutils opencv-python keras tensorflow-gpu`

## Running
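Before launching the pipeline, it can be worth confirming that all dependencies are importable. A minimal sketch (the names below follow the pip install line above; note that `cv2` is the import name for the `opencv-python` package):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of import names that cannot be found."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Import names for the project's dependencies
# (cv2 is the module name installed by opencv-python).
required = ["imutils", "cv2", "keras", "tensorflow"]

missing = missing_packages(required)
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All dependencies found.")
```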
Simply execute `run.py` with Python 3.

*Note: All of the above-mentioned .py files need to be in the same folder. This folder must contain a subfolder called `data` with the training and submission folders extracted as-is from Kaggle.*
*Note: Running the code is memory-intensive; at least 40 GB of RAM is highly recommended.*
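A back-of-the-envelope estimate shows why in-memory data augmentation adds up quickly. The numbers below (image size, channel count, augmentation factor) are illustrative assumptions, not measurements from this project:

```python
def augmented_memory_gb(n_images, height, width, channels=3,
                        bytes_per_value=4, augmentation_factor=8):
    """Estimate the RAM (in GiB) needed to hold every augmented variant
    of the training set in memory as float32 arrays."""
    total_bytes = (n_images * augmentation_factor
                   * height * width * channels * bytes_per_value)
    return total_bytes / 1024 ** 3

# Illustrative: 100 training images of 400x400 RGB, 8 augmented variants each.
print(round(augmented_memory_gb(100, 400, 400), 2))
```

Scaling up the image size, the number of augmented variants, or intermediate copies made by the pipeline multiplies this figure, which is why generous RAM is recommended.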
## Running time
The model was trained on a single AWS p2.8xlarge instance in around 1 hour; on a laptop we expect training to take around 72 hours. We ran `run.py` with all training data in multi-GPU mode (disabled in the deliverable). The data augmentation is very memory-hungry: at least 128 GB of RAM are required to train the model with the full dataset.

## Authors
Aimee Montero, Alfonso Peterssen, Philipp Chervet