Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ivanch/autoseg-testing
Semi-Automatic testing data augmentation techniques for SegNet.
- Host: GitHub
- URL: https://github.com/ivanch/autoseg-testing
- Owner: ivanch
- Created: 2019-09-17T13:20:27.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2019-11-20T03:45:05.000Z (about 5 years ago)
- Last Synced: 2024-11-09T19:37:28.226Z (3 months ago)
- Topics: data-augmentation, data-augmented-autoencoders, keras, segnet
- Language: Jupyter Notebook
- Homepage:
- Size: 43.9 KB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Semi-Automatic Segmentation model testing
Test models and image pre-processing techniques to find the best combination.
## Usage
1. Define `WORK_DIR` as the working directory of your project. It should contain:
1. File named `dataset.zip` containing:
* `input` folder with all the input images
* `label` folder with all the target images (masks) with the same name as in the input folder
* `test-input` folder with the test input images
   * `test-label` folder with the target test masks, with the same names as in the `test-input` folder
2. Folder named `models`
2. Define `SIZE` with the model input size (SegNet uses 256)
3. Define `STRIDE` with the stride the algorithm will use when generating the dataset
4. Define `CURRENT_MODEL` with your model's identifier, following the naming used by [image-segmentation-keras](https://github.com/divamgupta/image-segmentation-keras) (a configuration sketch follows this list)
* `CURRENT_MODEL = "segnet.resnet50_segnet"` will produce a SegNet model using the ResNet50 backbone
5. Run all the cells until `Functions` stage
6. If your model is new (first time using it)
1. Run `create_default_model()`
7. If your dataset is new (first time using it)
1. Run `develop_dataset()` (last cell)
   2. Wait for it to complete; this may take a while
8. Create or modify the image pre-processing cells, starting at `Standard`
9. Run the cells
* When a cell is completed, it should create a folder with the model results
10. At the end, run `fetch_model_results()` to collect each model's results into its own folder
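For concreteness, here is a minimal sketch of how the configuration cells above might look. `WORK_DIR`, `SIZE`, `STRIDE`, and `CURRENT_MODEL` are the notebook's own names; the `extract_patches()` helper and the `n_classes=2` choice are hypothetical, added only to illustrate the stride-based dataset generation and the model naming convention.

```python
import numpy as np
from keras_segmentation.models.segnet import resnet50_segnet

WORK_DIR = "/path/to/project"  # placeholder: must contain dataset.zip and a models/ folder
SIZE = 256                     # model input size (SegNet uses 256)
STRIDE = 128                   # hypothetical stride for dataset generation
CURRENT_MODEL = "segnet.resnet50_segnet"

def extract_patches(image, size=SIZE, stride=STRIDE):
    """Hypothetical helper: slide a size x size window over `image` with the
    given stride, illustrating how develop_dataset() could produce
    fixed-size training patches from larger images."""
    h, w = image.shape[:2]
    patches = [
        image[y:y + size, x:x + size]
        for y in range(0, h - size + 1, stride)
        for x in range(0, w - size + 1, stride)
    ]
    return np.stack(patches)

# "segnet.resnet50_segnet" corresponds to
# keras_segmentation.models.segnet.resnet50_segnet in image-segmentation-keras;
# n_classes=2 assumes binary masks and is not prescribed by the notebook.
model = resnet50_segnet(n_classes=2, input_height=SIZE, input_width=SIZE)
```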
## Analysing data
1. Copy the *CURRENT_MODEL*.csv file to the same directory as `data_analysis.ipynb`
2. Run the cells
3. Check the `result.png` file (see the sketch below)
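As a rough illustration of this analysis step, the sketch below loads the per-model CSV and renders a `result.png`. The column names (`technique`, `epoch`, `accuracy`) are assumptions for illustration, not the notebook's actual schema.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the per-model results; the file name matches CURRENT_MODEL.
df = pd.read_csv("segnet.resnet50_segnet.csv")

# One curve per image pre-processing technique (assumed columns).
fig, ax = plt.subplots()
for technique, group in df.groupby("technique"):
    ax.plot(group["epoch"], group["accuracy"], label=technique)
ax.set_xlabel("epoch")
ax.set_ylabel("accuracy")
ax.legend()
fig.savefig("result.png")
```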
## Example
On the right are the results of the model trained on 1,024 images; on the left, the results of the model trained on 7,000 images.
Each line corresponds to a different image processing technique.
![example-image](assets/result-dataset.png)