https://github.com/manifoldfr/dlmi-project
Project for the MVA course "Deep Learning for Medical Imaging"
- Host: GitHub
- URL: https://github.com/manifoldfr/dlmi-project
- Owner: ManifoldFR
- Created: 2020-02-18T15:27:35.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2020-04-06T17:50:04.000Z (about 5 years ago)
- Last Synced: 2025-03-23T19:05:12.095Z (2 months ago)
- Topics: data-augmentation, deep-learning, image-segmentation
- Language: Jupyter Notebook
- Homepage:
- Size: 5.89 MB
- Stars: 4
- Watchers: 1
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
## Deep Learning: Retinal Vessel Segmentation

## Dependencies
We use PyTorch, TorchVision, OpenCV, PIL, and the [albumentations](https://github.com/albumentations-team/albumentations) library for extensive data augmentation.
```bash
conda install -c pytorch torchvision captum
conda install -c conda-forge imgaug
conda install -c albumentations albumentations
```

We use PyTorch's TensorBoard integration to see metrics and some segmentation results during training:
```bash
tensorboard --logdir runs/
```
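The exact tags logged by `train.py` are repository-specific; the sketch below only illustrates the `torch.utils.tensorboard` pattern behind the `runs/` directory (tag names and tensor shapes are assumptions).

```python
# Sketch of the SummaryWriter pattern behind the runs/ directory;
# tag names and tensor shapes are illustrative assumptions.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/attunet")

def log_step(step, loss, images, pred_masks):
    """Log a scalar metric plus input images and predicted masks (N, C, H, W)."""
    writer.add_scalar("train/loss", loss, global_step=step)
    writer.add_images("train/inputs", images, global_step=step)
    writer.add_images("train/predictions", pred_masks, global_step=step)
```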
## Training

General configuration parameters (training/validation slices, image input size) are located in `config.py`.
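The exact contents of `config.py` are defined in the repository; the snippet below is only a sketch of the kind of parameters the sentence above refers to (every name and value here is an assumption).

```python
# Hypothetical illustration of config.py-style parameters;
# names and values are assumptions, not copied from the repository.
PATCH_SIZE = (512, 512)      # image input size fed to the network
TRAIN_SLICE = slice(0, 18)   # dataset indices used for training
VAL_SLICE = slice(18, 20)    # dataset indices used for validation
BATCH_SIZE = 2
```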
To launch the training script:
```bash
python train.py --model attunet --loss combined --lr 0.001 -E 80
```
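The definition of the `combined` loss lives in the repository; a common combination for vessel segmentation is binary cross-entropy plus a soft Dice term, sketched below purely as an assumption about what such a loss can look like.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, eps=1e-6):
    """BCE + soft Dice on (N, 1, H, W) tensors; an assumed stand-in, not the repo's definition."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (union + eps)
    return bce + dice.mean()
```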
## Inference

```bash
python inference.py --model attunet --weights path/to/weights
```
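Roughly, the script loads the saved weights and thresholds the network's sigmoid output; the sketch below is only illustrative (the preprocessing and the fact that the model object is passed in are assumptions, the actual logic lives in `inference.py`).

```python
import cv2
import numpy as np
import torch

def segment(model, weights_path, img_path, threshold=0.5, device="cpu"):
    """Illustrative inference loop; preprocessing details are assumptions."""
    model.load_state_dict(torch.load(weights_path, map_location=device))
    model.eval()
    img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    with torch.no_grad():
        probs = torch.sigmoid(model(x.to(device)))
    return (probs.squeeze().cpu().numpy() > threshold).astype(np.uint8)
```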
## Data

DRIVE data is located in
```
data/drive
- training
- test
```
Its per-channel mean is `[0.5078, 0.2682, 0.1613]` and its standard deviation is `[0.3378, 0.1753, 0.0978]`.
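These statistics can be recomputed over the training images with a few lines of NumPy; the directory and file pattern below are assumptions about the local DRIVE layout.

```python
import numpy as np
from pathlib import Path
from PIL import Image

# Per-channel mean/std over the DRIVE training images
# (directory and *.tif pattern are assumptions about the local layout).
imgs = [np.asarray(Image.open(p), dtype=np.float64) / 255.0
        for p in sorted(Path("data/drive/training/images").glob("*.tif"))]
pixels = np.concatenate([im.reshape(-1, 3) for im in imgs], axis=0)
print("mean:", pixels.mean(axis=0), "std:", pixels.std(axis=0))
```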
We also use the STARE dataset with manual vessel annotations: http://cecas.clemson.edu/~ahoover/stare/probing/index.html.

```bash
wget http://cecas.clemson.edu/~ahoover/stare/probing/stare-images.tar
wget http://cecas.clemson.edu/~ahoover/stare/probing/labels-ah.tar
```
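The archives then have to be unpacked into the layout shown below; one way to do it from Python (destination paths mirror that layout and are otherwise an assumption):

```python
import tarfile
from pathlib import Path

# Unpack the STARE archives; destinations mirror the layout listed below.
for archive, dest in [("stare-images.tar", "data/stare/images"),
                      ("labels-ah.tar", "data/stare/labels/labels_ah")]:
    Path(dest).mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(dest)
```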
The layout of the data is
```
data/stare/
- images   # images
- labels/  # labels
  - labels_ah
  - labels_vk
  - results_hoover
```

The CHASE dataset is located in
```
data/chase
```
The first manual annotation set uses the `*_1stHO.png` suffix and the second uses `*_2ndHO.png`.
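For instance, images can be paired with the first observer's annotations by filename; the `.jpg` extension and flat directory below are assumptions about the local copy.

```python
from pathlib import Path

# Pair each CHASE image with its first-observer annotation by filename
# (extension and flat layout are assumptions).
chase = Path("data/chase")
pairs = [(img, chase / f"{img.stem}_1stHO.png")
         for img in sorted(chase.glob("*.jpg"))]
```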
ARIA data is located in

```
data/aria
- images
- annotation 1
- annotation 2
- (markupdiscfovea)
```
The database contains healthy subjects (61 images, "_c_" suffix), diabetic patients (59 images, "_d_" suffix), and patients with age-related macular degeneration (23 images, "_a_" suffix). We created a CSV file recording these categories.

The data augmentation can be visualized in a [notebook](augmentations-demo.ipynb).
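For reference, a minimal albumentations pipeline of the kind demonstrated in that notebook could look like the following; the transforms and their parameters are illustrative, and only the Normalize statistics come from the DRIVE figures quoted above.

```python
import albumentations as A

# Sketch of an image/mask augmentation pipeline; transforms and parameters
# are illustrative, the Normalize statistics are the DRIVE values above.
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=15, p=0.5),
    A.RandomBrightnessContrast(p=0.3),
    A.ElasticTransform(p=0.2),
    A.Normalize(mean=(0.5078, 0.2682, 0.1613), std=(0.3378, 0.1753, 0.0978)),
])

# Albumentations applies the same spatial transforms to both inputs:
# out = augment(image=image, mask=mask); out["image"], out["mask"]
```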
## Visualization
### Attention maps
The Attention U-Net model has attention maps that can be directly interpreted using the following script:
```bash
python visualize_attention.py --img-path path/to/img --weights path/to/model
```
Check `python visualize_attention.py -h` for available options.

### Activation maps
See `visualize_activations.py`.
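Both visualization scripts boil down to inspecting intermediate tensors; one standard way to pull activation (or attention) maps out of a trained PyTorch model is a forward hook, sketched below with a hypothetical layer name (the scripts' actual mechanism may differ).

```python
import torch

def capture_activation(model, layer_name):
    """Store the output of one named module via a forward hook.

    `layer_name` is purely illustrative; pick any module reported by
    model.named_modules(), e.g. an attention gate or decoder block.
    """
    store = {}

    def hook(module, inputs, output):
        store[layer_name] = output.detach().cpu()

    handle = dict(model.named_modules())[layer_name].register_forward_hook(hook)
    return store, handle

# Usage sketch:
# store, handle = capture_activation(model, "att_gate3")
# with torch.no_grad():
#     model(image_batch)
# fmap = store["att_gate3"]  # tensor to display as a heatmap
# handle.remove()
```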
