https://github.com/matt-baugh/many-tasks-make-light-work
- Host: GitHub
- URL: https://github.com/matt-baugh/many-tasks-make-light-work
- Owner: matt-baugh
- License: MIT
- Created: 2023-06-08T16:29:30.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-12-31T11:55:34.000Z (over 1 year ago)
- Last Synced: 2024-07-31T20:43:56.956Z (9 months ago)
- Language: Python
- Size: 361 KB
- Stars: 6
- Watchers: 1
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: readme.md
- License: LICENSE
# Many tasks make light work: Learning to localise medical anomalies from multiple synthetic tasks
## Environment installation
- Create a virtual environment with ```make_virtual_env.sh```
- Activate the environment with ```source multitask_method_env/bin/activate```
- Set the paths for input/output files in ```multitask_method/paths.py``` (a sketch of this file follows this list)
- Alternatively, you can use the .devcontainer to run the code in a Docker container, which creates a virtual environment from the ```requirements.txt``` file.
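For orientation, here is a minimal sketch of what ```multitask_method/paths.py``` might contain. The variable names are illustrative assumptions, not the repository's actual identifiers; match them to whatever names the codebase imports.

```python
# multitask_method/paths.py -- illustrative sketch only.
# The variable names below are assumptions; the real file may differ.
from pathlib import Path

# Raw downloaded datasets (HCP, BraTS, ISLES, VinDr-CXR, MOOD, ImageNet)
RAW_DATA_DIR = Path("/data/raw")

# Where the preprocessing scripts write their output
PREPROCESSED_DATA_DIR = Path("/data/preprocessed")

# Where predictions and result metrics are saved
PREDICTIONS_DIR = Path("/data/predictions")
```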
## Data
The HCP dataset is available at https://www.humanconnectome.org/study/hcp-young-adult
The BraTS 2017 dataset is available at https://www.med.upenn.edu/sbia/brats2017/registration.html
The ISLES 2015 dataset is available at https://www.smir.ch/ISLES/Start2015
The VinDr-CXR dataset is available at https://physionet.org/content/vindr-cxr/1.0.0/
### Inter-dataset blending datasets
The MOOD dataset (for 3D inter-dataset blending) is available at https://www.synapse.org/#!Synapse:syn21343101/wiki/599515
The ImageNet dataset (for 2D inter-dataset blending) is available at https://www.kaggle.com/c/imagenet-object-localization-challenge/overview
## Preprocessing
Preprocessing scripts are available in the ```multitask_method/preprocessing``` folder.
Don't forget to set the paths where preprocessed data and predictions will be saved in ```multitask_method/paths.py```.
## Training
With the environment activated, run ```python train.py ABSOLUTE_PATH_TO_EXPERIMENT_CONFIG FOLD_NUMBER```.
Experiment configs used for the paper are in the ```experiments``` folder.
## Prediction and Evaluation
To generate predictions on the test set, run ```python predict.py ABSOLUTE_PATH_TO_EXPERIMENT_CONFIG```.
To evaluate the predictions at the normal resolution, run ```python eval.py ABSOLUTE_PATH_TO_EXPERIMENT_CONFIG```.
Result metrics are saved in the predictions folder as ```results.json```.
To evaluate the predictions at CRADL's resolution, run ```python cradl_eval.py ABSOLUTE_PATH_TO_EXPERIMENT_CONFIG```.
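The saved metrics can be inspected with a few lines of Python. The path below is a placeholder; point it at the predictions folder configured in ```multitask_method/paths.py```.

```python
# Print every metric stored in an experiment's results.json.
# The path is an assumption -- use the predictions folder set in paths.py.
import json
from pathlib import Path

results_path = Path("/data/predictions/results.json")
with results_path.open() as f:
    results = json.load(f)

for metric, value in results.items():
    print(f"{metric}: {value}")
```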
## Reproducibility
### Brain
Download the datasets from the links above.
Run the brain preprocessing script ```multitask_method/preprocessing/brain_preproc.py```.
When experimenting with training on T tasks, for each fold F in ```range(0, C(5, T))``` (where C(5, T) is the binomial coefficient, 5 choose T) run the following; a scripted version of this loop is sketched after the commands below:
```python train.py /experiments/exp_HCP_low_res_T_train.py F```
To produce predictions run:
```python predict.py /experiments/exp_HCP_low_res_T_train.py F```
To evaluate the predictions run:
```python eval.py /experiments/exp_HCP_low_res_T_train.py F```
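The fold loop above can be scripted. The sketch below assumes that "5CT" denotes the binomial coefficient C(5, T) (5 choose T) and that the ```T``` in the config filename is the task count; it simply shells out to the repository's entry points.

```python
# Hypothetical driver for the brain experiments: runs training, prediction
# and evaluation for every fold of a T-task configuration.
# Assumes the fold count is C(5, T) ("5 choose T") and that the experiment
# config filename embeds the task count T.
import math
import subprocess

T = 3  # number of synthetic tasks to train on
config = f"/experiments/exp_HCP_low_res_{T}_train.py"

for fold in range(math.comb(5, T)):
    subprocess.run(["python", "train.py", config, str(fold)], check=True)
    subprocess.run(["python", "predict.py", config, str(fold)], check=True)
    subprocess.run(["python", "eval.py", config, str(fold)], check=True)
```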
### VinDr-CXR
Download the dataset from the link above.
Run the VinDr-CXR preprocessing script ```multitask_method/preprocessing/vindr_cxr_preproc.py```.
When experimenting with training on T tasks, for each fold F in ```range(0, C(5, T))``` (5 choose T) run the following; see the note after these commands:
```python train.py /experiments/exp_VINDR_low_res_T_train.py F```
To produce predictions run:
```python predict.py /experiments/exp_VINDR_low_res_T_train.py F```
To evaluate the predictions run:
```python eval.py /experiments/exp_VINDR_low_res_T_train.py F```
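The driver sketch from the Brain section applies here unchanged apart from the config path: substitute ```/experiments/exp_VINDR_low_res_T_train.py``` (with ```T``` again the task count) for the HCP config.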