# Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease

**Johannes Rieke, Fabian Eitel, Martin Weygandt, John-Dylan Haynes and Kerstin Ritter**

Our paper was presented at the [MLCN workshop](https://mlcn2018.com/) at MICCAI 2018 in Granada ([Slides](https://drive.google.com/open?id=1EKHvlWq4_-NC7HQPAbZc_ZaeNZMTQwgh)).

**Preprint:** http://arxiv.org/abs/1808.02874

**Abstract:** Visualizing and interpreting convolutional neural networks (CNNs) is an important task to increase trust in automatic medical decision making systems. In this study, we train a 3D CNN to detect Alzheimer’s disease based on structural MRI scans of the brain. Then, we apply four different gradient-based and occlusion-based visualization methods that explain the network’s classification decisions by highlighting relevant areas in the input image. We compare the methods qualitatively and quantitatively. We find that all four methods focus on brain regions known to be involved in Alzheimer’s disease, such as the inferior and middle temporal gyrus. While the occlusion-based methods focus more on specific regions, the gradient-based methods pick up distributed relevance patterns. Additionally, we find that the distribution of relevance varies across patients, with some having a stronger focus on the temporal lobe, whereas for others more cortical areas are relevant. In summary, we show that applying different visualization methods is important to understand the decisions of a CNN, a step that is crucial to increase clinical impact and trust in computer-based decision support systems.

![Heatmaps](figures/heatmaps-ad.png)


## Quickstart

You can use the visualization methods in this repo on your own model (PyTorch; for other frameworks see below) like this:

    from interpretation import sensitivity_analysis
    from utils import plot_slices

    cnn = load_model()
    mri_scan = load_scan()

    heatmap = sensitivity_analysis(cnn, mri_scan, cuda=True)
    plot_slices(mri_scan, overlay=heatmap)

`heatmap` is a numpy array containing the relevance heatmap. The methods should work for 2D and 3D images alike.
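If you are curious what a gradient-based method like `sensitivity_analysis` computes under the hood, here is a minimal sketch of the idea (this is not the repo's implementation; `model` and `image` are hypothetical stand-ins for any PyTorch module and input tensor):

```python
import torch

def sensitivity_map(model, image):
    """Sketch of sensitivity analysis: relevance = |d score / d input|.

    `model` is any torch.nn.Module returning class scores; `image` is a
    tensor of shape (channels, *spatial_dims) without a batch dimension.
    """
    model.eval()
    x = image.detach().unsqueeze(0).requires_grad_(True)  # add batch dim
    score = model(x)[0].max()              # score of the predicted class
    grad, = torch.autograd.grad(score, x)  # gradient w.r.t. the input
    return grad[0].abs().cpu().numpy()     # heatmap, same shape as image
```

The actual implementation in `interpretation.py` is more configurable (e.g. the `cuda` flag shown above), but the core recipe is this gradient magnitude.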
Currently, four methods are implemented and tested: `sensitivity_analysis`, `guided_backprop`, `occlusion`, `area_occlusion`. There is also a rough implementation of `grad_cam`, which seems to work on 2D photos, but not on brain scans. Please look at `interpretation.py` for further documentation.


## Code Structure

The codebase uses PyTorch and Jupyter notebooks. The main files for the paper are:

- `training.ipynb` is the notebook to train the model and perform cross-validation.
- `interpretation-mri.ipynb` contains the code to create relevance heatmaps with different visualization methods. It also includes the code to reproduce all figures and tables from the paper.
- All `*.py` files contain methods that are imported in the notebooks above.

Additionally, there are two other notebooks:

- `interpretation-photos.ipynb` uses the same visualization methods as in the paper but applies them to 2D photos. This might be an easier introduction to the topic.
- `small-dataset.ipynb` contains some old code to run a similar experiment on a smaller dataset.


## Trained Model and Heatmaps

If you don't want to train the model and/or run the computations for the heatmaps yourself, you can just download my results: [Here](https://drive.google.com/file/d/14m6v9DOubxrid20BbVyTgOOVF-K7xwV-/view?usp=sharing) is the final model that I used to produce all heatmaps in the paper (as a PyTorch state dict; see the paper or code for more details on how the model was trained). And [here](https://drive.google.com/open?id=1feEpR-GhKUe_YTkKu9dlnYIKsyF6fyei) are the numpy arrays that contain all average relevance heatmaps (as a compressed numpy .npz file). Please have a look at `interpretation-mri.ipynb` for instructions on how to load and use these files.


## Data

The MRI scans used for training are from the [Alzheimer Disease Neuroimaging Initiative (ADNI)](http://adni.loni.usc.edu/).
The data is free, but you need to apply for access at http://adni.loni.usc.edu/. Once you have an account, go [here](http://adni.loni.usc.edu/data-samples/access-data/) and log in.


### Tables

We included csv tables with metadata for all images we used in this repo (`data/ADNI/ADNI_tables`). These tables were made by combining several data tables from ADNI. There is one table for 1.5 Tesla scans and one for 3 Tesla scans. In the paper, we trained only on the 1.5 Tesla images.


### Images

To download the corresponding images, log in on the ADNI page, go to "Download" -> "Image Collections" -> "Data Collections". In the box on the left, select "Other shared collections" -> "ADNI" -> "ADNI1:Annual 2 Yr 1.5T" (or the corresponding collection for 3T) and download all images. We preprocessed all images by non-linear registration to a 1 mm isotropic ICBM template via [ANTs](http://stnava.github.io/ANTs/) with default parameters, using the quick registration script from [here](https://github.com/ANTsX/ANTs/blob/master/Scripts/antsRegistrationSyNQuick.sh).

To be consistent with the codebase, put the images into the folders `data/ADNI/ADNI_2Yr_15T_quick_preprocessed` (for the 1.5 Tesla images) or `data/ADNI/ADNI_2Yr_3T_preprocessed` (for the 3 Tesla images). Within these folders, each image should have the following path: `<PTID>/<Visit (spaces removed)>/<PTID>_<Scan.Date (/ replaced by -)>_<Visit (spaces removed)>_<Image.ID>_<DX>_Warped.nii.gz`. If you want to use a different directory structure, you need to change the method `get_image_filepath` and/or the filenames in `datasets.py`.


### Users from Ritter/Haynes lab

If you're working in the Ritter/Haynes lab at Charité Berlin, you don't need to download any data; simply uncomment the correct `ADNI_DIR` variable in `datasets.py`.
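The directory layout described above can be sketched as a small path builder. The function name and example IDs below are made up for illustration; the repo's actual logic lives in `get_image_filepath` in `datasets.py`:

```python
import os

def image_filepath(root, ptid, visit, scan_date, image_id, dx):
    """Build <PTID>/<Visit>/<PTID>_<Scan.Date>_<Visit>_<Image.ID>_<DX>_Warped.nii.gz,
    with spaces removed from the visit and '/' in the date replaced by '-'.
    Hypothetical helper, not the repo's implementation."""
    visit = visit.replace(" ", "")
    scan_date = scan_date.replace("/", "-")
    filename = "%s_%s_%s_%s_%s_Warped.nii.gz" % (ptid, scan_date, visit, image_id, dx)
    return os.path.join(root, ptid, visit, filename)

# Example with made-up IDs:
# image_filepath("data/ADNI/ADNI_2Yr_15T_quick_preprocessed",
#                "011_S_0003", "Month 24", "10/13/2006", "I118671", "AD")
```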
## Requirements

- Python 2 (mostly compatible with Python 3 syntax, but not tested)
- Scientific packages (included with anaconda): numpy, scipy, matplotlib, pandas, jupyter, scikit-learn
- Other packages: tqdm, tabulate
- PyTorch: torch, torchvision (tested with 0.3.1, but mostly compatible with 0.4)
- torchsample: I made a custom fork of torchsample which fixes some bugs. You can download it from https://github.com/jrieke/torchsample or install it directly via `pip install git+https://github.com/jrieke/torchsample`. Please use this fork instead of the original package, otherwise the code will break.


## Non-PyTorch Models

If your model is not in PyTorch but you still want to use the visualization methods, you can try to convert the model to PyTorch ([overview of conversion tools](https://github.com/ysh329/deep-learning-model-convertor)).

For Keras to PyTorch, I can recommend [nn-transfer](https://github.com/gzuidhof/nn-transfer). If you use it, keep in mind that by default, PyTorch uses channels-first format and Keras channels-last format for images. Even though nn-transfer takes care of this difference for the orientation of the convolution kernels, you may still need to permute your dimensions in the PyTorch model between the convolutional and fully-connected stage (for 3D images, I did `x = x.permute(0, 2, 3, 4, 1).contiguous()`).
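To illustrate where that permute goes, here is a toy sketch of a converted 3D model (all layer sizes are made up; only the reordering between the convolutional and fully-connected stages matters):

```python
import torch
import torch.nn as nn

class ConvertedNet(nn.Module):
    """Toy 3D CNN showing the channels-last fix after Keras conversion."""

    def __init__(self):
        super(ConvertedNet, self).__init__()
        self.conv = nn.Conv3d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 4 * 4 * 4, 2)

    def forward(self, x):            # x: (batch, 1, 4, 4, 4), channels-first
        x = self.conv(x)             # -> (batch, 8, 4, 4, 4)
        # Reorder to channels-last so the flattened vector matches the
        # weight layout that nn-transfer copied from the Keras model:
        x = x.permute(0, 2, 3, 4, 1).contiguous()
        return self.fc(x.view(x.size(0), -1))
```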
The safest bet is to switch Keras to use channels-first as well; then nn-transfer should handle everything by itself.


## Citation

If you use our code, please cite our [paper](http://arxiv.org/abs/1808.02874):

    @inproceedings{rieke2018,
      title={Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer's Disease},
      author={Rieke, Johannes and Eitel, Fabian and Weygandt, Martin and Haynes, John-Dylan and Ritter, Kerstin},
      booktitle={Machine Learning in Clinical Neuroimaging (MLCN)},
      year={2018}
    }