https://github.com/xhu248/AutoSAM
finetuning SAM with non-promptable decoder on medical images
- Host: GitHub
- URL: https://github.com/xhu248/AutoSAM
- Owner: xhu248
- License: apache-2.0
- Created: 2023-06-22T23:59:56.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-07-18T02:42:25.000Z (almost 2 years ago)
- Last Synced: 2024-08-03T01:25:05.873Z (10 months ago)
- Language: Python
- Homepage:
- Size: 408 KB
- Stars: 104
- Watchers: 1
- Forks: 9
- Open Issues: 12
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# AutoSAM
This repo is the PyTorch implementation of the paper "How to Efficiently Adapt Large Segmentation Model (SAM) to Medical Image Domains" by Xinrong Hu et al. [[`Paper`](https://arxiv.org/pdf/2306.13731.pdf)]

## Setup
The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Clone the repository locally:
```
git clone git@github.com:xhu248/AutoSAM.git
```
and install requirements:
```
cd AutoSAM; pip install -e .
```
Download the checkpoints from [SAM](https://github.com/facebookresearch/segment-anything#model-checkpoints) and place them under `AutoSAM/`.

## Dataset
The original ACDC data files can be downloaded from the [Automated Cardiac Diagnosis Challenge](https://www.creatis.insa-lyon.fr/Challenge/acdc/databases.html).
The data is provided in nii.gz format; we convert the volumes into PNG files because SAM requires RGB input.
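The conversion step can be sketched as below. This is a hypothetical illustration, not the repo's actual preprocessing script: the volume would be loaded with e.g. `nibabel`'s `get_fdata()`, and each returned array written out as a PNG with PIL or imageio.

```python
# Hypothetical helper: turn one nii.gz volume's slices into 8-bit RGB arrays,
# ready to be saved as PNG files (e.g. with PIL or imageio).
import numpy as np

def volume_to_rgb_slices(volume):
    """volume: 3D array (H, W, D), e.g. from nibabel's get_fdata()."""
    slices = []
    for k in range(volume.shape[-1]):
        sl = volume[..., k].astype(np.float32)
        lo, hi = sl.min(), sl.max()
        sl = (sl - lo) / (hi - lo + 1e-8)           # normalize to [0, 1]
        sl = (sl * 255).astype(np.uint8)            # 8-bit grayscale
        slices.append(np.stack([sl] * 3, axis=-1))  # replicate to 3 channels for SAM's RGB input
    return slices
```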
The processed data can be downloaded [here](https://drive.google.com/drive/folders/1RcpWYJ7EkwPiCR9u6HRrg7JHQ_Dr7494?usp=drive_link).

## How to use
### Finetune CNN decoder
```
python scripts/main_feat_seg.py --src_dir ${ACDC_folder} \
--data_dir ${ACDC_folder}/imgs/ --save_dir ./${output_dir} \
--b 4 --dataset ACDC --gpu ${gpu} \
--fold ${fold} --tr_size ${tr_size} --model_type ${model_type} --num_classes 4
```
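The placeholder variables above might be filled in like this; the paths and values are hypothetical examples, not defaults from the repo.

```shell
# Hypothetical values for the placeholders; adjust to your setup.
ACDC_folder=/data/ACDC      # root folder of the converted PNG dataset
output_dir=acdc_run         # results are written to ./${output_dir}
gpu=0                       # GPU id
fold=0                      # cross-validation fold
tr_size=1                   # number of training volumes
model_type=vit_b            # vit_b, vit_l, or vit_h
```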
`${tr_size}` sets how many volumes are used for training; `${model_type}` is one of `vit_b` (default), `vit_l`, or `vit_h`.

### Finetune AutoSAM
```
python scripts/main_autosam_seg.py --src_dir ${ACDC_folder} \
--data_dir ${ACDC_folder}/imgs/ --save_dir ./${output_dir} \
--b 4 --dataset ACDC --gpu ${gpu} \
--fold ${fold} --tr_size ${tr_size} --model_type ${model_type} --num_classes 4
```
This repo also supports distributed training:
```
python scripts/main_autosam_seg.py --src_dir ${ACDC_folder} --dist-url 'tcp://localhost:10002' \
--data_dir ${ACDC_folder}/imgs/ --save_dir ./${output_dir} \
--multiprocessing-distributed --world-size 1 --rank 0 -b 4 --dataset ACDC \
--fold ${fold} --tr_size ${tr_size} --model_type ${model_type} --num_classes 4
```
## Todo
* Evaluate on more datasets
* Add more baselines

## Citation
If you find our code useful, please cite:
```
@article{hu2023efficiently,
title={How to Efficiently Adapt Large Segmentation Model (SAM) to Medical Images},
author={Hu, Xinrong and Xu, Xiaowei and Shi, Yiyu},
journal={arXiv preprint arXiv:2306.13731},
year={2023}
}
```