{"id":18276248,"url":"https://github.com/hilab-git/myops2020","last_synced_at":"2025-07-13T20:35:20.644Z","repository":{"id":55248363,"uuid":"405649333","full_name":"HiLab-git/MyoPS2020","owner":"HiLab-git","description":null,"archived":false,"fork":false,"pushed_at":"2021-09-24T05:39:34.000Z","size":140,"stargazers_count":18,"open_issues_count":2,"forks_count":7,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-04-05T03:31:55.987Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/HiLab-git.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2021-09-12T13:28:05.000Z","updated_at":"2025-03-28T12:19:22.000Z","dependencies_parsed_at":"2022-08-14T17:50:47.621Z","dependency_job_id":null,"html_url":"https://github.com/HiLab-git/MyoPS2020","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/HiLab-git/MyoPS2020","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/HiLab-git%2FMyoPS2020","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/HiLab-git%2FMyoPS2020/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/HiLab-git%2FMyoPS2020/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/HiLab-git%2FMyoPS2020/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/HiLab-git","download_url":"https://codeload.github.com/HiLab-git/MyoPS2020/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/HiLab-git%2FMyoPS2020/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":265200481,"owners_count":23726841,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-05T12:15:33.266Z","updated_at":"2025-07-13T20:35:20.616Z","avatar_url":"https://github.com/HiLab-git.png","language":"Python","readme":"# Winner of MyoPS 2020 Challenge\n[PyMIC_link]:https://github.com/HiLab-git/PyMIC\n[nnUNet_link]:https://github.com/MIC-DKFZ/nnUNet\nThis repository provides source code for myocardial pathology segmentation (MyoPS) Challenge 2020. The method is detailed in the [paper](https://link.springer.com/chapter/10.1007/978-3-030-65651-5_5), and it won the 1st place of [MyoPS 2020](http://www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20). 
Our code is based on [PyMIC][PyMIC_link], a lightweight and easy-to-use PyTorch-based toolkit for medical image computing with deep learning, and [nnUNet][nnUNet_link], a self-adaptive segmentation method for medical images. To cite the method:
```
@inproceedings{zhai2020myocardial,
  title={Myocardial edema and scar segmentation using a coarse-to-fine framework with weighted ensemble},
  author={Zhai, Shuwei and Gu, Ran and Lei, Wenhui and Wang, Guotai},
  booktitle={Myocardial Pathology Segmentation Combining Multi-Sequence CMR Challenge},
  pages={49--59},
  year={2020},
  organization={Springer}
}
```
<img src='./picture/method.png' width="400">

## Method overview
Our solution is a coarse-to-fine method. [PyMIC][PyMIC_link] and [nnUNet][nnUNet_link] are used in the coarse and fine stages, respectively.

In the coarse segmentation stage, we use a 2D U-Net to segment three foreground classes: the complete ring-shaped myocardium, the left ventricular (LV) blood pool and the right ventricular (RV) blood pool. The network is trained with a combination of Dice loss and cross-entropy loss.

In the fine stage, we use nnUNet to segment all five foreground classes: LV blood pool, RV blood pool, LV normal myocardium, LV myocardial edema and LV myocardial scar. The coarse segmentation result serves as an extra input channel for the network, i.e., the first three channels are the C0 (`_0000`), DE (`_0001`) and T2 (`_0002`) modalities, and the fourth channel (`_0003`) is the coarse segmentation result.

## Requirements
This code depends on [Pytorch](https://pytorch.org), [PyMIC][PyMIC_link], [GeodisTK](https://github.com/taigw/GeodisTK) and [nnUNet][nnUNet_link].
To install PyMIC and GeodisTK, run:
```bash
pip install PYMIC==0.2.4
pip install GeodisTK
```
To use nnUNet, download [nnUNet][nnUNet_link] and put it in the `ProjectDir`, e.g., `/mnt/data1/swzhai/projects/MyoPS2020`.
Other requirements can be found in [`requirements.txt`](./requirements.txt).

## Configure data directories and environment variables
* Configure the data directories in `path_confg.py` according to your environment. For example, in my case:
```python
path_dict['MyoPS_data_dir'] = "/mnt/data1/swzhai/dataset/MyoPS"
path_dict['nnunet_raw_data_dir'] = "/mnt/data1/swzhai/dataset/MyoPS/nnUNet_raw_data_base/nnUNet_raw_data"
```
where `MyoPS_data_dir` is the path of the MyoPS dataset, and `nnunet_raw_data_dir` is the path of the raw data used by nnU-Net in the second stage of our method.
* After installing [nnUNet][nnUNet_link], set the environment variables as follows:
```bash
cd nnUNet
pip install -e .
export nnUNet_raw_data_base="MyoPS_data_dir/nnUNet_raw_data_base"
export nnUNet_preprocessed="MyoPS_data_dir/nnUNet_preprocessed"
export RESULTS_FOLDER="ProjectDir/result/nnunet"
```

## Dataset and Preprocessing
* Download the dataset from [MyoPS 2020](http://www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20) and put it in `MyoPS_data_dir`: specifically, `MyoPS_data_dir/data_raw/imagesTr` for training images, `MyoPS_data_dir/data_raw/labelsTr` for training ground truth and `MyoPS_data_dir/data_raw/imagesTs` for test images.

* For data preprocessing, run:
```bash
python crop_for_coarse_stage.py
```
This crops the images using the maximal bounding box over the training set, and the cropped results are saved in `MyoPS_data_dir/data_preprocessed/imagesTr`, `MyoPS_data_dir/data_preprocessed/labelsTr` and `MyoPS_data_dir/data_preprocessed/imagesTs`, respectively.
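As a rough illustration of what this cropping step does, the sketch below computes a margin-padded bounding box from a mask and crops a volume to it. This is a minimal sketch, not the exact code of `crop_for_coarse_stage.py`: the function names, the 5-voxel margin, and the use of SimpleITK and numpy are illustrative assumptions.

```python
# Minimal sketch of bounding-box cropping (not the exact code of
# crop_for_coarse_stage.py); function names and the margin are illustrative.
import numpy as np
import SimpleITK as sitk

def bounding_box(mask, margin=5):
    """Return per-axis (min, max) indices of the nonzero region of a 3D mask
    array (assumed non-empty), padded by a margin and clipped to the bounds."""
    bbox = []
    for axis, idx in enumerate(np.nonzero(mask)):
        lo = max(int(idx.min()) - margin, 0)
        hi = min(int(idx.max()) + margin + 1, mask.shape[axis])
        bbox.append((lo, hi))
    return bbox  # [(z0, z1), (y0, y1), (x0, x1)]

def crop_to_bbox(image_path, bbox, output_path):
    """Crop a volume to the given bounding box and save it.
    (Image metadata handling is simplified here.)"""
    img = sitk.ReadImage(image_path)
    arr = sitk.GetArrayFromImage(img)  # indexed as (z, y, x)
    (z0, z1), (y0, y1), (x0, x1) = bbox
    cropped = sitk.GetImageFromArray(arr[z0:z1, y0:y1, x0:x1])
    cropped.SetSpacing(img.GetSpacing())
    sitk.WriteImage(cropped, output_path)
```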
`crop_information.json` in each folder contains the bounding-box coordinates that are used later to map the final segmentation results back to the original image space.

## Coarse segmentation model
We use five-fold cross-validation for training and validation of the coarse model.

### Training and cross validation
* Run the following command to create the csv files of the training and testing datasets required by [PyMIC][PyMIC_link], and to split the training data into five folds. The csv files will be saved to `config/data`.
```bash
python write_csv_files.py
```
* For training and validation of the first fold, run the following commands. The segmentation model will be saved in `model/unet2d/fold_1`, and the prediction of the validation data for the first fold will be saved in `result/unet2d`.
```bash
python myops_run.py train config/train_val.cfg 1
python myops_run.py test  config/train_val.cfg 1
```
* Repeat the above step for fold_2 to fold_5.
* After all five folds are trained, set `ground_truth_folder_root` to the correct value in `config/evaluation.cfg` and run the following command to obtain the cross-validation Dice scores of classes 1, 2 and 3.
```bash
pymic_evaluate_seg config/evaluation.cfg
```
* For post-processing, run:
```bash
python postprocess.py result/unet2d result/unet2d_post
```
* The post-processed results will be saved in `result/unet2d_post`. You can set `segmentation_folder_root = result/unet2d_post` in `config/evaluation.cfg` and run the evaluation code again. The average Dice scores before and after post-processing on my machine are:

|                  | class_1 | class_2 | class_3 | average |
|------------------|---------|---------|---------|---------|
| No postprocess   | 0.8780  | 0.9067  | 0.9180  | 0.9009  |
| With postprocess | 0.8785  | 0.9095  | 0.9234  | 0.9038  |

### Inference for testing data
* We use an ensemble of the five models obtained during five-fold cross-validation for inference. Open `config/test.cfg` and set `ckpt_name` to the list of the best-performing checkpoints of the five folds. The best-performing iteration number for fold i can be found in `model/unet2d/fold_i/model_best.txt`. Run the following command for inference; the results will be saved in `result/unet2d_test`.
```bash
python myops_test.py test config/test.cfg
```
* Run this command to post-process the segmentation of the testing images:
```bash
python postprocess.py result/unet2d_test result/unet2d_test_post
```

## Fine segmentation
In the fine segmentation stage, we use nnUNet to segment all the classes. This section depends heavily on [nnUNet][nnUNet_link], so make sure you have some basic experience with it before running the following steps.

Tip: to reduce training time, you can change `self.max_num_epochs = 1000` to `self.max_num_epochs = 300` in `nnUNet/nnunet/training/network_training/nnUNetTrainerV2.py`.

### Data preparation
* Run the following commands to prepare the training and testing data for nnUNet. Note that the input of nnUNet has four channels, as mentioned above; the sketch after this step illustrates the channel naming convention.
```bash
python crop_for_fine_stage.py
python create_dataset_json.py
```
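The four-channel input follows nnUNet's `_0000`..`_0003` filename suffix convention, with the coarse segmentation as the fourth channel. The sketch below is a minimal illustration of that layout, not the actual logic of `crop_for_fine_stage.py` or `create_dataset_json.py`; the case ID `case01` and the paths are hypothetical.

```python
# Minimal illustration of the 4-channel nnUNet input layout used in the fine
# stage (the real preparation is done by crop_for_fine_stage.py and
# create_dataset_json.py). "case01" and the source paths are hypothetical.
import shutil
from pathlib import Path

images_tr = Path("nnUNet_raw_data/Task112_MyoPS/imagesTr")
images_tr.mkdir(parents=True, exist_ok=True)

channels = {
    "0000": "case01_C0.nii.gz",          # C0 modality
    "0001": "case01_DE.nii.gz",          # DE modality
    "0002": "case01_T2.nii.gz",          # T2 modality
    "0003": "case01_coarse_seg.nii.gz",  # coarse-stage segmentation result
}
for suffix, src in channels.items():
    # nnUNet expects one file per channel: <case_id>_<channel>.nii.gz
    shutil.copy(src, images_tr / f"case01_{suffix}.nii.gz")
```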
### Training
* Dataset conversion and preprocessing. Run:
```bash
nnUNet_plan_and_preprocess -t 112 --verify_dataset_integrity
```
* Train the 2D U-Net. For FOLD in [0, 1, 2, 3, 4], run:
```bash
nnUNet_train 2d nnUNetTrainerV2 Task112_MyoPS FOLD --npz
```
* Train the 2.5D (3D) U-Net. For FOLD in [0, 1, 2, 3, 4], run:
```bash
nnUNet_train 3d_fullres nnUNetTrainerV2 Task112_MyoPS FOLD --npz
```
### Inference
* Here we have two fine models, i.e., the 2D U-Net and the 2.5D U-Net. To find the best configuration for inference, run:
```bash
nnUNet_find_best_configuration -m 2d 3d_fullres -t 112
```
* The terminal will print the commands for running inference with each model and for ensembling their outputs. In my case, I get the following commands:
```bash
nnUNet_predict -i FOLDER_WITH_TEST_CASES -o OUTPUT_FOLDER_MODEL1 -tr nnUNetTrainerV2 -ctr nnUNetTrainerV2CascadeFullRes -m 2d -p nnUNetPlansv2.1 -t Task112_MyoPS

nnUNet_predict -i FOLDER_WITH_TEST_CASES -o OUTPUT_FOLDER_MODEL2 -tr nnUNetTrainerV2 -ctr nnUNetTrainerV2CascadeFullRes -m 3d_fullres -p nnUNetPlansv2.1 -t Task112_MyoPS

nnUNet_ensemble -f OUTPUT_FOLDER_MODEL1 OUTPUT_FOLDER_MODEL2 -o OUTPUT_FOLDER -pp result/nnunet/nnUNet/ensembles/Task112_MyoPS/ensemble_2d__nnUNetTrainerV2__nnUNetPlansv2.1--3d_fullres__nnUNetTrainerV2__nnUNetPlansv2.1/postprocessing.json
```
* In my case, `FOLDER_WITH_TEST_CASES` is `nnunet_raw_data_dir/Task112_MyoPS/imagesTs`, `OUTPUT_FOLDER_MODEL1` is `result/nnunet/test_2D`, `OUTPUT_FOLDER_MODEL2` is `result/nnunet/test_3D`, and `OUTPUT_FOLDER` is `result/nnunet/test_ensemble`.

* Note: add `--save_npz` to the `nnUNet_predict` commands (and `--npz` to `nnUNet_train`, as above) so that `.npz` files containing the model's softmax probabilities are saved; these are required for ensembling.

* Because the test images are cropped twice in the whole pipeline, the fine segmentation results need to be inserted back into the original image space. Run the following command; the final segmentation results are saved in `result/nnunet/test_ensemble_original`.
```bash
python get_final_test.py result/nnunet/test_ensemble result/nnunet/test_ensemble_original
```
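Conceptually, `get_final_test.py` pastes each cropped segmentation back into a zero-filled volume of the original shape, using the coordinates stored in `crop_information.json`. The sketch below shows that idea under assumed function and JSON key names; it is not the repository's actual implementation.

```python
# Conceptual sketch of mapping a cropped segmentation back into the original
# image space (the actual logic lives in get_final_test.py). The JSON key
# names and file names below are assumptions for illustration.
import json
import numpy as np
import SimpleITK as sitk

def restore_to_original(seg_path, reference_path, bbox, output_path):
    """Paste a cropped segmentation into a zero-filled array with the
    original image's shape, using the saved bounding-box coordinates."""
    ref = sitk.ReadImage(reference_path)  # original, uncropped image
    seg = sitk.GetArrayFromImage(sitk.ReadImage(seg_path))
    full = np.zeros_like(sitk.GetArrayFromImage(ref), dtype=seg.dtype)
    (z0, z1), (y0, y1), (x0, x1) = bbox
    full[z0:z1, y0:y1, x0:x1] = seg
    out = sitk.GetImageFromArray(full)
    out.CopyInformation(ref)  # restore spacing, origin and direction
    sitk.WriteImage(out, output_path)

# Example usage (hypothetical key and file names):
with open("crop_information.json") as f:
    bbox = json.load(f)["case01"]
restore_to_original("case01_seg.nii.gz", "case01_original.nii.gz",
                    bbox, "case01_seg_original.nii.gz")
```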