{"id":28423995,"url":"https://github.com/icon-lab/denomamba","last_synced_at":"2025-10-08T03:10:58.711Z","repository":{"id":296345473,"uuid":"860265526","full_name":"icon-lab/DenoMamba","owner":"icon-lab","description":"Official implementation of DenoMamba: A fused state-space model for low-dose CT denoising","archived":false,"fork":false,"pushed_at":"2025-05-30T06:13:12.000Z","size":377,"stargazers_count":28,"open_issues_count":2,"forks_count":1,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-06-25T15:51:30.614Z","etag":null,"topics":["computed-tomography","contextualized-representation","deep-learning","denoising","long-range","low-dose-ct-denoising","mamba","medical-imaging","python","pytorch","sequence-models","state-space-models"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/icon-lab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-09-20T05:56:37.000Z","updated_at":"2025-06-12T21:48:27.000Z","dependencies_parsed_at":"2025-05-30T07:54:07.781Z","dependency_job_id":"37bad3dc-0039-4b24-8de4-881f9fda502c","html_url":"https://github.com/icon-lab/DenoMamba","commit_stats":null,"previous_names":["icon-lab/denomamba"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/icon-lab/DenoMamba","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/icon-lab%2FDenoMamba","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/icon-lab%2FDenoMamba/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/icon-lab%2FDenoMamba/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/icon-lab%2FDenoMamba/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/icon-lab","download_url":"https://codeload.github.com/icon-lab/DenoMamba/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/icon-lab%2FDenoMamba/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":278882209,"owners_count":26062241,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-08T02:00:06.501Z","response_time":56,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computed-tomography","contextualized-representation","deep-learning","denoising","long-range","low-dose-ct-denoising","mamba","medical-imaging","python","pytorch","sequence-models","state-space-models"],"created_at":"2025-06-05T09:35:58.482Z","updated_at":
"2025-10-08T03:10:58.705Z","avatar_url":"https://github.com/icon-lab.png","language":"Python","readme":"\u003chr\u003e\n\u003ch1 align=\"center\"\u003e\n  DenoMamba \u003cbr\u003e\n  \u003csub\u003eA fused state-space model for low-dose CT denoising\u003c/sub\u003e\n\u003c/h1\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"https://avesis.hacibayram.edu.tr/saban.ozturk\" target=\"_blank\"\u003eŞaban\u0026nbsp;Öztürk\u003c/a\u003e;\n  \u003ca href=\"https://www.linkedin.com/in/oguz-can-duran/\" target=\"_blank\"\u003eOğuz\u0026nbspCan Duran\u003c/a\u003e;\n  \u003ca href=\"https://kilyos.ee.bilkent.edu.tr/~cukur/\" target=\"_blank\"\u003eTolga\u0026nbsp;Çukur\u003c/a\u003e;\n\u003c/div\u003e  \n\u003chr\u003e\n\n\u003ch3 align=\"center\"\u003e[\u003ca href=\"https://arxiv.org/abs/2409.13094\"\u003earXiv\u003c/a\u003e]\u003c/h3\u003e\n\nOfficial PyTorch implementation of **DenoMamba**, a novel denoising method based on state-space modeling (SSM), that efficiently captures short- and long-range context in medical images. Following an hourglass architecture with encoder-decoder stages, DenoMamba employs a spatial SSM module to encode spatial context and a novel channel SSM module equipped with a secondary gated convolution network to encode latent features of channel context at each stage. Feature maps from the two modules are then consolidated with low-level input features via a convolution fusion module (CFM).\n\n\n![architecture](figures/main1.png)\n\n\n## Dependencies\n\n```\npython\u003e=3.8.13\ncuda=\u003e11.6\ntorch\u003e=2.2\ntorchvision\u003e=0.17\npillow\nscipy\nmamba-ssm==1.1.3\n```\nYou are welcome to ask any library issues.\n\n## Dataset\nTo reproduce the results reported in the paper, we recommend the following dataset processing steps:\n\nSequentially select subjects from the dataset.\nSelect 2D cross-sections from each subject.\nNormalize the selected 2D cross-sections before training and before metric calculation.\n\nYou should structure your aligned dataset in the following way:\n\n```\n/datasets/LDCT/\n  ├── Full_Dose                 ├── Quarter_Dose\n  │   ├── 1mm                   │   ├── 1mm\n  │     ├── Sharp Kernel        │     ├── Sharp Kernel\n  │       ├── L506              │       ├── L506\n  |          - ...              |          - ...\n  │       ├── ...               │       ├── ...\n  │     ├── Soft Kernel         │     ├── Soft Kernel\n  │       ├── L506              │       ├── L506\n  |          - ...              |          - ...\n  │       ├── ...               │       ├── ...\n  │   ├── 3mm                   │   ├── 3mm\n  │     ├── Sharp Kernel        │     ├── Sharp Kernel\n  │       ├── L192              │       ├── L192\n  |          - ...              |          - ...\n  │       ├── ...               │       ├── ...\n  │     ├── Soft Kernel         │     ├── Soft Kernel\n  │       ├── L192              │       ├── L192\n  |          - ...              |          - ...\n  │       ├── ...               │       ├── ... 
## Training

### Example Command for training with default arguments
```
python3 train.py --full_dose_path example_full_dose_path --quarter_dose_path example_quarter_dose_path --path_to_save example_path_to_save_the_trained_model
```
### Argument Descriptions

| Argument                | Description                                                |
|-------------------------|------------------------------------------------------------|
| --full_dose_path        | Path to full-dose images                                   |
| --quarter_dose_path     | Path to quarter-dose images                                |
| --dataset_ratio         | Fraction of the dataset to use (for large datasets)        |
| --train_ratio           | Fraction of the dataset used for training                  |
| --batch_size            | Batch size                                                 |
| --in_ch                 | Number of input image channels                             |
| --out_ch                | Number of output image channels                            |
| --learning_rate         | Learning rate                                              |
| --max_epoch             | Maximum number of epochs                                   |
| --continue_to_train     | Resume an interrupted training run                         |
| --path_to_save          | Path to save the trained model                             |
| --ckpt_path             | Path to a trained, saved checkpoint                        |
| --validation_freq       | How often to run validation                                |
| --save_freq             | How often to save the model                                |
| --batch_number          | Index of the validation batch used to show sample images   |
| --num_blocks            | Number of FuseSSM block layers                             |
| --dim                   | Base feature (channel) dimension of the FuseSSM blocks     |
| --num_refinement_blocks | Number of refinement blocks                                |

#### Key Required Arguments
```
--full_dose_path: Must specify the path to the full-dose dataset.
--quarter_dose_path: Must provide the path to the quarter-dose dataset.
--path_to_save: Required to specify where the trained model will be saved.
```
These arguments are essential for the program to locate the necessary datasets and save the trained model.

## Test

### Example Command for testing with default arguments
```
python test.py --full_dose_path example_full_dose_path --quarter_dose_path example_quarter_dose_path --ckpt_path example_ckpt_path --output_root example_path_to_save_the_output_images
```
### Argument Descriptions

| Argument                | Description                                                |
|-------------------------|------------------------------------------------------------|
| --full_dose_path        | Path to full-dose images                                   |
| --quarter_dose_path     | Path to quarter-dose images                                |
| --dataset_ratio         | Fraction of the dataset to use (for large datasets)        |
| --train_ratio           | Fraction of the dataset used for training                  |
| --batch_size            | Batch size                                                 |
| --in_ch                 | Number of input image channels                             |
| --out_ch                | Number of output image channels                            |
| --ckpt_path             | Path to a trained, saved checkpoint                        |
| --output_root           | Path to save denoised images                               |
| --num_blocks            | Number of FuseSSM block layers                             |
| --dim                   | Base feature (channel) dimension of the FuseSSM blocks     |
| --num_refinement_blocks | Number of refinement blocks                                |

#### Key Required Arguments
```
--full_dose_path: Must specify the path to the full-dose dataset.
--quarter_dose_path: Must provide the path to the quarter-dose dataset.
--ckpt_path: Required to specify where the trained model was saved.
--output_root: Required to specify where to save the output images.
```
These arguments are essential for the program to locate the necessary datasets and the trained checkpoint, and to save the denoised output images.

## Citation
You are encouraged to modify/distribute this code. However, please acknowledge this code and cite the paper appropriately.
```
@article{ozturk2024denomamba,
  title={DenoMamba: A fused state-space model for low-dose CT denoising},
  author={Şaban Öztürk and Oğuz Can Duran and Tolga Çukur},
  year={2024},
  journal={arXiv:2409.13094}
}
```

For any questions, comments, and contributions, please feel free to contact Şaban Öztürk (saban.ozturk[at]bilkent.edu.tr).

## Acknowledgments

This code uses libraries from the [Restormer](https://github.com/swz30/Restormer) and [mamba](https://github.com/state-spaces/mamba) repositories.

<hr>

Copyright © 2024, ICON Lab.
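Since the recommended workflow normalizes cross-sections before metric calculation, a sketch like the following can score the saved outputs against their full-dose references. The pairing of files is up to you and the function name is hypothetical; `peak_signal_noise_ratio` and `structural_similarity` are the real scikit-image APIs (note that scikit-image is not in the dependency list above and would need to be installed separately):

```
from typing import Tuple
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_pair(denoised: np.ndarray, full_dose: np.ndarray) -> Tuple[float, float]:
    """PSNR/SSIM between a denoised slice and its full-dose reference.

    Both arrays are assumed to be normalized to [0, 1] with the same
    scheme applied before training (see the Dataset section).
    """
    psnr = peak_signal_noise_ratio(full_dose, denoised, data_range=1.0)
    ssim = structural_similarity(full_dose, denoised, data_range=1.0)
    return psnr, ssim
```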