Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Medical Image Generation Using Diffusion Model
https://github.com/reshalfahsi/medical-image-generation
- Host: GitHub
- URL: https://github.com/reshalfahsi/medical-image-generation
- Owner: reshalfahsi
- Created: 2023-08-19T17:38:51.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2024-01-07T01:28:49.000Z (10 months ago)
- Last Synced: 2024-01-07T02:31:23.641Z (10 months ago)
- Topics: diffusion-models, image-generation, image-synthesis, medical-image-generation, medical-image-synthesis, pytorch-lightning
- Language: Jupyter Notebook
- Homepage:
- Size: 890 KB
- Stars: 0
- Watchers: 2
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Medical Image Generation Using Diffusion Model
Image synthesis can help generate more data for biomedical problems, where data collection is often hindered by legal and technical issues. The diffusion model offers a way to address this. It works by progressively adding noise, typically Gaussian, to an image until it is entirely indistinguishable from randomly generated pixels, and then gradually restoring the noisy image to its original appearance. The forward process (noise addition) is guided by a noise scheduler, and the backward process (image restoration) is carried out by a U-Net model. In this project, the diffusion model is trained on the BloodMNIST dataset from the MedMNIST collection.
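
As a rough illustration of the forward process and training objective described above (not the repository's actual code), the sketch below implements a linear noise schedule, the closed-form noising step, and a simplified noise-prediction loss in PyTorch; `unet` is a hypothetical noise-prediction model standing in for the project's U-Net, and the schedule parameters are assumptions.

```python
# Minimal sketch of the DDPM forward (noising) process and training loss.
# Illustrative only -- not the notebook's actual implementation.
import torch
import torch.nn.functional as F

T = 1000                                   # number of diffusion steps (an assumption)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative product, i.e. alpha-bar_t

def q_sample(x0: torch.Tensor, t: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
    """Closed-form forward process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps."""
    a_bar = alpha_bars.to(x0.device)[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

def diffusion_loss(unet, x0: torch.Tensor) -> torch.Tensor:
    """Simplified DDPM objective: the U-Net predicts the noise added at a random step."""
    t = torch.randint(0, T, (x0.size(0),), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    return F.mse_loss(unet(x_t, t), noise)
```

On the data side, BloodMNIST (28×28 RGB blood-cell images) can be loaded with the `medmnist` package, e.g. `medmnist.BloodMNIST(split="train", download=True)`; the notebook may prepare the data differently.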
## Experiment
To see the code under the hood, visit this [link](https://github.com/reshalfahsi/medical-image-generation/blob/master/Medical_Image_Generation_Using_Diffusion_Model.ipynb).
## Result
### Quantitative Result
Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) are used to quantitatively measure the performance of the diffusion model. The scores are presented in the table below, followed by a sketch of how such metrics can be computed.
Evaluation metric | Score
------------ | -------------
FID | 5.039
KID | 4.141 ± 0.343
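
For reference, metrics of this kind can be computed with `torchmetrics`. The sketch below assumes the real and generated images are available as `uint8` tensors of shape `(N, 3, H, W)`; the notebook's exact evaluation setup may differ.

```python
# Sketch of computing FID and KID with torchmetrics -- illustrative only.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

def evaluate(real_images: torch.Tensor, fake_images: torch.Tensor):
    """real_images / fake_images: uint8 tensors of shape (N, 3, H, W)."""
    fid = FrechetInceptionDistance(feature=2048)
    kid = KernelInceptionDistance(subset_size=50)  # subset_size must not exceed N

    fid.update(real_images, real=True)
    fid.update(fake_images, real=False)
    kid.update(real_images, real=True)
    kid.update(fake_images, real=False)

    kid_mean, kid_std = kid.compute()              # KID returns (mean, std) over subsets
    return fid.compute().item(), kid_mean.item(), kid_std.item()
```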
### Evaluation Metric Curve

Loss of the model at the training stage.
FID on the training and validation sets.
KID on the training and validation sets.

### Qualitative Result
Qualitatively, the generated images are shown in the following figure:
Unconditional progressive generation on the BloodMNIST dataset (left) and a montage of the actual BloodMNIST dataset (right).
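
The progressive generation above corresponds to the reverse (denoising) process. A minimal DDPM-style sampling loop might look like the sketch below, which reuses the `T`/`betas`/`alphas`/`alpha_bars` schedule and the hypothetical `unet` from the earlier snippet and assumes images were normalized to [-1, 1]; it is an illustration, not the notebook's code.

```python
# Sketch of DDPM reverse sampling (progressive generation) -- illustrative only.
import torch

@torch.no_grad()
def sample(unet, shape=(16, 3, 28, 28), device="cpu"):
    """Start from Gaussian noise and denoise step by step,
    keeping intermediate images to visualize progressive generation."""
    x = torch.randn(shape, device=device)
    snapshots = [x.clone()]
    for t in reversed(range(T)):                      # T, betas, alphas, alpha_bars as before
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps_hat = unet(x, t_batch)                    # predicted noise
        alpha, alpha_bar = alphas[t], alpha_bars[t]
        # DDPM update: mean of p(x_{t-1} | x_t) computed from the predicted noise.
        x = (x - (1.0 - alpha) / (1.0 - alpha_bar).sqrt() * eps_hat) / alpha.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)   # sigma_t^2 = beta_t variant
        if t % 100 == 0:
            snapshots.append(x.clamp(-1.0, 1.0))      # assumes [-1, 1] normalization
    return x, snapshots
```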
## Credit

- [A Diffusion Model from Scratch in Pytorch](https://colab.research.google.com/drive/1sjy9odlSSy0RBVgMTgP7s99NXsqglsUL)
- [MedMNIST](https://medmnist.com/)
- [Denoising Diffusion Probabilistic Models](https://arxiv.org/pdf/2006.11239.pdf)
- [Diffusion Models Beat GANs on Image Synthesis](https://arxiv.org/pdf/2105.05233.pdf)
- [PyTorch Lightning](https://lightning.ai/docs/pytorch/latest/)