{"id":13574391,"url":"https://github.com/oneapi-src/medical-imaging-diagnostics","last_synced_at":"2025-04-04T15:30:50.866Z","repository":{"id":222214610,"uuid":"536270364","full_name":"oneapi-src/medical-imaging-diagnostics","owner":"oneapi-src","description":"AI Starter Kit for image-based abnormalities for different diseases classification using Intel® Optimized Tensorflow*","archived":true,"fork":false,"pushed_at":"2024-02-01T23:51:30.000Z","size":371,"stargazers_count":1,"open_issues_count":1,"forks_count":2,"subscribers_count":1,"default_branch":"main","last_synced_at":"2024-11-05T09:44:35.889Z","etag":null,"topics":["deep-learning","tensorflow"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/oneapi-src.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2022-09-13T19:00:19.000Z","updated_at":"2024-09-25T16:45:39.000Z","dependencies_parsed_at":"2024-02-13T00:53:15.636Z","dependency_job_id":null,"html_url":"https://github.com/oneapi-src/medical-imaging-diagnostics","commit_stats":null,"previous_names":["oneapi-src/medical-imaging-diagnostics"],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fmedical-imaging-diagnostics","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fmedical-imaging-diagnostics/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fmedical-imaging-diagnostics/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fmedical-imaging-diagnostics/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/oneapi-src","download_url":"https://codeload.github.com/oneapi-src/medical-imaging-diagnostics/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247202574,"owners_count":20900806,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","tensorflow"],"created_at":"2024-08-01T15:00:51.185Z","updated_at":"2025-04-04T15:30:45.857Z","avatar_url":"https://github.com/oneapi-src.png","language":"Python","readme":"PROJECT NOT UNDER ACTIVE MANAGEMENT\n\nThis project will no longer be maintained by Intel.\n\nIntel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.  \n\nIntel no longer accepts patches to this project.\n\nIf you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.  
\n\nContact: webadmin@linux.intel.com\n# **Medical Imaging Diagnostics**\r\n\r\n## Introduction\r\n\r\nIn this reference kit we highlight the advantages of using Intel® oneAPI packages, especially TensorFlow* Optimizations from Intel and the Intel® Distribution for Python*. \u003cbr\u003e\r\n\r\nIn this reference kit we use a Convolutional Neural Network (CNN) model architecture for image classification based on a dataset from the healthcare domain. The CNN-based model is a promising method to diagnose disease through X-ray images. In this case, X-ray images are used for the diagnosis of pneumonia.\r\n\r\nThe model has been quantized using Intel® Neural Compressor, which enables high-performance vectorized operations on Intel platforms.\r\n\r\nCheck out more workflow examples in the [Developer Catalog](https://developer.intel.com/aireferenceimplementations).\r\n\r\n## **Table of Contents**\r\n - [Introduction](#introduction)\r\n - [Solution Technical Overview](#solution-technical-overview)\r\n - [Validated Hardware Details](#validated-hardware-details)\r\n - [How it Works](#how-it-works)\r\n - [Get Started](#get-started)\r\n - [Download the dataset](#download-the-dataset)\r\n - [Supported Runtime Environment](#supported-runtime-environment)\r\n\t- [Run Using Bare Metal](#run-using-bare-metal)\r\n - [Summary and Next Steps](#summary-and-next-steps)\r\n - [Learn More](#learn-more)\r\n - [Support](#support)\r\n - [Appendix](#appendix)\r\n\r\n## Solution Technical Overview\r\n\r\nMedical diagnosis of image-based abnormalities for different disease classification is the process of determining which abnormality or condition explains a person's symptoms and signs. It is most often referred to simply as diagnosis, with the medical context being implicit.\r\nImages are a significant component of the patient’s electronic healthcare record (EHR) and are one of the most challenging data sources to analyze because they are unstructured. As the number of images that require analysis and reporting per patient grows, global concerns around shortages of radiologists have also been reported. An AI-enabled diagnostic imaging aid can help address this challenge by increasing productivity, improving diagnosis and reading accuracy (e.g., reducing missed findings or false negatives), improving departmental throughput, and helping to reduce clinician burnout.\r\n\r\nThe most common and widely adopted application of AI algorithms in medical image diagnosis is the classification of abnormalities. With the use of machine learning (ML) and deep learning, the AI algorithm identifies images within a study that warrant further attention by the radiologist/reader to classify the disease. This helps reduce the read time, as it draws the reader’s attention to the specific image and identifies abnormalities.\r\n\r\nX-ray images are critical in the detection of lung cancer, pneumonia, certain tumors, abnormal masses, calcifications, etc. In this reference kit, we demonstrate the detection of pneumonia using X-ray images and show how a CNN model architecture can help identify and localize pneumonia in chest X-ray (CXR) images, using Intel® oneAPI packages, especially TensorFlow* Optimizations from Intel and the Intel® Distribution for Python*.\r\n\r\nThe experiment aims to classify pneumonia X-ray images to detect abnormalities from normal lung images. The goal is to improve the latency, throughput (frames/sec), and accuracy of abnormality detection by training a CNN model in batch and running inference in real time. 
Hyperparameter tuning is applied during training for further optimization. \u003cbr\u003e\r\n\r\nGPUs are the natural choice for deep learning and AI processing because they achieve a higher FPS rate, but they are also expensive and memory hungry. This experiment therefore applies model quantization to speed up the process on CPU, while still reaching the FPS rate these types of applications need to operate, showing a more cost-effective option based on Intel® Neural Compressor, which enables high-performance vectorized operations on Intel platforms. When it comes to deploying this model on edge devices, with less computing and memory resources, the experiment applies further quantization and compression to the model while keeping the same level of accuracy, showing a more efficient utilization of the underlying computing resources. \u003cbr\u003e\r\n\r\nThe solution contained in this repo uses the following Intel® packages:\r\n\r\n\u003e - **Intel® Distribution for Python***\r\n\u003e\r\n\u003e   The [Intel® Distribution for Python*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/distribution-for-python.html#gs.52te4z) provides:\r\n\u003e\r\n\u003e   - Scalable performance using all available CPU cores on laptops, desktops, and powerful servers\r\n\u003e   - Support for the latest CPU instructions\r\n\u003e   - Near-native performance through acceleration of core numerical and machine learning packages with libraries like the Intel® oneAPI Math Kernel Library (oneMKL) and Intel® oneAPI Data Analytics Library\r\n\u003e   - Productivity tools for compiling Python code into optimized instructions\r\n\u003e   - Essential Python bindings for easing integration of Intel® native tools with your Python* project\r\n\u003e\r\n\u003e - **Intel® Optimization for TensorFlow\\***\r\n\u003e\r\n\u003e   The [Intel® Optimization for TensorFlow\\*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/optimization-for-tensorflow.html):\r\n\u003e\r\n\u003e   - Accelerates AI performance with Intel® oneAPI Deep Neural Network Library (oneDNN) features such as graph optimizations and memory pool allocation.\r\n\u003e   - Automatically uses Intel® Deep Learning Boost instruction set features to parallelize and accelerate AI workloads.\r\n\u003e   - Reduces inference latency for models deployed using TensorFlow Serving*.\r\n\u003e   - Starting with TensorFlow* 2.9, takes advantage of oneDNN optimizations automatically.\r\n\u003e   - Enables optimizations when the environment variable TF_ENABLE_ONEDNN_OPTS=1 is set in TensorFlow* 2.5 through 2.8.\r\n\u003e\r\n\u003e - **Intel® Neural Compressor\\***\r\n\u003e\r\n\u003e   The [Intel® Neural Compressor\\*](https://github.com/intel/neural-compressor) aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow*, PyTorch, ONNX Runtime, and MXNet, as well as Intel extensions such as Intel® Extension for TensorFlow* and Intel® Extension for PyTorch*.\r\n\r\n## Validated Hardware Details\r\n\r\n| Recommended Hardware\r\n| ----------------------------\r\n| CPU: 2nd Gen Intel® Xeon® Platinum 8280 CPU @ 2.70GHz or higher\r\n| RAM: 187 GB\r\n| Recommended Free Disk Space: 20 GB or more\r\n\r\nCode was tested on Ubuntu\\* 22.04 LTS.\r\n\r\n## How it Works\r\n\r\n### Use Case E2E flow\r\n\r\n![Use_case_flow](assets/E2E_2.PNG)\r\n\r\n### Expected Input-Output\r\n\r\n| **Input**                                 | **Output** |\r\n| :---: | :---: 
|\r\n| X-ray image data (Normal and Infected)          |  Disease classification\r\n\r\n| **Example Input**                                 | **Example Output** |\r\n| :---: | :---: |\r\n| \u003cb\u003eX-ray image data based on the patient's complaint\u003c/b\u003e \u003cbr\u003e Fast breathing, shallow breathing, shortness of breath, or wheezing. The patient reports throat pain, chest pain, fever, and loss of appetite over the last few days. | {'Normal': 0.01, 'Pneumonia': 0.99}\r\n\r\n### Dataset\r\n\r\nThe dataset is downloaded from [1] Kaggle* and is composed of 5,863 images (JPEG) divided into 2 categories: Normal and Pneumonia. In addition, it is already divided into 3 folders: train, test and val (follow this [link](#legal_disclaimer) to read the legal disclaimer). The case study and repo can be found at [2].\r\n\r\n### Use Case E2E Architecture\r\n\r\n![Use_case_flow](assets/E2E_1.PNG)\r\n\r\n## Get Started\r\n\r\n### Download the Workflow Repository\r\n\r\nDefine the environment variables that will store the workspace path and will be used for all the commands executed using absolute paths:\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\nexport WORKSPACE=$PWD/medical-imaging-diagnostics\r\nexport DATA_DIR=$WORKSPACE/data/chest_xray\r\nexport OUTPUT_DIR=$WORKSPACE/output\r\n```\r\n\r\nCreate the working directory and clone the [Main Repository](https://github.com/oneapi-src/medical-imaging-diagnostics) into the working directory:\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\nmkdir -p $WORKSPACE \u0026\u0026 cd $WORKSPACE\r\n```\r\n\r\n```bash\r\ngit clone https://github.com/oneapi-src/medical-imaging-diagnostics.git $WORKSPACE\r\n```\r\n\r\n### Set up Conda*\r\n\r\n```bash\r\n# Download the latest Miniconda installer for linux\r\nwget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh\r\n# Install \r\nbash Miniconda3-latest-Linux-x86_64.sh\r\n# Clean downloaded file\r\nrm Miniconda3-latest-Linux-x86_64.sh\r\n```\r\n\r\nTo learn more about installing Conda, see the [Conda Linux installation instructions](https://docs.conda.io/projects/conda/en/stable/user-guide/install/linux.html).\r\n\r\n## Set Up Environment\r\n\r\nThe conda yaml dependencies are kept in $WORKSPACE/env/intel_env.yml:\r\n\r\n| **Package**                | **Version**\r\n| :---                       | :---\r\n| Neural-compressor          | neural-compressor==2.3.1\r\n| TensorFlow                 | intel-tensorflow=2.14.0\r\n\r\nFollow the next steps to set up the conda environment:\r\n\r\n```bash\r\nconda install -n base conda-libmamba-solver\r\nconda config --set solver libmamba\r\nconda env create -f $WORKSPACE/env/intel_env.yml --no-default-packages\r\nconda activate medical_diagnostics_intel\r\n```\r\n\r\nSetting up the environment is only needed once; it creates a new conda environment with the dependencies listed in the YAML configuration. This step does not remove an existing environment with the same name, so make sure no conda environment with that name already exists.
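\r\n\r\nAs a quick sanity check (illustrative only, not part of the kit's scripts), you can confirm that the activated environment resolves the pinned Intel-optimized TensorFlow* and Intel® Neural Compressor builds:\r\n\r\n```bash\r\n# Print the package versions resolved in the active conda environment\r\npython -c 'import tensorflow as tf; print(tf.__version__)'\r\npython -c 'import neural_compressor as nc; print(nc.__version__)'\r\n```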
\r\n\r\n## Download the dataset\r\n\r\n| **Use case** | Automated method to detect and classify pneumonia from medical images\r\n| :--- | :---\r\n| **Object of interest** | Medical diagnosis in the healthcare industry\r\n| **Size** | Total of 5,856 images (Pneumonia and Normal) \u003cbr\u003e\r\n| **Train:Test Split** | 90:10\r\n\r\n### Dataset preparation\r\n\r\n\u003e The Chest X-Ray Images (Pneumonia) dataset is downloaded and extracted into a \u003cb\u003edata\u003c/b\u003e folder before running the training Python module.\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\n# Navigate to inside the data folder\r\ncd $WORKSPACE/data\r\n# Download the data\r\nwget -q https://s3.eu-central-1.amazonaws.com/public.unit8.co/data/chest_xray.tar.gz\r\n# Uncompress the files\r\ntar -xf chest_xray.tar.gz \r\n```\r\n\r\n\u003e**Note**: Make sure the \"chest_xray\" folder is inside the \"data\" folder; the scripts have been written for this folder structure.\r\n\r\n\u003cbr\u003eThe folder structure looks as below after extraction of the dataset.\u003cbr\u003e\r\n\r\n```\r\n- data\r\n    - chest_xray\r\n        - train\r\n            - NORMAL\r\n            - PNEUMONIA\r\n        - test\r\n            - NORMAL\r\n            - PNEUMONIA\r\n        - val\r\n            - NORMAL\r\n            - PNEUMONIA\r\n```\r\n\r\n## Supported Runtime Environment\r\n\r\nThis reference kit offers the following option for running the fine-tuning and inference processes:\r\n\r\n- [Bare Metal](#run-using-bare-metal)\r\n\r\n## Run Using Bare Metal\r\n\r\n\u003eFollow these instructions to set up and run this workflow on your own development system.\r\n\r\n### Set Up and Run Workflow\r\n\r\nBelow are the steps to reproduce the results given in this repository:\r\n\r\n1. Training CNN model\r\n2. Hyperparameter tuning\r\n3. Model Inference\r\n4. Quantize trained models using INC and benchmarking\r\n\r\n### 1. Training CNN model\r\n\r\n\u003cbr\u003eRun the training module as given below to start training and prediction using the active environment. The module takes the following option:\r\n\r\n```bash\r\nusage: medical_diagnosis_initial_training.py  [--datadir] \r\n\r\noptional arguments:\r\n  -h,                   show this help message and exit\r\n  \r\n  --datadir \r\n                        Absolute path to the data folder containing the\r\n                        \"chest_xray\" folder, which contains \"train\", \"test\" and \"val\" \r\n                         subfolders, each with \"PNEUMONIA\" and \"NORMAL\" folders \r\n```\r\n\r\n**Command to run training**\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\ncd $WORKSPACE\r\npython src/medical_diagnosis_initial_training.py  --datadir $DATA_DIR\r\n```\r\n\r\nBy default, the model checkpoint will be saved in the \"output\" folder.\r\n\r\n\u003e **Note**: If any gcc dependency error comes up, install the build tools using sudo apt install build-essential.\r\nThe training command above runs in the Intel environment and the output trained model is saved in TensorFlow* checkpoint format.
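\r\n\r\nFor orientation only, the sketch below shows roughly what this training step involves: loading the `chest_xray` folder structure, fitting a small CNN classifier, and saving a checkpoint. It is not the kit's actual network (the training module builds its model with low-level TensorFlow* ops and its own topology), and the image size, layer sizes, and epoch count here are assumptions.\r\n\r\n```python\r\n# Illustrative sketch only; src/medical_diagnosis_initial_training.py uses a different implementation.\r\nimport os\r\nimport tensorflow as tf\r\n\r\ndata_dir = os.environ['DATA_DIR']            # .../data/chest_xray\r\nimg_size = (180, 180)                        # assumed input size\r\nbatch_size = 20\r\n\r\n# NORMAL/PNEUMONIA subfolders provide the two class labels\r\ntrain_ds = tf.keras.utils.image_dataset_from_directory(\r\n    os.path.join(data_dir, 'train'), image_size=img_size, batch_size=batch_size)\r\nval_ds = tf.keras.utils.image_dataset_from_directory(\r\n    os.path.join(data_dir, 'val'), image_size=img_size, batch_size=batch_size)\r\n\r\nmodel = tf.keras.Sequential([\r\n    tf.keras.layers.Rescaling(1.0 / 255),\r\n    tf.keras.layers.Conv2D(32, 3, activation='relu'),\r\n    tf.keras.layers.MaxPooling2D(),\r\n    tf.keras.layers.Conv2D(64, 3, activation='relu'),\r\n    tf.keras.layers.MaxPooling2D(),\r\n    tf.keras.layers.Flatten(),\r\n    tf.keras.layers.Dense(64, activation='relu'),\r\n    tf.keras.layers.Dense(2, activation='softmax'),   # Normal vs. Pneumonia\r\n])\r\nmodel.compile(optimizer=tf.keras.optimizers.Adam(1e-3),\r\n              loss='sparse_categorical_crossentropy', metrics=['accuracy'])\r\nmodel.fit(train_ds, validation_data=val_ds, epochs=5)\r\n\r\n# Save weights in TensorFlow checkpoint format, analogous to the kit's output\r\nmodel.save_weights(os.path.join(os.environ['OUTPUT_DIR'], 'sketch_ckpt'))\r\n```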
\r\n\r\n### 2. Hyperparameter tuning\r\n\r\n\u003cbr\u003e The dataset remains the same, with a 90:10 split for training and testing. The tuning module needs to be run multiple times on the same dataset, across different hyperparameters.\r\n\r\nThe following parameters have been used for tuning:\r\n\u003cbr\u003e- \"learning rates\"      : [0.001, 0.01]\r\n\u003cbr\u003e- \"batch size\"           : [10,20]\r\n\r\n```\r\nusage: medical_diagnosis_hyperparameter_tuning.py [--datadir]\r\n\r\noptional arguments:\r\n  -h,                   show this help message and exit\r\n\r\n  --datadir \r\n                        Absolute path to the data folder containing the\r\n                        \"chest_xray\" folder, which contains \"train\", \"test\" and \"val\" \r\n                         subfolders, each with \"PNEUMONIA\" and \"NORMAL\" folders\r\n\r\n```\r\n**Command to run hyperparameter tuning**\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\npython src/medical_diagnosis_hyperparameter_tuning.py --datadir  $DATA_DIR\r\n```\r\n\r\nBy default, the model checkpoint will be saved in the \"output\" folder.\r\n\r\n\u003e **Note**: The best accuracy was obtained using --codebatchsize 20 and --learningRate 0.001, and that model is also compatible with INC conversion.\r\n\r\n\u003cbr\u003e**Convert the model to frozen graph**\r\n\r\nRun the conversion module to convert the TensorFlow* checkpoint model format to frozen graph format. This frozen graph can later be used for inferencing and for Intel® Neural Compressor.\r\n\r\n```\r\nusage: python src/model_conversion.py [-h] [--model_dir] [--output_node_names]\r\n\r\noptional arguments:\r\n  -h  \r\n                            show this help message and exit\r\n  --model_dir\r\n                            Path to the latest checkpoint, e.g.\r\n                            \"./output\"...a default path is provided\r\n\r\n  --output_node_names       Default name is \"Softmax\"\r\n```\r\n\r\n**Command to run conversion**\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\npython src/model_conversion.py --model_dir $OUTPUT_DIR --output_node_names Softmax\r\n```\r\n\r\n\u003e **Note**: Make sure frozen_graph.pb is generated from the Intel model files only.\r\n\r\n### 3. Inference\r\n\r\nInference is performed on the trained model using TensorFlow* 2.14.0 with oneDNN.\r\n\r\n#### Running inference using TensorFlow*\r\n\r\n```bash\r\nusage: inference.py [--codebatchsize ] [--modeldir ]\r\n\r\noptional arguments:\r\n  -h,                       show this help message and exit\r\n\r\n  --codebatchsize           batch size used for inference\r\n                        \r\n  --modeldir                path to the frozen model \".pb\" file...users can also\r\n                              use the INC INT8 quantized model here\r\n\r\n```\r\n**Command to run inference**\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\nOMP_NUM_THREADS=4 KMP_BLOCKTIME=100 python src/inference.py --codebatchsize 1  --modeldir $OUTPUT_DIR/updated_model.pb\r\n```\r\n\r\n\u003e**Note**: The inference script above can be run in the Intel environment using different batch sizes.\u003cbr\u003e\r\n\r\n### 4. Quantize trained models using Intel® Neural Compressor\r\n\r\nIntel® Neural Compressor is used to quantize the FP32 model to an INT8 model. The optimized model is then used for evaluation and timing analysis.\r\nIntel® Neural Compressor supports many optimization methods. In this case, we use post-training quantization with the `Default Quantization Mode` method to quantize the FP32 model.
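\r\n\r\nFor illustration only, a minimal post-training quantization pass with the Intel® Neural Compressor 2.x Python API looks roughly like the sketch below. The kit's own script (src/INC/neural_compressor_conversion.py, Step-1 below) drives this step through deploy.yaml and may differ in detail; the paths, input shape, and calibration data here are placeholders.\r\n\r\n```python\r\n# Illustrative sketch, not the kit's script: post-training (static) quantization\r\n# of a TensorFlow* frozen graph with Intel® Neural Compressor 2.x.\r\nfrom neural_compressor.config import PostTrainingQuantConfig\r\nfrom neural_compressor.data import DataLoader, Datasets\r\nfrom neural_compressor.quantization import fit\r\n\r\n# Placeholder calibration data; replace with a dataloader over real X-ray images\r\n# whose shape matches the model input.\r\ncalib_dataset = Datasets('tensorflow')['dummy'](shape=(32, 224, 224, 3))\r\ncalib_loader = DataLoader(framework='tensorflow', dataset=calib_dataset, batch_size=1)\r\n\r\nq_model = fit(\r\n    model='output/updated_model.pb',      # FP32 frozen graph\r\n    conf=PostTrainingQuantConfig(),       # default quantization mode\r\n    calib_dataloader=calib_loader,\r\n)\r\nq_model.save('output/output/compressedmodel.pb')   # INT8 model\r\n```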
\r\n\r\n\u003e**Note**: Make sure frozen_graph.pb is generated from the Intel model files only. We recommend first running the hyperparameter tuning script with the default parameters to get a new model, then converting it to a frozen graph, and using that to get the compressed model; if the model gets corrupted for any reason, the script below will not run.\r\n\r\n*Step-1: Conversion of FP32 Model to INT8 Model*\r\n\r\n```bash\r\nusage: src/INC/neural_compressor_conversion.py  [--modelpath] $OUTPUT_DIR/updated_model.pb  [--outpath] $OUTPUT_DIR/output/compressedmodel.pb [--config]  ./src/INC/deploy.yaml\r\n\r\noptional arguments:\r\n  -h                          show this help message and exit\r\n\r\n  --modelpath                 path to the model trained with TensorFlow, a \".pb\" file\r\n  --outpath                   by default the quantized model will be saved in the \".model//output\" folder\r\n  --config                    YAML file for quantizing the model, default is \"./deploy.yaml\"\r\n  \r\n```\r\n\r\n**Command to run the neural_compressor_conversion**\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\n python src/INC/neural_compressor_conversion.py  --modelpath  $OUTPUT_DIR/updated_model.pb  --outpath $OUTPUT_DIR/output/compressedmodel.pb  --config  ./src/INC/deploy.yaml\r\n```\r\n\r\n\u003e The quantized model will be saved by default in the `output/output` folder as `compressedmodel.pb`\r\n\r\n*Step-2: Inferencing using quantized Model*\r\n\r\n```bash\r\nusage: inference_inc.py [--codebatchsize ] [--modeldir ]\r\n\r\noptional arguments:\r\n  -h,                       show this help message and exit\r\n\r\n  --codebatchsize           batch size used for inference\r\n                        \r\n  --modeldir                path to the frozen model \".pb\" file...users can also\r\n                              use the INC INT8 quantized model here\r\n```\r\n\r\n**Command to run inference**\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\nOMP_NUM_THREADS=4 KMP_BLOCKTIME=100 python src/INC/inference_inc.py --codebatchsize 1  --modeldir $OUTPUT_DIR/updated_model.pb\r\n```\r\n\r\n\u003e**Note**: The inference script above can be run in the Intel environment using different batch sizes.\u003cbr\u003e\r\nThe same script can be used to benchmark the INC INT8 quantized model; for more details, refer to the INC quantization section. By using different batch sizes, one can observe the gain obtained with Intel® oneDNN optimized TensorFlow* in the Intel environment. Run the script multiple times to record several trials so that the minimum value can be taken.
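\r\n\r\nFor example, a simple loop like the following can be used (illustrative only; here pointed at the INT8 model produced in Step-1). The lowest reported 'Time taken for inference' value is then the latency figure to keep:\r\n\r\n```bash\r\n# Illustrative: repeat the benchmark a few times; single runs can be noisy.\r\nfor i in 1 2 3 4 5; do\r\n  OMP_NUM_THREADS=4 KMP_BLOCKTIME=100 python src/INC/inference_inc.py --codebatchsize 1 --modeldir $OUTPUT_DIR/output/compressedmodel.pb\r\ndone\r\n```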
\r\n\r\n*Step-3: Performance of quantized Model*\r\n\r\n```bash\r\nusage: \r\nsrc/INC/run_inc_quantization_acc.py  [--datapath]  [--fp32modelpath]  [--config]   \r\n\r\nor\r\n\r\nsrc/INC/run_inc_quantization_acc.py  [--datapath]  [--int8modelpath]  [--config]\r\n   \r\noptional arguments:\r\n  -h,                       show this help message and exit\r\n\r\n  --datapath                absolute path to the data folder\r\n                        \r\n  --fp32modelpath           path to the FP32 frozen model \".pb\" file (absolute path)\r\n\r\n  --config                  path to the config file (absolute path)\r\n\r\n  --int8modelpath           path to the INT8 model \".pb\" file (absolute path)\r\n```\r\n\r\n**Command to run Evaluation of FP32 Model**\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\npython src/INC/run_inc_quantization_acc.py --datapath $DATA_DIR/val --fp32modelpath $OUTPUT_DIR/updated_model.pb --config ./src/INC/deploy.yaml\r\n```\r\n\r\n**Command to run Evaluation of INT8 Model**\r\n\r\n[//]: # (capture: baremetal)\r\n```bash\r\npython src/INC/run_inc_quantization_acc.py --datapath $DATA_DIR/val --int8modelpath $OUTPUT_DIR/output/compressedmodel.pb --config ./src/INC/deploy.yaml\r\n```\r\n\r\n## Clean Up Bare Metal\r\n\r\nFollow these steps to restore your $WORKSPACE directory to its initial state. Please note that all downloaded dataset files, the conda environment, and logs created by the workflow will be deleted. Before executing the next steps, back up your important files.
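\r\n\r\nFor example, the trained and quantized models in $OUTPUT_DIR could be archived first (an illustrative command; adjust the destination path as needed):\r\n\r\n```bash\r\n# Illustrative: archive the FP32 and INT8 models before cleaning up the workspace\r\ntar -czf $HOME/medical_diagnostics_models_backup.tar.gz -C $WORKSPACE output\r\n```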
\r\n\r\nThen remove the conda environment and the downloaded dataset files:\r\n\r\n```bash\r\nconda deactivate\r\nconda env remove -n medical_diagnostics_intel\r\n```\r\n\r\n```bash\r\nrm -rf $DATA_DIR/*\r\n```\r\n\r\n## Remove repository\r\n\r\n```bash\r\nrm -rf $WORKSPACE\r\n```\r\n\r\n## Expected Output\r\n\r\nA successful execution of medical_diagnosis_initial_training.py should produce results similar to those shown below:\r\n\r\n```bash\r\nINFO:__main__:ABS_VAL_PATH is ============================================\u003e./data/chest_xray/val\r\nINFO:__main__:ABS_TRAIN_PATH is ============================================\u003e./data/chest_xray/train\r\nINFO:__main__:ABS_TEST_PATH is ============================================\u003e./data/chest_xray/test\r\nINFO:__main__:Data paths exist , executing the programme\r\n2023-09-29 17:45:43.022617: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:375] MLIR V1 optimization pass is not enabled\r\nINFO:__main__:epoch in 4 we are evaluting warm up time : epoch_number --\u003e0 \r\nINFO:__main__:epoch in 4 we are evaluting warm up time : epoch_number --\u003e1 \r\nINFO:__main__:epoch in 4 we are evaluting warm up time : epoch_number --\u003e2 \r\nINFO:__main__:epoch in 4 we are evaluting warm up time : epoch_number --\u003e3 \r\nINFO:__main__:Warm Up  time in seconds --\u003e 6.302253\r\nINFO:__main__:epoch in 5 we are evaluting training time : epoch_number --\u003e0 \r\nINFO:__main__:epoch in 5 we are evaluting training time : epoch_number --\u003e1 \r\nINFO:__main__:epoch in 5 we are evaluting training time : epoch_number --\u003e2 \r\nINFO:__main__:epoch in 5 we are evaluting training time : epoch_number --\u003e3 \r\nINFO:__main__:epoch in 5 we are evaluting training time : epoch_number --\u003e4 \r\nINFO:__main__:Total training time in seconds --\u003e 673.476682\r\nINFO:__main__:the number of correct predcitions (TP + TN) is:196\r\nINFO:__main__:The number of wrong predictions (FP + FN) is4\r\nINFO:__main__:Accuracy of the model is :98.000000\r\n```\r\n\r\nA successful execution of medical_diagnosis_hyperparameter_tuning.py should produce results similar to those shown below; this takes time:\r\n\r\n```bash\r\nCurrent fit is at  4\r\nINFO:__main__:epoch --\u003e 0\r\nINFO:__main__:epoch --\u003e 1\r\nINFO:__main__:epoch --\u003e 2\r\nINFO:__main__:epoch --\u003e 3\r\nINFO:__main__:epoch --\u003e 4\r\nINFO:__main__:Total training time in seconds --\u003e1763.169316291809 \r\nINFO:__main__:the number of correct predcitions (TP + TN) is:196\r\nINFO:__main__:The number of wrong predictions (FP + FN) is:4\r\nINFO:__main__:Accuracy of the model is :98.000000\r\nINFO:__main__:Time taken for hyperparameter tuning -\u003e5610.670215845108\r\nINFO:__main__:best accuracy acheived in 0.980000\r\nINFO:__main__:best combination is (20, 0.001)\r\n```\r\n\r\nA successful execution of inference.py should produce results similar to those shown below:\r\n\r\n```bash\r\nINFO:__main__:Tensor Stats :(\u003ctf.Tensor 'MatMul_1:0' shape=(None, 2400) dtype=float32\u003e,)\r\nINFO:__main__:Operation Name :LeakyRelu_13\r\nINFO:__main__:Tensor Stats :(\u003ctf.Tensor 'LeakyRelu_13:0' shape=(None, 2400) dtype=float32\u003e,)\r\nINFO:__main__:Operation Name :MatMul_2\r\nINFO:__main__:Tensor Stats :(\u003ctf.Tensor 'MatMul_2:0' shape=(None, 1600) dtype=float32\u003e,)\r\nINFO:__main__:Operation Name :LeakyRelu_14\r\nINFO:__main__:Tensor Stats :(\u003ctf.Tensor 'LeakyRelu_14:0' shape=(None, 1600) dtype=float32\u003e,)\r\nINFO:__main__:Operation Name 
:MatMul_3\r\nINFO:__main__:Tensor Stats :(\u003ctf.Tensor 'MatMul_3:0' shape=(None, 800) dtype=float32\u003e,)\r\nINFO:__main__:Operation Name :LeakyRelu_15\r\nINFO:__main__:Tensor Stats :(\u003ctf.Tensor 'LeakyRelu_15:0' shape=(None, 800) dtype=float32\u003e,)\r\nINFO:__main__:Operation Name :MatMul_4\r\nINFO:__main__:Tensor Stats :(\u003ctf.Tensor 'MatMul_4:0' shape=(None, 64) dtype=float32\u003e,)\r\nINFO:__main__:Operation Name :LeakyRelu_16\r\nINFO:__main__:Tensor Stats :(\u003ctf.Tensor 'LeakyRelu_16:0' shape=(None, 64) dtype=float32\u003e,)\r\nINFO:__main__:Operation Name :MatMul_5\r\nINFO:__main__:Tensor Stats :(\u003ctf.Tensor 'MatMul_5:0' shape=(None, 2) dtype=float32\u003e,)\r\nINFO:__main__:Operation Name :Softmax\r\nINFO:__main__:Tensor Stats :(\u003ctf.Tensor 'Softmax:0' shape=(None, 2) dtype=float32\u003e,)\r\n2023-09-29 19:38:16.884707: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:375] MLIR V1 optimization pass is not enabled\r\nINFO:__main__:Time taken for inference : 0.012781\r\n```\r\n\r\nA successful execution of neural_compressor_conversion.py should produce results similar to those shown below:\r\n\r\n```bash\r\n2023-09-29 19:40:21 [WARNING] All replaced equivalent node types are {}\r\n2023-09-29 19:40:21 [INFO] Pass StripEquivalentNodesOptimizer elapsed time: 38.43 ms\r\n2023-09-29 19:40:22 [INFO] Pass PostCseOptimizer elapsed time: 782.89 ms\r\n2023-09-29 19:40:22 [INFO] Pass PostHostConstConverter elapsed time: 24.37 ms\r\n2023-09-29 19:40:22 [INFO] |*Mixed Precision Statistics|\r\n2023-09-29 19:40:22 [INFO] +------------+-------+------+\r\n2023-09-29 19:40:22 [INFO] |  Op Type   | Total | INT8 |\r\n2023-09-29 19:40:22 [INFO] +------------+-------+------+\r\n2023-09-29 19:40:22 [INFO] |   MatMul   |   6   |  6   |\r\n2023-09-29 19:40:22 [INFO] |   Conv2D   |   12  |  12  |\r\n2023-09-29 19:40:22 [INFO] |  MaxPool   |   6   |  6   |\r\n2023-09-29 19:40:22 [INFO] | QuantizeV2 |   19  |  19  |\r\n2023-09-29 19:40:22 [INFO] | Dequantize |   13  |  13  |\r\n2023-09-29 19:40:22 [INFO] +------------+-------+------+\r\n2023-09-29 19:40:22 [INFO] Pass quantize model elapsed time: 24199.29 ms\r\n2023-09-29 19:40:22 [INFO] Save tuning history to \u003cPATH\u003e/medical-imaging-diagnostics/nc_workspace/2023-09-29_19-39-09/./history.snapshot.\r\n2023-09-29 19:40:22 [INFO] Specified timeout or max trials is reached! Found a quantized model which meet accuracy goal. 
Exit.\r\n2023-09-29 19:40:22 [INFO] Save deploy yaml to \u003cPATH\u003e/medical-imaging-diagnostics/nc_workspace/2023-09-29_19-39-09/deploy.yaml\r\n2023-09-29 19:40:22 [INFO] Save quantized model to \u003cPATH\u003e/medical-imaging-diagnostics/model/output/compressedmodel.pb.\r\n```\r\n\r\nA successful execution of inference_inc.py should produce results similar to those shown below:\r\n\r\n```bash\r\nOperation Name : MatMul_4\r\nTensor Stats : (\u003ctf.Tensor 'MatMul_4:0' shape=(None, 64) dtype=float32\u003e,)\r\nOperation Name : LeakyRelu_16\r\nTensor Stats : (\u003ctf.Tensor 'LeakyRelu_16:0' shape=(None, 64) dtype=float32\u003e,)\r\nOperation Name : MatMul_5\r\nTensor Stats : (\u003ctf.Tensor 'MatMul_5:0' shape=(None, 2) dtype=float32\u003e,)\r\nOperation Name : Softmax\r\nTensor Stats : (\u003ctf.Tensor 'Softmax:0' shape=(None, 2) dtype=float32\u003e,)\r\nShape of input :  Tensor(\"Shape_1:0\", shape=(4,), dtype=int32)\r\n2023-09-29 19:41:50.033647: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:375] MLIR V1 optimization pass is not enabled\r\nINFO:__main__:Time taken for inference : 0.012837\r\n```\r\n\r\nA successful execution of run_inc_quantization_acc.py should produce results similar to those shown below:\r\n\r\n```bash\r\nINFO:__main__:Evaluating the compressed Model=========================================================\r\n2023-09-29 19:42:52 [WARNING] Force convert framework model to neural_compressor model.\r\n2023-09-29 19:42:52 [WARNING] Output tensor names should not be empty.\r\n2023-09-29 19:42:52 [WARNING] Input tensor names is empty.\r\n2023-09-29 19:42:52 [INFO] Start to run Benchmark.\r\n2023-09-29 19:42:52 [WARNING] Found possible input node names: ['Placeholder'], output node names: ['Softmax'].\r\n2023-09-29 19:42:52 [INFO] Start to evaluate the TensorFlow model.\r\nOMP: Info #255: KMP_AFFINITY: pid 3075279 tid 3075428 thread 1 bound to OS proc set 1\r\n2023-09-29 19:43:02 [INFO] Model inference elapsed time: 10289.94 ms\r\n2023-09-29 19:43:02 [INFO] \r\nperformance mode benchmark result:\r\n2023-09-29 19:43:02 [INFO] Batch size = 1\r\n2023-09-29 19:43:02 [INFO] Latency: 11.679 ms\r\n2023-09-29 19:43:02 [INFO] Throughput: 85.624 images/sec\r\n```\r\n\r\n## Summary and Next Steps\r\n\r\nThe experiment aims to classify pneumonia X-ray images to detect abnormalities from the normal lung images, using a CNN model architecture.  
\r\nThe process of developing this ML pipeline can be summarized as follows:\r\n\r\n- Setup and installation\r\n- Download the dataset\r\n- Train the CNN model\r\n- Quantize trained models using INC\r\n\r\nThis process can be replicated and used as an example for other types of problems or applications in medical imaging diagnostics, such as the detection of lung cancer, certain tumors, abnormal masses, calcifications, etc.\r\n\r\n## Learn More\r\n\r\nFor more information about AI for image-based abnormality classification across different diseases, or to read about other relevant workflow examples, see these guides and software resources:\r\n\r\n- [Intel® AI Analytics Toolkit (AI Kit)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html)\r\n- [Intel® Distribution for Python*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/distribution-for-python.html#gs.52te4z)\r\n- [Intel® Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html)\r\n\r\n## Support\r\n\r\nThe end-to-end AI for image-based abnormality classification team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/oneapi-src/medical-imaging-diagnostics/issues). Before submitting a suggestion or bug report, search the existing [GitHub issues](https://github.com/oneapi-src/medical-imaging-diagnostics/issues) to see if your issue has already been reported.\r\n\r\n## Appendix\r\n\r\n### References\r\n\r\n[1] Chest X-Ray Images (Pneumonia) dataset. Retrieved 5 October 2023, from https://www.kaggle.com/datasets/paultimothymooney/chest-xray-pneumonia\u003cbr\u003e\r\n\r\n[2] Image Classification With TensorFlow 2.0 (Without Keras). Case Study \u0026 Repo. Retrieved 5 October 2023, from https://becominghuman.ai/image-classification-with-tensorflow-2-0-without-keras-e6534adddab2\r\n\r\n### Notices \u0026 Disclaimers\r\n\r\n\u003ca id=\"legal_disclaimer\"\u003e\u003c/a\u003e\r\n\r\nPlease see this data set's applicable license for terms and conditions. Intel® Corporation does not own the rights to this data set and does not confer any rights to it.\r\n\r\n*Names and brands that may be claimed as the property of others. [Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html).\r\n\r\nPerformance varies by use, configuration, and other factors. Learn more on the [Performance Index site](https://edc.intel.com/content/www/us/en/products/performance/benchmarks/overview/).\r\n\r\nPerformance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary.\r\n\r\nIntel technologies may require enabled hardware, software, or service activation.\r\n\r\nTo the extent that any public or non-Intel datasets or models are referenced by or accessed using tools or code on this site, those datasets or models are provided by the third party indicated as the content source. Intel does not create the content and does not warrant its accuracy or quality. 
By accessing the public content, or using materials trained on or with such content, you agree to the terms associated with that content and that your use complies with the applicable license.\r\nIntel expressly disclaims the accuracy, adequacy, or completeness of any such public content, and is not liable for any errors, omissions, or defects in the content, or for any reliance on the content. Intel is not liable for any liability or damages relating to your use of public content.\r\n\r\n© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.\r\n","funding_links":[],"categories":["Table of Contents"],"sub_categories":["AI - Frameworks and Toolkits"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foneapi-src%2Fmedical-imaging-diagnostics","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Foneapi-src%2Fmedical-imaging-diagnostics","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foneapi-src%2Fmedical-imaging-diagnostics/lists"}