{"id":13574494,"url":"https://github.com/oneapi-src/visual-quality-inspection","last_synced_at":"2025-04-04T15:31:10.043Z","repository":{"id":44477726,"uuid":"506429950","full_name":"oneapi-src/visual-quality-inspection","owner":"oneapi-src","description":"AI Starter Kit for Quality Visual Inspection using Intel® Extension for Pytorch","archived":true,"fork":false,"pushed_at":"2024-02-01T23:51:20.000Z","size":278,"stargazers_count":4,"open_issues_count":1,"forks_count":1,"subscribers_count":1,"default_branch":"main","last_synced_at":"2024-11-05T09:44:40.837Z","etag":null,"topics":["deep-learning","pytorch"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/oneapi-src.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-06-22T23:00:31.000Z","updated_at":"2024-07-28T07:07:18.000Z","dependencies_parsed_at":"2024-02-13T00:49:14.577Z","dependency_job_id":null,"html_url":"https://github.com/oneapi-src/visual-quality-inspection","commit_stats":null,"previous_names":[],"tags_count":4,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fvisual-quality-inspection","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fvisual-quality-inspection/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fvisual-quality-inspection/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fvisual-quality-inspection/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/oneapi-src","download_url":"https://codeload.github.com/oneapi-src/visual-quality-inspection/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247202679,"owners_count":20900827,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","pytorch"],"created_at":"2024-08-01T15:00:52.099Z","updated_at":"2025-04-04T15:31:05.035Z","avatar_url":"https://github.com/oneapi-src.png","language":"Python","readme":"PROJECT NOT UNDER ACTIVE MANAGEMENT\n\nThis project will no longer be maintained by Intel.\n\nIntel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.  \n\nIntel no longer accepts patches to this project.\n\nIf you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.  
Contact: webadmin@linux.intel.com

# Visual Quality Inspection

## Introduction

The goal of this visual inspection use case is to provide AI-powered quality visual inspection on a dataset for the pharma industry, including different data augmentations. For this purpose, a computer vision model is built using machine learning tools/libraries from the Intel® oneAPI AI Analytics Toolkit. Specifically, Intel® Extension for PyTorch\* is utilized to enhance performance on Intel® hardware.

Check out more workflow examples and reference implementations in the [Developer Catalog](https://developer.intel.com/aireferenceimplementations).

## **Table of Contents**

- [Solution Technical Overview](#solution-technical-overview)
  - [Dataset](#dataset)
- [Validated Hardware Details](#validated-hardware-details)
- [How it Works](#how-it-works)
- [Get Started](#get-started)
  - [Download the Workflow Repository](#download-the-workflow-repository)
- [Supported Runtime Environment](#supported-runtime-environment)
  - [Run Using Bare Metal](#run-using-bare-metal)
- [Expected Outputs](#expected-outputs)
- [Summary and Next Steps](#summary-and-next-steps)
  - [Adapt to your dataset](#adapt-to-your-dataset)
- [Learn More](#learn-more)
- [Support](#support)
- [Appendix](#appendix)

## Solution Technical Overview

PyTorch\* is an open source machine learning framework based on the popular Torch library. PyTorch\* is designed to provide good flexibility and high speed for deep neural network implementation. PyTorch\* differs from other deep learning frameworks in that it uses dynamic computation graphs. While static computation graphs (like those used in TensorFlow\*) are defined prior to runtime, dynamic graphs are defined "on the fly" via the forward computation. In other words, the graph is rebuilt from scratch on every iteration.
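To make the contrast concrete, here is a minimal, self-contained sketch (not taken from this repository) of a dynamic graph: the amount of computation in `forward` depends on the input, so autograd records a fresh graph on every call.

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    """Toy module whose computation graph depends on runtime data."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)

    def forward(self, x):
        # The loop count is decided from the data at runtime, so the
        # autograd graph is rebuilt "on the fly" on every forward pass.
        steps = int(x.abs().mean().item() * 3) + 1
        for _ in range(steps):
            x = torch.relu(self.fc(x))
        return x.sum()

model = DynamicNet()
loss = model(torch.randn(4, 8))
loss.backward()  # gradients follow whatever graph this call produced
```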
Manual visual inspection involves analyzing data and identifying anomalies through human observation and intuition. It can be useful in certain scenarios, but it also has several challenges and limitations. Some difficulties found when performing manual anomaly detection are subjectivity, limited pattern recognition, lack of consistency, time and cost, and detection latency, among others.

The solution contained in this repo uses the following Intel® packages:

> - **Intel® Distribution for Python\***
>
>   The [Intel® Distribution for Python\*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/distribution-for-python.html#gs.52te4z) provides:
>
>   - Scalable performance using all available CPU cores on laptops, desktops, and powerful servers
>   - Support for the latest CPU instructions
>   - Near-native performance through acceleration of core numerical and machine learning packages with libraries like the Intel® oneAPI Math Kernel Library (oneMKL) and Intel® oneAPI Data Analytics Library
>   - Productivity tools for compiling Python code into optimized instructions
>   - Essential Python bindings for easing integration of Intel® native tools with your Python\* project
>
> - **Intel® Extension for PyTorch\***
>
>   The [Intel® Extension for PyTorch\*](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master):
>
>   - Extends PyTorch\* with up-to-date feature optimizations for an extra performance boost on Intel hardware
>   - Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs
>   - Through the PyTorch\* xpu device, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs with PyTorch\*
>   - Provides optimizations for both eager mode and graph mode
>
> - **Intel® Neural Compressor**
>
>   The [Intel® Neural Compressor](https://github.com/intel/neural-compressor) aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, ONNX Runtime, and MXNet, as well as Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch.

For more details, visit the [Quality Visual Inspection GitHub](https://github.com/oneapi-src/visual-quality-inspection) repository.

## Solution Technical Details

![Use_case_flow](assets/quality_visual_inspection_e2e_flow.png)

This sample code is implemented for CPU using the Python language, and Intel® Extension for PyTorch\* v1.13.120 is used in this code base. VGGNet, a classical convolutional neural network (CNN) architecture, is used for training. The Visual Geometry Group (VGG) model was developed to increase the depth of such CNNs in order to increase model performance, and it is widely used in computer vision use cases. Hyperparameter tuning is applied during optimization, sweeping the learning rate to control how quickly the model adapts to the problem in order to increase model performance.
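As an illustration of how Intel® Extension for PyTorch\* typically plugs into such a training script, the hedged sketch below applies `ipex.optimize()` to a VGG-16 model and its optimizer; the exact integration in this repository's scripts may differ.

```python
import torch
import intel_extension_for_pytorch as ipex
from torchvision.models import vgg16

model = vgg16(num_classes=2)  # good vs. defective pill
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

# ipex.optimize applies CPU-oriented optimizations such as operator
# fusion and optimized weight layouts to the model and optimizer.
model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer)

images = torch.randn(8, 3, 224, 224)   # dummy batch at the 224x224 input size
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```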
### Dataset

[MVTec AD](https://www.mvtec.com/company/research/datasets/mvtec-ad) [[1]](#mvtec_ad_dataset) is a dataset for benchmarking anomaly detection methods with a focus on industrial inspection (follow this [link](#legal_disclaimer) to read the legal disclaimer). It contains over 5000 high-resolution images divided into fifteen different object and texture categories. Each category comprises a set of defect-free training images and a test set of images with various kinds of defects as well as images without defects. We are going to use only the Pill (262 MB) dataset for this use case.

More information can be found in the case study [Explainable Defect Detection Using Convolutional Neural Networks: Case Study](https://towardsdatascience.com/explainable-defect-detection-using-convolutional-neural-networks-case-study-284e57337b59) [[2]](#case_study) and in [VGG16 Model Training](https://github.com/OlgaChernytska/Visual-Inspection) [[3]](#vgg).

![Statistical_overview_of_the_MVTec_AD_dataset](assets/mvtec_dataset_characteristics.JPG)
<br>
Table 1: Statistical overview of the MVTec AD dataset. For each category, the number of training and test images is given together with additional information about the defects present in the respective test images [[4]](#mvtec_ad).

## Validated Hardware Details

There are workflow-specific hardware and software setup requirements.

| Recommended Hardware                                            | Precision  |
| --------------------------------------------------------------- | ---------- |
| CPU: 2nd Gen Intel® Xeon® Platinum 8280 CPU @ 2.70GHz or higher | FP32, INT8 |
| RAM: 187 GB                                                     |            |
| Recommended Free Disk Space: 20 GB or more                      |            |

Code was tested on Ubuntu\* 22.04 LTS.

## How it Works

This reference use case uses a classical convolutional neural network (CNN) architecture, named VGGNet, implemented for CPU using the Python language and Intel® Extension for PyTorch\*. VGG was developed to increase the depth of such CNNs in order to increase model performance, and it is widely used in computer vision use cases.

The use case can be summarized in three steps:

1. Training
1. Tuning
1. Inference

### 1) Training

VGG-16 is a convolutional neural network that is 16 layers deep; it is used as the classification architecture to separate the good and defective samples from the production pipeline.
Intel® Extension for PyTorch\* is used for transfer learning of the VGGNet classification architecture on the pill dataset created.

| **Input Size**          | 224x224   |
| :---------------------- | :-------- |
| **Output Model format** | PyTorch\* |

### 2) Tuning

The VGGNet classification architecture is fine-tuned on the dataset by adjusting the hyperparameters to reach maximum accuracy. Different learning rates are applied to the model architecture, and the number of epochs is increased to reach maximum accuracy on the training set (a sketch of such a sweep appears after the inference step below). The hyperparameters considered for tuning are learning rate and epochs.

_Parameters considered:_ `Learning Rate, Epochs, Target training accuracy`

> A code replication of GridSearchCV was created to support the code base.

### 3) Inference

Inference is performed using the trained model with:

- Intel® Extension for PyTorch\*
- Intel® Neural Compressor
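The repository's `-hy 1` flag (see the training commands below) drives the tuning step; purely as an illustration, a grid sweep over learning rate and epochs in the spirit of GridSearchCV could look like the following, where `train_and_score` is a hypothetical helper standing in for the actual training loop.

```python
import itertools
import torch
from torchvision.models import vgg16

def train_and_score(lr: float, epochs: int) -> float:
    """Hypothetical helper: train a fresh VGG-16 and return test accuracy."""
    model = vgg16(num_classes=2)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    # ... the real training loop over the pill dataset would go here ...
    return 0.0  # placeholder score

# Grid mirroring the tuned hyperparameters: learning rate and epochs.
learning_rates = [1e-3, 1e-4, 1e-5]
epoch_options = [10, 20]

best_lr, best_epochs = max(
    itertools.product(learning_rates, epoch_options),
    key=lambda cfg: train_and_score(*cfg),
)
print(f"best configuration: lr={best_lr}, epochs={best_epochs}")
```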
## Get Started

Define an environment variable that will store the workspace path; this can be an existing directory or one created specifically for this reference use case. You can use the following commands.

[//]: # (capture: baremetal)

```
export WORKSPACE=$PWD/visual-quality-inspection
export DATA_DIR=$WORKSPACE/data
export OUTPUT_DIR=$WORKSPACE/output
```

### Download the Workflow Repository

Create a working directory for the workflow and clone the [Quality Visual Inspection](https://github.com/oneapi-src/visual-quality-inspection) repository into your working directory.

```
mkdir -p $WORKSPACE && cd $WORKSPACE
git clone https://github.com/oneapi-src/visual-quality-inspection.git .
```

### Set up Miniconda

1. Download the appropriate Miniconda installer for Linux.

   ```bash
   wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
   ```

2. In your terminal, run:

   ```bash
   bash Miniconda3-latest-Linux-x86_64.sh
   ```

3. Delete the downloaded file.

   ```bash
   rm Miniconda3-latest-Linux-x86_64.sh
   ```

To learn more about conda installation, see the [Conda Linux installation instructions](https://docs.conda.io/projects/conda/en/stable/user-guide/install/linux.html).

### Set Up Environment

The conda yaml dependencies are kept in `$WORKSPACE/env/intel_env.yml`.

| **Packages required in YAML file** | **Version** |
| :--------------------------------- | :---------- |
| `python`                           | 3.9         |
| `intel-aikit-pytorch`              | 2024.0      |
| `scikit-learn-intelex`             | 2024.0.0    |
| `seaborn`                          | 0.13.0      |
| `dataset_librarian`                | 1.0.4       |

Follow the next steps to set up the conda environment:

```sh
conda config --set solver libmamba # if conda < 23.10.0
conda env create -f $WORKSPACE/env/intel_env.yml --no-default-packages
conda activate visual_inspection_intel
```

Environment setup is required only once. This step does not clean up an existing environment with the same name, so make sure no conda environment with the same name already exists. During this setup, the `visual_inspection_intel` conda environment will be created with the dependencies listed in the YAML configuration.
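With the environment active, an optional sanity check along these lines confirms that the key packages imported correctly (this assumes the environment provides PyTorch\*, Intel® Extension for PyTorch\*, and Intel® Extension for Scikit-learn\*):

```python
# Optional sanity check for the visual_inspection_intel environment.
import torch
import intel_extension_for_pytorch as ipex
import sklearnex  # provided by the scikit-learn-intelex package

print(f"PyTorch* version:                      {torch.__version__}")
print(f"Intel® Extension for PyTorch* version: {ipex.__version__}")
```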
### Download the Dataset

> The pill dataset is downloaded and extracted into a folder before running the training Python module.

Download the MVTec dataset using the Intel® AI Reference Models Dataset Librarian (you can get the dataset from [MVTec AD](https://www.mvtec.com/company/research/datasets/mvtec-ad) [[1]](#mvtec_ad_dataset)). We are going to use the **pill dataset**.

More details on the Intel® AI Reference Models Dataset Librarian can be found [here](https://github.com/IntelAI/models/tree/master/datasets/dataset_api), and terms and conditions can be found [here](https://github.com/IntelAI/models/blob/master/datasets/dataset_api/src/dataset_librarian/terms_and_conditions.txt).

[//]: # (capture: baremetal)

```sh
python -m dataset_librarian.dataset -n mvtec-ad --download --preprocess -d $DATA_DIR
```

Note: See this dataset's applicable license for terms and conditions. Intel Corporation does not own the rights to this dataset and does not confer any rights to it.

#### Dataset Preparation

The dataset available from the source requires filtering before training. Assuming the pill dataset is downloaded with the Intel® AI Reference Models Dataset Librarian or from the dataset source given above in this document, follow the steps below to filter the dataset extracted from the source.

[//]: # (capture: baremetal)

```sh
mkdir -p $DATA_DIR/{train/{good,bad},test/{good,bad}}

cd $DATA_DIR/pill/train/good/
cp $(ls | head -n 210) $DATA_DIR/train/good/
cp $(ls | tail -n 65) $DATA_DIR/test/good/

cd $DATA_DIR/pill/test/combined
cp $(ls | head -n 17) $DATA_DIR/train/bad/
cp $(ls | tail -n 5) $DATA_DIR/test/bad/
```

**Data Cloning**

> **Note** Data cloning is an optional step.

Assuming the pill dataset is downloaded and the folder structure is created as mentioned above, use the code below to clone the data to handle data distribution. Data will be cloned in the same directory (e.g. "data").

```
usage: clone_dataset.py [-h] [-d DATAPATH]

optional arguments:
  -h, --help            show this help message and exit
  -d DATAPATH, --datapath DATAPATH
                        dataset path which consists of train and test folders
```

Use the sample command below to perform data cloning.

[//]: # (capture: baremetal)

```sh
cd $WORKSPACE/src
python clone_dataset.py -d $DATA_DIR
```
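`clone_dataset.py` in the repository is the authoritative implementation. Purely to illustrate the idea of cloning, the hypothetical sketch below duplicates every image once in each class folder; the actual script's logic and file types may differ.

```python
import argparse
import shutil
from pathlib import Path

def clone_images(datapath: str) -> None:
    """Illustrative only: duplicate each image once per class folder."""
    for split in ("train", "test"):
        for label in ("good", "bad"):
            folder = Path(datapath) / split / label
            for img in list(folder.glob("*.png")):
                # Copy alongside the original so the distribution grows in place.
                shutil.copy(img, folder / f"clone_{img.name}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--datapath",
                        help="dataset path which consists of train and test folders")
    args = parser.parse_args()
    clone_images(args.datapath)
```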
### Supported Runtime Environment

This reference kit offers one option for running the fine-tuning and inference processes:

- [Bare Metal](#run-using-bare-metal)

> **Note**: Performance was tested on Xeon® based processors. Some portions of the reference kit may run slower on a client machine, so use the supported flags to reduce the epochs/batch size and run training or inference faster.

## Run Using Bare Metal

> **Note**: Follow these instructions to set up and run this workflow on your own development system.

### Set Up and Run Workflow

Below are the steps to reproduce the benchmarking results given in this repository:

1. Training VGG16 model
1. Model Inference
1. Quantize trained models using INC and benchmarking

### 1. Training VGG16 model

Run the training module as given below to start training and prediction using the active environment. This module takes options to run the training with and without hyperparameter tuning.

```
usage: training.py [-h] [-d DATAPATH] [-o OUTMODEL] [-a DATAAUG] [-hy HYPERPARAMS]

optional arguments:
  -h, --help            show this help message and exit
  -d DATAPATH, --datapath DATAPATH
                        dataset path which consists of train and test folders
  -o OUTMODEL, --outmodel OUTMODEL
                        outfile name without extension to save the model.
  -a DATAAUG, --dataaug DATAAUG
                        use 1 for enabling data augmentation, default is 0
  -hy HYPERPARAMS, --hyperparams HYPERPARAMS
                        use 1 for enabling hyperparameter tuning, default is 0
```

_You need to change directory to the src folder_

[//]: # (capture: baremetal)

```sh
cd $WORKSPACE/src
```

_Command to run training without data augmentation or hyperparameter tuning_

[//]: # (capture: baremetal)

```sh
python training.py -d $DATA_DIR -o $OUTPUT_DIR/pill_intel_model.h5
```

The model is saved in OUTPUT_DIR as pill_intel_model.h5.

_Command to run training with data augmentation_

[//]: # (capture: baremetal)

```sh
python training.py -d $DATA_DIR -a 1 -o $OUTPUT_DIR/pill_intel_model.h5
```

_Command to run training with hyperparameter tuning_

[//]: # (capture: baremetal)

```sh
python training.py -d $DATA_DIR -hy 1 -o $OUTPUT_DIR/pill_intel_model.h5
```

_Command to run training with data augmentation and hyperparameter tuning_

[//]: # (capture: baremetal)

```sh
python training.py -d $DATA_DIR -a 1 -hy 1 -o $OUTPUT_DIR/pill_intel_model.h5
```

### 2. Inference

#### Running inference using PyTorch\*

Use the following commands to run inference on the test images and get the inference timing for each batch of images.<br>

```
usage: pytorch_evaluation.py [-h] [-d DATA_FOLDER] [-m MODEL_PATH] [-b BATCHSIZE]

optional arguments:
  -h, --help            show this help message and exit
  -d DATA_FOLDER, --data_folder DATA_FOLDER
                        dataset path which consists of train and test folders
  -m MODEL_PATH, --model_path MODEL_PATH
                        Absolute path to the h5 PyTorch* model with extension ".h5"
  -b BATCHSIZE, --batchsize BATCHSIZE
                        batch size to use for inference, default is 1
```

_You need to activate the visual_inspection_intel environment and change directory to the src folder_

[//]: # (capture: baremetal)

```sh
cd $WORKSPACE/src
```

_Command to run real-time inference using Intel® Extension for PyTorch\*_

```sh
python pytorch_evaluation.py -d $DATA_DIR -m $OUTPUT_DIR/{trained_model.h5} -b 1
```

Using the model from the previous steps:

[//]: # (capture: baremetal)

```sh
python pytorch_evaluation.py -d $DATA_DIR -m $OUTPUT_DIR/pill_intel_model.h5 -b 1
```

> By using different batch sizes one can observe the gain obtained using Intel® Extension for PyTorch\*.
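What `pytorch_evaluation.py` measures can be pictured with the hedged sketch below: load the saved model, apply `ipex.optimize()` for inference, and time each batch. The paths and model-loading details here are assumptions, not the script's exact code.

```python
import time
import torch
import intel_extension_for_pytorch as ipex
from torchvision import datasets, transforms

# Assumed layout: $DATA_DIR/test/{good,bad}; loading details are illustrative.
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
test_set = datasets.ImageFolder("data/test", transform=transform)
loader = torch.utils.data.DataLoader(test_set, batch_size=1)

model = torch.load("output/pill_intel_model.h5", map_location="cpu")
model.eval()
model = ipex.optimize(model)  # inference-time optimizations for Intel CPUs

times = []
with torch.no_grad():
    for images, _ in loader:
        start = time.time()
        model(images)
        times.append(time.time() - start)

print(f"Average per-batch inference time: {sum(times) / len(times):.4f}s")
```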
### 3. Quantize trained models using Intel® Neural Compressor

Intel® Neural Compressor is used to quantize the FP32 model to an INT8 model, and the optimized model is then used for evaluation and timing analysis. Intel® Neural Compressor supports many optimization methods; in this case, we used post-training quantization with the `Accuracy aware mode` method to quantize the FP32 model.

_Step 1: Conversion of FP32 Model to INT8 Model_

```
usage: neural_compressor_conversion.py [-h] [-d DATAPATH] [-m MODELPATH]
                                       [-c CONFIG] [-o OUTPATH]

optional arguments:
  -h, --help            show this help message and exit
  -d DATAPATH, --datapath DATAPATH
                        dataset path which consists of train and test folders
  -m MODELPATH, --modelpath MODELPATH
                        Model path trained with PyTorch* ".h5" file
  -c CONFIG, --config CONFIG
                        Yaml file for quantizing model, default is
                        "./config.yaml"
  -o OUTPATH, --outpath OUTPATH
                        by default the quantized model will be saved in the
                        ./output folder
```

_Command to run neural_compressor_conversion_

```sh
cd $WORKSPACE/src/intel_neural_compressor
python neural_compressor_conversion.py -d $DATA_DIR -m $OUTPUT_DIR/{trained_model.h5} -o $OUTPUT_DIR
```

Using the model from the previous steps:

[//]: # (capture: baremetal)

```sh
cd $WORKSPACE/src/intel_neural_compressor
python neural_compressor_conversion.py -d $DATA_DIR -m $OUTPUT_DIR/pill_intel_model.h5 -o $OUTPUT_DIR
```

> The quantized model will be saved by default in the `OUTPUT_DIR` folder.

_Step 2: Inference using the quantized Model_

```
usage: neural_compressor_inference.py [-h] [-d DATAPATH] [-fp32 FP32MODELPATH]
                                      [-c CONFIG] [-int8 INT8MODELPATH]

optional arguments:
  -h, --help            show this help message and exit
  -d DATAPATH, --datapath DATAPATH
                        dataset path which consists of train and test folders
  -fp32 FP32MODELPATH, --fp32modelpath FP32MODELPATH
                        Model path trained with PyTorch* ".h5" file
  -c CONFIG, --config CONFIG
                        Yaml file for quantizing model, default is
                        "./config.yaml"
  -int8 INT8MODELPATH, --int8modelpath INT8MODELPATH
                        load the quantized model folder. default is ./output
                        folder
```

_Command to run neural_compressor_inference in real time `(batchsize = 1)`_

```sh
cd $WORKSPACE/src/intel_neural_compressor
python neural_compressor_inference.py -d $DATA_DIR -fp32 $OUTPUT_DIR/{trained_model.h5} -int8 $OUTPUT_DIR -b 1
```

Using the model from the previous steps:

[//]: # (capture: baremetal)

```sh
cd $WORKSPACE/src/intel_neural_compressor
python neural_compressor_inference.py -d $DATA_DIR -fp32 $OUTPUT_DIR/pill_intel_model.h5 -int8 $OUTPUT_DIR -b 1
```

> Use `-b` to test with a different batch size (e.g. `-b 10`).
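For orientation, accuracy-aware post-training quantization with the Intel® Neural Compressor 2.x Python API can be sketched as below. The repository's `neural_compressor_conversion.py` and its `config.yaml` remain the reference; the dataloader, paths, and accuracy function here are assumptions.

```python
import torch
from torchvision import datasets, transforms
from neural_compressor import quantization
from neural_compressor.config import PostTrainingQuantConfig, AccuracyCriterion

# Assumed calibration/evaluation data: $DATA_DIR/test/{good,bad}.
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
calib_loader = torch.utils.data.DataLoader(
    datasets.ImageFolder("data/test", transform=transform), batch_size=1)

def evaluate(model) -> float:
    """Accuracy function driving the accuracy-aware tuning loop."""
    correct = total = 0
    with torch.no_grad():
        for images, labels in calib_loader:
            correct += (model(images).argmax(1) == labels).sum().item()
            total += labels.size(0)
    return correct / total

fp32_model = torch.load("output/pill_intel_model.h5", map_location="cpu")
fp32_model.eval()

conf = PostTrainingQuantConfig(
    approach="static",
    accuracy_criterion=AccuracyCriterion(tolerable_loss=0.01),  # stay within 1% of FP32
)
q_model = quantization.fit(fp32_model, conf,
                           calib_dataloader=calib_loader, eval_func=evaluate)
q_model.save("output")  # quantized INT8 model lands in the output folder
```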
### Clean Up Bare Metal

Follow these steps to restore your `$WORKSPACE` directory to its initial state. Please note that all downloaded dataset files, the conda environment, and logs created by the workflow will be deleted. Back up your important files before executing the next steps.

[//]: # (capture: baremetal)

```bash
conda deactivate
conda env remove -n visual_inspection_intel
rm -rf $DATA_DIR/*
rm -rf $OUTPUT_DIR/*
```

Remove the repository:

[//]: # (capture: baremetal)

```sh
rm -rf $WORKSPACE
```

## Expected Outputs

### Expected output for training without data augmentation and hyperparameter tuning

The output below would be generated by the training module, which captures the overall training time.

```
Dataset path Found!!
Train and Test Data folders Found!
Dataset data/: N Images = 694, Share of anomalies = 0.218
Epoch 1/10: Loss = 0.6575, Accuracy = 0.7236
Epoch 2/10: Loss = 0.4175, Accuracy = 0.8455
Epoch 3/10: Loss = 0.3731, Accuracy = 0.8691
Epoch 4/10: Loss = 0.2419, Accuracy = 0.9273
Epoch 5/10: Loss = 0.0951, Accuracy = 0.9745
Epoch 6/10: Loss = 0.0796, Accuracy = 0.9709
Epoch 7/10: Loss = 0.0696, Accuracy = 0.9764
Epoch 8/10: Loss = 0.0977, Accuracy = 0.9727
Epoch 9/10: Loss = 0.0957, Accuracy = 0.9727
Epoch 10/10: Loss = 0.1580, Accuracy = 0.9600
train_time= 1094.215266942978
```

**Capturing the time for training and inferencing**
The line containing `train_time` gives the time required for training the model.
Run this script to record multiple trials, from which the average can be calculated.

### Expected output for inference using the quantized model

The output below would be generated by inference using the model quantized with Intel® Neural Compressor.

```
Batch Size used here is  1
Average Inference Time Taken Fp32 -->  0.035616397857666016
Average Inference Time Taken Int8 -->  0.011458873748779297
**************************************************
Evaluating the Quantizaed Model
**************************************************
2023-07-04 05:59:42 [WARNING] Force convert framework model to neural_compressor model.
2023-07-04 05:59:42 [INFO] Start to run Benchmark.
2023-07-04 05:59:49 [INFO]
accuracy mode benchmark result:
2023-07-04 05:59:49 [INFO] Batch size = 1
2023-07-04 05:59:49 [INFO] Accuracy is 0.9595
**************************************************
Evaluating the FP32 Model
**************************************************
2023-07-04 05:59:49 [WARNING] Force convert framework model to neural_compressor model.
2023-07-04 05:59:49 [INFO] Start to run Benchmark.
2023-07-04 05:59:59 [INFO]
accuracy mode benchmark result:
2023-07-04 05:59:59 [INFO] Batch size = 1
2023-07-04 05:59:59 [INFO] Accuracy is 0.9595
```

## Summary and Next Steps

- The reference use case above demonstrates an anomaly detection approach using a convolutional neural network. For optimal performance on Intel® architecture, the scripts are also enabled with Intel® Extension for PyTorch\* (v2.0.110), Intel® Extension for Scikit-learn\*, and Intel® Neural Compressor.

#### Adapt to your dataset

This reference use case can be easily deployed on a different or customized dataset by simply arranging the images for training and testing in the following folder structure (note: this approach only uses good images for training):

![adapt_dataset](assets/adapt_dataset.png)
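Before launching training on a custom dataset, a quick check like the one below (using the `$DATA_DIR` layout from this document) can confirm the folder structure is in place:

```python
from pathlib import Path

# Report the train/test x good/bad layout used throughout this document.
data_dir = Path("data")  # i.e. $DATA_DIR
for split in ("train", "test"):
    for label in ("good", "bad"):
        folder = data_dir / split / label
        count = len(list(folder.glob("*"))) if folder.is_dir() else 0
        status = "" if count else "  <-- empty or missing"
        print(f"{folder}: {count} files{status}")
```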
#### Conclusion

With the arrival of computer vision (CV) techniques, powered by artificial intelligence (AI) and deep learning, visual inspection has been digitalized and automated. Factories have installed cameras on each production line, and huge quantities of images are read and processed using a deep learning model trained for defect detection. Considering that each production line may have its own CV application running and training at the edge gives a sense of the scale of the challenge this industry faces with automation. CV applications demand, however, huge amounts of processing power to process the increasing image load, requiring a trade-off between accuracy, inference performance, and compute cost. Manufacturers will look for easy and cost-effective ways to deploy computer vision applications across edge-cloud infrastructures to balance cost without impacting accuracy and inference performance. This reference kit implementation provides a performance-optimized guide around quality visual inspection use cases that can be easily scaled across similar use cases.

## Learn More

For more information about this workflow or to read about other relevant workflow examples, see these guides and software resources:

- [Intel® AI Analytics Toolkit (AI Kit)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html)
- [Developer Catalog](https://developer.intel.com/aireferenceimplementations)
- [Intel® Distribution for Python\*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/distribution-for-python.html#gs.52te4z)
- [Intel® Extension for PyTorch\*](https://github.com/intel/intel-extension-for-pytorch)
- [Intel® Neural Compressor](https://github.com/intel/neural-compressor)

## Support

If you have any questions about this workflow, want help with troubleshooting, or want to report a bug or submit an enhancement request, please submit a GitHub issue.

## Appendix

### References

<a id="mvtec_ad_dataset">[1]</a> MVTec Software GmbH. (2023). MVTec Anomaly Detection Dataset. Retrieved 5 September 2023, from https://www.mvtec.com/company/research/datasets/mvtec-ad

<a id="case_study">[2]</a> Explainable Defect Detection Using Convolutional Neural Networks: Case Study. (2022). Retrieved 5 September 2023, from https://towardsdatascience.com/explainable-defect-detection-using-convolutional-neural-networks-case-study-284e57337b59

<a id="vgg">[3]</a> GitHub - OlgaChernytska/Visual-Inspection: Explainable Defect Detection using Convolutional Neural Networks: Case Study. (2023). Retrieved 5 September 2023, from https://github.com/OlgaChernytska/Visual-Inspection

<a id="mvtec_ad">[4]</a> Bergmann, P., Fauser, M., Sattlegger, D., & Steger, C. (2019). [MVTec AD -- A comprehensive real-world dataset for unsupervised anomaly detection](https://www.mvtec.com/fileadmin/Redaktion/mvtec.com/company/research/datasets/mvtec_ad.pdf). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9592-9600).

### Notices & Disclaimers

<a id="legal_disclaimer"></a>

To the extent that any public or non-Intel datasets or models are referenced by or accessed using tools or code on this site, those datasets or models are provided by the third party indicated as the content source. Intel does not create the content and does not warrant its accuracy or quality.
By accessing the public content, or using materials trained on or with such content, you agree to the terms associated with that content and that your use complies with the applicable license.

Intel expressly disclaims the accuracy, adequacy, or completeness of any such public content, and is not liable for any errors, omissions, or defects in the content, or for any reliance on the content. Intel is not liable for any liability or damages relating to your use of public content.

Please see this dataset's applicable license for terms and conditions. Intel® Corporation does not own the rights to this dataset and does not confer any rights to it.

\*Other names and brands that may be claimed as the property of others. [Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html).

Performance varies by use, configuration, and other factors. Learn more on the [Performance Index site](https://edc.intel.com/content/www/us/en/products/performance/benchmarks/overview/).

Performance results are based on testing as of the dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Your costs and results may vary.

Intel technologies may require enabled hardware, software, or service activation.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.