{"id":13574350,"url":"https://github.com/oneapi-src/drone-navigation-inspection","last_synced_at":"2025-04-04T14:32:29.193Z","repository":{"id":66145926,"uuid":"574715793","full_name":"oneapi-src/drone-navigation-inspection","owner":"oneapi-src","description":"AI Starter Kit for AI applications in Drone technology using Intel® Optimized Tensorflow*","archived":true,"fork":false,"pushed_at":"2024-05-08T23:57:42.000Z","size":282,"stargazers_count":14,"open_issues_count":0,"forks_count":7,"subscribers_count":4,"default_branch":"main","last_synced_at":"2024-11-05T09:44:27.071Z","etag":null,"topics":["ai-starter-kit","deep-learning","tensorflow"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/oneapi-src.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-12-05T23:18:29.000Z","updated_at":"2024-10-22T06:08:50.000Z","dependencies_parsed_at":"2024-02-26T20:24:35.078Z","dependency_job_id":"6ec04b12-b240-4c62-8f3c-536ecb3e81db","html_url":"https://github.com/oneapi-src/drone-navigation-inspection","commit_stats":null,"previous_names":[],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fdrone-navigation-inspection","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fdrone-navigation-inspection/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fdrone-navigation-inspection/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fdrone-navigation-inspection/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/oneapi-src","download_url":"https://codeload.github.com/oneapi-src/drone-navigation-inspection/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247194360,"owners_count":20899473,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-starter-kit","deep-learning","tensorflow"],"created_at":"2024-08-01T15:00:50.740Z","updated_at":"2025-04-04T14:32:28.813Z","avatar_url":"https://github.com/oneapi-src.png","language":"Python","readme":"PROJECT NOT UNDER ACTIVE MANAGEMENT\n\nThis project will no longer be maintained by Intel.\n\nIntel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.  
\n\nIntel no longer accepts patches to this project.\n\nIf you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.  \n\nContact: webadmin@linux.intel.com\n# Drone Navigation Inspection\n\n## Introduction\nBuild an optimized semantic segmentation solution based on the Visual Geometry Group (VGG)-UNET architecture, designed to help drones land safely by identifying and segmenting paved areas. The proposed system uses Intel® oneDNN optimized TensorFlow\\* to accelerate the training and inference performance of drones equipped with Intel® hardware, whereas Intel® Neural Compressor is applied to compress the trained segmentation model to further speed up inference. Check out the [Developer Catalog](https://developer.intel.com/aireferenceimplementations) for information about different use cases.\n\n## Solution Technical Overview\nDrones are unmanned aerial vehicles (UAVs) or unmanned aircraft systems. Essentially, a drone is a flying robot that can be remotely controlled using devices which communicate with the drone. While drones have many applications in sectors like urban development, construction \u0026 infrastructure, and supply chain and logistics, safety is a major concern.\n\nDrones are used commercially as first-aid vehicles, as tools for investigation by police departments, in high-tech photography and as recording devices for real estate properties, concerts, sporting events, etc. This reference kit model has been built with the objective of improving the safety of autonomous drone flight and landing procedures at the edge (which runs on CPU-based hardware) without ground-based controllers or human pilots onsite.\n\nDrones at construction sites are used to scan, record, and map locations or buildings, and to support land surveys, machine tracking, remote monitoring, construction site security, building inspection, and worker safety. However, drone crashes are dangerous and can lead to devastation.\n\nIn the utilities sector, inspecting growing numbers of towers, powerlines, and wind turbines is difficult, which creates prime opportunities for drones to replace human inspection with accurate image-based inspection and diagnosis. Drones are transforming the way inspection and maintenance personnel do their jobs at utility companies. If a drone meets with an accident while landing, it could damage assets and injure personnel.\n\nLanding drones safely, without injuring people or damaging property, is vital for the widespread adoption of drones in day-to-day life. Considering the risks associated with drone landing, paved areas dedicated to drone landings are considered safe. The Artificial Intelligence (AI) system introduced in this project presents a deep learning model which segments paved areas for safe landing. 
Furthermore, the proposed solution allows an efficient deployment while maintaining the accuracy and speeding up the inference time by leveraging the following Intel® oneAPI packages:\n\n* ***Intel® Distribution for Python\\****\n\n\tThe [Intel® Distribution for Python*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/distribution-for-python.html) provides:\n\n    * Scalable performance using all available CPU cores on laptops, desktops, and powerful servers\n\t  * Support for the latest CPU instructions\n\t  * Near-native performance through acceleration of core numerical and machine learning packages with libraries like the Intel® oneAPI Math Kernel Library (oneMKL) and Intel® oneAPI Data Analytics Library\n\t  * Productivity tools for compiling Python* code into optimized instructions\n\t  * Essential Python\\* bindings for easing integration of Intel® native tools with your Python\\* project\n\n* ***[Intel® Optimizations for TensorFlow\\*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/optimization-for-tensorflow.html#gs.174f5y)***\n\n    * Accelerate AI performance with Intel® oneAPI Deep Neural Network Library (oneDNN) features such as graph optimizations and memory pool allocation.\n    * Automatically use Intel® Deep Learning Boost instruction set features to parallelize and accelerate AI workloads.\n    * Reduce inference latency for models deployed using TensorFlow Serving.\n    * Starting with TensorFlow 2.9, take advantage of oneDNN optimizations automatically.\n    * Enable optimizations by setting the environment variable TF_ENABLE_ONEDNN_OPTS=1 in TensorFlow\\* 2.5 through 2.8.\n\n* ***Intel® Neural Compressor***\n\n  [Intel® Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html#gs.5vjr1p) performs model compression to reduce the model size and increase the speed of deep learning inference for deployment on CPUs or GPUs. This open source Python\\* library automates popular model compression technologies, such as quantization, pruning, and knowledge distillation across multiple deep learning frameworks.\n\nThe use of AI in the context of drones can be further optimized using Intel® oneAPI which improves the performance of computing intensive image processing, reduces training/inference time and scales the usage of complex models by compressing models to run efficiently on edge devices. Intel® oneDNN optimized TensorFlow\\* provides additional optimizations for an extra performance boost on Intel® CPU.\n\nFor more details, visit [Intel® Distribution for Python\\*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/distribution-for-python.html), [Intel® Optimizations for TensorFlow\\*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/optimization-for-tensorflow.html#gs.174f5y), [Intel® Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html#gs.5vjr1p), and the [Drone Navigation Inspection](https://github.com/oneapi-src/drone-navigation-inspection) GitHub repository.\n\n## Solution Technical Details\nThis reference kit leverages Intel® oneAPI to demonstrate the application of TensorFlow\\* based AI models that works on drone technology to help segment paved areas which increases the probability of landing drones safely.\n\nThe experiment focus is drone navigation for inspections. 
Therefore, the experiment aims to segment the paved area and different objects around the drone path in order to land the drone safely on the paved area. The goal is to take an image captured by the drone camera as input and pass it through the semantic segmentation model (VGG-UNET architecture) to accurately recognize entities such as paved areas, people, vehicles or dogs, and then to benchmark the speed and accuracy of batch/real-time training and inference using Intel®’s technology. When it comes to deploying this model on edge devices with limited computing and memory resources, such as the drones themselves, the model is quantized and compressed while maintaining the same level of accuracy and making efficient use of the underlying computing resources. Model optimization and compression are done using Intel® Neural Compressor. \n\n### Dataset\nThis pixel-accurately annotated drone dataset focuses on semantic understanding of urban scenes to increase the safety of drone landing procedures. The imagery depicts more than 20 houses from a nadir (bird's eye) view acquired at an altitude of 5 to 30 meters above ground. A high-resolution camera was used to acquire images at a size of 6000x4000px (24Mpx). The complexity of the dataset is limited to 20 classes and the target output is the paved-area class. The training set contains 320 publicly available images, and the test set is made up of 80 images. The train \u0026 test dataset split is therefore 80:20.\n\n| **Use case** | Paved Area Segmentation\n| :--- | :---\n| **Object of interest** | Paved Area \n| **Size** | Total 400 Labelled Images\u003cbr\u003e\n| **Train : Test Split** | 80:20\n| **Source** | https://www.kaggle.com/datasets/bulentsiyah/semantic-drone-dataset\n\nInstructions on how to download and manage the dataset can be found in this [subsection](#download-the-dataset).\n\n\u003e *Please see this dataset's applicable license for terms and conditions. Intel® does not own the rights to this data set and does not confer any rights to it.*\n\n## Validated Hardware Details\nThere are workflow-specific hardware and software setup requirements depending on\nhow the workflow is run. \n\n| Recommended Hardware                                            | Precision\n| ----------------------------------------------------------------|-\n| CPU: Intel® 2nd Gen Xeon® Platinum 8280L CPU @ 2.70GHz or higher | FP32, INT8\n| RAM: 187 GB                                                     |\n| Recommended Free Disk Space: 20 GB or more                      |\n\nCode was tested on Ubuntu\\* 22.04 LTS.\n
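\nTo quickly verify that the target CPU exposes the instruction-set features that the oneDNN optimized TensorFlow\\* build reports in its startup logs (for example AVX512F and AVX512_VNNI, see [Expected Output](#expected-output)), a short check such as the following can be run on Linux. This snippet is illustrative only and is not part of the reference kit scripts.\n\n```python\n# Illustrative check (not part of the reference kit): list the CPU feature flags\n# relevant to oneDNN acceleration, as reported by the Linux kernel.\nflags = set()\nwith open('/proc/cpuinfo') as cpuinfo:\n    for line in cpuinfo:\n        if line.startswith('flags'):\n            flags.update(line.split(':', 1)[1].split())\n            break\nfor feature in ('avx2', 'avx512f', 'avx512_vnni'):\n    print(feature, '->', 'yes' if feature in flags else 'no')\n```\n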
\n## How it Works\nThe semantic segmentation pipeline presented in this reference kit enables the optimization of the training, hyperparameter tuning and inference modalities by using Intel® oneAPI specialized packages. The next diagram illustrates the workflow of these processes and how the Intel® optimization features are applied in each stage.\n\n![segmentation-flow](assets/segmentation_workflow.png)\n\n### Intel® oneDNN optimized TensorFlow\\*\nTraining a convolutional neural network, like the VGG-UNET model used in this reference kit, and making inference with it are usually compute-intensive tasks. To address these requirements and to gain a performance boost on Intel® hardware, the training and inference stages of this reference kit use TensorFlow\\* optimized via Intel® oneDNN.\n\nRegarding the training step, the present solution allows the user to perform regular training and to undertake an exhaustive search for optimal hyperparameters by implementing a hyperparameter tuning scheme. \n\nIn the case of the regular training of the VGG-UNET architecture, the efficiency of the process is increased by using transfer learning based on the pre-trained VGG encoder. Also, the machine learning practitioner can set different epoch values to assess the performance of multiple segmentation models. Please refer to this [subsection](#training-vgg-unet-model) to see how the regular training procedure is deployed.\n\nThe semantic segmentation model fitted through regular training can obtain a performance boost by implementing a hyperparameter tuning process using different values for `learning rate`, `optimizer` and `loss function`. In this reference kit, the hyperparameter search space is confined to the few hyperparameters listed in the next table: \n\n| **Hyperparameter** | Values\n| :--- | :---\n| **Learning rates** | [0.001, 0.01, 0.0001]\u003cbr\u003e\n| **Optimizers** | [\"Adam\", \"adadelta\", \"rmsprop\"]\n| **Loss function** | [\"categorical_crossentropy\"]\n\nAs part of the hyperparameter tuning process, it is important to state that the dataset remains the same, with an 80:20 split for training and testing (see [here](#dataset) for more details about the dataset). Once the best combination of hyperparameters is defined, the model can be retrained with that combination to achieve better accuracy. The hyperparameter tuning execution is shown in this [subsection](#hyperparameter-tuning).\n\nAnother important aspect of the VGG-UNET model trained with Intel® oneDNN optimized TensorFlow is that this model is trained using FP32 precision.\n\n### Intel® Neural Compressor\nAfter training the VGG-UNET model using Intel® oneDNN optimized TensorFlow, its inference efficiency can be accelerated even more by the Intel® Neural Compressor library. This project enables the use of Intel® Neural Compressor to convert the trained FP32 VGG-UNET model into an INT8 model by implementing post-training quantization, which, apart from reducing the model size, increases inference speed.\n\nThe quantization of the trained FP32 VGG-UNET model into an INT8 model and the other operations based on Intel® Neural Compressor optimizations can be inspected [here](#optimizations-with-intel-neural-compressor).\n\n## Get Started\nStart by **defining an environment variable** that will store the workspace path; this can be an existing directory or one to be created in further steps. 
This ENVVAR will be used for all the commands executed using absolute paths.\n\n[//]: # (capture: baremetal)\n```bash\nexport WORKSPACE=$PWD/drone-navigation-inspection\n```\n\nAlso, it is necessary to define the following environment variables that will be used in later stages.\n\n[//]: # (capture: baremetal)\n```bash\nexport SRC_DIR=$WORKSPACE/src\nexport DATA_DIR=$WORKSPACE/data\nexport OUTPUT_DIR=$WORKSPACE/output\n```\n\n### Download the Workflow Repository\nCreate the workspace directory for the workflow and clone the [Drone Navigation Inspection]() repository inside it.\n\n```bash\nmkdir -p $WORKSPACE \u0026\u0026 cd $WORKSPACE\n```\n\n```bash\ngit clone https://github.com/oneapi-src/drone-navigation-inspection $WORKSPACE\n```\n\nCreate the `$DATA_DIR` folder that will store the dataset in later steps.\n\n[//]: # (capture: baremetal)\n```bash\nmkdir -p $DATA_DIR\n```\n\n### Set Up Conda\nPlease follow the instructions below to download and install Miniconda.\n\n1. Download the required Miniconda installer for Linux.\n   \n   ```bash\n   wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh\n   ```\n\n2. Install Miniconda.\n   \n   ```bash\n   bash Miniconda3-latest-Linux-x86_64.sh\n   ```\n\n3. Delete Miniconda installer.\n   \n   ```bash\n   rm Miniconda3-latest-Linux-x86_64.sh\n   ```\n\nPlease visit [Conda Installation on Linux](https://docs.anaconda.com/free/anaconda/install/linux/) for more details. \n\n### Set Up Environment\nExecute the next commands to install and setup libmamba as conda's default solver.\n\n```bash\nconda install -n base conda-libmamba-solver\nconda config --set solver libmamba\n```\n\n| Packages | Version | \n| -------- | ------- |\n| intelpython3_core | 2024.1.0 |\n| python | 3.9 |\n| intelpython3_core | 2024.1.0 |\n| intel-aikit-tensorflow | 2024.1 |\n| tqdm | 4.66.2 |\n| pip | 24.0 |\n| opencv-python | 4.9.0.80 |\n\nThe dependencies required to properly execute this workflow can be found in the yml file [$WORKSPACE/env/intel_env.yml](env/intel_env.yml).\n\nProceed to create the conda environment.\n\n```bash\nconda env create -f $WORKSPACE/env/intel_env.yml\n```\n\nEnvironment setup is required only once. This step does not cleanup the existing environment with the same name hence we need to make sure there is no conda environment with the same name. During this setup, `drone_navigation_intel` conda environment will be created with the dependencies listed in the YAML configuration.\n\nActivate the `drone_navigation_intel` conda environment as follows:\n\n```bash\nconda activate drone_navigation_intel\n```\n\n### Download the Dataset\nPlease follow the next instructions to correctly download and setup the dataset required for this semantic segmentation workload.\n\n1. Install [Kaggle\\* API](https://github.com/Kaggle/kaggle-api) and configure your [credentials](https://github.com/Kaggle/kaggle-api#api-credentials) and [proxies](https://github.com/Kaggle/kaggle-api#set-a-configuration-value).\n\n2. Navigate inside the `data` folder and download the dataset from https://www.kaggle.com/datasets/bulentsiyah/semantic-drone-dataset.\n\n   ```bash\n   cd $DATA_DIR\n   kaggle datasets download -d bulentsiyah/semantic-drone-dataset\n   ```\n\n3. Unzip the dataset file.\n\n   ```bash\n   unzip semantic-drone-dataset.zip\n   ```\n\n4. 
Move the dataset and the image masks into their proper locations.\n\n   ```bash\n   mkdir Aerial_Semantic_Segmentation_Drone_Dataset\n   mv  ./dataset ./Aerial_Semantic_Segmentation_Drone_Dataset\n   mv ./RGB_color_image_masks ./Aerial_Semantic_Segmentation_Drone_Dataset\n   ```\n\nAfter completing the previous steps, the `data` folder should have the following structure:\n\n```\n- Aerial_Semantic_Segmentation_Drone_Dataset\n    - dataset\n        - semantic_drone_dataset\n            - label_images_semantic\n            - original_images\n    - RGB_color_image_masks\n```\n\n## Supported Runtime Environment\nThe execution of this reference kit is compatible with the following environments:\n* Bare Metal\n\n### Run Using Bare Metal\n\n#### Set Up System Software\n\nOur examples use the `conda` package and environment manager on your local computer. If you don't already have `conda` installed or the `conda` environment created, go to [Set Up Conda*](#set-up-conda) or see the [Conda* Linux installation instructions](https://docs.conda.io/projects/conda/en/stable/user-guide/install/linux.html).\n\n\u003e *Note: It is assumed that the present working directory is the root directory of this code repository. Use the following command to go to the root directory.*\n\n```bash\ncd $WORKSPACE\n```\n\n### Run Workflow\nThe following subsections provide the commands to perform an optimized execution of this semantic segmentation workflow based on Intel® oneDNN optimized TensorFlow\\* and Intel® Neural Compressor. As an illustrative guideline to understand how the Intel® specialized packages are used to optimize the performance of the VGG-UNET semantic segmentation model, please check the [How it Works section](#how-it-works).\n\n### Optimizations with Intel® oneDNN TensorFlow\\*\nBased on TensorFlow\\* optimized by Intel® oneDNN, the stages of training, hyperparameter tuning, conversion to frozen graph, inference and evaluation are executed below. \n\n### Training VGG-UNET Model \nThe Python\\* script given below needs to be executed to start training the VGG-UNET. For more details about the training process, see this [subsection](#intel®-onednn-optimized-tensorflow). For details about the training data, please check this [subsection](#dataset).\n\n```\nusage: training.py [-h] [-m MODEL_PATH] -d DATA_PATH [-e EPOCHS] [-hy HYPERPARAMS]\n\noptional arguments:\n  -h, --help            show this help message and exit.\n  -m MODEL_PATH, --model_path MODEL_PATH\n                        Please provide the Latest Checkpoint path. Default is None.\n  -d DATA_PATH, --data_path DATA_PATH\n                        Absolute path to the dataset folder containing \"original_images\" and \"label_images_semantic\" folders. \n  -e EPOCHS, --epochs EPOCHS\n                        Provide the number of epochs want to train.\n  -hy HYPERPARAMS, --hyperparams HYPERPARAMS\n                        Enable hyperparameter tuning. Default is \"0\" to indicate unabled hyperparameter tuning.\n```\n\nExample:\n\n[//]: # (capture: baremetal)\n```bash\npython $SRC_DIR/training.py -d $DATA_DIR/Aerial_Semantic_Segmentation_Drone_Dataset/dataset/semantic_drone_dataset -e 10 -m $OUTPUT_DIR/model\n```\n\nIn this example, Intel® oneDNN optimized TensorFlow\\* is applied to boost training performance and the generated TensorFlow\\* checkpoint model will be saved in the `$OUTPUT_DIR/model` folder.\n
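\nThe model definition itself lives in the reference kit sources and is not reproduced in this README. Purely as an illustration of the architecture described in [How it Works](#how-it-works), namely a pre-trained VGG16 encoder reused through transfer learning plus a U-Net-style decoder that predicts 20 classes at half the input resolution, a minimal Keras sketch could look like the following. This is a simplified, hypothetical sketch and not the actual implementation in `$SRC_DIR/training.py`.\n\n```python\n# Simplified sketch of a VGG-UNET-style segmentation model (illustrative only):\n# a pre-trained VGG16 encoder with skip connections feeding a small U-Net-style\n# decoder that outputs 20 classes at half the input resolution (208x304 for a 416x608 input).\nimport tensorflow as tf\nfrom tensorflow.keras import layers, Model\n\ndef build_vgg_unet(input_shape=(416, 608, 3), n_classes=20):\n    encoder = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=input_shape)\n    skip_names = ('block1_pool', 'block2_pool', 'block3_pool', 'block4_pool')\n    skips = [encoder.get_layer(name).output for name in skip_names]\n    x = skips[-1]\n    for skip in reversed(skips[:-1]):  # decoder: upsample and fuse encoder features\n        x = layers.UpSampling2D()(x)\n        x = layers.Concatenate()([x, skip])\n        x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)\n    outputs = layers.Conv2D(n_classes, 1, activation='softmax')(x)  # per-pixel class probabilities\n    return Model(encoder.input, outputs)\n\nmodel = build_vgg_unet()\nmodel.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.summary()\n```\n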
\n### Hyperparameter Tuning \nThe Python\\* script given below needs to be executed to start hyperparameter-tuned training. The model generated using the regular training approach will be regarded as the pretrained model to which the fine-tuning process will be applied. To obtain more details about the hyperparameter tuning modality, refer to this [subsection](#intel®-onednn-optimized-tensorflow).\n\n[//]: # (capture: baremetal)\n```bash\npython $SRC_DIR/training.py -d $DATA_DIR/Aerial_Semantic_Segmentation_Drone_Dataset/dataset/semantic_drone_dataset -e 3 -m $OUTPUT_DIR/model -hy 1 \n```\n\n\u003e **Note**: **The best combination of hyperparameters will be printed at the end of the script. The model can then be retrained for longer (a greater number of epochs) with this best combination of hyperparameters to achieve better accuracy.**\n\n### Convert the Model to Frozen Graph\nRun the Python\\* conversion script given below to convert the TensorFlow\\* checkpoint model format to frozen graph format. This frozen graph can later be used when performing inference with Intel® Neural Compressor.\n\n```\nusage: create_frozen_graph.py [-h] [-m MODEL_PATH] -o OUTPUT_SAVED_DIR\n\noptional arguments:\n  -h, --help            show this help message and exit.\n  -m MODEL_PATH, --model_path MODEL_PATH\n                        Please provide the Latest Checkpoint path. Default is None.\n  -o OUTPUT_SAVED_DIR, --output_saved_dir OUTPUT_SAVED_DIR\n                        Directory to save frozen graph to.\n\n```\n\nExample:\n\n[//]: # (capture: baremetal)\n```bash\npython $SRC_DIR/create_frozen_graph.py -m $OUTPUT_DIR/model/vgg_unet --output_saved_dir $OUTPUT_DIR/model\n```\n\nIn this example, the generated frozen graph will be saved in the `$OUTPUT_DIR/model` folder with the name `frozen_graph.pb`.\n\n### Inference\n\nThe Python\\* script given below needs to be executed to perform inference based on the segmentation model converted into a frozen graph.\n\n```\nusage: run_inference.py [-h] [-m MODELPATH] -d DATA_PATH [-b BATCHSIZE]\n\noptional arguments:\n  -h, --help            show this help message and exit.\n  -m MODELPATH, --modelpath MODELPATH\n                        Provide frozen Model path \".pb\" file. Users can also use Intel® Neural Compressor INT8 quantized model here. \n  -d DATA_PATH, --data_path DATA_PATH\n                        Absolute path to the dataset folder containing \"original_images\" and \"label_images_semantic\" folders. \n  -b BATCHSIZE, --batchsize BATCHSIZE\n                        batchsize used for inference.\n```\n\nExample:\n\n[//]: # (capture: baremetal)\n```bash\npython $SRC_DIR/run_inference.py -m $OUTPUT_DIR/model/frozen_graph.pb -d $DATA_DIR/Aerial_Semantic_Segmentation_Drone_Dataset/dataset/semantic_drone_dataset -b 1\n```\n\n\u003e The above inference script can be run using different batch sizes.\u003cbr\u003e\nThe same script can be used to benchmark the Intel® Neural Compressor INT8 quantized model. For more details please refer to the Intel® Neural Compressor quantization section.\u003cbr\u003e\nBy using different batch sizes one can observe the gain obtained using Intel® oneDNN optimized TensorFlow.\u003cbr\u003e\nRun this script to record multiple trials, from which the average can be calculated.\n
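\nFor reference, the following snippet shows one generic way a frozen TensorFlow\\* graph such as `frozen_graph.pb` can be loaded and executed through the TensorFlow\\* 2.x compat APIs, which is roughly what an inference script does internally. It is illustrative only; the input and output tensor names used below are hypothetical placeholders, not the names used by the reference kit, so inspect the graph before reusing them.\n\n```python\n# Illustrative only: load a frozen graph and run a single inference through the\n# TensorFlow 2.x compat APIs. The tensor names below are placeholders (assumptions),\n# not the names actually used by the reference kit scripts.\nimport numpy as np\nimport tensorflow as tf\n\nPB_PATH = 'output/model/frozen_graph.pb'  # e.g. $OUTPUT_DIR/model/frozen_graph.pb\nINPUT_TENSOR, OUTPUT_TENSOR = 'input_1:0', 'predictions:0'  # hypothetical names, check the graph\n\nwith tf.io.gfile.GFile(PB_PATH, 'rb') as f:\n    graph_def = tf.compat.v1.GraphDef()\n    graph_def.ParseFromString(f.read())\n\ngraph = tf.Graph()\nwith graph.as_default():\n    tf.compat.v1.import_graph_def(graph_def, name='')\n\nimage = np.random.rand(1, 416, 608, 3).astype(np.float32)  # dummy preprocessed image batch\nwith tf.compat.v1.Session(graph=graph) as sess:\n    predictions = sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: image})\nprint('Output shape:', predictions.shape)\n```\n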
\n### Evaluating the Model on Test Dataset\nRun the Python\\* script given below to evaluate the semantic segmentation model and find out the class-wise accuracy score.\n\n```\nusage: evaluation.py [-h] [-m MODEL_PATH] -d DATA_PATH [-t MODEL_TYPE]\n\noptional arguments:\n  -h, --help            Show this help message and exit\n  -m MODEL_PATH, --model_path MODEL_PATH\n                        Please provide the Latest Checkpoint path. Default is None.\n  -d DATA_PATH, --data_path DATA_PATH\n                        Absolute path to the dataset folder containing \"original_images\" and \"label_images_semantic\" folders. \n  -t MODEL_TYPE, --model_type MODEL_TYPE\n                        0 for checkpoint, 1 for frozen_graph.\n```\n\nExample to run evaluation using the original TensorFlow checkpoint model:\n\n[//]: # (capture: baremetal)\n```bash\npython $SRC_DIR/evaluation.py -d $DATA_DIR/Aerial_Semantic_Segmentation_Drone_Dataset/dataset/semantic_drone_dataset -m $OUTPUT_DIR/model/vgg_unet -t 0   \n```\n\nExample to run evaluation using the frozen graph:\n\n[//]: # (capture: baremetal)\n```bash\npython $SRC_DIR/evaluation.py -d $DATA_DIR/Aerial_Semantic_Segmentation_Drone_Dataset/dataset/semantic_drone_dataset -m $OUTPUT_DIR/model/frozen_graph.pb  -t 1  \n```\n\n\u003e The same script can be used for evaluating the Intel® Neural Compressor INT8 quantized model. For more details please refer to the Intel® Neural Compressor quantization section.\u003cbr\u003e\n\n### Optimizations with Intel® Neural Compressor\nIntel® Neural Compressor is used to quantize the FP32 VGG-UNET model into an INT8 model. In this case, the post-training quantization method is used to quantize the FP32 model.\n\n### Conversion of FP32 VGG-UNET Model to INT8 Model\nRun the Python\\* script given below to convert the FP32 VGG-UNET model in the form of the frozen graph into an INT8 model.\n\n```\nusage: neural_compressor_conversion.py [-h] -m MODELPATH -o OUTPATH [-c CONFIG] -d DATA_PATH [-b BATCHSIZE]\n\noptional arguments:\n  -h, --help            show this help message and exit.\n  -m MODELPATH, --modelpath MODELPATH\n                        Path to the model trained with TensorFlow and saved as a \".pb\" file.\n  -o OUTPATH, --outpath OUTPATH\n                        Directory to save the INT8 quantized model to.\n  -c CONFIG, --config CONFIG\n                        Yaml file for quantizing model, default is \"$SRC_DIR/intel_neural_compressor/deploy.yaml\".\n  -d DATA_PATH, --data_path DATA_PATH\n                        Absolute path to the dataset folder containing \"original_images\" and \"label_images_semantic\" folders.\n  -b BATCHSIZE, --batchsize BATCHSIZE\n                        batchsize for the dataloader. 
Default is 1.\n```\n\n[//]: # (capture: baremetal)\n```bash\npython $SRC_DIR/intel_neural_compressor/neural_compressor_conversion.py -d $DATA_DIR/Aerial_Semantic_Segmentation_Drone_Dataset/dataset/semantic_drone_dataset -m  $OUTPUT_DIR/model/frozen_graph.pb -o  $OUTPUT_DIR/model/inc_compressed_model/output  \n```\n\nThe model after conversion, that is the quantized model, will be stored in the `$OUTPUT_DIR/model/inc_compressed_model/` folder with the name `output.pb`.\n\n### Inference Using Quantized INT8 VGG-UNET Model\nThe Python\\* script given below needs to be executed to perform inference based on the quantized INT8 VGG-UNET Model.\n\n```\nusage: run_inference.py [-h] [-m MODELPATH] [-d DATA_PATH] [-b BATCHSIZE]\n\noptional arguments:\n  -h, --help            show this help message and exit.\n  -m MODELPATH, --modelpath MODELPATH\n                        Provide frozen Model path \".pb\" file .Users can also use Intel® Neural Compressor INT8 quantized model here. Default is $OUTPUT_DIR/model/frozen_graph.pb\n  -d DATA_PATH, --data_path DATA_PATH\n                        Absolute path to the dataset folder containing \"original_images\" and \"label_images_semantic\" folders. Default is $DATA_DIR/Aerial_Semantic_Segmentation_Drone_Dataset/dataset/semantic_drone_dataset\n  -b BATCHSIZE, --batchsize BATCHSIZE\n                        batchsize used for inference.\n```\n\nExample:\n\n[//]: # (capture: baremetal)\n```bash\npython $SRC_DIR/run_inference.py -m $OUTPUT_DIR/model/inc_compressed_model/output.pb -d $DATA_DIR/Aerial_Semantic_Segmentation_Drone_Dataset/dataset/semantic_drone_dataset  -b 1\n```\n\n\u003e Use `-b` to test with different batch size (e.g. `-b 10`)\n\n### Evaluating the Quantized INT8 VGG-UNET Model on Test Dataset\nRun the Python\\* script given below to evaluate the quantized INT8 VGG-UNET model and find out the class-wise accuracy score.\n\n```\nusage: evaluation.py [-h] [-m MODEL_PATH] [-d DATA_PATH] [-t MODEL_TYPE]\n\noptional arguments:\n  -h, --help            Show this help message and exit\n  -m MODEL_PATH, --model_path MODEL_PATH\n                        Please provide the Latest Checkpoint path. Default is None.\n  -d DATA_PATH, --data_path DATA_PATH\n                        Absolute path to the dataset folder containing \"original_images\" and \"label_images_semantic\" folders. \n  -t MODEL_TYPE, --model_type MODEL_TYPE\n                        0 for checkpoint, 1 for frozen_graph.\n```\n\nExample to run evaluation using the frozen graph:\n\n[//]: # (capture: baremetal)\n```bash\npython $SRC_DIR/evaluation.py -d $DATA_DIR/Aerial_Semantic_Segmentation_Drone_Dataset/dataset/semantic_drone_dataset -m $OUTPUT_DIR/model/inc_compressed_model/output.pb  -t 1  \n```\n\n#### Clean Up Bare Metal\nThe next commands are useful to remove the previously generated conda environment, as well as the dataset and the multiple models and files created during the workflow execution. Before proceeding with the clean up process, it is recommended to back up the data you want to preserve.\n\n```bash\nconda deactivate #Run this line if the drone_navigation_intel environment is still active\nconda env remove -n drone_navigation_intel\nrm -rf $WORKSPACE\n```\n\n---\n\n### Expected Output\nA successful execution of the different stages of this workflow should produce outputs similar to the following:\n\n#### Regular Training Output with Intel® oneDNN optimized TensorFlow\\*\n\n```\n2023-12-07 06:53:46.349193: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. 
You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n2023-12-07 06:53:46.379141: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\nTo enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\nStarted data validation and Training for  10  epochs\nModel Input height , Model Input width, Model Output Height, Model Output Width\n416 608 208 304\nBatch Size used for Training --\u003e  4\nBatch Size used for Validation --\u003e  4\n```\n...\n```\nStarting Epoch  5\n128/128 [==============================] - 298s 2s/step - loss: 1.0430 - accuracy: 0.6553\nsaved  //drone-navigation-inspection/output/model/vgg_unet\nFinished Epoch 5\nStarting Epoch  6\n128/128 [==============================] - 298s 2s/step - loss: 1.0443 - accuracy: 0.6564\nsaved  //drone-navigation-inspection/output/model/vgg_unet\nFinished Epoch 6\nStarting Epoch  7\n128/128 [==============================] - 297s 2s/step - loss: 1.0667 - accuracy: 0.6519\nsaved  //drone-navigation-inspection/output/model/vgg_unet\nFinished Epoch 7\nStarting Epoch  8\n128/128 [==============================] - 298s 2s/step - loss: 1.0777 - accuracy: 0.6458\nsaved  //drone-navigation-inspection/output/model/vgg_unet\nFinished Epoch 8\nStarting Epoch  9\n128/128 [==============================] - 297s 2s/step - loss: 1.0888 - accuracy: 0.6441\nsaved  //drone-navigation-inspection/output/model/vgg_unet\nFinished Epoch 9\nTime Taken for Training in seconds --\u003e  3034.8698456287384\n```\n\n#### Hyperparameter Tuning Output with Intel® oneDNN optimized TensorFlow\\*\n\n```\n2023-12-07 12:11:35.758937: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n2023-12-07 12:11:35.789043: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\nTo enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\nStarted Hyperprameter tuning\nModel Input height , Model Input width, Model Output Height, Model Output Width\n416 608 208 304\nBatch Size used for Training --\u003e  4\nTotal number of fits =  9\nTake Break!!!\nThis will take time!\nLoading weights from  //drone-navigation-inspection/output/model/vgg_unet\nCurrent fit is at  1\nCurrent fit parameters --\u003e epochs= 3  learning rate= 0.001  optimizer= Adam  loss= categorical_crossentropy\n/drone-navigation-inspection/src/utils.py:542: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. 
Please use `Model.fit`, which supports generators.\n  hist=model.fit_generator(train_gen, steps_per_epoch, epochs=epochs, workers=1, use_multiprocessing=False)\n80% of Data is considered for Training ===\u003e  320\n```\n...\n```\nCurrent fit is at  7\nCurrent fit parameters --\u003e epochs= 3  learning rate= 0.0001  optimizer= Adam  loss= categorical_crossentropy\nEpoch 1/3\n32/32 [==============================] - 76s 2s/step - loss: 0.9938 - accuracy: 0.6624 - mae: 0.0451 - mean_io_u_6: 0.4771\nEpoch 2/3\n32/32 [==============================] - 75s 2s/step - loss: 0.9846 - accuracy: 0.6790 - mae: 0.0444 - mean_io_u_6: 0.4770\nEpoch 3/3\n32/32 [==============================] - 75s 2s/step - loss: 0.9586 - accuracy: 0.6818 - mae: 0.0446 - mean_io_u_6: 0.4771\nFit number:  7  ==\u003e Time Taken for Training in seconds --\u003e  230.05844235420227\nThe best Tuningparameter combination is : {'accuracy': 0.6624360680580139, 'best_fit': (0.0001, 'Adam', 'categorical_crossentropy')}\nLoading weights from  //drone-navigation-inspection/output/model/vgg_unet\nCurrent fit is at  8\nCurrent fit parameters --\u003e epochs= 3  learning rate= 0.0001  optimizer= adadelta  loss= categorical_crossentropy\nEpoch 1/3\n32/32 [==============================] - 76s 2s/step - loss: 1.0724 - accuracy: 0.6356 - mae: 0.0461 - mean_io_u_7: 0.4771\nEpoch 2/3\n32/32 [==============================] - 75s 2s/step - loss: 1.0755 - accuracy: 0.6294 - mae: 0.0465 - mean_io_u_7: 0.4771\nEpoch 3/3\n32/32 [==============================] - 76s 2s/step - loss: 1.0776 - accuracy: 0.6274 - mae: 0.0466 - mean_io_u_7: 0.4771\nFit number:  8  ==\u003e Time Taken for Training in seconds --\u003e  230.5231795310974\nThe best Tuningparameter combination is : {'accuracy': 0.6624360680580139, 'best_fit': (0.0001, 'Adam', 'categorical_crossentropy')}\nLoading weights from  //drone-navigation-inspection/output/model/vgg_unet\nCurrent fit is at  9\nCurrent fit parameters --\u003e epochs= 3  learning rate= 0.0001  optimizer= rmsprop  loss= categorical_crossentropy\nEpoch 1/3\n32/32 [==============================] - 75s 2s/step - loss: 1.0011 - accuracy: 0.6676 - mae: 0.0449 - mean_io_u_8: 0.4770\nEpoch 2/3\n32/32 [==============================] - 75s 2s/step - loss: 0.9739 - accuracy: 0.6791 - mae: 0.0451 - mean_io_u_8: 0.4771\nEpoch 3/3\n32/32 [==============================] - 75s 2s/step - loss: 0.9588 - accuracy: 0.6840 - mae: 0.0440 - mean_io_u_8: 0.4771\nFit number:  9  ==\u003e Time Taken for Training in seconds --\u003e  229.0527949333191\nThe best Tuningparameter combination is : {'accuracy': 0.6676027774810791, 'best_fit': (0.0001, 'rmsprop', 'categorical_crossentropy')}\nTime Taken for Total Hyper parameter Tuning and Model loading in seconds --\u003e  2080.2837102413177\ntotal_time --\u003e  2079.7287237644196\n```\n\n#### Inference Output with Intel® oneDNN optimized TensorFlow\\*\n\n```\n2023-12-07 13:09:59.528826: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. 
To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n2023-12-07 13:09:59.558645: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\nTo enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\nload graph\n```\n...\n```\nTime Taken for model inference in seconds ---\u003e  0.05593061447143555\nTime Taken for model inference in seconds ---\u003e  0.05590033531188965\nTime Taken for model inference in seconds ---\u003e  0.055976152420043945\nTime Taken for model inference in seconds ---\u003e  0.0559389591217041\nTime Taken for model inference in seconds ---\u003e  0.055884599685668945\nTime Taken for model inference in seconds ---\u003e  0.05602383613586426\nTime Taken for model inference in seconds ---\u003e  0.0559384822845459\nTime Taken for model inference in seconds ---\u003e  0.055918216705322266\nTime Taken for model inference in seconds ---\u003e  0.05595517158508301\nTime Taken for model inference in seconds ---\u003e  0.05599474906921387\nAverage Time Taken for model inference in seconds ---\u003e  0.05594611167907715\n```\n\n#### Evaluation Output with Intel® oneDNN optimized TensorFlow\\*\nOutput using the original TensorFlow checkpoint model:\n\n```\n2023-12-07 13:18:49.975518: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n2023-12-07 13:18:50.004278: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\nTo enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n```\n...\n```\n[TARGET CLASS] paved-area =\u003e  64.86\ndirt =\u003e  19.10\ngrass =\u003e  58.54\ngravel =\u003e  12.78\nwater =\u003e  12.97\nrocks =\u003e  0.09\npool =\u003e  33.43\nvegetation =\u003e  33.72\nroof =\u003e  49.62\nwall =\u003e  2.15\nwindow =\u003e  0.00\ndoor =\u003e  0.00\nfence =\u003e  0.00\nfence-pole =\u003e  0.34\nperson =\u003e  12.24\ndog =\u003e  0.00\ncar =\u003e  9.96\nbicycle =\u003e  1.97\ntree =\u003e  3.10\nbald-tree =\u003e  0.00\nTime Taken for Prediction in seconds --\u003e  0.7498071193695068\n```\n\nOutput using the frozen graph:\n\n```\n2023-12-07 13:26:49.874614: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. 
To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n2023-12-07 13:26:49.903073: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\nTo enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n```\n...\n```\n[TARGET CLASS] paved-area =\u003e  67.84\ndirt =\u003e  20.96\ngrass =\u003e  47.30\ngravel =\u003e  33.26\nwater =\u003e  9.31\nrocks =\u003e  0.11\npool =\u003e  43.17\nvegetation =\u003e  32.72\nroof =\u003e  25.90\nwall =\u003e  3.51\nwindow =\u003e  0.00\ndoor =\u003e  0.00\nfence =\u003e  0.00\nfence-pole =\u003e  0.00\nperson =\u003e  5.84\ndog =\u003e  0.00\ncar =\u003e  0.00\nbicycle =\u003e  0.04\ntree =\u003e  0.15\nbald-tree =\u003e  0.01\n```\n\n#### Expected Output for Conversion of FP32 VGG-UNET Model to INT8 Model Using Intel® Neural Compressor\n\n```\n2023-12-07 13:30:13.110902: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n2023-12-07 13:30:13.140783: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\nTo enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n```\n...\n```\n2023-12-07 13:31:02 [INFO] |****Mixed Precision Statistics****|\n2023-12-07 13:31:02 [INFO] +------------+-------+------+------+\n2023-12-07 13:31:02 [INFO] |  Op Type   | Total | INT8 | FP32 |\n2023-12-07 13:31:02 [INFO] +------------+-------+------+------+\n2023-12-07 13:31:02 [INFO] |  ConcatV2  |   3   |  0   |  3   |\n2023-12-07 13:31:02 [INFO] |   Conv2D   |   15  |  15  |  0   |\n2023-12-07 13:31:02 [INFO] |  MaxPool   |   4   |  4   |  0   |\n2023-12-07 13:31:02 [INFO] | QuantizeV2 |   5   |  5   |  0   |\n2023-12-07 13:31:02 [INFO] | Dequantize |   8   |  8   |  0   |\n2023-12-07 13:31:02 [INFO] +------------+-------+------+------+\n2023-12-07 13:31:02 [INFO] Pass quantize model elapsed time: 4136.69 ms\nModel Input height , Model Input width, Model Output Height, Model Output Width\n416 608 208 304\n20% of Data is considered for Evaluating===\u003e  80\n80it [00:36,  2.18it/s]\n2023-12-07 13:31:39 [INFO] Tune 1 result is: [Accuracy (int8|fp32): 0.6624|0.6784, Duration (seconds) (int8|fp32): 36.7283|38.3752], Best tune result is: [Accuracy: 0.6624, Duration (seconds): 36.7283]\n2023-12-07 13:31:39 [INFO] |**********************Tune Result Statistics**********************|\n2023-12-07 13:31:39 [INFO] +--------------------+----------+---------------+------------------+\n2023-12-07 13:31:39 [INFO] |     Info Type      | Baseline | Tune 1 result | Best tune result |\n2023-12-07 13:31:39 [INFO] +--------------------+----------+---------------+------------------+\n2023-12-07 13:31:39 [INFO] |      Accuracy      | 0.6784   |    0.6624     |     0.6624       |\n2023-12-07 13:31:39 [INFO] | Duration (seconds) | 38.3752  |    36.7283    |     36.7283      |\n2023-12-07 13:31:39 [INFO] +--------------------+----------+---------------+------------------+\n2023-12-07 13:31:39 [INFO] Save tuning history to 
/drone-navigation-inspection/src/intel_neural_compressor/nc_workspace/2023-12-07_13-30-16/./history.snapshot.\n2023-12-07 13:31:39 [INFO] Specified timeout or max trials is reached! Found a quantized model which meet accuracy goal. Exit.\n2023-12-07 13:31:39 [INFO] Save deploy yaml to /drone-navigation-inspection/src/intel_neural_compressor/nc_workspace/2023-12-07_13-30-16/deploy.yaml\n2023-12-07 13:31:39 [INFO] Save quantized model to //drone-navigation-inspection/output/model/inc_compressed_model/output.pb.\n```\n\n#### Inference Output with Intel® Neural Compressor\n\n```\n2023-12-07 13:33:52.653916: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n2023-12-07 13:33:52.684842: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n```\n...\n```\nTime Taken for model inference in seconds ---\u003e  0.027923583984375\nTime Taken for model inference in seconds ---\u003e  0.027917861938476562\nTime Taken for model inference in seconds ---\u003e  0.027941226959228516\nTime Taken for model inference in seconds ---\u003e  0.02792835235595703\nTime Taken for model inference in seconds ---\u003e  0.024005651473999023\nTime Taken for model inference in seconds ---\u003e  0.02797222137451172\nTime Taken for model inference in seconds ---\u003e  0.027898311614990234\nTime Taken for model inference in seconds ---\u003e  0.02796459197998047\nTime Taken for model inference in seconds ---\u003e  0.027928590774536133\nTime Taken for model inference in seconds ---\u003e  0.027961254119873047\nAverage Time Taken for model inference in seconds ---\u003e  0.027544164657592775\n```\n\n#### Evaluation Output with Intel® Neural Compressor\n```\n2023-12-07 13:36:36.775949: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n2023-12-07 13:36:36.805139: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n```\n...\n```\n[TARGET CLASS] paved-area =\u003e  66.24\ndirt =\u003e  22.63\ngrass =\u003e  46.54\ngravel =\u003e  29.26\nwater =\u003e  11.96\nrocks =\u003e  0.07\npool =\u003e  37.07\nvegetation =\u003e  32.53\nroof =\u003e  21.08\nwall =\u003e  1.17\nwindow =\u003e  0.00\ndoor =\u003e  0.00\nfence =\u003e  0.00\nfence-pole =\u003e  0.00\nperson =\u003e  4.47\ndog =\u003e  0.00\ncar =\u003e  0.00\nbicycle =\u003e  0.02\ntree =\u003e  0.08\nbald-tree =\u003e  0.01\n```\n\n## Summary and Next Steps\n\nThis reference kit presents an AI semantic segmentation solution specialized in accurately recognize entities and segment paved areas from input images captured by drones. Thus, this system could contribute to the safe landing of drones in dedicated paved areas, reducing the risk of injuring people or damaging property.\n\nTo carry out the segmentation task, the system makes use of a semantic segmentation model called VGG-UNET. 
Furthermore, the VGG-UNET model leverages the optimizations given by Intel® oneDNN optimized TensorFlow\\* and Intel® Neural Compressor to accelerate its training, hyperparameter tuning and inference processing capabilities while maintaining the accuracy. \n\nAs next steps, the machine learning practitioner could adapt this semantic segmentation solution for different drone navigation scenarios by including a larger and more complex dataset, which could be used for a sophisticated training based on TensorFlow\\* optimized with Intel® oneDNN. Finally, the trained model could be quantized with Intel® Neural Compressor to meet the resource-constrained demands of drone technology.\n\n## Learn More\nFor more information about Predictive Asset Maintenance or to read about other relevant workflow examples, see these guides and software resources:\n\n- [Intel® AI Analytics Toolkit (AI Kit)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html)\n- [Intel® Distribution for Python](https://www.intel.com/content/www/us/en/developer/tools/oneapi/distribution-for-python.html)\n- [Intel® oneDNN optimized TensorFlow\\*](https://www.intel.com/content/www/us/en/developer/tools/oneapi/optimization-for-tensorflow.html#gs.174f5y)\n- [Intel® Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html#gs.5vjr1p)\n\n## Troubleshooting\n1. libGL.so.1/libgthread-2.0.so.0: cannot open shared object file: No such file or directory\n   \n    **Issue:**\n      ```\n      ImportError: libGL.so.1: cannot open shared object file: No such file or directory\n      or\n      libgthread-2.0.so.0: cannot open shared object file: No such file or directory\n      ```\n\n    **Solution:**\n\n      Install the libgl11-mesa-glx and libglib2.0-0 libraries. For Ubuntu this will be:\n\n      ```bash\n     apt install libgl1-mesa-glx\n     apt install libglib2.0-0\n      ```\n\n## Support\nIf you have questions or issues about this workflow, want help with troubleshooting, want to report a bug or submit enhancement requests, please submit a GitHub issue.\n\n## Appendix\n\\*Names and brands that may be claimed as the property of others. [Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html).\n\n### Disclaimer\n\nTo the extent that any public or non-Intel datasets or models are referenced by or accessed using tools or code on this site those datasets or models are provided by the third party indicated as the content source. Intel does not create the content and does not warrant its accuracy or quality. By accessing the public content, or using materials trained on or with such content, you agree to the terms associated with that content and that your use complies with the applicable license.\n\nIntel expressly disclaims the accuracy, adequacy, or completeness of any such public content, and is not liable for any errors, omissions, or defects in the content, or for any reliance on the content. Intel is not liable for any liability or damages relating to your use of public content.\n","funding_links":[],"categories":["Table of Contents"],"sub_categories":["AI - Frameworks and Toolkits"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foneapi-src%2Fdrone-navigation-inspection","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Foneapi-src%2Fdrone-navigation-inspection","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foneapi-src%2Fdrone-navigation-inspection/lists"}