{"id":13438571,"url":"https://github.com/iwatake2222/InferenceHelper","last_synced_at":"2025-03-20T06:30:45.479Z","repository":{"id":43074079,"uuid":"319588839","full_name":"iwatake2222/InferenceHelper","owner":"iwatake2222","description":"C++ Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ONNX Runtime, LibTorch, TensorFlow","archived":false,"fork":false,"pushed_at":"2022-04-09T04:43:56.000Z","size":4442,"stargazers_count":278,"open_issues_count":1,"forks_count":58,"subscribers_count":17,"default_branch":"master","last_synced_at":"2024-10-28T00:23:14.465Z","etag":null,"topics":["cpp","deep-learning","deeplearning","inference","mnn","ncnn","nnabla","opencv","tensorflow","tensorrt"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/iwatake2222.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-12-08T09:27:27.000Z","updated_at":"2024-10-26T03:22:32.000Z","dependencies_parsed_at":"2022-08-30T18:21:30.899Z","dependency_job_id":null,"html_url":"https://github.com/iwatake2222/InferenceHelper","commit_stats":null,"previous_names":[],"tags_count":5,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iwatake2222%2FInferenceHelper","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iwatake2222%2FInferenceHelper/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iwatake2222%2FInferenceHelper/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositorie
s/iwatake2222%2FInferenceHelper/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/iwatake2222","download_url":"https://codeload.github.com/iwatake2222/InferenceHelper/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244564964,"owners_count":20473166,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cpp","deep-learning","deeplearning","inference","mnn","ncnn","nnabla","opencv","tensorflow","tensorrt"],"created_at":"2024-07-31T03:01:06.542Z","updated_at":"2025-03-20T06:30:40.466Z","avatar_url":"https://github.com/iwatake2222.png","language":"C++","readme":"\u003cp align=\"center\"\u003e\r\n  \u003cimg src=\"00_doc/logo.png\" /\u003e\r\n\u003c/p\u003e\r\n\r\n# Inference Helper\r\n- This is a wrapper for deep learning frameworks, especially for inference\r\n- This class provides a common interface to use various deep learning frameworks, so that you can use the same application code\r\n\r\n## Supported frameworks\r\n- TensorFlow Lite\r\n- TensorFlow Lite with delegate (XNNPACK, GPU, EdgeTPU, NNAPI)\r\n- TensorRT (GPU, DLA)\r\n- OpenCV(dnn) (with GPU)\r\n- OpenVINO with OpenCV (xml+bin)\r\n- ncnn (with Vulkan)\r\n- MNN (with Vulkan)\r\n- SNPE (Snapdragon Neural Processing Engine SDK (Qualcomm Neural Processing SDK for AI v1.51.0))\r\n- Arm NN\r\n- NNabla (with CUDA)\r\n- ONNX Runtime (with CUDA)\r\n- LibTorch (with CUDA)\r\n- TensorFlow (with GPU)\r\n\r\n![Overview](00_doc/overview.png) \r\n\r\n## Supported targets\r\n- Windows 10 (Visual Studio 2019 x64)\r\n- Linux (x64, armv7, aarch64)\r\n- 
Android (armeabi-v7a, arm64-v8a)\r\n\r\n## CI Status\r\n| Framework                 | Windows (x64)                                 | Linux (x64)                                   | Linux (armv7)                                 | Linux (aarch64)                               | Android (aarch64)                              |\r\n|---------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------------------------------------|\r\n|                           | [![CI Windows](https://github.com/iwatake2222/InferenceHelper/actions/workflows/ci_windows.yml/badge.svg)](https://github.com/iwatake2222/InferenceHelper/actions/workflows/ci_windows.yml) | [![CI Ubuntu](https://github.com/iwatake2222/InferenceHelper/actions/workflows/ci_ubuntu.yml/badge.svg)](https://github.com/iwatake2222/InferenceHelper/actions/workflows/ci_ubuntu.yml) | [![CI Arm](https://github.com/iwatake2222/InferenceHelper/actions/workflows/ci_arm.yml/badge.svg)](https://github.com/iwatake2222/InferenceHelper/actions/workflows/ci_arm.yml) | [![CI Arm](https://github.com/iwatake2222/InferenceHelper/actions/workflows/ci_arm.yml/badge.svg)](https://github.com/iwatake2222/InferenceHelper/actions/workflows/ci_arm.yml) | [![CI Android](https://github.com/iwatake2222/InferenceHelper/actions/workflows/ci_android.yml/badge.svg)](https://github.com/iwatake2222/InferenceHelper/actions/workflows/ci_android.yml) |\r\n| TensorFlow Lite           | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test 
\u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e |\r\n| TensorFlow Lite + XNNPACK | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| Unsupported                                   | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e |\r\n| TensorFlow Lite + EdgeTPU | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| Unsupported                                    |\r\n| TensorFlow Lite + GPU     | No library                                    | No library                                    | No library                                    | No library                                    | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e |\r\n| TensorFlow Lite + NNAPI   | Unsupported                                   | Unsupported                                   | Unsupported                                   | Unsupported                                   | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e |\r\n| TensorRT                  | \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| 
\u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| Unsupported                                    |\r\n| OpenCV(dnn)               | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e |\r\n| OpenVINO with OpenCV      | \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| Unsupported                                    |\r\n| ncnn                      | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| No library                                    | No library                                    | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e |\r\n| MNN                       | \u003cul\u003e\u003cli\u003e [x] 
Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| No library                                    | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e |\r\n| SNPE                      | Unsupported                                   | Unsupported                                   | \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [ ] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e |\r\n| Arm NN                    | Unsupported                                   | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| Unsupported                                   | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| No library                                     |\r\n| NNabla                    | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| No library                                    | Unsupported                                   | No library                                    | No library                                     |\r\n| ONNX Runtime              | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| Unsupported                                   | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] 
Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[ ] Test \u003c/li\u003e\u003c/ul\u003e |\r\n| LibTorch                  | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| No library                                    | No library                                    | No library                                     |\r\n| TensorFlow                | \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| \u003cul\u003e\u003cli\u003e [x] Build\u003c/li\u003e\u003cli\u003e[x] Test \u003c/li\u003e\u003c/ul\u003e| No library                                    | No library                                    | No library                                     |\r\n\r\n* Unchecked (blank) doesn't mean that the framework is unsupported; it just means that the framework is not tested in CI. For instance, TensorRT on Windows/Linux works and I have confirmed it on my PC, but it cannot run in CI.\r\n* `No Library` means a pre-built library is not provided, so I cannot confirm it in CI. 
It may work if you build a library by yourself.\r\n\r\n## Sample projects\r\n- https://github.com/iwatake2222/InferenceHelper_Sample\r\n- https://github.com/iwatake2222/play_with_tflite\r\n- https://github.com/iwatake2222/play_with_tensorrt\r\n- https://github.com/iwatake2222/play_with_ncnn\r\n- https://github.com/iwatake2222/play_with_mnn\r\n\r\n# Usage\r\nPlease refer to https://github.com/iwatake2222/InferenceHelper_Sample\r\n\r\n## Installation\r\n- Add this repository into your project (using `git submodule` is recommended)\r\n- Download prebuilt libraries\r\n    - `sh third_party/download_prebuilt_libraries.sh`\r\n\r\n## Additional steps\r\nYou need some additional steps if you use the frameworks listed below.\r\n\r\n### Additional steps: OpenCV / OpenVINO\r\n- Install OpenCV or OpenVINO\r\n    - You may need to set/modify the `OpenCV_DIR` and `PATH` environment variables\r\n    - To use OpenVINO, you may need to run `C:\\Program Files (x86)\\Intel\\openvino_2021\\bin\\setupvars.bat` or `source /opt/intel/openvino_2021/bin/setupvars.sh`\r\n\r\n### Additional steps: TensorFlow Lite (EdgeTPU)\r\n- Install the following library\r\n    - Linux: https://github.com/google-coral/libedgetpu/releases/download/release-grouper/edgetpu_runtime_20210726.zip\r\n    - Windows: https://github.com/google-coral/libedgetpu/releases/download/release-frogfish/edgetpu_runtime_20210119.zip\r\n        - The latest version doesn't work\r\n        - It may be better to delete `C:\\Windows\\System32\\edgetpu.dll` to ensure the program uses our pre-built library\r\n\r\n### Additional steps: ncnn\r\n- Install Vulkan\r\n    - You need Vulkan even if you don't use it because the pre-built libraries require it. 
Otherwise, you need to build the libraries yourself with Vulkan disabled\r\n    - https://vulkan.lunarg.com/sdk/home\r\n    - Windows\r\n        - https://sdk.lunarg.com/sdk/download/latest/windows/vulkan-sdk.exe\r\n        - It's better to check `(Optional) Debuggable Shader API Libraries -64-bit`, so that you can use Debug in Visual Studio\r\n    - Linux (x64)\r\n        ```sh\r\n        wget https://sdk.lunarg.com/sdk/download/latest/linux/vulkan-sdk.tar.gz\r\n        tar xzvf vulkan-sdk.tar.gz\r\n        export VULKAN_SDK=$(pwd)/1.2.198.1/x86_64\r\n        sudo apt install -y vulkan-utils libvulkan1 libvulkan-dev\r\n        ```\r\n\r\n### Additional steps: SNPE\r\n- Download the library from https://developer.qualcomm.com/software/qualcomm-neural-processing-sdk/tools\r\n- Extract `snpe-1.51.0.zip`, then place the `lib` and `include` folders in `third_party/snpe_prebuilt`\r\n\r\n### Note:\r\n- `Debug` mode in Visual Studio doesn't work for ncnn, NNabla and LibTorch because debuggable libraries are not provided\r\n    - `Debug` will cause unexpected behavior, so use `Release` or `RelWithDebInfo`\r\n- See `third_party/download_prebuilt_libraries.sh` and `third_party/cmakes/*` to check which libraries are being used. By default, libraries without GPU (CUDA/Vulkan) support are used, to be safe. So, if you want to use the GPU, modify these files.\r\n\r\n## Project settings in CMake\r\n- Add InferenceHelper to your project\r\n    ```cmake\r\n    set(INFERENCE_HELPER_DIR ${CMAKE_CURRENT_LIST_DIR}/../../InferenceHelper/)\r\n    add_subdirectory(${INFERENCE_HELPER_DIR}/inference_helper inference_helper)\r\n    target_include_directories(${LibraryName} PUBLIC ${INFERENCE_HELPER_DIR}/inference_helper)\r\n    target_link_libraries(${LibraryName} InferenceHelper)\r\n    ```\r\n\r\n## CMake options\r\n- Deep learning framework:\r\n    - You can enable multiple options, although the following example enables just one option\r\n\r\n    ```sh\r\n    # OpenCV (dnn), OpenVINO\r\n    cmake .. 
-DINFERENCE_HELPER_ENABLE_OPENCV=on\r\n    # Tensorflow Lite\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE=on\r\n    # Tensorflow Lite (XNNPACK)\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=on\r\n    # Tensorflow Lite (GPU)\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=on\r\n    # Tensorflow Lite (EdgeTPU)\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=on\r\n    # Tensorflow Lite (NNAPI)\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_NNAPI=on\r\n    # TensorRT\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_TENSORRT=on\r\n    # ncnn, ncnn + vulkan\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_NCNN=on\r\n    # MNN (+ Vulkan)\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_MNN=on\r\n    # SNPE\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_SNPE=on\r\n    # Arm NN\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_ARMNN=on\r\n    # NNabla\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_NNABLA=on\r\n    # NNabla with CUDA\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_NNABLA_CUDA=on\r\n    # ONNX Runtime\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_ONNX_RUNTIME=on\r\n    # ONNX Runtime with CUDA\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_ONNX_RUNTIME_CUDA=on\r\n    # LibTorch\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_LIBTORCH=on\r\n    # LibTorch with CUDA\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_LIBTORCH_CUDA=on\r\n    # TensorFlow\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_TENSORFLOW=on\r\n    # TensorFlow with GPU\r\n    cmake .. -DINFERENCE_HELPER_ENABLE_TENSORFLOW_GPU=on\r\n    ```\r\n\r\n- Enable/Disable preprocess using OpenCV:\r\n    - By disabling this option, InferenceHelper is not dependent on OpenCV\r\n    ```sh\r\n    cmake .. 
-DINFERENCE_HELPER_ENABLE_PRE_PROCESS_BY_OPENCV=off\r\n    ```\r\n\r\n# Structure\r\n![Class Diagram](00_doc/class_diagram.png) \r\n\r\n# APIs\r\n## InferenceHelper\r\n### Enumeration\r\n```c++\r\ntypedef enum {\r\n    kOpencv,\r\n    kOpencvGpu,\r\n    kTensorflowLite,\r\n    kTensorflowLiteXnnpack,\r\n    kTensorflowLiteGpu,\r\n    kTensorflowLiteEdgetpu,\r\n    kTensorflowLiteNnapi,\r\n    kTensorrt,\r\n    kNcnn,\r\n    kNcnnVulkan,\r\n    kMnn,\r\n    kSnpe,\r\n    kArmnn,\r\n    kNnabla,\r\n    kNnablaCuda,\r\n    kOnnxRuntime,\r\n    kOnnxRuntimeCuda,\r\n    kLibtorch,\r\n    kLibtorchCuda,\r\n    kTensorflow,\r\n    kTensorflowGpu,\r\n} HelperType;\r\n```\r\n\r\n### static InferenceHelper* Create(const HelperType helper_type)\r\n- Create an InferenceHelper instance for the selected framework\r\n\r\n```c++\r\nstd::unique_ptr\u003cInferenceHelper\u003e inference_helper(InferenceHelper::Create(InferenceHelper::kTensorflowLite));\r\n```\r\n\r\n### static void PreProcessByOpenCV(const InputTensorInfo\u0026 input_tensor_info, bool is_nchw, cv::Mat\u0026 img_blob)\r\n- Run preprocess (convert an image to a blob (NCHW or NHWC))\r\n- This is just a helper function. 
You may not use this function.\r\n    - Available when `INFERENCE_HELPER_ENABLE_PRE_PROCESS_BY_OPENCV=on`\r\n\r\n```c++\r\nInferenceHelper::PreProcessByOpenCV(input_tensor_info, false, img_blob);\r\n```\r\n\r\n### int32_t SetNumThreads(const int32_t num_threads)\r\n- Set the number of threads to be used\r\n- This function needs to be called before initialize\r\n\r\n```c++\r\ninference_helper-\u003eSetNumThreads(4);\r\n```\r\n\r\n### int32_t SetCustomOps(const std::vector\u003cstd::pair\u003cconst char*, const void*\u003e\u003e\u0026 custom_ops)\r\n- Set custom ops\r\n- This function needs to be called before initialize\r\n\r\n```c++\r\nstd::vector\u003cstd::pair\u003cconst char*, const void*\u003e\u003e custom_ops;\r\ncustom_ops.push_back(std::pair\u003cconst char*, const void*\u003e(\"Convolution2DTransposeBias\", (const void*)mediapipe::tflite_operations::RegisterConvolution2DTransposeBias()));\r\ninference_helper-\u003eSetCustomOps(custom_ops);\r\n```\r\n\r\n### int32_t Initialize(const std::string\u0026 model_filename, std::vector\u003cInputTensorInfo\u003e\u0026 input_tensor_info_list, std::vector\u003cOutputTensorInfo\u003e\u0026 output_tensor_info_list)\r\n- Initialize inference helper\r\n    - Load model\r\n    - Set tensor information\r\n\r\n```c++\r\nstd::vector\u003cInputTensorInfo\u003e input_tensor_list;\r\nInputTensorInfo input_tensor_info(\"input\", TensorInfo::TENSOR_TYPE_FP32, false);    /* name, data_type, NCHW or NHWC */\r\ninput_tensor_info.tensor_dims = { 1, 224, 224, 3 };\r\ninput_tensor_info.data_type = InputTensorInfo::kDataTypeImage;\r\ninput_tensor_info.data = img_src.data;\r\ninput_tensor_info.image_info.width = img_src.cols;\r\ninput_tensor_info.image_info.height = img_src.rows;\r\ninput_tensor_info.image_info.channel = img_src.channels();\r\ninput_tensor_info.image_info.crop_x = 0;\r\ninput_tensor_info.image_info.crop_y = 0;\r\ninput_tensor_info.image_info.crop_width = img_src.cols;\r\ninput_tensor_info.image_info.crop_height = 
img_src.rows;\r\ninput_tensor_info.image_info.is_bgr = false;\r\ninput_tensor_info.image_info.swap_color = false;\r\ninput_tensor_info.normalize.mean[0] = 0.485f;   /* https://github.com/onnx/models/tree/master/vision/classification/mobilenet#preprocessing */\r\ninput_tensor_info.normalize.mean[1] = 0.456f;\r\ninput_tensor_info.normalize.mean[2] = 0.406f;\r\ninput_tensor_info.normalize.norm[0] = 0.229f;\r\ninput_tensor_info.normalize.norm[1] = 0.224f;\r\ninput_tensor_info.normalize.norm[2] = 0.225f;\r\ninput_tensor_list.push_back(input_tensor_info);\r\n\r\nstd::vector\u003cOutputTensorInfo\u003e output_tensor_list;\r\noutput_tensor_list.push_back(OutputTensorInfo(\"MobilenetV2/Predictions/Reshape_1\", TensorInfo::TENSOR_TYPE_FP32));\r\n\r\ninference_helper-\u003eInitialize(\"mobilenet_v2_1.0_224.tflite\", input_tensor_list, output_tensor_list);\r\n```\r\n\r\n### int32_t Finalize(void)\r\n- Finalize inference helper\r\n\r\n```c++\r\ninference_helper-\u003eFinalize();\r\n```\r\n\r\n### int32_t PreProcess(const std::vector\u003cInputTensorInfo\u003e\u0026 input_tensor_info_list)\r\n- Run preprocess\r\n- Call this function before running inference (`Process`)\r\n- Call this function even if the input data is already pre-processed, in order to copy the data to memory\r\n- **Note**: Some frameworks don't support crop or resize. 
So, it's better to resize the image before calling `PreProcess`.\r\n\r\n```c++\r\ninference_helper-\u003ePreProcess(input_tensor_list);\r\n```\r\n\r\n### int32_t Process(std::vector\u003cOutputTensorInfo\u003e\u0026 output_tensor_info_list)\r\n- Run inference\r\n\r\n```c++\r\ninference_helper-\u003eProcess(output_tensor_info_list);\r\n```\r\n\r\n## TensorInfo (InputTensorInfo, OutputTensorInfo)\r\n### Enumeration\r\n```c++\r\nenum {\r\n    kTensorTypeNone,\r\n    kTensorTypeUint8,\r\n    kTensorTypeInt8,\r\n    kTensorTypeFp32,\r\n    kTensorTypeInt32,\r\n    kTensorTypeInt64,\r\n};\r\n```\r\n\r\n### Properties\r\n```c++\r\nstd::string name;           // [In] Set the name of the tensor\r\nint32_t     id;             // [Out] Do not modify (used in InferenceHelper)\r\nint32_t     tensor_type;    // [In] The type of the tensor (e.g. kTensorTypeFp32)\r\nstd::vector\u003cint32_t\u003e tensor_dims;    // InputTensorInfo:   [In] The dimensions of the tensor. (If empty at initialization, the size is updated from model info.)\r\n                                     // OutputTensorInfo: [Out] The dimensions of the tensor are set from model information\r\nbool        is_nchw;        // [In] NCHW or NHWC\r\n\r\n```\r\n\r\n## InputTensorInfo\r\n### Enumeration\r\n```c++\r\nenum {\r\n    kDataTypeImage,\r\n    kDataTypeBlobNhwc,  // data which has already been preprocessed (color conversion, resize, normalization, etc.)\r\n    kDataTypeBlobNchw,\r\n};\r\n```\r\n\r\n### Properties\r\n```c++\r\nvoid*   data;      // [In] Set the pointer to the image/blob\r\nint32_t data_type; // [In] Set the type of data (e.g. 
kDataTypeImage)\r\n\r\nstruct {\r\n    int32_t width;\r\n    int32_t height;\r\n    int32_t channel;\r\n    int32_t crop_x;\r\n    int32_t crop_y;\r\n    int32_t crop_width;\r\n    int32_t crop_height;\r\n    bool    is_bgr;        // used when channel == 3 (true: BGR, false: RGB)\r\n    bool    swap_color;\r\n} image_info;              // [In] used when data_type == kDataTypeImage\r\n\r\nstruct {\r\n    float mean[3];\r\n    float norm[3];\r\n} normalize;              // [In] used when data_type == kDataTypeImage\r\n\r\n\r\n## OutputTensorInfo\r\n### Properties\r\n```c++\r\nvoid* data;     // [Out] Pointer to the output data\r\nstruct {\r\n    float   scale;\r\n    uint8_t zero_point;\r\n} quant;        // [Out] Parameters for dequantization (convert uint8 to float)\r\n```\r\n\r\n### float* GetDataAsFloat()\r\n- Get output data in the form of FP32\r\n- When the tensor type is INT8 (quantized), the data is converted to FP32 (dequantized)\r\n\r\n```c++\r\nconst float* val_float = output_tensor_list[0].GetDataAsFloat();\r\n```\r\n\r\n# License\r\n- InferenceHelper\r\n- https://github.com/iwatake2222/InferenceHelper\r\n- Copyright 2020 iwatake2222\r\n- Licensed under the Apache License, Version 2.0\r\n\r\n# Acknowledgements\r\n- This project utilizes OSS (Open Source Software)\r\n    - [NOTICE.md](NOTICE.md)\r\n","funding_links":[],"categories":["C++","🛠️ Tools \u0026 Utilities"],"sub_categories":["🔗 Inference Helpers"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fiwatake2222%2FInferenceHelper","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fiwatake2222%2FInferenceHelper","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fiwatake2222%2FInferenceHelper/lists"}