{"id":13574143,"url":"https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Detection-Inference-API","last_synced_at":"2025-04-04T14:31:53.929Z","repository":{"id":50752375,"uuid":"322331704","full_name":"BMW-InnovationLab/BMW-IntelOpenVINO-Detection-Inference-API","owner":"BMW-InnovationLab","description":"This is a repository for a No-Code object detection inference API using the OpenVINO. It's supported on both Windows and Linux Operating systems.","archived":false,"fork":false,"pushed_at":"2024-11-15T14:31:34.000Z","size":7368,"stargazers_count":72,"open_issues_count":1,"forks_count":3,"subscribers_count":3,"default_branch":"main","last_synced_at":"2024-11-15T15:35:16.747Z","etag":null,"topics":["computer-vision","cpu","deeplearning","detection-algorithm","detection-api","docker","inference","inference-engine","neural-network","nocode","object-detection","openvino","openvino-model-zoo","openvino-toolkit","resnet","rest-api"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/BMW-InnovationLab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-12-17T15:13:49.000Z","updated_at":"2024-11-15T14:31:39.000Z","dependencies_parsed_at":"2024-11-15T15:39:26.605Z","dependency_job_id":null,"html_url":"https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Detection-Inference-API","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BMW-InnovationLab%2FBMW
-IntelOpenVINO-Detection-Inference-API","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BMW-InnovationLab%2FBMW-IntelOpenVINO-Detection-Inference-API/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BMW-InnovationLab%2FBMW-IntelOpenVINO-Detection-Inference-API/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BMW-InnovationLab%2FBMW-IntelOpenVINO-Detection-Inference-API/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/BMW-InnovationLab","download_url":"https://codeload.github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Detection-Inference-API/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247194147,"owners_count":20899435,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","cpu","deeplearning","detection-algorithm","detection-api","docker","inference","inference-engine","neural-network","nocode","object-detection","openvino","openvino-model-zoo","openvino-toolkit","resnet","rest-api"],"created_at":"2024-08-01T15:00:47.092Z","updated_at":"2025-04-04T14:31:48.916Z","avatar_url":"https://github.com/BMW-InnovationLab.png","language":"Python","readme":"# OpenVINO Inference API\n\nThis is a repository for an object detection inference API using OpenVINO. 
It's supported on both Windows and Linux operating systems.\n\nModels in Intermediate Representation (IR) format, converted using [Intel\u0026reg; OpenVINO\u0026trade; toolkit v2021.1](https://docs.openvino.ai/2021.1/index.html) or [Intel\u0026reg; OpenVINO\u0026trade; toolkit v2021.4](https://docs.openvino.ai/2021.4/index.html), can be deployed in this API. Currently, OpenVINO supports conversion of models trained in several machine learning frameworks, including Caffe and TensorFlow. Please refer to [the OpenVINO documentation](https://docs.openvinotoolkit.org/2021.1/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html) for further details on converting your model.\n\n**Note:** To use the sample inference model provided with this repository, make sure `git lfs` is installed and initialized, then clone with `git clone`. Avoid downloading the repository as a ZIP, because that downloads only the `git lfs` pointer file instead of the actual model.\n\n## Prerequisites\n\n- OS:\n  - Ubuntu 18.04\n  - Windows 10 Pro/Enterprise\n- Docker\n- [git-lfs](https://git-lfs.github.com/)\n\n### Check for prerequisites\n\nTo check if you have docker-ce installed:\n\n```sh\ndocker --version\n```\n\n### Install prerequisites\n\n#### Ubuntu\n\nUse the following command to install Docker on Ubuntu:\n\n```sh\nchmod +x install_prerequisites.sh \u0026\u0026 source install_prerequisites.sh\n```\n\n#### Windows 10\n\nTo [install Docker on Windows](https://docs.docker.com/docker-for-windows/install/), please follow the link.\n\n**P.S.: For Windows users, open the Docker Desktop menu by clicking the Docker icon in the notifications area. 
Select Settings, then the Advanced tab, to adjust the resources available to Docker Engine.**\n\n## Build The Docker Image\n\nTo build the project, run the following command from the project's root directory:\n\n```sh\nsudo docker build -t openvino_inference_api .\n```\n\n### Behind a proxy\n\n```sh\nsudo docker build --build-arg http_proxy='' --build-arg https_proxy='' -t openvino_inference_api .\n```\n\n## Run The Docker Container\n\nIf you wish to deploy this API using **docker**, please issue the following run command.\n\nTo run the API, go to the API's directory and run the following:\n\n#### Using Linux-based Docker:\n\n```sh\nsudo docker run -itv $(pwd)/models:/models -v $(pwd)/models_hash:/models_hash -p \u003cdocker_host_port\u003e:80 openvino_inference_api\n```\n\n#### Using Windows-based Docker:\n\n```sh\ndocker run -itv ${PWD}\\models:/models -v ${PWD}\\models_hash:/models_hash -p \u003cdocker_host_port\u003e:80 openvino_inference_api\n```\n\nThe \u003cdocker_host_port\u003e can be any unique port of your choice.\n\nThe API starts automatically, and the service listens for HTTP requests on the chosen port.\n\n## API Endpoints\n\nTo see all available endpoints, open your favorite browser and navigate to:\n\n```\nhttp://\u003cmachine_IP\u003e:\u003cdocker_host_port\u003e/docs\n```\n\n### Endpoints summary\n\n#### /load (GET)\n\nLoads all available models and returns every model with its hashed value. 
Loaded models are stored and aren't loaded again.\n\n![load model](./files/load_models.gif)\n\n#### /detect (POST)\n\nPerforms inference on an image using the specified model and returns the bounding boxes of the objects in JSON format.\n\n![detect image](./files/detect_image.gif)\n\n#### /models/{model_name}/predict_image (POST)\n\nPerforms inference on an image using the specified model, draws bounding boxes on the image, and returns the resulting image as a response.\n\n![predict image](./files/predict_image.gif)\n\n#### /models/{model_name}/config (GET)\n\nReturns the model's configuration\n\n![config image](./files/config_image.gif)\n\n#### /models (GET)\n\nLists all the available models\n\n#### /models/{model_name}/labels (GET)\n\nReturns all the object labels of the model as a list\n\n#### /models/{model_name}/predict (POST)\n\nPerforms inference on a given image using the model and returns the bounding boxes of the objects as JSON.\n\n**P.S.: If you are using custom endpoints like /detect or /predict_image, you should always call the /load endpoint first and then use /detect**\n\n## Model structure\n\nThe folder \"models\" contains subfolders for all the models to be loaded.\nInside each subfolder there should be a:\n\n- bin file (\u003cyour_converted_model\u003e.bin): contains the model weights\n\n- xml file (\u003cyour_converted_model\u003e.xml): describes the network topology\n\n- class file (classes.txt): contains the names of the object classes, which should be in the following format\n\n  ```text\n      class1\n      class2\n      ...\n  ```\n- config.json (a JSON file containing information about the model)\n\n  ```json\n    {\n        \"inference_engine_name\": \"openvino_detection\",\n        \"confidence\": 60,\n        \"predictions\": 15,\n        \"number_of_classes\": 2,\n        \"framework\": \"openvino\",\n        \"type\": \"detection\",\n        \"network\": \"fasterrcnn\"\n    }\n  ```\n  P.S.:\n  - You can change the confidence and 
predictions values while the API is running\n  - The API returns only bounding boxes whose confidence is higher than the \"confidence\" value, so a higher \"confidence\" shows only the more certain predictions\n\nThe \"models\" folder structure should be similar to the one shown below:\n\n```shell\n│──models\n  │──model_1\n  │  │──\u003cmodel_1\u003e.bin\n  │  │──\u003cmodel_1\u003e.xml\n  │  │──classes.txt\n  │  │──config.json\n  │\n  │──model_2\n  │  │──\u003cmodel_2\u003e.bin\n  │  │──\u003cmodel_2\u003e.xml\n  │  │──classes.txt\n  │  │──config.json\n```\n\n## Using with the Anonymization API\n\nIn this section, docker-compose builds and runs the OpenVINO Inference API alongside the Anonymization API.\n\nTo build and run both APIs together, clone the Anonymization API repository to your machine. Replace its \"/jsonFiles/url_configuration.json\" with the file in the \"/docker_anonymize\" directory of this repo.\n\nTwo services are configured in the \"docker-compose.yml\" file in the \"/docker_anonymize\" directory: the OpenVINO Inference API and the Anonymization API. 
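\n\nAs a rough sketch, and assuming the Anonymization API repository is cloned next to this one, such a \"docker-compose.yml\" could look like the following; the service names, build contexts, volume paths, and host ports here are illustrative assumptions, not the exact file shipped in \"/docker_anonymize\":\n\n```yaml\n# Illustrative sketch only -- adapt build contexts, volume paths, and ports to your setup\nversion: \"3\"\nservices:\n  openvino_inference_api:\n    build: ..\n    volumes:\n      - ../models:/models\n      - ../models_hash:/models_hash\n    ports:\n      - \"8080:80\"\n  anonymization_api:\n    build: \u003cpath_to_anonymization_api_repo\u003e\n    ports:\n      - \"8090:80\"\n```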
\n\nYou can modify the build context to specify the base directory of the Anonymization API (ensure the correct path is also given for the mounted volumes). You can also modify the host ports you wish to use for the APIs.\n\nNow, run the following command in the \"/docker_anonymize\" directory of this repo:\n\n```sh\ndocker-compose up\n```\n\nIn the terminal, you should now see both APIs running together.\n\n## Acknowledgements\n\n[OpenVINO Toolkit](https://github.com/openvinotoolkit)\n\n[intel.com](https://intel.com)\n\n[robotron.de](https://www.robotron.de/)\n","funding_links":[],"categories":["Table of Contents","Table of content"],"sub_categories":["AI - Computer Vision","AI Computer Vision"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FBMW-InnovationLab%2FBMW-IntelOpenVINO-Detection-Inference-API","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FBMW-InnovationLab%2FBMW-IntelOpenVINO-Detection-Inference-API","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FBMW-InnovationLab%2FBMW-IntelOpenVINO-Detection-Inference-API/lists"}