[![Build Status](https://travis-ci.com/IBM/MAX-Object-Detector.svg?branch=master)](https://travis-ci.com/IBM/MAX-Object-Detector) [![Website Status](https://img.shields.io/website/http/max-object-detector.codait-prod-41208c73af8fca213512856c7a09db52-0000.us-east.containers.appdomain.cloud/swagger.json.svg?label=api+demo)](http://max-object-detector.codait-prod-41208c73af8fca213512856c7a09db52-0000.us-east.containers.appdomain.cloud)

[<img src="docs/deploy-max-to-ibm-cloud-with-kubernetes-button.png" width="400px">](http://ibm.biz/max-to-ibm-cloud-tutorial)

# IBM Developer Model Asset Exchange: Object Detector

This repository contains code to instantiate and deploy an object detection model. The model recognizes the objects present in an image, drawn from the 80 high-level object classes of the [COCO Dataset](http://mscoco.org/). It consists of a deep convolutional base network for image feature extraction, together with additional convolutional layers specialized for object detection, and was trained on the COCO dataset. The input to the model is an image, and the output is a list of estimated class probabilities for the objects detected in the image.

The model is based on the [SSD MobileNet V1 and Faster R-CNN ResNet101 object detection models for TensorFlow](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md). The model files are hosted on IBM Cloud Object Storage: [ssd_mobilenet_v1.tar.gz](https://max-cdn.cdn.appdomain.cloud/max-object-detector/1.0.2/ssd_mobilenet_v1.tar.gz) and [faster_rcnn_resnet101.tar.gz](https://max-cdn.cdn.appdomain.cloud/max-object-detector/1.0.2/faster_rcnn_resnet101.tar.gz). The code in this repository deploys the model as a web service in a Docker container. This repository was developed as part of the [IBM Developer Model Asset Exchange](https://developer.ibm.com/exchanges/models/) and the public API is powered by [IBM Cloud](https://ibm.biz/Bdz2XM).

## Model Metadata

| Domain | Application | Industry | Framework | Training Data | Input Data Format |
| ------ | ----------- | -------- | --------- | ------------- | ----------------- |
| Vision | Object Detection | General | TensorFlow | [COCO Dataset](http://mscoco.org/) | Image (RGB/HWC) |

## References

* _J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, K. Murphy_, ["Speed/accuracy trade-offs for modern convolutional object detectors"](https://arxiv.org/abs/1611.10012), CVPR 2017
* _Tsung-Yi Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. Lawrence Zitnick, P. Dollár_, ["Microsoft COCO: Common Objects in Context"](https://arxiv.org/abs/1405.0312), arXiv 2015
* _W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, A. C. Berg_, ["SSD: Single Shot MultiBox Detector"](https://arxiv.org/pdf/1512.02325), CoRR (abs/1512.02325), 2016
* _A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam_, ["MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"](https://arxiv.org/abs/1704.04861), arXiv 2017
* [TensorFlow Object Detection GitHub Repo](https://github.com/tensorflow/models/tree/master/research/object_detection)

## Licenses

| Component | License | Link |
| --------- | ------- | ---- |
| This repository | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) | [LICENSE](LICENSE) |
| Model Weights | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) | [TensorFlow Models Repo](https://github.com/tensorflow/models/blob/master/LICENSE) |
| Model Code (3rd party) | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) | [TensorFlow Models Repo](https://github.com/tensorflow/models/blob/master/LICENSE) |
| Test Samples | Various | [Samples README](samples/README.md) |

## Prerequisites

* `docker`: The [Docker](https://www.docker.com/) command-line interface. Follow the [installation instructions](https://docs.docker.com/install/) for your system.
* The minimum recommended resources for this model are 2 GB of memory and 2 CPUs.
* If you are on x86-64/AMD64, your CPU must support [AVX](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions) at a minimum.

# Deployment options

* [Deploy from Quay](#deploy-from-quay)
* [Deploy on Red Hat OpenShift](#deploy-on-red-hat-openshift)
* [Deploy on Kubernetes](#deploy-on-kubernetes)
* [Deploy on Code Engine](#deploy-on-code-engine)
* [Run Locally](#run-locally)

## Deploy from Quay

To run the Docker image, which automatically starts the model serving API, run:

Intel CPUs:
```bash
$ docker run -it -p 5000:5000 quay.io/codait/max-object-detector
```

ARM CPUs (e.g. Raspberry Pi):
```bash
$ docker run -it -p 5000:5000 quay.io/codait/max-object-detector:arm-arm32v7-latest
```

This will pull a pre-built image from the Quay.io container registry (or use an existing image if already cached locally) and run it. If you'd rather check out and build the model locally, you can follow the [run locally](#run-locally) steps below.
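Once the container is running, a quick sanity check is to fetch the Swagger specification the API serves. This is a sketch that assumes the default port mapping used above; the `/swagger.json` path is the one referenced by the API demo badge at the top of this README:

```bash
$ curl http://localhost:5000/swagger.json
```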
## Deploy on Red Hat OpenShift

You can deploy the model-serving microservice on Red Hat OpenShift by following the instructions for the OpenShift web console or the OpenShift Container Platform CLI in [this tutorial](https://developer.ibm.com/tutorials/deploy-a-model-asset-exchange-microservice-on-red-hat-openshift/), specifying `quay.io/codait/max-object-detector` as the image name.

## Deploy on Kubernetes

You can also deploy the model on Kubernetes using the latest Docker image on Quay.

On your Kubernetes cluster, run the following command:

```bash
$ kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Object-Detector/master/max-object-detector.yaml
```

The model will be available internally at port `5000`, but can also be accessed externally through the `NodePort`.
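For a quick test without configuring external access, you can forward the service port to your local machine. This sketch assumes the service created by `max-object-detector.yaml` is named `max-object-detector`; check `kubectl get services` for the actual name:

```bash
$ kubectl port-forward service/max-object-detector 5000:5000
```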
A more elaborate tutorial on how to deploy this MAX model to production on [IBM Cloud](https://ibm.biz/Bdz2XM) can be found [here](http://ibm.biz/max-to-ibm-cloud-tutorial).

## Deploy on Code Engine

You can also deploy the model on IBM Cloud's [Code Engine](https://cloud.ibm.com/codeengine/) platform, which is based on the Knative serverless framework. Once authenticated with your IBM Cloud account, run the commands below.

Create a Code Engine project and give it a unique name:

```bash
$ ibmcloud ce project create --name sandbox
```

Run the container by pointing to the [quay.io](https://quay.io/codait/max-object-detector) image and exposing port 5000:

```bash
$ ibmcloud ce application create --name max-object-detector --image quay.io/codait/max-object-detector --port 5000
```

Open the resulting URL in a browser; append `/app` to view the web app instead of the API.
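If you need to look up the application's URL or status again later, the Code Engine CLI can display them. This assumes the application name used above:

```bash
$ ibmcloud ce application get --name max-object-detector
```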
## Run Locally

1. [Build the Model](#1-build-the-model)
2. [Deploy the Model](#2-deploy-the-model)
3. [Use the Model](#3-use-the-model)
4. [Run the Notebook](#4-run-the-notebook)
5. [Development](#5-development)
6. [Cleanup](#6-cleanup)

### 1. Build the Model

Clone this repository locally. In a terminal, run the following command:

```bash
$ git clone https://github.com/IBM/MAX-Object-Detector.git
```

Change directory into the repository base folder:

```bash
$ cd MAX-Object-Detector
```

To build the Docker image locally for Intel CPUs, run:

```bash
$ docker build -t max-object-detector .
```

To select a model, pass in the `--build-arg model=<desired-model>` switch:

```bash
$ docker build --build-arg model=faster_rcnn_resnet101 -t max-object-detector .
```

Currently we support two models, `ssd_mobilenet_v1` (default) and `faster_rcnn_resnet101`.

For ARM CPUs (e.g. Raspberry Pi), run:

```bash
$ docker build -f Dockerfile.arm32v7 -t max-object-detector .
```

All required model assets will be downloaded during the build process. _Note_ that this Docker image is currently CPU only (we will add support for GPU images later).

### 2. Deploy the Model

To run the Docker image, which automatically starts the model serving API, run:

```bash
$ docker run -it -p 5000:5000 max-object-detector
```

### 3. Use the Model

The API server automatically generates an interactive Swagger documentation page. Go to `http://localhost:5000` to load it. From there you can explore the API and also create test requests.

Use the `/model/predict` endpoint to load a test image (you can use one of the test images from the `samples` folder) and get predicted labels for the image from the API. The coordinates of the bounding box are returned in the `detection_box` field as an array of normalized coordinates (ranging from 0 to 1) in the form `[ymin, xmin, ymax, xmax]`. To convert these to pixel coordinates, multiply `xmin` and `xmax` by the image width and `ymin` and `ymax` by the image height.

![Swagger Doc Screenshot](docs/swagger-screenshot.png)

You can also test it on the command line, for example:

```bash
$ curl -F "image=@samples/dog-human.jpg" -XPOST http://127.0.0.1:5000/model/predict
```

You should see a JSON response like the one below:

```json
{
  "status": "ok",
  "predictions": [
      {
          "label_id": "1",
          "label": "person",
          "probability": 0.944034993648529,
          "detection_box": [
              0.1242099404335022,
              0.12507188320159912,
              0.8423267006874084,
              0.5974075794219971
          ]
      },
      {
          "label_id": "18",
          "label": "dog",
          "probability": 0.8645511865615845,
          "detection_box": [
              0.10447660088539124,
              0.17799153923988342,
              0.8422801494598389,
              0.732001781463623
          ]
      }
  ]
}
```

You can also control the probability threshold for which objects are returned using the `threshold` query parameter, for example:

```bash
$ curl -F "image=@samples/dog-human.jpg" -XPOST "http://127.0.0.1:5000/model/predict?threshold=0.5"
```

The optional `threshold` parameter is the minimum `probability` value for predicted labels returned by the model. The default value for `threshold` is `0.7`.

### 4. Run the Notebook

[The demo notebook](demo.ipynb) walks through how to use the model to detect objects in an image and visualize the results. By default, the notebook uses the [hosted demo instance](http://max-object-detector.codait-prod-41208c73af8fca213512856c7a09db52-0000.us-east.containers.appdomain.cloud/), but you can use a locally running instance (see the comments in Cell 3 for details). _Note:_ The demo requires `jupyter`, `matplotlib`, `Pillow`, and `requests`.
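These dependencies can be installed with `pip`, for example (into a virtual environment, if you prefer):

```bash
$ pip install jupyter matplotlib Pillow requests
```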
Run the following command from the model repo base folder, in a new terminal window:

```bash
$ jupyter notebook
```

This will start the notebook server. You can launch the demo notebook by clicking on `demo.ipynb`.

### 5. Development

To run the Flask API app in debug mode, edit `config.py` to set `DEBUG = True` under the application settings. You will then need to rebuild the Docker image (see [step 1](#1-build-the-model)).

### 6. Cleanup

To stop the Docker container, type `CTRL` + `C` in your terminal.

# Object Detector Web App

The latest release of the [MAX Object Detector Web App](https://github.com/IBM/MAX-Object-Detector-Web-App) is included in the Object Detector Docker image.

When the model API server is running, the web app can be accessed at `http://localhost:5000/app` and provides an interactive visualization of the bounding boxes and their related labels returned by the model.

![Mini Web App Screenshot](docs/mini-web-app.png)

If you wish to disable the web app, start the model serving API by running:

```bash
$ docker run -it -p 5000:5000 -e DISABLE_WEB_APP=true quay.io/codait/max-object-detector
```

## Resources and Contributions

If you are interested in contributing to the Model Asset Exchange project or have any queries, please follow the instructions [here](https://github.com/CODAIT/max-central-repo).

### Links

* [Object Detector Web App](https://developer.ibm.com/patterns/create-a-web-app-to-interact-with-objects-detected-using-machine-learning/): A reference application created by the IBM CODAIT team that uses the Object Detector.