{"id":18861621,"url":"https://github.com/zoneminder/mlapi","last_synced_at":"2025-04-14T12:34:16.669Z","repository":{"id":43708117,"uuid":"183496940","full_name":"ZoneMinder/mlapi","owner":"ZoneMinder","description":"An easy to use/extend object recognition API you can locally install. Python+Flask. Also works with ZMES!","archived":false,"fork":false,"pushed_at":"2024-01-24T15:14:38.000Z","size":1670,"stargazers_count":58,"open_issues_count":5,"forks_count":34,"subscribers_count":9,"default_branch":"master","last_synced_at":"2025-04-10T03:24:21.076Z","etag":null,"topics":["api","api-gateway","api-server","cctv","face-detection","flask","iot","machine-learning","object-detection","python3","zoneminder"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ZoneMinder.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2019-04-25T19:25:00.000Z","updated_at":"2024-08-12T19:48:16.000Z","dependencies_parsed_at":"2024-01-19T15:04:07.126Z","dependency_job_id":"24871148-6c50-4743-9ec7-753a86eee9e9","html_url":"https://github.com/ZoneMinder/mlapi","commit_stats":{"total_commits":266,"total_committers":7,"mean_commits":38.0,"dds":"0.045112781954887216","last_synced_commit":"6b68dc15cc66b1fc20a587872369793dcf676fee"},"previous_names":[],"tags_count":30,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZoneMinder%2Fmlapi","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZoneMinder%2Fmlapi/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZoneMinder%2Fmlapi/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZoneMinder%2Fmlapi/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ZoneMinder","download_url":"https://codeload.github.com/ZoneMinder/mlapi/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248881989,"owners_count":21176953,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["api","api-gateway","api-server","cctv","face-detection","flask","iot","machine-learning","object-detection","python3","zoneminder"],"created_at":"2024-11-08T04:30:17.496Z","updated_at":"2025-04-14T12:34:16.629Z","avatar_url":"https://github.com/ZoneMinder.png","language":"Python","readme":"Note\n=====\nRelease 2.1.1 onwards of mlapi requires ES 6.1.0 \nStarting 2.1.1, the mlapiconfig.ini file has changed to support sequence structures. 
Please read
[this](https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#understanding-detection-configuration) to understand what is going on and how to make the best use of these changes.

What
=====
An API gateway that you can install on your own server to do object and face recognition.
It is easy to extend to other models. You can pass images as:
- a local file
- a remote URL

This can also be used as a remote face recognition and object recognition server if you are using my [ZoneMinder Event Server](https://github.com/ZoneMinder/zmeventnotification)!

This is an example of invoking `python ./stream.py video.mp4` ([video courtesy of pexels](https://www.pexels.com/video/people-walking-by-on-a-sidewalk-854100/)):

<img src="https://media.giphy.com/media/YQ4f1xXHMaDLF7AZMe/giphy.gif"/>


Why
=====
I wanted to learn how to write an API gateway easily. Object detection was a good use case since I use it extensively for other things (like my event server). This is the first time I've used flask/jwt/tinydb etc., so it's very likely there are improvements that can be made. Feel free to PR.

Tip of the Hat
===============
A tip of the hat to [Adrian Rosebrock](https://www.pyimagesearch.com/about/) for getting me started. His articles are great.

Install
=======
- It's best to create a virtual environment with python3, but it is not mandatory.
- You need python3 for this to run.
- Face recognition requires cmake/gcc/standard Linux dev libraries to be installed (if you have gcc, you likely have everything else; you may need to install cmake on top of it if you don't already have it).
- If you plan on using Tiny YOLO V4 or YOLO V4, you need OpenCV > 4.3 (see the quick version check at the end of this section).
- If you plan on using the Google Coral TPU, please make sure you have all the libs
  installed as per https://coral.ai/docs/accelerator/get-started/

Note that this package also needs OpenCV, which is not installed by the `pip3 install` step below by default. This is because you may have a GPU and may want to use GPU support. If not, pip is fine. See [this page](https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#opencv-install) on how to install OpenCV.

Then:
```
git clone https://github.com/ZoneMinder/mlapi
cd mlapi
sudo -H pip3 install -r requirements.txt
```

Note: By default, `mlapiconfig.ini` uses the bjoern WSGI server. On Debian, the following
dependencies are needed for bjoern:
```
sudo apt install libev-dev libevdev2
```
Alternatively, you can just comment out `wsgi_server` and it will fall back to using flask.

Finally, you also need to get the inferencing models. Note that this step is ONLY needed if you
don't already have the models downloaded. If you are running mlapi on the same server that ZMES is
running on, you likely already have the models in `/var/lib/zmeventnotification/models/`.

To download all models except the Coral EdgeTPU models:
```
./get_models.sh
```

To download all models, including the Coral EdgeTPU models
(Coral needs the Coral device, so its models are not downloaded by default):
```
INSTALL_CORAL_EDGETPU=yes ./get_models.sh
```

**Please make sure you edit `mlapiconfig.ini` to meet your needs**
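
If you are unsure which OpenCV build your Python environment will actually pick up (for example, whether it is new enough for YOLO V4, or whether a CUDA-enabled build is being used), here is a quick, optional sanity check. This is just a sketch; run it from the same environment you will run mlapi in.

```
# Optional sanity check: report the OpenCV build visible to this environment.
# Tiny YOLO V4 / YOLO V4 need OpenCV > 4.3.
import cv2

print("OpenCV version:", cv2.__version__)

# A CUDA-enabled OpenCV build reports the number of usable GPUs here;
# a plain pip install typically reports 0.
try:
    print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())
except AttributeError:
    print("This OpenCV build was compiled without the CUDA module")
```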

Running
========

Before you run, you need to create at least one user. Use `python3 mlapi_dbuser.py` for that. Do a `python3 mlapi_dbuser.py --help` for options.

Server: Manually
------------------
To run the server:
```
python3 ./mlapi.py -c mlapiconfig.ini
```

Server: Automatically
-----------------------
Take a look at `mlapi.service` and customize it for your needs.


Client Side: From zm_detect
-----------------------------
One of the key uses of mlapi is to act as an API gateway for zm_detect, the ML
Python process for zmeventnotification. When run in this mode, zm_detect.py does not do local
inferencing. Instead, it invokes an API call to mlapi. The big advantage is that mlapi only loads the model(s) once
and keeps them in memory, greatly reducing the total time for detection. If you downloaded mlapi to do this,
read `objectconfig.ini` in `/etc/zm/` to set it up. It is as simple as configuring the `[remote]`
section of `objectconfig.ini`.

Client Side: From CLI
------------------------

(Note: the response format returned to a CLI client is different from what is returned to zm_detect, which uses a format suited to its own needs.)

To invoke detection from the CLI:

(General note: I use [httpie](https://httpie.org) for command line HTTP requests. Curl, while powerful, has too many quirks/oddities. That being said, given curl is everywhere, the examples are in curl. See later for a programmatic way.)

- Get an access token
```
curl -H "Content-Type:application/json" -XPOST -d '{"username":"<user>", "password":"<password>"}' "http://localhost:5000/api/v1/login"
```
This will return a JSON object like:
```
{"access_token":"eyJ0eX<many more characters>","expires":3600}
```

Now use that token like so:

```
export ACCESS_TOKEN=<that access token>
```

Object detection for a remote image (via URL):

```
curl -H "Content-Type:application/json" -H "Authorization: Bearer ${ACCESS_TOKEN}" -XPOST -d "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/c/c4/Anna%27s_hummingbird.jpg\"}" http://localhost:5000/api/v1/detect/object
```

**NOTE**: The payloads shown below are what you get when you invoke these calls from the command line. When invoked by `zm_detect`, a different format is returned that is compatible with the ES's needs.
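
If you prefer to drive these same endpoints programmatically rather than through curl, the sketch below uses the Python `requests` library. The routes, the JSON login payload, the `url` field, the multipart `file` upload and the `delete` query parameter are the ones used by the curl examples in this section; the host, credentials and file names are placeholders you will need to adjust.

```
# Minimal sketch of calling mlapi from Python; adjust host, credentials and paths.
import requests

BASE = "http://localhost:5000/api/v1"

# 1. Log in and grab a JWT access token (same as the curl login call above).
resp = requests.post(f"{BASE}/login",
                     json={"username": "<user>", "password": "<password>"})
resp.raise_for_status()
headers = {"Authorization": "Bearer " + resp.json()["access_token"]}

# 2. Object detection on a remote image, passed as a URL in the JSON body.
resp = requests.post(f"{BASE}/detect/object", headers=headers,
                     json={"url": "https://upload.wikimedia.org/wikipedia/commons/c/c4/Anna%27s_hummingbird.jpg"})
print(resp.json())

# 3. Object detection on a local image, uploaded as multipart form data.
#    delete=true asks the server not to keep the uploaded image in images/.
with open("IMG_1833.JPG", "rb") as f:
    resp = requests.post(f"{BASE}/detect/object", headers=headers,
                         params={"delete": "true"}, files={"file": f})
print(resp.json())
```

The responses are the same JSON lists of detections shown in the curl examples below.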
The remote image call above returns:

```
[{"type": "bird", "confidence": "99.98%", "box": [433, 466, 2441, 1660]}]
```

Object detection for a local image:
```
curl -H "Authorization: Bearer ${ACCESS_TOKEN}" -XPOST -F"file=@IMG_1833.JPG" http://localhost:5000/api/v1/detect/object -v
```

returns:
```
[{"type": "person", "confidence": "99.77%", "box": [2034, 1076, 3030, 2344]}, {"type": "person", "confidence": "97.70%", "box": [463, 519, 1657, 1351]}, {"type": "cup", "confidence": "97.42%", "box": [1642, 978, 1780, 1198]}, {"type": "dining table", "confidence": "95.78%", "box": [636, 1088, 2370, 2262]}, {"type": "person", "confidence": "94.44%", "box": [22, 718, 640, 2292]}, {"type": "person", "confidence": "93.08%", "box": [408, 1002, 1254, 2016]}, {"type": "cup", "confidence": "92.57%", "box": [1918, 1272, 2110, 1518]}, {"type": "cup", "confidence": "90.04%", "box": [1384, 1768, 1564, 2044]}, {"type": "bowl", "confidence": "83.41%", "box": [882, 1760, 1134, 2024]}, {"type": "person", "confidence": "82.64%", "box": [2112, 984, 2508, 1946]}, {"type": "cup", "confidence": "50.14%", "box": [1854, 1520, 2072, 1752]}]
```

Face detection for the same image above:

```
curl -H "Authorization: Bearer ${ACCESS_TOKEN}" -XPOST -F"file=@IMG_1833.JPG" "http://localhost:5000/api/v1/detect/object?type=face"
```

returns:

```
[{"type": "face", "confidence": "52.87%", "box": [904, 1037, 1199, 1337]}]
```

Object detection on a live ZoneMinder feed
(note that ampersands have to be escaped as `%26` when passed as a data parameter):

```
curl -XPOST "http://localhost:5000/api/v1/detect/object?delete=false" \
  -d "url=https://demo.zoneminder.com/cgi-bin-zm/nph-zms?mode=single%26maxfps=5%26buffer=1000%26monitor=18%26user=zmuser%26pass=zmpass" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}"
```

returns:

```
[{"type": "bear", "confidence": "99.40%", "box": [6, 184, 352, 436]}, {"type": "bear", "confidence": "72.66%", "box": [615, 219, 659, 279]}]
```

Note that the server stores the images and the objects detected inside its `images/` folder. If you want the server to delete them after analysis, add `&delete=true` to the query parameters.


Live Streams or Recorded Video Files
======================================
This is an image-based object detection API. If you want to pass a video file or live stream,
take a look at the full example below.


Full Example
=============
Take a look at [stream.py](https://github.com/ZoneMinder/mlapi/blob/master/examples/stream.py). This program reads any media source and/or webcam and invokes detection via the API gateway.
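
For a rough idea of what such a client does, here is a simplified sketch (not the actual `stream.py`). It samples frames from a video file or webcam with OpenCV and posts each sampled frame to the `/detect/object` endpoint shown earlier; the host, credentials and one-frame-per-second sampling rate are assumptions you should adjust.

```
# Simplified sketch (not the real examples/stream.py): sample frames from a
# video source and send each sampled frame to mlapi for object detection.
import cv2
import requests

BASE = "http://localhost:5000/api/v1"

token = requests.post(f"{BASE}/login",
                      json={"username": "<user>", "password": "<password>"}).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

cap = cv2.VideoCapture("video.mp4")          # or cv2.VideoCapture(0) for a webcam
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 25   # fall back if the source reports no FPS
frame_no = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_no % fps == 0:                  # roughly one detection per second
        ok, jpg = cv2.imencode(".jpg", frame)
        if ok:
            resp = requests.post(f"{BASE}/detect/object", headers=headers,
                                 params={"delete": "true"},
                                 files={"file": ("frame.jpg", jpg.tobytes(), "image/jpeg")})
            print(frame_no, resp.json())
    frame_no += 1

cap.release()
```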