{"id":13603199,"url":"https://github.com/SthPhoenix/InsightFace-REST","last_synced_at":"2025-04-11T14:30:39.161Z","repository":{"id":38474253,"uuid":"202560774","full_name":"SthPhoenix/InsightFace-REST","owner":"SthPhoenix","description":"InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.","archived":false,"fork":false,"pushed_at":"2024-11-02T17:06:20.000Z","size":4653,"stargazers_count":497,"open_issues_count":38,"forks_count":116,"subscribers_count":19,"default_branch":"master","last_synced_at":"2024-11-02T18:19:00.186Z","etag":null,"topics":["adaface","arcface","centerface","docker","face-detection","face-recognition","fastapi","fp16","gpu","insightface","mask-detection","onnx","retinaface","scrfd","tensorrt","tensorrt-conversion","yolov5-face"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/SthPhoenix.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-08-15T14:55:43.000Z","updated_at":"2024-11-02T17:06:23.000Z","dependencies_parsed_at":"2023-12-23T12:42:31.436Z","dependency_job_id":"aa1e31c0-b800-44c4-a8b3-47015b68562b","html_url":"https://github.com/SthPhoenix/InsightFace-REST","commit_stats":null,"previous_names":[],"tags_count":6,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SthPhoenix%2FInsightFace-REST","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SthPhoenix%2FInsightFace-REST/tags","releases_url":"https://repos.e
cosyste.ms/api/v1/hosts/GitHub/repositories/SthPhoenix%2FInsightFace-REST/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SthPhoenix%2FInsightFace-REST/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/SthPhoenix","download_url":"https://codeload.github.com/SthPhoenix/InsightFace-REST/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248419648,"owners_count":21100213,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["adaface","arcface","centerface","docker","face-detection","face-recognition","fastapi","fp16","gpu","insightface","mask-detection","onnx","retinaface","scrfd","tensorrt","tensorrt-conversion","yolov5-face"],"created_at":"2024-08-01T18:01:56.980Z","updated_at":"2025-04-11T14:30:39.151Z","avatar_url":"https://github.com/SthPhoenix.png","language":"Python","readme":"# InsightFace-REST\n\n\u003e WARNING: The latest update may cause trouble with previously compiled Numba functions.\n\u003e If you encounter any errors concerning 'modules not found', run the following command in the repo root to remove `__pycache__`:\n\u003e \n\u003e `find . 
| grep -E \"(__pycache__|\\.pyc$)\" | sudo xargs rm -rf`\n\nThis repository aims to provide a convenient, easily deployable and scalable\nREST API for the InsightFace face detection and recognition pipeline, using\nFastAPI for serving and NVIDIA TensorRT for optimized inference.\n\nThe code is heavily based on the API\n[code](https://github.com/deepinsight/insightface/tree/master/python-package)\nin the official DeepInsight InsightFace\n[repository](https://github.com/deepinsight/insightface).\n\nThis repository provides source code for building a face recognition REST\nAPI and converting models to ONNX and TensorRT using Docker.\n\n![Draw detections example](misc/images/draw_detections.jpg)\n\n\n## Key features:\n\n- Ready for deployment on NVIDIA GPU enabled systems using Docker and\n  nvidia-docker2.\n- Automatic model download at startup (using Google Drive).\n- Up to 3x performance boost over MXNet inference with the help of TensorRT\n  optimizations, FP16 inference and batch inference of detected faces\n  with the ArcFace model.\n- Support for older RetinaFace detectors and MXNet based ArcFace models,\n  as well as newer `SCRFD` detectors and PyTorch based recognition models (`glintr100`, `w600k_r50`, `w600k_mbf`).\n- Up to 2x faster `SCRFD` postprocessing implementation.\n- Batch inference supported for both recognition and detection models\n  (currently `SCRFD` family only).\n- Inference on CPU with ONNX Runtime.\n\n## Contents\n\n[List of supported models](#list-of-supported-models)\n- [Detection](#detection)\n- [Recognition](#recognition)\n- [Other](#other)\n\n[Requirements](#requirements)\n\n[Running with Docker](#running-with-docker)\n\n[API usage](#api-usage)\n- [/extract](#extract-endpoint)\n\n[Work in progress](#work-in-progress)\n\n[Known issues](#known-issues)\n\n[Changelog](#changelog)\n\n\n## List of supported models:\n\n### Detection:\n\n\n| Model                 | Auto download | Batch inference | Detection (ms) | Inference (ms) | GPU-Util (%) | Source                      
|  ONNX File   |\n|-----------------------|:-------------:|:---------------:|:--------------:|:--------------:|:------------:|:----------------------------|:------------:|\n| retinaface_r50_v1     |     Yes*      |                 |      12.3      |      8.4       |      26      | [official package][1]       | [link][dl1]  |\n| retinaface_mnet025_v1 |     Yes*      |                 |      8.6       |      4.6       |      17      | [official package][1]       | [link][dl2]  |\n| retinaface_mnet025_v2 |     Yes*      |                 |      8.8       |      4.9       |      17      | [official package][1]       | [link][dl3]  |\n| mnet_cov2             |     Yes*      |                 |      8.7       |      4.6       |      18      | [mnet_cov2][2]              | [link][dl4]  |\n| centerface            |      Yes      |                 |      10.6      |      3.5       |      19      | [Star-Clouds/CenterFace][3] | [link][dl5]  |\n| scrfd_10g_bnkps       |     Yes*      |       Yes       |      3.3       |       2        |      16      | [SCRFD][4]                  | [link][dl6]  |\n| scrfd_2.5g_bnkps      |     Yes*      |       Yes       |      2.2       |      1.1       |      13      | [SCRFD][4]                  | [link][dl7]  |\n| scrfd_500m_bnkps      |     Yes*      |       Yes       |      1.9       |      0.8       |      13      | [SCRFD][4]                  | [link][dl15] |\n| scrfd_10g_gnkps       |     Yes*      |       Yes       |      3.3       |      2.2       |      17      | [SCRFD][4]**                | [link][dl16] |\n| scrfd_2.5g_gnkps      |     Yes*      |       Yes       |      2.3       |      1.2       |      14      | [SCRFD][4]**                | [link][dl17] |\n| scrfd_500m_gnkps      |     Yes*      |       Yes       |      2.1       |      1.3       |      14      | [SCRFD][4]**                | [link][dl18] |\n| yolov5s-face          |     Yes*      |       Yes       |                |                |              | 
[yolov5-face][10]           | [link][dl23] |\n| yolov5m-face          |     Yes*      |       Yes       |                |                |              | [yolov5-face][10]           | [link][dl24] |\n| yolov5l-face          |     Yes*      |       Yes       |                |                |              | [yolov5-face][10]           | [link][dl25] |\n\n\u003e Note: Performance metrics were measured on an NVIDIA RTX 2080 SUPER + Intel Core i7-5820K (3.3 GHz * 6 cores) for\n\u003e `api/src/test_images/lumia.jpg` with `force_fp16=True`, `det_batch_size=1` and `max_size=640,640`.\n\u003e \n\u003e Detection time includes inference, pre- and postprocessing, but does not include image reading, decoding and resizing.\n\n\u003e Note 2: SCRFD family models require input image shapes divisible by 32, e.g. 640x640 or 1024x768.\n\n### Recognition:\n\n| Model                    | Auto download | Batch inference | Inference b=1 (ms) | Inference b=64 (ms) | Source                 |  ONNX File   |\n|--------------------------|:-------------:|:---------------:|:------------------:|:-------------------:|:-----------------------|:------------:|\n| arcface_r100_v1          |     Yes*      |       Yes       |        2.6         |        54.8         | [official package][1]  | [link][dl8]  |\n| r100-arcface-msfdrop75   |      No       |       Yes       |         -          |          -          | [SubCenter-ArcFace][5] |     None     |\n| r50-arcface-msfdrop75    |      No       |       Yes       |         -          |          -          | [SubCenter-ArcFace][5] |     None     |\n| glint360k_r100FC_1.0     |      No       |       Yes       |         -          |          -          | [Partial-FC][6]        |     None     |\n| glint360k_r100FC_0.1     |      No       |       Yes       |         -          |          -          | [Partial-FC][6]        |     None     |\n| glintr100                |     Yes*      |       Yes       |        2.6         |        54.7         | [official package][1]  | 
[link][dl13] |\n| w600k_r50                |     Yes*      |       Yes       |        1.9         |        33.2         | [official package][1]  | [link][dl21] |\n| w600k_mbf                |     Yes*      |       Yes       |        0.7         |         9.9         | [official package][1]  | [link][dl22] |\n| adaface_ir101_webface12m |     Yes*      |       Yes       |         -          |          -          | [AdaFace repo][11]     | [link][dl26] |\n\n### Other:\n\n| Model            | Auto download | Inference code | Source                      |  ONNX File   |\n|------------------|:-------------:|:--------------:|:----------------------------|:------------:|\n| genderage_v1     |     Yes*      |      Yes       | [official package][1]       | [link][dl14] |\n| mask_detector    |     Yes*      |      Yes       | [Face-Mask-Detection][8]    | [link][dl19] |\n| mask_detector112 |     Yes*      |      Yes       | [Face-Mask-Detection][8]*** | [link][dl20] |\n| 2d106det         |      No       |       No       | [coordinateReg][9]          |     None     |\n\n`*` - Models will be downloaded from Google Drive, which might be inaccessible in some regions, such as China.\n\n`**` - custom models retrained for this repo. Original SCRFD models have a bug\n([deepinsight/insightface#1518](https://github.com/deepinsight/insightface/issues/1518)) with\ndetecting large faces occupying \u003e40% of the image. These models were retrained with Group Normalization instead of\nBatch Normalization, which fixes the bug, though at the cost of some accuracy. 
\n\nModel accuracy on the WiderFace benchmark:\n\n| Model               |  Easy   |   Medium   | Hard  |\n|:--------------------|:-------:|:----------:|:-----:|\n| scrfd_10g_gnkps     |  95.51  |   94.12    | 82.14 |\n| scrfd_2.5g_gnkps    |  93.57  |   91.70    | 76.08 |\n| scrfd_500m_gnkps    |  88.70  |   86.11    | 63.57 |\n\n`***` - custom model retrained for a 112x112 input size to remove excessive resize operations and\nimprove performance.\n\n\n[1]: https://github.com/deepinsight/insightface/tree/master/python-package\n[2]: https://github.com/deepinsight/insightface/tree/master/detection/RetinaFaceAntiCov\n[3]: https://github.com/Star-Clouds/CenterFace\n[4]: https://github.com/deepinsight/insightface/tree/master/detection/scrfd\n[5]: https://github.com/deepinsight/insightface/tree/master/recognition/SubCenter-ArcFace\n[6]: https://github.com/deepinsight/insightface/tree/master/recognition/partial_fc\n[7]: https://github.com/deepinsight/insightface/tree/master/recognition/arcface_torch\n[8]: https://github.com/chandrikadeb7/Face-Mask-Detection\n[9]: https://github.com/deepinsight/insightface/tree/master/alignment/coordinateReg\n[10]: https://github.com/deepcam-cn/yolov5-face\n[11]: https://github.com/mk-minchul/AdaFace\n\n[dl1]: https://drive.google.com/file/d/1peUaq0TtNBhoXUbMqsCyQdL7t5JuhHMH/view?usp=sharing\n[dl2]: https://drive.google.com/file/d/12H4TXtGlAr1boEGtUukteolpQ9wfUTWe/view?usp=sharing\n[dl3]: https://drive.google.com/file/d/1hzgOejAfCAB8WyfF24UkfiHD2FJbaCPi/view?usp=sharing\n[dl4]: https://drive.google.com/file/d/1xPc3n_Y0jKyBONRx71UqCfcHjOGOLc2g/view?usp=sharing\n[dl5]: https://drive.google.com/file/d/10tXAXhiq06VNdTAdYt5-pjkGn7zOFMk4/view?usp=sharing\n[dl6]: https://drive.google.com/file/d/1OAXx8U8SIsBhmYYGKmD-CLXrYz_YIV-3/view?usp=sharing\n[dl7]: https://drive.google.com/file/d/1qnKTHMkuoWsCJ6iJeiFExGy5PSi8JKPL/view?usp=sharing\n[dl8]: https://drive.google.com/file/d/1sj170K3rbo5iOdjvjHw-hKWvXgH4dld3/view?usp=sharing\n[dl13]: 
https://drive.google.com/file/d/1TR_ImGvuY7Dt22a9BOAUAlHasFfkrJp-/view?usp=sharing\n[dl14]: https://drive.google.com/file/d/1MnkqBzQHLlIaI7gEoa9dd6CeknXMCyZH/view?usp=sharing\n[dl15]: https://drive.google.com/file/d/13mY-c6NIShu_-4AdCo3Z3YIYja4HfNaA/view?usp=sharing\n[dl16]: https://drive.google.com/file/d/1v9nhtPWMLSedueeL6c3nJEoIFlSNSCvh/view?usp=sharing\n[dl17]: https://drive.google.com/file/d/1F__ILEeCTzeR71BAV-vInuyBezYmNMsB/view?usp=sharing\n[dl18]: https://drive.google.com/file/d/13OoTQlyDI2BkuA5oJUtuuvMlxvkM_-h7/view?usp=sharing\n[dl19]: https://drive.google.com/file/d/1RsQonthhpJDwwdcB0sYsVGMTqPgGdMGV/view?usp=sharing\n[dl20]: https://drive.google.com/file/d/1ghS0LEGV70Jdb5un5fVdDO-vmonVIe6Z/view?usp=sharing\n[dl21]: https://drive.google.com/file/d/1_3WcTE64Mlt_12PZHNWdhVCRpoPiblwq/view?usp=sharing\n[dl22]: https://drive.google.com/file/d/1GtBKfGucgJDRLHvGWR3jOQovHYXY-Lpe/view?usp=sharing\n[dl23]: https://drive.google.com/file/d/14Ah6jfXJ5QuzaN2OsKE-g61x3-_hBnQV/view?usp=sharing\n[dl24]: https://drive.google.com/file/d/1degIq0DEFML97PFvfpi-mMN8mfzRzy5z/view?usp=sharing\n[dl25]: https://drive.google.com/file/d/1PL52lvybe1nJU5k09twbfKNRWw904HgS/view?usp=sharing\n[dl26]: https://drive.google.com/file/d/1dgMFOASKnaujQcCL4sSYkKOkBrmXUUU1/view?usp=sharing\n\n## Requirements:\n\n1. Docker\n2. Nvidia-container-toolkit\n3. NVIDIA GPU drivers (470.x.x and above)\n\n\n## Running with Docker:\n\n1. Clone the repo.\n2. Execute `deploy_trt.sh` from the repo root, editing settings if needed.\n3. Go to http://localhost:18081 to access the documentation and try the API.\n\nIf you have multiple GPUs with enough GPU memory, you can try running\nmultiple containers by editing the *n_gpu* and *n_workers* parameters in\n`deploy_trt.sh`.\n\nBy default the container is configured to build TRT engines without FP16\nsupport; to enable it, change the value of `force_fp16` to `True` in\n`deploy_trt.sh`. 
Keep in mind that your GPU should support fast FP16\ninference (NVIDIA GPUs of the RTX 20xx series and above, or server GPUs like\nthe Tesla P100, T4, etc.).\n\nIf you want to test the API in a non-GPU environment, you can run the service\nwith the `deploy_cpu.sh` script. In this case ONNXRuntime will be used as the\ninference backend.\n\n\u003e For a pure MXNet based version without TensorRT support, check the\n\u003e deprecated\n\u003e [v0.5.0](https://github.com/SthPhoenix/InsightFace-REST/tree/v0.5.0)\n\u003e branch.\n\n\n## API usage:\n\nFor an example of API usage, please refer to the\n[demo_client.py](https://github.com/SthPhoenix/InsightFace-REST/blob/master/demo_client.py) code.\n\n\n\n## Work in progress:\n\n- Add examples of indexing and searching faces (powered by Milvus).\n- Add Triton Inference Server as an execution backend.\n\n\n## Known issues:\n\n- When the `glintr100` recognition model is used, the `genderage` model returns\n  wrong predictions.\n\n## Changelog:\n\n### 2021-11-06 v0.7.0.0\n\nSince a lot of updates have happened since the last release, the version jumps straight to v0.7.0.0.\n\nCompared to the previous release (v0.6.2.0), this release brings improved performance for SCRFD based detectors.\n\nHere is a performance comparison on an `Nvidia RTX 2080 Super` GPU for the `scrfd_10g_gnkps` detector paired with\nthe `glintr100` recognition model (all tests use `src/api_trt/test_images/Stallone.jpg`, 1 face per image):\n\n| Num workers | Client threads | FPS v0.6.2.0 | FPS v0.7.0.0 | Speed-up |\n|:-----------:|:--------------:|:------------:|:------------:|:--------:|\n|      1      |       1        |      56      |     103      |  83.9%   |\n|      1      |       30       |      72      |     128      |  77.7%   |\n|      6      |       30       |     145      |     179      |  23.4%   |\n\n\nAdditions:\n- Added experimental support for a msgpack serializer: reduces network traffic for embeddings by ~2x.\n- Output names are no longer required for detection models when building a TRT engine 
- the correct output order is now extracted\n  from the ONNX models.\n- Detection models can now be exported to a TRT engine with batch size \u003e 1. The inference code doesn't support it yet, though\n  they can now be used in Triton Inference Server without issues.\n\nModel Zoo:\n- Added support for WebFace600k based recognition models from the InsightFace repo: `w600k_r50` and `w600k_mbf`.\n- Added an md5 check for models to allow automatic re-download if models have changed.\n- All `scrfd` based models now support a batch dimension.\n\nImprovements:\n- 1.5x-2x faster SCRFD re-implementation with Numba: 4.5 ms vs 10 ms for the `lumia.jpg` example with\n  `scrfd_10g_gnkps` and threshold = 0.3 (432 faces detected).\n- Moved the image normalization step to the GPU with the help of CuPy (4x lower data transfer from CPU to GPU, about 6%\n  inference speedup, and some computations offloaded from the CPU).\n- 4.5x faster `face_align.norm_crop` implementation with the help of Numba and removal of unused computations\n  (cropping 432 faces from the `lumia.jpg` example took 45 ms. 
vs 205 ms.).\n- Face crops are now extracted only when needed - when face data or embeddings are requested - improving\n  detection-only performance.\n- Added a Numba njit cache to reduce subsequent startup times.\n- Logging timings are rounded to ms for better readability.\n- Minor refactoring.\n\nFixes:\n- Since the gender/age estimation model is currently not supported, it is excluded from the model preparation step.\n\n### 2021-09-09 v0.6.2.0\n\nREST-API\n- Use the async `httpx` lib for retrieving images by URLs instead of urllib3 (which caused\n  a performance drop in multi-GPU environments under load due to excessive usage of open sockets).\n- Update draft Triton Inference Server support to use CUDA shared memory.\n- Minor refactoring for a future change of project structure.\n\n### 2021-08-07 v0.6.1.0\n\nREST-API\n- Dropped support for the MXNet inference backend and automatic MXNet-\u003eONNX model conversion,\n  since all models are now distributed as ONNX by default.\n\n### 2021-06-16 v0.6.0.0\n\nREST-API\n- Added support for the newer InsightFace face detection SCRFD models:\n  `scrfd_500m_bnkps`, `scrfd_2.5g_bnkps`, `scrfd_10g_bnkps`\n- Released custom trained SCRFD models:\n  `scrfd_500m_gnkps`, `scrfd_2.5g_gnkps`, `scrfd_10g_gnkps`\n- Added support for the newer InsightFace face recognition model `glintr100`\n- Model auto download switched to Google Drive.\n- Default models switched to `glintr100` and `scrfd_10g_gnkps`\n\n### 2021-05-08 v0.5.9.9\n\nREST-API\n- Added JPEG decoding using PyTurboJPEG - increased decoding speed for large\n  JPEGs by about 2x.\n- Support for batch inference of the `genderage` model.\n- Support for limiting the number of faces for recognition using the `limit_faces` parameter\n  of the `extract` endpoint.\n- New `/multipart/draw_detections` endpoint, supporting image upload using multipart\n  form data.\n- Support for printing face sizes and scores on the image by the `draw_detections` endpoints.\n- More verbose timings for the `extract` endpoint for debug and logging 
purposes.\n\n### 2021-03-27 v0.5.9.8\n\nREST-API\n- Added v2 output format: more verbose and more suitable for logging.\n  Use `'api_ver':'2'` in the request body. In future versions this parameter\n  will be moved to the path, like `/v2/extract`, and will become the default output\n  format.\n\nREST-API \u0026 conversion scripts:\n- MXNet version in dockerfiles locked to 1.6.0, since version 1.8.0\n  causes a missing libopenblas.0 exception.\n\n### 2021-03-01 v0.5.9.7\n\nREST-API \u0026 conversion scripts:\n- Fixed an issue with building a TensorRT engine with batch \u003e 1 and FP16\n  support, which caused FP32 inference instead of FP16.\n- Moved to the tensorrt:21.02 base image and removed workarounds for the 20.12\n  image.\n- Changed the behaviour of the `force_fp16` flag. Now a model with FP16 precision\n  is built only when it is set to `True`. Otherwise FP32 will be used even on\n  GPUs with fast FP16 support.\n\n\n### 2021-03-01 v0.5.9.6\n\nREST-API:\n- Added flag `embed_only` to the `/extract` endpoint. When set to `true`,\n  input images are processed as face crops, omitting the detection phase.\n  Expects 112x112 face crops.\n- Added flag `draw_landmarks` to the `/draw_detections` endpoint.\n\n\n### 2021-02-13\n\nREST-API:\n- Added a Dockerfile for CPU-only inference with the ONNXRuntime backend.\n- Added a flag to return landmarks with the `/extract` endpoint.\n\n\n### 2020-12-26\n\nREST-API \u0026 conversion scripts:\n- Added support for the `glint360k_r100FC_1.0` and `glint360k_r100FC_0.1`\n  face recognition models.\n\n### 2020-12-26\n\nREST-API:\n- Base image updated to `TensorRT:20.12-py3`.\n- Added temporary fixes for TensorRT 7.2.2 ONNX parsing.\n- Added support for the `r50-arcface-msfdrop75` face recognition model.\n\nConversion scripts:\n- Same updates as for the REST-API.\n\n### 2020-12-06\n\nREST-API:\n- Added draft support for batch inference of the ArcFace model.\n\nConversion scripts:\n- Added draft support for batch inference of the ArcFace model.\n\n\n### 2020-11-20\n\nREST API:\n- Pure MXNet version removed from the master 
branch.\n- Added model bootstrapping before running workers, to prevent a race\n  condition when building the TRT engine.\n- Applied changes from conversion scripts (see below).\n\nConversion scripts:\n- Reshape ONNX models in memory to prevent writing temp files.\n- The TRT engine builder now takes the input name and shape, required for\n  building optimization profiles, from the ONNX model itself.\n\n### 2020-11-07\n\nConversion scripts:\n- Added support for building a TensorRT engine with batch input.\n- Added support for the RetinaFaceAntiCov model (mnet_cov2, must be manually\n  [downloaded](https://github.com/deepinsight/insightface/tree/master/detection/RetinaFaceAntiCov)\n  and unpacked to `models/mxnet/mnet_cov2`).\n\nREST API:\n- Added support for RetinaFaceAntiCov v2.\n- Added support for FP16 precision (`force_fp16` flag in\n  `deploy_trt.sh`).\n\n### 2020-10-22\n\nConversion scripts:\n- Minor refactoring.\n\nREST API:\n- Added a TensorRT version in `src/api_trt`.\n- Added a Dockerfile (`src/Dockerfile_trt`).\n- Added deployment script `deploy_trt.sh`.\n- Added the CenterFace detector.\n\nThe TensorRT version contains MXNet and ONNXRuntime compiled for CPU for\ntesting and conversion purposes.\n\n### 2020-10-16\n\nConversion scripts:\n- Added conversion of MXNet models to ONNX using Python.\n- Added conversion of ONNX to TensorRT using Python.\n- Added demo inference scripts for ArcFace and RetinaFace using the ONNX and\n  TensorRT backends.\n\nREST API:\n- No changes.\n\n### 2020-09-28\n\n- REST API code refactored to FastAPI.\n- Detection/recognition code is now based on the official InsightFace Python\n  package.\n- TensorFlow MTCNN replaced with the PyTorch version.\n- Added the RetinaFace detector.\n- Added the InsightFace gender/age detector.\n- Added support for GPU inference.\n- Resize function refactored for fixed image proportions (significant\n  speed increase and memory usage 
optimization)\n\n\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FSthPhoenix%2FInsightFace-REST","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FSthPhoenix%2FInsightFace-REST","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FSthPhoenix%2FInsightFace-REST/lists"}