{"id":13405534,"url":"https://github.com/obss/sahi","last_synced_at":"2025-05-14T08:05:16.456Z","repository":{"id":36998222,"uuid":"334412452","full_name":"obss/sahi","owner":"obss","description":"Framework agnostic sliced/tiled inference + interactive ui + error analysis plots","archived":false,"fork":false,"pushed_at":"2025-05-10T21:51:59.000Z","size":107363,"stargazers_count":4537,"open_issues_count":14,"forks_count":648,"subscribers_count":48,"default_branch":"main","last_synced_at":"2025-05-14T08:04:23.618Z","etag":null,"topics":["coco","computer-vision","deep-learning","explainable-ai","fiftyone","huggingface","instance-segmentation","large-image","machine-learning","merge","mmdetection","object-detection","oriented-object-detection","python","pytorch","remote-sensing","satellite","small-object-detection","tiling","yolo11"],"latest_commit_sha":null,"homepage":"https://ieeexplore.ieee.org/document/9897990","language":"Python","has_issues":false,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/obss.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2021-01-30T12:54:53.000Z","updated_at":"2025-05-14T07:42:06.000Z","dependencies_parsed_at":"2023-01-17T12:33:15.401Z","dependency_job_id":"202e8446-8e63-4540-b579-ec29095a1fcd","html_url":"https://github.com/obss/sahi","commit_stats":{"total_commits":481,"total_committers":38,"mean_commits":"12.657894736842104","dds":"0.15384615384615385","last_synced_commit":"d91e1e6a1e62f966540abe7471f88bf792de0f93"},"previous_names":[],"tags_count":95,"template":false,"template_full_name":null,"
repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/obss%2Fsahi","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/obss%2Fsahi/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/obss%2Fsahi/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/obss%2Fsahi/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/obss","download_url":"https://codeload.github.com/obss/sahi/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254101588,"owners_count":22014907,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["coco","computer-vision","deep-learning","explainable-ai","fiftyone","huggingface","instance-segmentation","large-image","machine-learning","merge","mmdetection","object-detection","oriented-object-detection","python","pytorch","remote-sensing","satellite","small-object-detection","tiling","yolo11"],"created_at":"2024-07-30T19:02:04.836Z","updated_at":"2025-05-14T08:05:16.431Z","avatar_url":"https://github.com/obss.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\u003ch1\u003e\n  SAHI: Slicing Aided Hyper Inference\n\u003c/h1\u003e\n\n\u003ch4\u003e\n  A lightweight vision library for performing large scale object detection \u0026 instance segmentation\n\u003c/h4\u003e\n\n\u003ch4\u003e\n    \u003cimg width=\"700\" alt=\"teaser\" src=\"https://raw.githubusercontent.com/obss/sahi/main/resources/sliced_inference.gif\"\u003e\n\u003c/h4\u003e\n\n\u003cdiv\u003e\n    \u003ca 
href=\"https://pepy.tech/project/sahi\"\u003e\u003cimg src=\"https://pepy.tech/badge/sahi\" alt=\"downloads\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://pepy.tech/project/sahi\"\u003e\u003cimg src=\"https://pepy.tech/badge/sahi/month\" alt=\"downloads\"\u003e\u003c/a\u003e\n    \u003cbr\u003e\n    \u003ca href=\"https://badge.fury.io/py/sahi\"\u003e\u003cimg src=\"https://badge.fury.io/py/sahi.svg\" alt=\"pypi version\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://anaconda.org/conda-forge/sahi\"\u003e\u003cimg src=\"https://anaconda.org/conda-forge/sahi/badges/version.svg\" alt=\"conda version\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/obss/sahi/actions/workflows/ci.yml\"\u003e\u003cimg src=\"https://github.com/obss/sahi/actions/workflows/ci.yml/badge.svg\" alt=\"Continuous Integration\"\u003e\u003c/a\u003e\n  \u003cbr\u003e\n    \u003ca href=\"https://ieeexplore.ieee.org/document/9897990\"\u003e\u003cimg src=\"https://img.shields.io/badge/DOI-10.1109%2FICIP46576.2022.9897990-orange.svg\" alt=\"DOI\"\u003e\u003c/a\u003e\n  \u003cbr\u003e\n    \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_ultralytics.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://huggingface.co/spaces/fcakyon/sahi-yolox\"\u003e\u003cimg src=\"https://raw.githubusercontent.com/obss/sahi/main/resources/hf_spaces_badge.svg\" alt=\"HuggingFace Spaces\"\u003e\u003c/a\u003e\n\n\u003c/div\u003e\n\u003c/div\u003e\n\n## \u003cdiv align=\"center\"\u003eOverview\u003c/div\u003e\n\nObject detection and instance segmentation are among the most widely used applications in computer vision. However, detecting small objects and running inference on large images remain challenging in practice. 
SAHI helps developers overcome these real-world problems with its many vision utilities.\n\n| Command  | Description  |\n|---|---|\n| [predict](https://github.com/obss/sahi/blob/main/docs/cli.md#predict-command-usage)  | perform sliced/standard video/image prediction using any [ultralytics](https://github.com/ultralytics/ultralytics)/[mmdet](https://github.com/open-mmlab/mmdetection)/[huggingface](https://huggingface.co/models?pipeline_tag=object-detection\u0026sort=downloads)/[torchvision](https://pytorch.org/vision/stable/models.html#object-detection) model |\n| [predict-fiftyone](https://github.com/obss/sahi/blob/main/docs/cli.md#predict-fiftyone-command-usage)  | perform sliced/standard prediction using any [ultralytics](https://github.com/ultralytics/ultralytics)/[mmdet](https://github.com/open-mmlab/mmdetection)/[huggingface](https://huggingface.co/models?pipeline_tag=object-detection\u0026sort=downloads)/[torchvision](https://pytorch.org/vision/stable/models.html#object-detection) model and explore results in [fiftyone app](https://github.com/voxel51/fiftyone) |\n| [coco slice](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-slice-command-usage)  | automatically slice COCO annotation and image files |\n| [coco fiftyone](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-fiftyone-command-usage)  | explore multiple prediction results on your COCO dataset with [fiftyone ui](https://github.com/voxel51/fiftyone) ordered by number of misdetections |\n| [coco evaluate](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-evaluate-command-usage)  | evaluate classwise COCO AP and AR for given predictions and ground truth |\n| [coco analyse](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-analyse-command-usage)  | calculate and export many error analysis plots |\n| [coco yolo](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-yolo-command-usage)  | automatically convert any COCO dataset to 
[ultralytics](https://github.com/ultralytics/ultralytics) format |\n\n## \u003cdiv align=\"center\"\u003eQuick Start Examples\u003c/div\u003e\n\n[📜 List of publications that cite SAHI (currently 300+)](https://scholar.google.com/scholar?hl=en\u0026as_sdt=2005\u0026sciodt=0,5\u0026cites=14065474760484865747\u0026scipsc=\u0026q=\u0026scisbd=1)\n\n[🏆 List of competition winners that used SAHI](https://github.com/obss/sahi/discussions/688)\n\n### Tutorials\n\n- [Introduction to SAHI](https://medium.com/codable/sahi-a-vision-library-for-performing-sliced-inference-on-large-images-small-objects-c8b086af3b80)\n\n- [Official paper](https://ieeexplore.ieee.org/document/9897990) (ICIP 2022 oral)\n\n- [Pretrained weights and ICIP 2022 paper files](https://github.com/fcakyon/small-object-detection-benchmark)\n\n- [2025 Video Tutorial](https://www.youtube.com/watch?v=ILqMBah5ZvI) (RECOMMENDED)\n\n- [Visualizing and Evaluating SAHI predictions with FiftyOne](https://voxel51.com/blog/how-to-detect-small-objects/)\n\n- ['Exploring SAHI' Research Article from 'learnopencv.com'](https://learnopencv.com/slicing-aided-hyper-inference/)\n\n- [Slicing Aided Hyper Inference Explained by Encord](https://encord.com/blog/slicing-aided-hyper-inference-explained/)\n\n- ['VIDEO TUTORIAL: Slicing Aided Hyper Inference for Small Object Detection - SAHI'](https://www.youtube.com/watch?v=UuOjJKxn-M8\u0026t=270s)\n\n- [Video inference support is live](https://github.com/obss/sahi/discussions/626)\n\n- [Kaggle notebook](https://www.kaggle.com/remekkinas/sahi-slicing-aided-hyper-inference-yv5-and-yx)\n\n- [Satellite object detection](https://blog.ml6.eu/how-to-detect-small-objects-in-very-large-images-70234bab0f98)\n\n- [Error analysis plots \u0026 evaluation](https://github.com/obss/sahi/discussions/622) (RECOMMENDED)\n\n- [Interactive result visualization and inspection](https://github.com/obss/sahi/discussions/624) (RECOMMENDED)\n\n- [COCO dataset 
conversion](https://medium.com/codable/convert-any-dataset-to-coco-object-detection-format-with-sahi-95349e1fe2b7)\n\n- [Slicing operation notebook](demo/slicing.ipynb)\n\n- `YOLOX` + `SAHI` demo: \u003ca href=\"https://huggingface.co/spaces/fcakyon/sahi-yolox\"\u003e\u003cimg src=\"https://raw.githubusercontent.com/obss/sahi/main/resources/hf_spaces_badge.svg\" alt=\"sahi-yolox\"\u003e\u003c/a\u003e\n\n- `YOLO12` + `SAHI` walkthrough: \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_ultralytics.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"sahi-yolo12\"\u003e\u003c/a\u003e (NEW)\n\n- `YOLO11-OBB` + `SAHI` walkthrough: \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_ultralytics.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"sahi-yolo11-obb\"\u003e\u003c/a\u003e (NEW)\n\n- `YOLO11` + `SAHI` walkthrough: \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_ultralytics.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"sahi-yolo11\"\u003e\u003c/a\u003e\n\n- `RT-DETR v2` + `SAHI` walkthrough: \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_huggingface.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"sahi-rtdetrv2\"\u003e\u003c/a\u003e (NEW)\n\n- `RT-DETR` + `SAHI` walkthrough: \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_rtdetr.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"sahi-rtdetr\"\u003e\u003c/a\u003e (NEW)\n\n- `DeepSparse` + `SAHI` walkthrough: \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_sparse_yolov5.ipynb\"\u003e\u003cimg 
src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"sahi-deepsparse\"\u003e\u003c/a\u003e\n\n- `HuggingFace` + `SAHI` walkthrough: \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_huggingface.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"sahi-huggingface\"\u003e\u003c/a\u003e\n\n- `YOLOv5` + `SAHI` walkthrough: \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_yolov5.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"sahi-yolov5\"\u003e\u003c/a\u003e\n\n- `MMDetection` + `SAHI` walkthrough: \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_mmdetection.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"sahi-mmdetection\"\u003e\u003c/a\u003e\n\n- `TorchVision` + `SAHI` walkthrough: \u003ca href=\"https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_torchvision.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"sahi-torchvision\"\u003e\u003c/a\u003e\n\n\u003ca href=\"https://huggingface.co/spaces/fcakyon/sahi-yolox\"\u003e\u003cimg width=\"600\" src=\"https://user-images.githubusercontent.com/34196005/144092739-c1d9bade-a128-4346-947f-424ce00e5c4f.gif\" alt=\"sahi-yolox\"\u003e\u003c/a\u003e\n\n### Installation\n\n\u003cimg width=\"700\" alt=\"sahi-installation\" src=\"https://user-images.githubusercontent.com/34196005/149311602-b44e6fe1-f496-40f2-a7ae-5ea1f66e1550.gif\"\u003e\n\n\u003cdetails closed\u003e\n\u003csummary\u003e\n\u003cbig\u003e\u003cb\u003eInstallation details:\u003c/b\u003e\u003c/big\u003e\n\u003c/summary\u003e\n\n- Install `sahi` using pip:\n\n```console\npip install sahi\n```\n\n- On Windows, `Shapely` needs to be installed via Conda:\n\n```console\nconda install -c 
conda-forge shapely\n```\n\n- Install your desired versions of PyTorch and torchvision:\n\n```console\npip install torch==2.6.0 torchvision==0.21.0 --index-url https://download.pytorch.org/whl/cu126\n```\n\nNote that torch 2.1.2 is required for mmdet support:\n\n```console\npip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu121\n```\n\n- Install your desired detection framework (yolov5):\n\n```console\npip install yolov5==7.0.14 sahi==0.11.21\n```\n\n- Install your desired detection framework (ultralytics):\n\n```console\npip install \"ultralytics\u003e=8.3.86\"\n```\n\n- Install your desired detection framework (mmdet):\n\n```console\npip install -U openmim\nmim install mmdet==3.3.0\n```\n\n- Install your desired detection framework (huggingface):\n\n```console\npip install \"transformers\u003e=4.42.0\" timm\n```\n\n\u003c/details\u003e\n\n### Framework Agnostic Sliced/Standard Prediction\n\n\u003cimg width=\"700\" alt=\"sahi-predict\" src=\"https://user-images.githubusercontent.com/34196005/149310540-e32f504c-6c9e-4691-8afd-59f3a1a457f0.gif\"\u003e\n\nFind detailed info on the `sahi predict` command at [cli.md](docs/cli.md#predict-command-usage).\n\nFind detailed info on video inference at [video inference tutorial](https://github.com/obss/sahi/discussions/626).\n\nFind detailed info on image/dataset slicing utilities at [slicing.md](docs/slicing.md).\n\n### Error Analysis Plots \u0026 Evaluation\n\n\u003cimg width=\"700\" alt=\"sahi-analyse\" src=\"https://user-images.githubusercontent.com/34196005/149537858-22b2e274-04e8-4e10-8139-6bdcea32feab.gif\"\u003e\n\nFind detailed info at [Error Analysis Plots \u0026 Evaluation](https://github.com/obss/sahi/discussions/622).\n\n### Interactive Visualization \u0026 Inspection\n\n\u003cimg width=\"700\" alt=\"sahi-fiftyone\" src=\"https://user-images.githubusercontent.com/34196005/149321540-e6ddd5f3-36dc-4267-8574-a985dd0c6578.gif\"\u003e\n\nFind detailed info at [Interactive Result Visualization and 
Inspection](https://github.com/obss/sahi/discussions/624).\n\n### Other utilities\n\nFind detailed info on COCO utilities (yolov5 conversion, slicing, subsampling, filtering, merging, splitting) at [coco.md](docs/coco.md).\n\n## \u003cdiv align=\"center\"\u003eCitation\u003c/div\u003e\n\nIf you use this package in your work, please cite it as:\n\n```bibtex\n@inproceedings{akyon2022sahi,\n  title={Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection},\n  author={Akyon, Fatih Cagatay and Altinuc, Sinan Onur and Temizel, Alptekin},\n  booktitle={2022 IEEE International Conference on Image Processing (ICIP)},\n  doi={10.1109/ICIP46576.2022.9897990},\n  pages={966-970},\n  year={2022}\n}\n```\n\n```bibtex\n@software{obss2021sahi,\n  author       = {Akyon, Fatih Cagatay and Cengiz, Cemil and Altinuc, Sinan Onur and Cavusoglu, Devrim and Sahin, Kadir and Eryuksel, Ogulcan},\n  title        = {{SAHI: A lightweight vision library for performing large scale object detection and instance segmentation}},\n  month        = nov,\n  year         = 2021,\n  publisher    = {Zenodo},\n  doi          = {10.5281/zenodo.5718950},\n  url          = {https://doi.org/10.5281/zenodo.5718950}\n}\n```\n\n## \u003cdiv align=\"center\"\u003eContributing\u003c/div\u003e\n\n### Add new frameworks\n\nThe `sahi` library currently supports all [Ultralytics (YOLOv8/v10/v11/RTDETR) models](https://github.com/ultralytics/ultralytics), [MMDetection models](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/model_zoo.md), [Detectron2 models](https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md), and [HuggingFace object detection models](https://huggingface.co/models?pipeline_tag=object-detection\u0026sort=downloads). 
Moreover, it is easy to add new frameworks.\n\nAll you need to do is create a new .py file under the [sahi/models/](https://github.com/obss/sahi/tree/main/sahi/models) folder and define a class in it that implements the [DetectionModel class](https://github.com/obss/sahi/blob/aaeb57c39780a5a32c4de2848e54df9a874df58b/sahi/models/base.py#L12). You can take the [MMDetection wrapper](https://github.com/obss/sahi/blob/aaeb57c39780a5a32c4de2848e54df9a874df58b/sahi/models/mmdet.py#L91) or [YOLOv5 wrapper](https://github.com/obss/sahi/blob/7e48bdb6afda26f977b763abdd7d8c9c170636bd/sahi/models/yolov5.py#L17) as a reference.\n\n### Open a Pull Request\n\n- Install the [uv package manager](https://docs.astral.sh/uv/) on your system.\n- Install [pre-commit](https://pre-commit.com/) hooks with `uv run pre-commit install`.\n\n## \u003cdiv align=\"center\"\u003eContributors\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/fcakyon\" target=\"_blank\"\u003eFatih Cagatay Akyon\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/sinanonur\" target=\"_blank\"\u003eSinan Onur Altinuc\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/devrimcavusoglu\" target=\"_blank\"\u003eDevrim Cavusoglu\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/cemilcengiz\" target=\"_blank\"\u003eCemil Cengiz\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/oulcan\" target=\"_blank\"\u003eOgulcan Eryuksel\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/kadirnar\" target=\"_blank\"\u003eKadir Nar\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/Dronakurl\" target=\"_blank\"\u003eDronakurl\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/madenburak\" target=\"_blank\"\u003eBurak Maden\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/PushpakBhoge\" target=\"_blank\"\u003ePushpak Bhoge\u003c/a\u003e\n\n\u003ca 
align=\"left\" href=\"https://github.com/mcvarer\" target=\"_blank\"\u003eM. Can V.\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/ChristofferEdlund\" target=\"_blank\"\u003eChristoffer Edlund\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/ishworii\" target=\"_blank\"\u003eIshwor\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/mecevit\" target=\"_blank\"\u003eMehmet Ecevit\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/ssahinnkadir\" target=\"_blank\"\u003eKadir Sahin\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/weypro\" target=\"_blank\"\u003eWey\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/youngjae-avikus\" target=\"_blank\"\u003eYoungjae\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/tureckova\" target=\"_blank\"\u003eAlzbeta Tureckova\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/s-aiueo32\" target=\"_blank\"\u003eSo Uchida\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/developer0hye\" target=\"_blank\"\u003eYonghye Kwon\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/aphilas\" target=\"_blank\"\u003eNeville\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/mayrajeo\" target=\"_blank\"\u003eJanne Mäyrä\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/christofferedlund\" target=\"_blank\"\u003eChristoffer Edlund\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/ilkermanap\" target=\"_blank\"\u003eIlker Manap\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/nguyenthean\" target=\"_blank\"\u003eNguyễn Thế An\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/weiji14\" target=\"_blank\"\u003eWei Ji\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/aynursusuz\" target=\"_blank\"\u003eAynur Susuz\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/pranavdurai10\" 
target=\"_blank\"\u003ePranav Durai\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/lakshaymehra\" target=\"_blank\"\u003eLakshay Mehra\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/karl-joan\" target=\"_blank\"\u003eKarl-Joan Alesma\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/jacobmarks\" target=\"_blank\"\u003eJacob Marks\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/williamlung\" target=\"_blank\"\u003eWilliam Lung\u003c/a\u003e\n\n\u003ca align=\"left\" href=\"https://github.com/amoghdhaliwal\" target=\"_blank\"\u003eAmogh Dhaliwal\u003c/a\u003e\n\n\u003c/div\u003e\n","funding_links":[],"categories":["Python","Computer Vision","Object Detection Applications","Repos"],"sub_categories":["Classification \u0026 Detection \u0026 Tracking"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fobss%2Fsahi","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fobss%2Fsahi","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fobss%2Fsahi/lists"}
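The README embedded in this record centers on sliced/tiled inference but never spells out the tiling arithmetic itself. Below is a minimal, dependency-free sketch of how overlapping slice windows can be computed from `slice_height`/`slice_width` and overlap ratios (the parameter names mirror those used in the repository's slicing docs). The function name `get_slice_boxes` and the border-clamping behaviour are illustrative assumptions, not sahi's exact implementation; see `docs/slicing.md` in the repository for the real API.

```python
def get_slice_boxes(image_height, image_width, slice_height=512, slice_width=512,
                    overlap_height_ratio=0.2, overlap_width_ratio=0.2):
    """Illustrative sketch (not sahi's code): compute [x_min, y_min, x_max, y_max]
    slice windows that tile an image with the requested overlap.

    Windows advance by the slice size minus the overlap; windows that would
    run past the image edge are shifted back so every slice stays in-bounds.
    """
    y_step = int(slice_height * (1 - overlap_height_ratio))
    x_step = int(slice_width * (1 - overlap_width_ratio))
    boxes = []
    y = 0
    while y < image_height:
        y_max = y + slice_height
        if y_max > image_height:
            # shift the last row back inside the image (clamp to 0 for small images)
            y = max(image_height - slice_height, 0)
            y_max = image_height
        x = 0
        while x < image_width:
            x_max = x + slice_width
            if x_max > image_width:
                # shift the last column back inside the image
                x = max(image_width - slice_width, 0)
                x_max = image_width
            boxes.append([x, y, x_max, y_max])
            if x_max >= image_width:
                break
            x += x_step
        if y_max >= image_height:
            break
        y += y_step
    return boxes
```

For a 1000x1000 image with 512-pixel slices and 0.2 overlap, this yields a 3x3 grid of windows whose last row and column are shifted back to end exactly at the image border; images smaller than the slice size produce a single full-image window.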