
# Object Detection Inference Interface (ODII)

ODII is a Python package that provides a unified, streamlined interface for running inference with multiple object detection models.
ODII facilitates seamless interaction with a range of popular models, including YOLOX, YOLOv3, YOLOv4, YOLOv6, and YOLOv7, without the need to manage multiple codebases or installation processes.

## ✨ Features

- 🚀 **Unified Interface**: Interact with multiple object detection models using a single, easy-to-use interface.
- 🧹 **Reduced Boilerplate**: Simplifies the setup process by handling the installation of multiple models with varying instructions.
- 📚 **Lower Learning Curve**: Minimizes the complexity of understanding and writing inference code, making it easier to work with different models.
- 🔄 **Extensibility**: Easily extend the interface to support additional object detection models.
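
The extensibility point above can be pictured as an adapter pattern: each new model is wrapped so that every backend exposes the same inference call. The sketch below is hypothetical — the actual ODII extension API is not documented in this README — and its class and method names are illustrative assumptions modeled on the `INFERENCE` usage shown later.

```python
# Hypothetical sketch of the adapter pattern behind a unified detector
# interface. The real ODII extension API is not documented in this README;
# the class and method names below are illustrative assumptions.
class DetectorAdapter:
    """Base wrapper: every backend exposes the same infer_image signature."""

    def __init__(self, weights, config=None, device='cpu'):
        self.weights = weights
        self.config = config
        self.device = device

    def infer_image(self, image_path, confidence_threshold=0.4, nms_threshold=0.4):
        # A concrete adapter would load its backend model here and return
        # results in a common {'boxes', 'scores', 'classes'} dictionary.
        raise NotImplementedError
```

Adding support for a new model would then amount to subclassing the wrapper and implementing its single inference method.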

## 📦 Supported Models

- YOLOX : https://github.com/Megvii-BaseDetection/YOLOX
- YOLOv3 : https://github.com/eriklindernoren/PyTorch-YOLOv3
- YOLOv4 : https://github.com/Tianxiaomo/pytorch-YOLOv4
- YOLOv6 : https://github.com/meituan/YOLOv6
- YOLOv7 : https://github.com/WongKinYiu/yolov7

## 📦 Reference for COCO Pretrained Weights

- [YOLOX](src/odii/yolox/readme.md)
- [YOLOv3](src/odii/yolov3/readme.md)
- [YOLOv4](src/odii/yolov4/readme.md)
- [YOLOv6](src/odii/yolov6/readme.md)
- [YOLOv7](src/odii/yolov7/readme.md)

## ๐Ÿ› ๏ธ Requirements

- Python >= 3.8
- pip >= 24.2

## 📥 Installation

1. **Install PyTorch**: Follow the instructions on the [PyTorch website](https://pytorch.org/get-started/locally/) to install the appropriate version of PyTorch for your system.

For example, using pip:

```bash
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
```

2. **Clone the Repository and Install Dependencies**:

```bash
git clone https://github.com/GirinChutia/Object-Detection-Inference-Interface.git
cd Object-Detection-Inference-Interface
python -m pip install -e .
```

## ๐Ÿ› ๏ธ Usage

Here is an example of how to use ODII to run inference on an image:

```python
from odii import INFERENCE, plot_results, load_classes, load_yaml

# Load the class names
classnames = load_classes('coco.names')  # ['person', 'bicycle', 'car', ...]

# Set the model paths & configs
# (COCO pretrained weights can be downloaded from the links in the
# "Reference for COCO Pretrained Weights" section)
model_config = {
    'yolov7': {'weights': 'weights/yolov7/yolov7.pt',
               'config': None},
    'yolov4': {'weights': 'weights/yolov4/yolov4.weights',
               'config': 'weights/yolov4/yolov4.cfg'},
}

# Set the device
device = 'cuda'

# Input image path
image_path = 'tests/images/test_image.jpg'

# --- Run inference with the yolov7 model ---
model_name = 'yolov7'

INF = INFERENCE(model_name=model_name,
                device=device,
                model_paths={'weights': model_config[model_name]['weights'],
                             'config': model_config[model_name]['config']})

yolov7_result = INF.infer_image(image_path=image_path,
                                confidence_threshold=0.4,
                                nms_threshold=0.4)

# --- Run inference with the yolov4 model ---
model_name = 'yolov4'

INF = INFERENCE(model_name=model_name,
                device=device,
                model_paths={'weights': model_config[model_name]['weights'],
                             'config': model_config[model_name]['config']})

yolov4_result = INF.infer_image(image_path=image_path,
                                confidence_threshold=0.4,
                                nms_threshold=0.4)
```
More inference details can be found in the notebook [inference_demo.ipynb](inference_demo.ipynb).

## 📊 Results Format
The inference results are returned as a dictionary with the following format:

```python
{
    'boxes': [
        [74, 11, 900, 613],
        [77, 149, 245, 361],
        [560, 359, 737, 565],
        [139, 38, 414, 610]
    ],
    'scores': [
        0.8257260322570801,
        0.8446129560470581,
        0.8616959452629089,
        0.9366706013679504
    ],
    'classes': [2, 16, 28, 0]
}
```
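
Since `boxes`, `scores`, and `classes` are parallel lists, the detections can be consumed with a single `zip`. Here is a minimal sketch; the class-index-to-name mapping is an illustrative subset of the standard COCO label list (in practice, index into the list returned by `load_classes('coco.names')`).

```python
# A minimal sketch of consuming the ODII results dictionary shown above.
# The class-index -> name mapping is an illustrative subset of the
# standard COCO label list.
results = {
    'boxes': [
        [74, 11, 900, 613],
        [77, 149, 245, 361],
        [560, 359, 737, 565],
        [139, 38, 414, 610],
    ],
    'scores': [0.8257, 0.8446, 0.8617, 0.9367],
    'classes': [2, 16, 28, 0],
}

coco_names = {0: 'person', 2: 'car', 16: 'dog', 28: 'suitcase'}

detections = []
for box, score, cls in zip(results['boxes'], results['scores'], results['classes']):
    x1, y1, x2, y2 = box  # corner coordinates in pixels
    label = coco_names.get(cls, f'class_{cls}')
    detections.append((label, score, (x1, y1, x2, y2)))
    print(f"{label}: score={score:.2f}, box=({x1},{y1})-({x2},{y2})")
```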

## ๐Ÿ™ Acknowledgements

1. https://github.com/Megvii-BaseDetection/YOLOX
2. https://github.com/Tianxiaomo/pytorch-YOLOv4
3. https://github.com/meituan/YOLOv6
4. https://github.com/WongKinYiu/yolov7
5. https://github.com/eriklindernoren/PyTorch-YOLOv3