Object Detection Inference Interface (ODII) : A common interface for object detection model inference
- Host: GitHub
- URL: https://github.com/girinchutia/object-detection-inference-interface
- Owner: GirinChutia
- License: gpl-3.0
- Created: 2024-08-10T10:53:00.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-08-10T11:37:25.000Z (about 1 year ago)
- Last Synced: 2025-02-08T07:11:11.860Z (8 months ago)
- Topics: artificial-intelligence, artificial-intelligence-framework, computer-vision, inference, inference-engine, object-detection, odii, python, python3, vision, yolo, yolov3, yolov6, yolov7, yolox
- Language: Jupyter Notebook
- Homepage:
- Size: 2.09 MB
- Stars: 1
- Watchers: 1
- Forks: 1
- Open Issues: 2
Metadata Files:
- Readme: readme.md
- License: LICENSE
README
# Object Detection Inference Interface (ODII)
ODII is a Python package that provides a unified, streamlined interface for running inference with multiple object detection models.
ODII facilitates seamless interaction with a range of popular models, including YOLOX, YOLOv3, YOLOv4, YOLOv6, and YOLOv7, without the need to manage multiple codebases or installation processes.

## ✨ Features
- 🔗 **Unified Interface**: Interact with multiple object detection models using a single, easy-to-use interface.
- 🧹 **Reduced Boilerplate**: Simplifies the setup process by handling the installation of multiple models with varying instructions.
- 📉 **Lower Learning Curve**: Minimizes the complexity of understanding and writing inference code, making it easier to work with different models.
- 🔌 **Extensibility**: Easily extend the interface to support additional object detection models.

## 📦 Supported Models
- YOLOX : https://github.com/Megvii-BaseDetection/YOLOX
- YOLOv3 : https://github.com/eriklindernoren/PyTorch-YOLOv3
- YOLOv4 : https://github.com/Tianxiaomo/pytorch-YOLOv4
- YOLOv6 : https://github.com/meituan/YOLOv6
- YOLOv7 : https://github.com/WongKinYiu/yolov7

## 📦 Reference for COCO Pretrained Weights
- [YOLOX](src/odii/yolox/readme.md)
- [YOLOv3](src/odii/yolov3/readme.md)
- [YOLOv4](src/odii/yolov4/readme.md)
- [YOLOv6](src/odii/yolov6/readme.md)
- [YOLOv7](src/odii/yolov7/readme.md)

## 🛠️ Requirements
- Python >= 3.8
- pip >= 24.2

## 📥 Installation
1. **Install PyTorch**: Follow the instructions on the [PyTorch website](https://pytorch.org/get-started/locally/) to install the appropriate version of PyTorch for your system.
For example, using pip:
```bash
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
```

2. **Clone the Repository and Install Dependencies**:
```bash
git clone https://github.com/GirinChutia/Object-Detection-Inference-Interface.git
cd Object-Detection-Inference-Interface
python -m pip install -e .
```
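Before moving on, a quick import check can confirm the editable install worked. A minimal sketch, assuming the steps above completed without errors:

```python
# Sanity check: verify that PyTorch and ODII import cleanly.
import torch
import odii  # the package installed by `pip install -e .`

print(torch.__version__)          # should report the version you installed
print(torch.cuda.is_available())  # True if the CUDA build can see a GPU
```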
## 🛠️ Usage

Here is an example of how to use ODII to run inference on an image:
```python
from odii import INFERENCE, plot_results, load_classes, load_yaml

# Load the class names
classnames = load_classes('coco.names')  # ['person', 'bicycle', 'car', ...]

# Set the model paths & configs
# (COCO pretrained weights can be downloaded from the links in the
# "Reference for COCO Pretrained Weights" section)
model_config = {'yolov7': {'weights': 'weights/yolov7/yolov7.pt',
                           'config': None},
                'yolov4': {'weights': 'weights/yolov4/yolov4.weights',
                           'config': 'weights/yolov4/yolov4.cfg'}}
# Set the device
device = 'cuda'

# Input image path
image_path = 'tests/images/test_image.jpg'

# --- Infer with the YOLOv7 model ---
model_name = 'yolov7'
INF = INFERENCE(model_name=model_name,
                device=device,
                model_paths={'weights': model_config[model_name]['weights'],
                             'config': model_config[model_name]['config']})

yolov7_result = INF.infer_image(image_path=image_path,
                                confidence_threshold=0.4,
                                nms_threshold=0.4)

# --- Infer with the YOLOv4 model ---
model_name = 'yolov4'
INF = INFERENCE(model_name=model_name,
                device=device,
                model_paths={'weights': model_config[model_name]['weights'],
                             'config': model_config[model_name]['config']})

yolov4_result = INF.infer_image(image_path=image_path,
                                confidence_threshold=0.4,
                                nms_threshold=0.4)
```
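The snippet above imports plot_results for visualization, but its signature is not documented in this README, so the sketch below draws the detections directly with OpenCV instead. It assumes boxes are (x1, y1, x2, y2) pixel coordinates and reuses image_path, yolov7_result, and classnames from the example:

```python
import cv2

# Draw each detection on the input image and save the result.
image = cv2.imread(image_path)
for box, score, class_id in zip(yolov7_result['boxes'],
                                yolov7_result['scores'],
                                yolov7_result['classes']):
    x1, y1, x2, y2 = box  # assumed (x1, y1, x2, y2) pixel coordinates
    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(image, f'{classnames[class_id]} {score:.2f}', (x1, max(y1 - 5, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imwrite('result.jpg', image)
```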
More details for inference can be found in this notebook: [inference_demo.ipynb](inference_demo.ipynb)

## 📊 Results Format
The inference results are returned as a dictionary with the following format:

```python
{
'boxes': [
[74, 11, 900, 613],
[77, 149, 245, 361],
[560, 359, 737, 565],
[139, 38, 414, 610]
],
'scores': [
0.8257260322570801,
0.8446129560470581,
0.8616959452629089,
0.9366706013679504
],
'classes': [2, 16, 28, 0]
}
```
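Since `boxes`, `scores`, and `classes` are parallel lists, one record per detection can be recovered by zipping them. A minimal sketch using the example values above (scores truncated); the (x1, y1, x2, y2) box layout is an assumption:

```python
result = {
    'boxes': [[74, 11, 900, 613], [77, 149, 245, 361],
              [560, 359, 737, 565], [139, 38, 414, 610]],
    'scores': [0.8257, 0.8446, 0.8617, 0.9367],
    'classes': [2, 16, 28, 0],
}

# classnames[class_id] maps an index to its label, e.g. via load_classes('coco.names')
for box, score, class_id in zip(result['boxes'], result['scores'], result['classes']):
    x1, y1, x2, y2 = box  # assumed (x1, y1, x2, y2) pixel coordinates
    print(f'class={class_id} score={score:.2f} box=({x1}, {y1}, {x2}, {y2})')
```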
## 🙏 Acknowledgements

1. https://github.com/Megvii-BaseDetection/YOLOX
2. https://github.com/Tianxiaomo/pytorch-YOLOv4
3. https://github.com/meituan/YOLOv6
4. https://github.com/WongKinYiu/yolov7
5. https://github.com/eriklindernoren/PyTorch-YOLOv3