
# object_detection_tflite

Object Detection using TensorFlow Lite

## purpose of this repository

The purpose of this repository is to run object detection using TensorFlow Lite models on various media (images, videos, and streaming video).

If you want to get TensorFlow binaries of YOLO models, see [yolo_various_framework](https://github.com/tetutaro/yolo_various_framework).

## features of this repository

- Object detection on streaming video from a camera (MacBook built-in camera or RaspberryPi Camera Module)
- Object detection for pre-recorded videos and photos
- Fast object detection using the Google Coral Edge TPU
- You can use YOLO v3, v4 and v5
- Convert pre-trained weights to TensorFlow Lite binaries using [yolo_various_framework](https://github.com/tetutaro/yolo_various_framework)

### extras

- Face detection and age/gender estimation for detected faces
    - `face_agender.py`
- Image preprocessing and classification
- Motion detection and classification of moving objects
    - `motion_detect.py`
- Selective search and classification of found objects
    - `selective_detect.py`

## setup for object detection

- (Optional: RaspberryPi) prepare a RaspberryPi and a RaspberryPi Camera Module
- install the required Python packages
    - `> pip3 install -r requirements.txt`
- (Optional: RaspberryPi) install picamera
    - `> pip3 install picamera`
- install the TensorFlow Lite runtime (a minimal loading sketch follows this list)
    - cf. https://www.tensorflow.org/lite/guide/python
    - you can find the platform of your RaspberryPi with `> uname -a`
- (Optional: TPU) prepare a Google Coral Edge TPU USB Accelerator
- (Optional: TPU) install the Edge TPU runtime
    - cf. https://coral.ai/docs/accelerator/get-started/#1-install-the-edge-tpu-runtime
- (Optional: RaspberryPi & TPU) set up the Edge TPU
    - create a new file `/etc/udev/rules.d/99-edgetpu-accelerator.rules` with the following contents
      ```
      SUBSYSTEM=="usb",ATTRS{idVendor}=="1a6e",GROUP="plugdev"
      SUBSYSTEM=="usb",ATTRS{idVendor}=="18d1",GROUP="plugdev"
      ```
    - `> sudo usermod -aG plugdev pi` (if you use "pi", the default RaspberryPi account)
    - reboot the RaspberryPi
    - check that the Edge TPU is recognized by the RaspberryPi (`> lsusb`)
        - `ID 18d1:9302 Google Inc.`
- download the pretrained TensorFlow Lite weights
    - `> ./download_models.sh`
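
After installing the runtime, a quick way to confirm everything is wired up is to load any TensorFlow Lite binary. The following is a minimal sketch, assuming a placeholder model path; the delegate branch applies only if the Edge TPU runtime is installed:

```python
# Minimal sketch: load a TensorFlow Lite model, optionally on the Edge TPU.
# "models/your_model.tflite" is a placeholder path, not a file this repository ships.
from tflite_runtime.interpreter import Interpreter, load_delegate

USE_TPU = False  # set to True only if the Edge TPU runtime is installed
MODEL_PATH = "models/your_model.tflite"

if USE_TPU:
    # "libedgetpu.so.1" is the Linux name of the Edge TPU runtime library
    interpreter = Interpreter(
        model_path=MODEL_PATH,
        experimental_delegates=[load_delegate("libedgetpu.so.1")],
    )
else:
    interpreter = Interpreter(model_path=MODEL_PATH)

interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])  # e.g. [1, 300, 300, 3]
```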

### setup for YOLO

- convert pre-trained weights to TensorFlow Lite binaries using [yolo_various_framework](https://github.com/tetutaro/yolo_various_framework)
    - clone that repository
    - download and convert pre-trained weights according to the README of that repository
- (Optional) compile the TensorFlow Lite binaries for the Edge TPU
    - see [README of yolo_various_framework/weights](https://github.com/tetutaro/yolo_various_framework/blob/main/weights/README.md)
    - I don't recommend this because the compiled models are far too slow (most subgraphs of those models cannot be mapped onto the TPU)
    - for the same reason, I don't recommend using TensorFlow Lite models with full (int8) quantization
- copy the TensorFlow Lite binaries into the `models` directory (a sanity-check sketch follows this list)
    - `> cp [directory of yolo_various_framework]/weights/yolo/*.tflite models/.`
    - `> cp [directory of yolo_various_framework]/weights/yolov5/*.tflite models/.`
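
To verify that the copied binaries are readable, a short sketch like the following prints each model's input and output tensors (the filename is an assumed example; use one of the files you actually copied):

```python
# Sketch: inspect the tensors of a converted YOLO TensorFlow Lite binary.
# "models/yolov5s_fp32.tflite" is an assumed filename; substitute your own.
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="models/yolov5s_fp32.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])
```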

### setup for face detection and age/gender estimation

- see [README of agender](https://github.com/tetutaro/object_detection_tflite/blob/master/agender/README.md)

## usage: object detection

```
usage: detect.py [-h]
                 [--media MEDIA]
                 [--height HEIGHT]
                 [--width WIDTH]
                 [--hflip] [--vflip]
                 [--model {ssd,face,yolov3-tiny,yolov3,yolov3-spp,yolov4-tiny,yolov4,yolov5s,yolov5m,yolov5l,yolov5x}]
                 [--quant {fp32,fp16,int8,tpu}]
                 [--target TARGET]
                 [--iou-threshold IOU_THRESHOLD]
                 [--conf-threshold CONF_THRESHOLD]
                 [--fontsize FONTSIZE]
                 [--fastforward FASTFORWARD]

detect objects in various media

optional arguments:
  -h, --help            show this help message and exit
  --media MEDIA         filename of image/video
                        (if not set, use streaming video from camera)
  --height HEIGHT       camera image height
  --width WIDTH         camera image width
  --hflip               flip horizontally
  --vflip               flip vertically
  --model {ssd,face,yolov3-tiny,yolov3,yolov3-spp,yolov4-tiny,yolov4,yolov5s,yolov5m,yolov5l,yolov5x}
                        object detection model
  --quant {fp32,fp16,int8,tpu}
                        quantization mode (or use Edge TPU)
  --target TARGET       the target type of object to detect (default: all)
  --iou-threshold IOU_THRESHOLD
                        the IoU threshold of NMS
  --conf-threshold CONF_THRESHOLD
                        the confidence score threshold of NMS
  --fontsize FONTSIZE   fontsize to display
  --fastforward FASTFORWARD
                        frame interval for object detection
                        (default: 1 = detect every frame)
```
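
For example, a typical invocation to detect objects in a photo with YOLOv5s looks like this (the image filename is a placeholder; omit `--media` to use the streaming video from the camera):

```
> python3 detect.py --model yolov5s --quant fp32 --media dog.jpg
```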

### usage: face detection and age/gender estimation

```
usage: face_agender.py [-h]
                       [--media MEDIA]
                       [--height HEIGHT]
                       [--width WIDTH]
                       [--hflip]
                       [--vflip]
                       [--quant {fp32,tpu}]
                       [--target TARGET]
                       [--conf-threshold CONF_THRESHOLD]
                       [--fontsize FONTSIZE]
                       [--fastforward FASTFORWARD]

detect faces and estimate their age and gender

optional arguments:
  -h, --help            show this help message and exit
  --media MEDIA         filename of image/video
                        (if not set, use streaming video from camera)
  --height HEIGHT       camera image height
  --width WIDTH         camera image width
  --hflip               flip horizontally
  --vflip               flip vertically
  --quant {fp32,tpu}    quantization mode (or use Edge TPU)
  --target TARGET       the target type of object to detect (default: all)
  --conf-threshold CONF_THRESHOLD
                        the confidence score threshold of NMS
  --fontsize FONTSIZE   fontsize to display
  --fastforward FASTFORWARD
                        frame interval for object detection
                        (default: 1 = detect every frame)
```
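
For example, to run face detection and age/gender estimation on a pre-recorded video (the filename is a placeholder):

```
> python3 face_agender.py --quant fp32 --media family.mp4
```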

### usage: motion detection and object classification

```
usage: motion_detect.py [-h]
                        [--media MEDIA]
                        [--height HEIGHT]
                        [--width WIDTH]
                        [--hflip]
                        [--vflip]
                        [--model {mobilenet,bird,insect,plant}]
                        [--quant {fp32,tpu}]
                        [--target TARGET]
                        [--prob-threshold PROB_THRESHOLD]
                        [--fontsize FONTSIZE]
                        [--fastforward FASTFORWARD]

motion detection + image classification

optional arguments:
  -h, --help            show this help message and exit
  --media MEDIA         filename of image/video
                        (if not set, use streaming video from camera)
  --height HEIGHT       camera image height
  --width WIDTH         camera image width
  --hflip               flip horizontally
  --vflip               flip vertically
  --model {mobilenet,bird,insect,plant}
                        image classification model
  --quant {fp32,tpu}    quantization mode (or use Edge TPU)
  --target TARGET       the target type of object to detect (default: all)
  --prob-threshold PROB_THRESHOLD
                        the threshold of probability
  --fontsize FONTSIZE   fontsize to display
  --fastforward FASTFORWARD
                        frame interval for object detection
                        (default: 1 = detect every frame)
```
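
`motion_detect.py` combines a motion detector with an image classifier. As a rough illustration of the motion-detection stage (a generic OpenCV sketch, not the exact logic of this repository), background subtraction can flag moving regions that are then cropped and classified:

```python
# Illustration: find moving regions with OpenCV background subtraction.
# This is a generic sketch, not the exact implementation of motion_detect.py.
import cv2

cap = cv2.VideoCapture(0)  # camera 0; pass a filename to read a video file
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground (moving) pixels become white
    contours, _ = cv2.findContours(
        mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    for contour in contours:
        if cv2.contourArea(contour) > 500:  # ignore tiny, noisy regions
            x, y, w, h = cv2.boundingRect(contour)
            # each (x, y, w, h) box is a candidate crop to classify
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("motion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```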

### usage: selective search and object classification

```
usage: selective_detect.py [-h]
                           [--media MEDIA]
                           [--height HEIGHT]
                           [--width WIDTH]
                           [--hflip]
                           [--vflip]
                           [--model {mobilenet,bird,insect,plant}]
                           [--quant {fp32,tpu}]
                           [--target TARGET]
                           [--prob-threshold PROB_THRESHOLD]
                           [--iou-threshold IOU_THRESHOLD]
                           [--fontsize FONTSIZE]
                           [--fastforward FASTFORWARD]

selective search + image classification

optional arguments:
  -h, --help            show this help message and exit
  --media MEDIA         filename of image/video
                        (if not set, use streaming video from camera)
  --height HEIGHT       camera image height
  --width WIDTH         camera image width
  --hflip               flip horizontally
  --vflip               flip vertically
  --model {mobilenet,bird,insect,plant}
                        image classification model
  --quant {fp32,tpu}    quantization mode (or use Edge TPU)
  --target TARGET       the target type of object to detect (default: all)
  --prob-threshold PROB_THRESHOLD
                        the threshold of probability
  --iou-threshold IOU_THRESHOLD
                        the IoU threshold of NMS
  --fontsize FONTSIZE   fontsize to display
  --fastforward FASTFORWARD
                        frame interval for object detection
                        (default: 1 = detect every frame)
```
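
`selective_detect.py` proposes candidate regions with selective search and classifies each one. As an illustration of the region-proposal stage (a generic sketch, not this repository's exact code; it requires the `opencv-contrib-python` package):

```python
# Illustration: selective search region proposals with OpenCV contrib.
# Requires opencv-contrib-python; "input.jpg" is a placeholder filename.
import cv2

image = cv2.imread("input.jpg")
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(image)
ss.switchToSelectiveSearchFast()  # the "quality" variant is slower but finer
rects = ss.process()  # array of (x, y, w, h) candidate boxes

for x, y, w, h in rects[:100]:  # keep only the first 100 proposals
    # each box is a candidate crop to feed into the image classifier
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)

cv2.imwrite("proposals.jpg", image)
```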