https://github.com/automaticdai/rpi-object-detection
Real-time object detection and tracking with Raspberry Pi and OpenCV!
- Host: GitHub
- URL: https://github.com/automaticdai/rpi-object-detection
- Owner: automaticdai
- License: mit
- Created: 2015-11-30T20:13:34.000Z (over 9 years ago)
- Default Branch: master
- Last Pushed: 2025-03-07T12:34:00.000Z (3 months ago)
- Last Synced: 2025-03-07T13:35:03.655Z (3 months ago)
- Topics: computer-vision, object-detection, opencv, raspberry-pi
- Language: Python
- Homepage:
- Size: 2.56 MB
- Stars: 199
- Watchers: 10
- Forks: 50
- Open Issues: 8
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project:
- awesome-robotics-ee-opensource
# Raspberry Pi Real-Time Object Detection and Tracking
  
- [Raspberry Pi Real-Time Object Detection and Tracking](#raspberry-pi-real-time-object-detection-and-tracking)
* [1. Introduction](#1-introduction)
* [2. Dependency](#2-dependency)
+ [2.1. Packages requirement](#21-packages-requirement)
+ [2.2. Hardware support](#22-hardware-support)
* [3. What's in this repository](#3-what-s-in-this-repository)
+ [3.1. Camera Test](#31-camera-test)
+ [3.2. Motion Detection](#32-motion-detection)
+ [3.3. Color-based Object Detection and Tracking](#33-color-based-object-detection-and-tracking)
+ [3.4. Shape-based Object Detection and Tracking](#34-shape-based-object-detection-and-tracking)
+ [3.5. Feature-based Object Detection and Tracking (with ORB)](#35-feature-based-object-detection-and-tracking--with-orb-)
+ [3.6. Face Detection and Tracking](#36-face-detection-and-tracking)
+ [3.7. Object Detection using YOLO](#37-object-detection-using-yolo)
+ [3.8. Object Detection using Neural Network (TensorFlow Lite)](#38-object-detection-using-neural-network--tensorflow-lite-)
* [4. How to Run](#4-how-to-run)
+ [4.1. Install the environment on Raspberry Pi](#41-install-the-environment-on-raspberry-pi)
+ [4.2. Install TensorFlow Lite (optional; only if you want to use the neural network example)](#42-install-tensorflow-lite--optional--only-if-you-want-to-use-the-neural-network-example-)
+ [4.3. Run the scripts](#43-run-the-scripts)
+ [4.4. Change camera resolution](#44-change-camera-resolution)
* [5. Q&A](#5-q-a)
* [License](#license)

Table of contents generated with markdown-toc

## 1. Introduction
Using a Raspberry Pi and a camera module for computer vision with OpenCV, YOLO, and TensorFlow Lite. The aim of this project is to provide a starting point for using RPi & CV in your own DIY / maker projects. Camera-based computer vision is very powerful and will take your project to the next level. It allows you to track complicated objects that would otherwise not be possible with other types of sensors (infrared, ultrasonic, LiDAR, etc.).

Note that the code is based on Python and OpenCV, meaning it is cross-platform. You can run it on other Linux-based platforms as well, e.g. x86/x64 PC, IPC, Jetson, Banana Pi, LattePanda, BeagleBoard, etc.
## 2. Dependency
### 2.1. Packages requirement
This project is dependent on the following packages:
- Python >= 3.5
- OpenCV-Python
- OpenCV-Contrib-Python
- NumPy
- SciPy
- Matplotlib
- YOLO (*optional*)
- TensorFlow Lite (*optional*)

### 2.2. Hardware support
- Supports Raspberry Pi 1 Model B, Raspberry Pi 2, Raspberry Pi Zero, and Raspberry Pi 3/4/5 (preferred)
- Performance varies widely between boards:
- RPi 3/4/5 are preferable as they have more powerful CPUs;
- RPi 1/2 may struggle and produce a very low FPS, in which case you can further reduce the camera resolution (e.g. 160 x 120).
- The Nvidia Jetson Nano (A01) has also passed the test.
- Any USB camera supported by the Raspberry Pi
- For a list of supported cameras, see http://elinux.org/RPi_USB_Webcams
- The official RPi camera module is supported through `Picamera2`.

## 3. What's in this repository
Currently, the following applications are implemented:

- `src/camera-test`: Test if the camera is working
- `src/motion-detection`: Detect any motion in the frame
- `src/object-tracking-color`: Object detection & tracking based on color
- `src/object-tracking-shape`: Object detection & tracking based on shape
- `src/object-tracking-feature`: Object detection & tracking based on features using ORB
- `src/face-detection`: Face detection & tracking
- (*Todo*) Object detection using YOLO (RPi 3/4/5 only)
- (*Todo*) Object detection using Neural Network (TensorFlow Lite)

### 3.1. Camera Test
Test the RPi and OpenCV environment. If everything is set up correctly, you should see a pop-up window showing the video stream from your USB camera. If the window does not appear, check both (1) your environment and (2) the camera connection.
### 3.2. Motion Detection
Detect object movements in the image and print a warning message if any movement is detected. This detection is based on the mean squared error (MSE) of the difference between two images.
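That check can be sketched as follows; the function names and the threshold value are illustrative, not taken from the repository:

```python
import numpy as np

def frame_mse(frame_a, frame_b):
    """Mean squared error between two equal-sized grayscale frames."""
    diff = frame_a.astype(np.float64) - frame_b.astype(np.float64)
    return float(np.mean(diff ** 2))

def motion_detected(prev_gray, curr_gray, threshold=50.0):
    """Flag motion when the MSE between consecutive frames exceeds a threshold."""
    return frame_mse(prev_gray, curr_gray) > threshold
```

In a capture loop you would convert each frame to grayscale (e.g. with `cv2.cvtColor`) and compare it against the previous one.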
### 3.3. Color-based Object Detection and Tracking
Track an object based on its color in HSV space and print its center position. You can choose your own color by clicking on the object of interest; click multiple times on different points so that the full color range is covered. You can hard-code the parameters so you don't need to pick them again on the next run. The following demo shows how I track a Nintendo game controller in real time:
### 3.4. Shape-based Object Detection and Tracking
Detect and track round objects using `HoughCircles()`.
Support for squares is coming soon.
### 3.5. Feature-based Object Detection and Tracking (with ORB)
Detect and track an object using its features. The algorithm I selected here is ORB (Oriented FAST and Rotated BRIEF) for its fast computation speed, which enables real-time detection. To use the example, have an Arduino UNO board at hand (or replace `simple.png`).
### 3.6. Face Detection and Tracking
Detect faces using a Haar Cascade detector.
### 3.7. Object Detection using YOLO
(ongoing) Use YOLO (You Only Look Once) for object detection.
Note that alternative instructions can be found at: [Quick Start Guide: Raspberry Pi with Ultralytics YOLO11](https://docs.ultralytics.com/guides/raspberry-pi/).

### 3.8. Object Detection using Neural Network (TensorFlow Lite)
(ongoing) Use TensorFlow Lite to recognise objects.

## 4. How to Run
### 4.1. Install the environment on Raspberry Pi
```
sudo apt-get install -y libopencv-dev libatlas-base-dev
pip3 install virtualenv Pillow numpy scipy matplotlib opencv-python opencv-contrib-python
```

or use the installation script:
```
chmod +x install.sh
./install.sh
```

### 4.2. Install TensorFlow Lite (optional; only if you want to use the neural network example)
```
wget https://github.com/PINTO0309/Tensorflow-bin/raw/master/tensorflow-2.1.0-cp37-cp37m-linux_armv7l.whl
pip3 install --upgrade setuptools
pip3 install tensorflow-2.1.0-cp37-cp37m-linux_armv7l.whl
pip3 install -e .
```

### 4.3. Run the scripts
Run the scripts in the `src/` folder with:

```
python3 src/$FOLDER_NAME$/$SCRIPT_NAME$.py
```

To stop a script, press the `ESC` key on your keyboard.
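The ESC handling presumably follows the standard OpenCV key-polling pattern; a sketch (the helper name is illustrative):

```python
ESC_KEY = 27  # key code that cv2.waitKey reports for the ESC key

def should_stop(key_code):
    """True when the value returned by cv2.waitKey is ESC (mask to 8 bits for portability)."""
    return (key_code & 0xFF) == ESC_KEY

# Typical use inside a capture loop:
#     if should_stop(cv2.waitKey(1)):
#         break
```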
### 4.4. Change camera resolution
Changing the resolution will significantly impact the FPS. By default it is set to `320 x 240`; you can change it to any value your camera supports at the beginning of each source file (defined by `IMAGE_WIDTH` and `IMAGE_HEIGHT`). Typical resolutions are:

- 160 x 120
- 320 x 240
- 640 x 480 (480p)
- 1280 x 720 (720p)
- 1920 x 1080 (1080p; make sure your camera supports this high resolution.)

## 5. Q&A
**Q1: Does this support Nvidia Jetson?**
A1: Yes. I have tested it with my Jetson Nano 4GB.

**Q2: Does this support the Raspberry Pi camera?**

A2: Yes; this was implemented in [pull request #16](https://github.com/automaticdai/rpi-object-detection/pull/16).

**Q3: Does this support Raspberry Pi 5?**

A3: This is not officially tested (as I haven't received my Pi 5 yet), but it should work out of the box.

**Q4: Can we run this project on Ubuntu Server 22.04?**

A4: It is not tested, but you should be able to run 90% of the things here.

## License
This source code is licensed under the [MIT License](LICENSE).