https://github.com/jwillikers/detectionator
A Raspberry Pi camera for taking pictures using object detection.
- Host: GitHub
- URL: https://github.com/jwillikers/detectionator
- Owner: jwillikers
- License: other
- Created: 2024-02-24T20:53:15.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-10-28T16:46:16.000Z (2 months ago)
- Last Synced: 2024-10-28T17:06:38.906Z (2 months ago)
- Topics: camera, computer-vision, gps, numpy, object-detection, opencv, python, raspberry-pi, tensorflow, tensorflow-lite
- Language: Python
- Homepage:
- Size: 19.5 MB
- Stars: 3
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.adoc
- License: LICENSE.adoc
- Code of conduct: CODE_OF_CONDUCT.adoc
README
= Detectionator
Jordan Williams
:experimental:
:icons: font
:keywords: camera detect detection gps object opencv photo pi picamera picamera2 python raspberry tensorflow
ifdef::env-github[]
:tip-caption: :bulb:
:note-caption: :information_source:
:important-caption: :heavy_exclamation_mark:
:caution-caption: :fire:
:warning-caption: :warning:
endif::[]
:AutoUpload: https://github.com/jwillikers/autoupload[AutoUpload]
:Exif: https://en.wikipedia.org/wiki/Exif[Exif]
:Immich: https://immich.app/[Immich]
:Immich-CLI: https://immich.app/docs/features/command-line-interface/[Immich CLI]
:just: https://github.com/casey/just[just]
:MinIO: https://min.io/[MinIO]
:picamera2: https://github.com/raspberrypi/picamera2[picamera2]
:pip-tools: https://github.com/jazzband/pip-tools[pip-tools]
:pySerial: https://github.com/pyserial/pyserial[pySerial]
:Rclone: https://rclone.org/[Rclone]
:systemd: https://systemd.io/[systemd]

image:https://github.com/jwillikers/detectionator/actions/workflows/test.yaml/badge.svg[Tests, link=https://github.com/jwillikers/detectionator/actions/workflows/test.yaml]
image:https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white[pre-commit, link=https://github.com/pre-commit/pre-commit]
image:https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json[Ruff, link=https://github.com/astral-sh/ruff]

A Raspberry Pi camera for taking pictures using object detection.
== Overview
The Detectionator is a fairly minimal Raspberry Pi camera build for taking pictures of particular objects like animals or people.
The Detectionator incorporates {Exif} geolocation metadata in the photographs it takes.
With the help of my {AutoUpload} project, pictures are automatically offloaded from local storage to cloud-based storage, such as S3-compatible object storage with {Rclone}, or to {Immich} with the {Immich-CLI}.
The photos are then removed from local storage.
This repository contains documentation, configuration files, and the Python program for the Detectionator.
The project is largely based on the TensorFlow example in the {picamera2} project and includes the same models.
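
In rough terms, the capture loop looks something like the following.
This is a simplified, illustrative sketch rather than the actual `detectionator.py` code; `detect_objects`, `current_gps_fix`, and `tag_with_gps` are hypothetical placeholders for the TensorFlow Lite inference and Exif geotagging steps, and the labels and path are made up.

[,python]
----
# Simplified sketch of the capture loop; not the real detectionator.py.
# detect_objects, current_gps_fix, and tag_with_gps are hypothetical helpers.
import time

from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

while True:
    frame = picam2.capture_array()             # grab a frame for inference
    detections = detect_objects(frame)         # run the TensorFlow Lite model
    if any(label in {"person", "cat", "dog"} for label, _score in detections):
        path = f"/home/detectionator/Pictures/{int(time.time())}.jpg"
        picam2.capture_file(path)              # save a full-resolution still
        tag_with_gps(path, current_gps_fix())  # embed Exif geolocation metadata
    time.sleep(0.5)                            # rest between detection runs
----
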
== Components

The Detectionator is based on the Raspberry Pi 5 and the Camera Module 3.
It incorporates a GPS HAT for the Raspberry Pi to obtain the geolocation data to embed in the photographs.
The Raspberry Pi AI Kit is recommended, as it dramatically reduces the CPU usage on the Pi.

[NOTE]
====
The GPS HAT requires soldering on a GPIO header.
====

.Detectionator Components
* https://www.raspberrypi.com/products/raspberry-pi-5-model-b/[Raspberry Pi 5 Model B] (4 GB RAM or better)
* https://www.raspberrypi.com/products/camera-module-3/[Raspberry Pi Camera Module 3]
* https://www.adafruit.com/product/1646[100mm Flex Cable for Raspberry Pi Camera]
* https://www.arducam.com/product/white-camera-enclosure-case-pi-cameras/[Camera Enclosure Case for Raspberry Pi Camera Module 3/V1/V2 and Arducam 16MP/64MP Camera]
* https://www.adafruit.com/product/2324[Adafruit Ultimate GPS HAT for Raspberry Pi]
* A CR1220 Battery for the RTC.
// todo Also add battery for the on-board RTC?
* https://shop.pimoroni.com/products/nvme-base?variant=41219587178579[NVMe Base for Raspberry Pi 5].
* A sufficiently large and performant M.2 NVMe SSD.
I prefer the SK hynix Gold P31 for its low power consumption and heat output.
* Alternatively, a sufficiently large and performant microSD card can be used instead of NVMe storage.
The https://www.samsung.com/us/computing/memory-storage/memory-cards/pro-ultimate-adapter-microsdxc-128gb-mb-my128sa-am/[128GB Samsung Pro Ultimate] and https://www.samsung.com/us/computing/memory-storage/memory-cards/pro-endurance-adapter-microsdxc-128gb-mb-mj128ka-am/[128GB Samsung Pro Endurance] are two good options.
* https://www.raspberrypi.com/products/27w-power-supply/[Raspberry Pi 27W USB-C Power Supply]
* https://www.raspberrypi.com/products/active-cooler/[Raspberry Pi Active Cooler]
* https://shop.pimoroni.com/products/extended-m2-5-standoffs-for-pi-hats?variant=41206707880019[Extended M2.5 Standoffs for Pi HATs (Raspberry Pi 5) - Pack of 4]
This includes four 17mm M2.5 hex socket-socket standoffs and eight 6mm M2.5 screws.
* https://shop.pimoroni.com/products/booster-header?variant=47414520906[Booster Header]
This header has a standard 8.5mm height with 5mm tall pins.
* https://www.raspberrypi.com/products/ai-kit/[Raspberry Pi AI Kit]
This component is optional.

[TIP]
====
The Adafruit Ultimate GPS HAT's cutout for the camera flex cable doesn't line up with the connectors on the Raspberry Pi 5.
Use a drill to widen the cutout towards the edge of the board to allow the flex cable enough room to fit.
====

== Install
. Install the 64-bit full version of https://www.raspberrypi.com/software/[Raspberry Pi OS] to a microSD card.
This project has been tested with Raspberry Pi OS 5 based on Debian Bookworm.
. Insert the microSD card into the Raspberry Pi.
. Boot the Raspberry Pi.
. Create the `detectionator` user account.
The detectionator service is intended to be run under a dedicated user account.
Since the autoupload mechanism for the Immich CLI relies on running containers, the instructions here will configure this user account to run containers.

.. Create a `detectionator` system group.
+
[,sh]
----
sudo groupadd --gid 818 --system detectionator
----

.. Create a primary user account named `detectionator`.
+
--
[,sh]
----
sudo useradd \
--add-subids-for-system \
--comment "Primary account taking photographs with object-detection" \
--create-home \
--gid detectionator \
--groups render,systemd-journal,video \
--shell /usr/sbin/nologin \
--system \
--uid 818 \
detectionator
----

[TIP]
====
The `--btrfs-subvolume-home` flag can be used to create the user's home directory on a separate Btrfs subvolume.
====
--

.. Verify that the new `detectionator` user has entries in `/etc/subuid` and `/etc/subgid`.
If, for some reason, there are no subuid and subgid ranges for the user, follow these steps.
I don't know why this happens, but it does sometimes.
+
[NOTE]
====
These commands use the fish shell because I can never remember how to math in Bash.footnote:[Or anything else in Bash for that matter.]
====

.. Calculate the first value for the next subuid allotment.
+
--
If `/etc/subuid` is empty, use 100,000 as the initial value.

[,sh]
----
set new_subuid 100000
----

Otherwise, use the following command to calculate the next available subuid range.

[,sh]
----
set new_subuid (math (tail -1 /etc/subuid | awk -F ":" '{print $2}') + 65536)
----
--

.. Calculate the first value for the next subgid allotment.
+
--
If `/etc/subgid` is empty, use 100,000 as the initial value.

[,sh]
----
set new_subgid 100000
----

Otherwise, use the following command to calculate the next available subgid range.

[,sh]
----
set new_subgid (math (tail -1 /etc/subgid | awk -F ":" '{print $2}') + 65536)
----
--

.. Configure the `detectionator` user with the calculated subuid and subgid ranges.
+
[,sh]
----
sudo usermod \
--add-subuids $new_subuid-(math $new_subuid + 65535) \
--add-subgids $new_subgid-(math $new_subgid + 65535) \
detectionator
----

.. Automatically start the `detectionator` user's session at boot.
+
[,sh]
----
sudo loginctl enable-linger detectionator
----

.. Open a shell as the `detectionator` user with the following command.
I prefer the fish shell, so I use that here, but substitute Bash, ZSH, etc. per your preference.
+
[,sh]
----
sudo -H -u detectionator fish -c 'cd; fish'
----

.. Configure the `XDG_RUNTIME_DIR` environment variable for the user so that sockets can be found correctly.
+
[,sh]
----
set -Ux XDG_RUNTIME_DIR /run/user/(id -u)
----

. Install {just} by following the instructions in the https://github.com/casey/just?tab=readme-ov-file#installation[installation section].
. Follow the instructions to configure the storage service and autoupload {systemd} units in the {AutoUpload} README.
This should automatically upload photos in the `/home/detectionator/Pictures` directory.
The commands to enable the units should look similar to the following.
Use the _user_ unit for Immich, since it runs in a container.
+
Immich::
+
// todo Fix this command to actually work.
[,sh]
----
sudo -H -u detectionator bash -c 'systemctl --user enable --now autoupload-immich@$(systemd-escape --path ~/Pictures).timer'
----

Rclone::
+
[,sh]
----
sudo systemctl enable --now autoupload-rclone@$(systemd-escape --path /home/detectionator/Pictures).timer
----

. For security, be sure to disable password-based SSH authentication.
After your public key has been added to the `~/.ssh/authorized_keys` file on the Pi Camera, this can be configured in the `/etc/ssh/sshd_config` file.
You can follow the instructions in my https://github.com/jwillikers/openssh-config[OpenSSH Config] repository to accomplish this and a few other optimizations.

. Update the package lists.
+
[,sh]
----
sudo apt-get update
----

. Upgrade everything.
+
[,sh]
----
sudo apt-get --yes full-upgrade
----

. Make the `~/Projects` directory.
+
[,sh]
----
mkdir --parents ~/Projects
----

. Clone this project's repository to the `~/Projects` directory.
+
[,sh]
----
git -C ~/Projects clone https://github.com/jwillikers/detectionator.git
----

. Change to the project's directory.
+
[,sh]
----
cd ~/Projects/detectionator
----

. Set up the environment with `just init`.
This will install dependencies and initialize the virtual environment.
It also installs a special udev rule for the Adafruit Ultimate GPS to give it a static device name.
+
[,sh]
----
just init
----

. Enable the serial port hardware and better PCIe speeds in `config.txt`.
+
./boot/firmware/config.txt
[,ini]
----
[all]
dtparam=pciex1_gen=3
dtoverlay=pps-gpio,gpiopin=4
dtparam=uart0=on

# Allow USB devices to pull the maximum current possible.
usb_max_current_enable=1

# Recharge the RTC battery.
dtparam=rtc_bbat_vchg=3000000

# Reduce clockspeeds and temperature limit to reduce the likelihood of crashes.
# Crashes seem particularly common when recording video.
#arm_freq = 2000
#core_freq = 900
#v3d_freq = 900
#temp_limit = 80

disable_camera_led = 1
----
// todo eeprom config
// UART_BAUD=9600

. Ensure that `console=tty1` is in `/boot/firmware/cmdline.txt` and not `console=ttyAMA0` or `console=serial0`.
+
./boot/firmware/cmdline.txt
[source]
----
console=tty1 root=PARTUUID=c64d4099-02 rootfstype=ext4 fsck.repair=yes rootwait cfg80211.ieee80211_regdom=US
----

. Configure the `pps-gpio` module to be loaded.
+
[,sh]
----
echo 'pps-gpio' | sudo tee /etc/modules-load.d/pps-gpio.conf
----

. Configure gpsd to use the GPS HAT.
The serial port `ttyAMA0` is used, and since the Raspberry Pi 5 has a built-in RTC, `pps1` is used here instead of `pps0`.
+
./etc/default/gpsd
[,ini]
----
# Devices gpsd should collect to at boot time.
# They need to be read/writeable, either by user gpsd or the group dialout.
DEVICES="/dev/ttyAMA0 /dev/pps1"

# Other options you want to pass to gpsd
GPSD_OPTIONS="--nowait"

# Automatically hot add/remove USB GPS devices via gpsdctl
USBAUTO="true"
----
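+
Once gpsd is running, a Python client can poll it for a fix and write the coordinates into a photo's Exif GPS tags.
The sketch below is illustrative only: it assumes the official gpsd Python bindings (`python3-gps`) and the piexif library, and the `photo.jpg` filename is made up; `detectionator.py` may do this differently.
+
[,python]
----
# Sketch: read one TPV fix from gpsd and embed it as Exif GPS metadata.
# Assumes the python3-gps bindings and piexif; not the project's actual code.
import gps
import piexif


def to_dms(value: float) -> list[tuple[int, int]]:
    """Convert decimal degrees to Exif degree/minute/second rationals."""
    value = abs(value)
    degrees = int(value)
    minutes = int((value - degrees) * 60)
    hundredths = round(((value - degrees) * 60 - minutes) * 60 * 100)
    return [(degrees, 1), (minutes, 1), (hundredths, 100)]


session = gps.gps(mode=gps.WATCH_ENABLE)
while True:
    report = session.next()  # blocks until gpsd sends the next report
    if report["class"] == "TPV" and hasattr(report, "lat"):
        gps_ifd = {
            piexif.GPSIFD.GPSLatitudeRef: "N" if report.lat >= 0 else "S",
            piexif.GPSIFD.GPSLatitude: to_dms(report.lat),
            piexif.GPSIFD.GPSLongitudeRef: "E" if report.lon >= 0 else "W",
            piexif.GPSIFD.GPSLongitude: to_dms(report.lon),
        }
        # Write the GPS IFD into an existing JPEG (hypothetical filename).
        piexif.insert(piexif.dump({"GPS": gps_ifd}), "photo.jpg")
        break
----
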
. Configure chrony to use the GPS HAT for time.
+
./etc/chrony/conf.d/gpsd.conf
[source]
----
# set larger delay to allow the NMEA source to overlap with
# the other sources and avoid the falseticker status
refclock SOCK /run/chrony.ttyAMA10.sock refid GPS precision 1e-1 offset 0.9999
refclock SOCK /run/chrony.pps1.sock refid PPS precision 1e-7
----

. Reboot for the new udev rules to take effect.
+
[,sh]
----
sudo systemctl reboot
----

. Use `just run` to run the `detectionator.py` Python script inside the virtual environment.
+
[,sh]
----
just run
----

. Install and activate the systemd service with `just install`.
+
[,sh]
----
just install
----

[TIP]
====
Send the application the `SIGUSR1` signal to capture sample photographs for both the high and low resolution modes.

[,sh]
----
kill --signal SIGUSR1 $(pgrep python)
----
====
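+
For context, wiring up such a signal hook in Python takes only a few lines with the standard `signal` module.
This is a sketch of the general pattern, not necessarily how `detectionator.py` implements it; `take_sample_photos` is a hypothetical helper.
+
[,python]
----
# Sketch: request a sample capture when SIGUSR1 arrives.
# The handler only sets a flag; the main loop performs the capture,
# since doing real work inside a signal handler is best avoided.
import signal

capture_requested = False


def request_capture(signum, frame):
    global capture_requested
    capture_requested = True


signal.signal(signal.SIGUSR1, request_capture)

# ... later, inside the main loop:
# if capture_requested:
#     take_sample_photos()  # hypothetical helper for the high and low resolution shots
#     capture_requested = False
----
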
. Check the status of the `detectionator.service` unit with the `systemctl status` command.
+
[,sh]
----
sudo systemctl status detectionator.service
----

. Check the logs of the `detectionator.service` unit with `journalctl`.
+
[,sh]
----
sudo journalctl -xeu detectionator.service
----

. Set up unattended upgrades to automatically update the system.
I document how to do this in my blog post https://www.jwillikers.com/unattended-upgrades[Unattended Upgrades].

== HDR
The Raspberry Pi Camera Module 3 supports HDR, but only at a lower resolution.
HDR support has to be toggled when `detectionator.py` isn't running.

. Show the available V4L subdevices.
+
[,sh]
----
ls /dev/v4l-subdev*
/dev/v4l-subdev0 /dev/v4l-subdev1 /dev/v4l-subdev2 /dev/v4l-subdev3
----

. To enable HDR support for the Raspberry Pi Camera Module 3, use the following command on one of the V4L subdevices.
In my case, this ended up being `/dev/v4l-subdev2`.
+
[,sh]
----
just hdr /dev/v4l-subdev2
----

. To disable HDR support for the Raspberry Pi Camera Module 3, use this command with the corresponding V4L subdevice.
+
[,sh]
----
just hdr /dev/v4l-subdev2 disable
----
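
On the Camera Module 3, HDR is exposed through the sensor's V4L2 `wide_dynamic_range` control, which `v4l2-ctl` can set, and the `just hdr` recipe presumably wraps something along those lines.
A hedged Python sketch of the same idea, assuming `v4l2-ctl` from `v4l-utils` is installed:

[,python]
----
# Sketch: toggle Camera Module 3 HDR via the V4L2 wide_dynamic_range control.
# Assumes v4l2-ctl (v4l-utils) is installed; not the project's own recipe.
import subprocess
import sys


def set_hdr(subdevice: str, enable: bool) -> None:
    value = "1" if enable else "0"
    subprocess.run(
        ["v4l2-ctl", "--set-ctrl", f"wide_dynamic_range={value}", "-d", subdevice],
        check=True,
    )


if __name__ == "__main__":
    # Usage: python set_hdr.py /dev/v4l-subdev2 [disable]
    set_hdr(sys.argv[1], len(sys.argv) < 3 or sys.argv[2] != "disable")
----
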
== Development

. Run `just init-dev` to initialize the virtual environment for development.
This will install all of the necessary dependencies and the https://github.com/pre-commit/pre-commit[pre-commit] hooks.
+
[,sh]
----
just init-dev
----

. Run the tests with https://docs.pytest.org/en/latest/[pytest] by running `just test`.
+
[,sh]
----
just test
----

. To update dependencies, run `just update`.
+
[,sh]
----
just update
----

. Use `just --list` to list other available tasks.
+
[,sh]
----
just --list
----

== Limitations
The Raspberry Pi Camera Module 3 uses a rolling shutter.
Rolling shutter can make object detection less accurate and produce graphical distortions for fast-moving objects.
A global shutter would be ideal, but high-resolution cameras using that technology aren't as easy to find for an embedded system.
Lower-resolution cameras, such as the Raspberry Pi Global Shutter Camera, do exist and could improve object detection for fast-moving objects.
I might try a dual-camera system in the future, using the lower-resolution global shutter camera for detection while still capturing pictures with a higher-resolution rolling shutter camera.

== todo
- [ ] Add better configuration parameter checking, particularly around enum type options.
- [ ] Add a configuration parameter to configure the focus range.
This could help speed up autofocus.
- [ ] Add a configuration parameter to configure the rest time in-between runs of the detection algorithm when there was no detection.
- [ ] Add a configuration parameter to control whether or not to refocus when capturing burst shots.
- [x] Use https://gpsd.gitlab.io/gpsd/gpsd-time-service-howto.html[gpsd].
- [ ] Create classes for different data types to better organize things.
- [x] Cache the GPS data to reduce the time to capture pictures between detections? (See the sketch after this list.)
Use TTLCache from https://github.com/tkem/cachetools/[cachetools].
- [x] Add support for a TOML config file with https://github.com/bw2/ConfigArgParse[ConfigArgParse].
- [ ] Switch from picamera2 to gstreamer to work with more hardware.
- [ ] mypy
- [ ] async
- [ ] Create a weatherproof enclosure for the camera.
- [ ] Add a NixOS configuration and build SD card images.
- [ ] Should I be processing the images in grayscale?
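
For the GPS-caching item above, the cachetools approach amounts to time-bounded memoization of the latest fix.
A minimal sketch, assuming a hypothetical `read_fix_from_gpsd` helper and a 30-second TTL; the real implementation lives in `detectionator.py`:

[,python]
----
# Sketch: cache the latest GPS fix briefly so burst captures don't each
# block on gpsd. read_fix_from_gpsd is a hypothetical helper.
from cachetools import TTLCache, cached


@cached(cache=TTLCache(maxsize=1, ttl=30))
def current_fix():
    return read_fix_from_gpsd()  # hypothetical: poll gpsd for a TPV report
----
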
== See Also

* https://docs.circuitpython.org/projects/gps/en/latest/[Adafruit GPS Library Documentation]
* https://learn.adafruit.com/adafruit-ultimate-gps-hat-for-raspberry-pi[Adafruit Ultimate GPS HAT for Raspberry Pi Documentation]
* https://www.cipa.jp/std/documents/e/DC-008-2012_E.pdf[Exchangeable image file format for digital still cameras Exif Version 2.3]
* https://pyserial.readthedocs.io/en/latest/index.html[pySerial Documentation]
* https://www.raspberrypi.com/news/using-the-picamera2-library-with-tensorflow-lite/[Using the Picamera2 library with TensorFlow Lite]

== Code of Conduct
The project's Code of Conduct is available in the link:CODE_OF_CONDUCT.adoc[Code of Conduct] file.
== License
The models are from the {picamera2} project's TensorFlow example, and are likely subject to their own licenses.
This repository is licensed under the https://www.gnu.org/licenses/gpl-3.0.html[GPLv3], available in the link:LICENSE.adoc[license file].

© 2024 Jordan Williams
== Authors
mailto:{email}[{author}]