{"id":17051802,"url":"https://github.com/jeffbass/imagenode","last_synced_at":"2025-09-05T21:35:58.281Z","repository":{"id":56407198,"uuid":"157180242","full_name":"jeffbass/imagenode","owner":"jeffbass","description":" Capture and Selectively Send Images and Sensor Data; detect Motion; detect Light","archived":false,"fork":false,"pushed_at":"2024-07-23T23:56:35.000Z","size":986,"stargazers_count":39,"open_issues_count":3,"forks_count":19,"subscribers_count":9,"default_branch":"master","last_synced_at":"2025-06-02T22:59:00.697Z","etag":null,"topics":["imagezmq","python","python-opencv","pyzmq","raspberry-pi"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jeffbass.png","metadata":{"files":{"readme":"README.rst","changelog":"HISTORY.md","contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-11-12T08:28:16.000Z","updated_at":"2024-11-25T05:39:04.000Z","dependencies_parsed_at":"2024-11-07T14:00:52.615Z","dependency_job_id":"92b8da1d-4131-47df-b70e-66384e6329cc","html_url":"https://github.com/jeffbass/imagenode","commit_stats":null,"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"purl":"pkg:github/jeffbass/imagenode","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jeffbass%2Fimagenode","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jeffbass%2Fimagenode/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jeffbass%2Fimagenode/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jeffbass%2Fimagenod
e/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/jeffbass","download_url":"https://codeload.github.com/jeffbass/imagenode/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jeffbass%2Fimagenode/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":273826322,"owners_count":25175232,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-05T02:00:09.113Z","response_time":402,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["imagezmq","python","python-opencv","pyzmq","raspberry-pi"],"created_at":"2024-10-14T10:07:31.311Z","updated_at":"2025-09-05T21:35:58.242Z","avatar_url":"https://github.com/jeffbass.png","language":"Python","readme":"===================================================\nimagenode: Capture and Send Images and Sensor Data\n===================================================\n\nIntroduction\n============\n\n**imagenode** enables Raspberry Pi computers to capture images with the\nPiCamera, perform image transformations and send them to a central **imagehub** for\nfurther processing. It can also send other sensor data such as temperature data\nand GPIO data. The processing power of the Raspberry Pi is used to detect\nevents (like the water meter flowing or a coyote crossing the back yard), and\nthen send a limited number of images of the event. 
It also works on other types\nof (non Raspberry Pi) computers with USB cams or webcams.\n\nHere are a couple of screenshots showing images sent by a Raspberry Pi PiCamera\nand displayed on a Mac. In the top screenshot, a ballpoint pen hanging from a\nstring is still. In the bottom screenshot, the ballpoint pen is swinging back\nand forth. The largest image in each screenshot is the full frame sent by the\nPiCamera. The smaller windows are showing the **imagenode** motion detector\nparameter tuning displays, including the detected motion state of \"still\" and\n\"moving\":\n\n.. image:: docs/images/still_and_moving.png\n\n.. contents::\n\nOverview\n========\n\n**imagenode** is the image capture and sending portion of a computer vision\npipeline that is typically run on multiple computers. For example, a Raspberry\nPi computer runs **imagenode** to capture images with a PiCamera and perform\nsome simple image processing. The images are transferred by **imageZMQ** (see\nreference) to a hub computer running **imagehub** (often a Mac) for further\nimage processing. The real benefit of **imagenode** is that it can use the\nprocessing power of the Raspberry Pi to:\n\n- Continuously capture images (around 10 frames a second is typical)\n- Analyze the images to detect events (e.g., water meter started flowing)\n- When a detected event occurs:\n\n  - Send an event message about the event to the **imagehub**\n  - Send a select few \"detected state change\" images to the **imagehub**\n\nSo, instead of 36,000 images an hour being sent from our water meter cam to our\n**imagehub**, only about 20 images are sent each time the water starts flowing\nor stops flowing. 
Instead of many thousands of images an hour showing a mostly\nunmoving farm area, our critter cams spot coyotes, raccoons and rabbits and only\nsend event messages and images when something is actually seen moving about.\n\n**imagenode** provides image capture, event detection and transmission services\nas part of a distributed computer vision system that includes multiple\ncomputers with cameras, sensors, database hubs and communication links.\nSee `Using imagenode in distributed computer vision projects \u003cdocs/imagenode-uses.rst\u003e`_\nfor a more detailed explanation of the overall project design. See the\n`Yin Yang Ranch project \u003chttps://github.com/jeffbass/yin-yang-ranch\u003e`_\nfor more details about the architecture of the\n**imagenode** \u003c--\u003e **imageZMQ** \u003c--\u003e **imagehub** system.\n\nImagenode Capabilities\n======================\n\n- Continuously captures images using PiCameras or USB webcams.\n- Performs image transformation and motion, light or color detection.\n- Sends detected events and relevant images to an image hub using **imageZMQ**.\n- Can capture and send other sensor data gathered using the GPIO pins.\n- Can control lighting (e.g., white LED or Infrared LED area lights).\n- Sends event messages (e.g., water is flowing) as well as images.\n\nDependencies and Installation\n=============================\n\n**imagenode** has been tested with:\n\n- Python 3.6 and newer\n- OpenCV 3.3 and 4.0 and newer\n- Raspberry Pi OS Buster, Raspbian Stretch and Raspbian Jessie\n\n  - NOT yet tested with Raspberry Pi OS Bullseye. 
Waiting for a production\n    replacement for the Python PiCamera module.\n- PyZMQ 16.0 and newer\n- RPi.GPIO 0.6 and newer (imported only if using GPIO pins)\n- picamera 1.13 (imported only if using PiCamera)\n- imageZMQ 1.1.1 and newer\n- imutils 0.4.3 and newer (used to get images from PiCamera)\n- psutil 5.7.2 and newer\n- PyYAML 5.3 and newer\n- w1thermsensor 1.3 (if using DS18S20 temperature sensor)\n\n  - NOT yet compatible with w1thermsensor version 2 which uses a new API\n- adafruit-circuitpython-dht 3.4.2 and newer (if using DHT11 or DHT22 sensor)\n\n**imagenode** captures images and uses **imageZMQ** to transfer the images.\nIt is best to install and test **imageZMQ** before installing **imagenode**.\nThe instructions for installing and testing **imageZMQ** are in the\n`imageZMQ GitHub repository \u003chttps://github.com/jeffbass/imagezmq.git\u003e`_.\n\n**imagenode** is still in early development, so it is not yet in PyPI. Get it by\ncloning the GitHub repository::\n\n    git clone https://github.com/jeffbass/imagenode.git\n\nOnce you have cloned **imagenode** to a directory on your local machine,\nyou can run the tests using the instructions below. The instructions assume you\nhave cloned **imagenode** to the user home directory.\n\nImagenode settings via YAML files\n=================================\n\n**imagenode** requires a *LOT* of settings: settings for the camera, settings\nfor the GPIO pins, settings for each detector and each ROI, etc. The settings are\nkept in a YAML file and are changed to \"tune\" the image capture, ROIs, motion\ndetection and computer vision parameters. An example YAML file is included in\nthe \"yaml\" directory. An explanation of the YAML file and how to adjust the settings\nis in `imagenode Settings and YAML files \u003cdocs/settings-yaml.rst\u003e`_.\n\nRunning the Tests\n=================\n\n**imagenode** should be tested in stages, with each stage testing a little more\nfunctionality. 
The tests are numbered in the order in which they should be run\nto determine if **imagenode** is running correctly on your systems.\n\nTest **imagenode** in the same virtualenv in which you tested **imageZMQ**. For\nthe **imageZMQ** testing and for the **imagenode** testing, my virtualenv is\ncalled py3cv3.\n\n**imagenode** requires that **imageZMQ** be installed and working. Before running any\ntests with **imagenode**, be sure you have successfully installed **imageZMQ**\nand run all of its tests. The **imageZMQ** tests must run successfully on every\ncomputer you will be using **imagenode** on. You can use pip to install\n**imageZMQ**.\n\nDirectory Structure for running the tests\n-----------------------------------------\n**imagenode** is not far enough along in development\nto be pip installable. So it should be git-cloned to any computer that\nit will be running on. I have done all testing at the user home\ndirectory of every computer. Here is a simplified directory layout::\n\n  ~ # user home directory\n  +--- imagenode.yaml  # copied from one of the imagenode yaml files \u0026 edited\n  |\n  +--- imagenode    # the git-cloned directory for imagenode\n       +--- sub directories include docs, imagenode, tests, yaml\n\nThis directory arrangement, including docs, imagenode code, tests, etc., is a\ncommon development directory arrangement on GitHub. Using git clone from your\nuser home directory (either on a Mac, an RPi or other Linux computer) will\nput the **imagenode** directories in the right place for testing. Each test\ndescribed below requires you to copy the appropriate ``testN.yaml`` file to\n``imagenode.yaml`` in the user home directory as shown in the above directory\ndiagram. The ``receive_test.py`` program acts as the image hub test receiver for\neach imagenode test. 
It must be started and running before running\n``imagenode.py``.\n\nTest 1: Running **imagenode** and **imageZMQ** together on a Mac\n-----------------------------------------------------------------\n**The first test** runs both the sending program **imagenode** and the receiving\nprogram ``receive_test.py`` (acting as a test hub) on\na Mac (or Linux computer) with a webcam. It tests that the **imagenode** software\nis installed correctly and that the ``imagenode.yaml`` file has been copied and\nedited in a way that works. It uses the webcam on the Mac for testing. It uses a\n\"lighted\" versus \"dark\" detector applied to a specified ROI.\n\nTest 2: Sending a light detector stream of images from RPi PiCamera to a Mac\n----------------------------------------------------------------------------\n**The second test** runs **imagenode** on a Raspberry Pi, using ``receive_test.py``\n(acting as a test hub) on a Mac (or Linux computer). It tests that the\n**imagenode** software is installed correctly on the RPi and that\nthe ``imagenode.yaml`` file has been copied and edited in a way that works.\nIt tests that the **imageZMQ** communication is working between the Raspberry Pi\nand the Mac. It also tests the PiCamera. It uses a \"lighted\" versus \"dark\"\ndetector applied to a specified ROI.\n\nTest 3: Sending a motion detector stream of images from RPi PiCamera to a Mac\n-----------------------------------------------------------------------------\n**The third test** runs **imagenode** on a Raspberry Pi, using ``receive_test.py``\n(acting as a test hub) on a Mac (or Linux computer). 
It is very similar to Test\n2, except that it uses a \"moving\" versus \"still\" motion detector applied to a\nspecified ROI.\n\nTest 4: Sending temperature readings from RPi temperature sensor to a Mac\n-------------------------------------------------------------------------\n**The fourth test** runs **imagenode** on a Raspberry Pi, using ``receive_test.py``\n(acting as a test hub) on a Mac (or Linux computer). It allows testing of the\ntemperature sensor capabilities of **imagenode**. It requires setting up a\nDS18B20 temperature sensor and connecting it appropriately to RPi GPIO pin 4.\n\nThe details of running the 4 tests are `here \u003cdocs/testing.rst\u003e`_.\n\nRunning **imagenode** in production\n===================================\nRunning the test programs requires that you leave a terminal window open, which\nis helpful for testing, but not for production runs. I use systemctl / systemd\nto start **imagenode** in production. I have provided an example\n``imagenode.service`` unit configuration file that shows how I start\n**imagenode** for the production programs observing my small farm. I have found\nthe systemctl / systemd system to be the best way to start / stop / restart and\ncheck the running status of **imagenode** over several years of testing. For\nthose who prefer using a shell script to start **imagenode**, I have included an\nexample ``imagenode.sh``. It is important to run **imagenode** in the right\nvirtualenv in production, regardless of your choice of program startup tools.\n\nIn production, you would want to set the test options used to print settings\nto ``False``; they are only helpful during testing. All errors and **imagenode**\nevent messages are saved in the file ``imagehub.log``, which defaults to the\nsame directory as imagenode.py. 
You might want the log to be in a different\ndirectory for production; the log file location can be set by changing it in the\nlogging function at the bottom of the imagenode.py program file.\n\nAdditional Documentation\n========================\n- `More details on running the tests \u003cdocs/testing.rst\u003e`_.\n- `How imagenode works \u003cdocs/imagenode-details.rst\u003e`_.\n- `How imagenode is used in a larger project \u003cdocs/imagenode-uses.rst\u003e`_.\n- `Version History and Changelog \u003cHISTORY.md\u003e`_.\n- `Research and Development Roadmap \u003cdocs/research-roadmap.rst\u003e`_.\n- `The imageZMQ classes that allow transfer of images \u003chttps://github.com/jeffbass/imagezmq\u003e`_.\n- `The imagehub software that saves events and images \u003chttps://github.com/jeffbass/imagehub\u003e`_.\n- `The larger farm automation / computer vision project \u003chttps://github.com/jeffbass/yin-yang-ranch\u003e`_.\n  This project shows the overall system architecture. It also contains\n  links to my **PyCon 2020** talk video and slides explaining the project.\n\nContributing\n============\n**imagenode** is in early development and testing. I welcome open issues and\npull requests, but because the programs are still rapidly evolving, it is best\nto open an issue for some discussion before submitting pull requests. We can\nexchange ideas about your potential pull request and how to best test your code.\n\nContributors\n============\nThanks for all contributions big and small. 
Some significant ones:\n\n+--------------------------+-----------------+----------------------------------------------+\n| **Contribution**         | **Name**        | **GitHub**                                   |\n+--------------------------+-----------------+----------------------------------------------+\n| Initial code \u0026 docs      | Jeff Bass       | `@jeffbass \u003chttps://github.com/jeffbass\u003e`_   |\n+--------------------------+-----------------+----------------------------------------------+\n| Added code and           |                 |                                              |\n| documentation for        |                 |                                              |\n| PiCamera settings        | Stephen Kirby   | `@sbkirby \u003chttps://github.com/sbkirby\u003e`_     |\n+--------------------------+-----------------+----------------------------------------------+\n| Added DHT11 \u0026 DHT22      |                 |                                              |\n| sensor capability        | Stephen Kirby   | `@sbkirby \u003chttps://github.com/sbkirby\u003e`_     |\n+--------------------------+-----------------+----------------------------------------------+\n| Added multiple detectors |                 |                                              |\n| per camera capability    | Stephen Kirby   | `@sbkirby \u003chttps://github.com/sbkirby\u003e`_     |\n+--------------------------+-----------------+----------------------------------------------+\n\n\nAcknowledgments\n===============\n- **ZeroMQ** is a great messaging library with great documentation\n  at `ZeroMQ.org \u003chttp://zeromq.org/\u003e`_.\n- **PyZMQ** serialization examples provided a starting point for **imageZMQ**.\n  See the\n  `PyZMQ documentation \u003chttps://pyzmq.readthedocs.io/en/latest/index.html\u003e`_.\n- **OpenCV** and its Python bindings provide great scaffolding for computer\n  vision projects large or small: `OpenCV.org \u003chttps://opencv.org/\u003e`_.\n- 
The motion detection function detect_motion() borrowed a lot of helpful code\n  from a motion detector\n  `tutorial post \u003chttps://www.pyimagesearch.com/2015/06/01/home-surveillance-and-motion-detection-with-the-raspberry-pi-python-and-opencv/\u003e`_\n  by Adrian Rosebrock of PyImageSearch.com.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjeffbass%2Fimagenode","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjeffbass%2Fimagenode","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjeffbass%2Fimagenode/lists"}