https://github.com/rishit-dagli/people-counter-on-edge
Low latency resilient people detection and counting for edge devices
- Host: GitHub
- URL: https://github.com/rishit-dagli/people-counter-on-edge
- Owner: Rishit-dagli
- License: apache-2.0
- Created: 2020-05-14T14:55:55.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2020-06-09T00:03:20.000Z (over 5 years ago)
- Topics: ai, deep-learning, edge-ai, edge-computing, ffmpeg-server, machine-learning, mqtt, openvino-toolkit, people-counter
- Language: JavaScript
- Homepage: https://link.springer.com/article/10.1007%2Fs00500-021-05891-2
- Size: 5.46 MB
- Stars: 23
- Watchers: 3
- Forks: 13
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Deploy a People Counter App at the Edge
Watch a video of the demo [here](https://www.youtube.com/watch?v=gjxRXuFpfgU).
## Table of Contents
- [What it Does](#what-it-does)
- [How it Works](#how-it-works)
- [Requirements](#requirements)
* [Hardware](#hardware)
* [Software](#software)
- [Setup](#setup)
* [Install Intel® Distribution of OpenVINO™ toolkit](#install-intel--distribution-of-openvino--toolkit)
* [Install Nodejs and its dependencies](#install-nodejs-and-its-dependencies)
* [Install npm](#install-npm)
- [Run the application](#run-the-application)
* [Step 1 - Start the Mosca server](#step-1---start-the-mosca-server)
* [Step 2 - Start the GUI](#step-2---start-the-gui)
* [Step 3 - FFmpeg Server](#step-3---ffmpeg-server)
* [Step 4 - Run the code](#step-4---run-the-code)
+ [Setup the environment](#setup-the-environment)
+ [Running on the CPU](#running-on-the-cpu)
+ [Running on the Intel® Neural Compute Stick](#running-on-the-intel--neural-compute-stick)
+ [Using a camera stream instead of a video file](#using-a-camera-stream-instead-of-a-video-file)
- [A Note on Running Locally](#a-note-on-running-locally)

## What it Does
The people counter application demonstrates how to create a smart video IoT solution using Intel® hardware and software tools. The app detects people in a designated area, providing the number of people in the current frame, the average duration people spend in the frame, and the total count. **I strongly recommend reading the [WRITEUP](https://github.com/Rishit-dagli/People-Counter-On-Edge/blob/master/WRITEUP.md).**
## How it Works
The counter will use the Inference Engine included in the Intel® Distribution of OpenVINO™ Toolkit. The model used should be able to identify people in a video frame. The app should count the number of people in the current frame, the duration that a person is in the frame (time elapsed between entering and exiting a frame) and the total count of people. It then sends the data to a local web server using the Paho MQTT Python package.
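As a minimal sketch of what that publishing step can look like with paho-mqtt (the topic names, payload fields, and broker port here are assumptions for illustration, not necessarily what this app uses):

```python
# Sketch: publishing people-counter stats over MQTT with paho-mqtt.
# Topic names, payload fields, and the broker port are illustrative.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 3001, 60)  # assumed Mosca MQTT port

# Per-frame stats: current count and running total.
client.publish("person", json.dumps({"count": 1, "total": 5}))
# When a person leaves the frame, report how long they were visible.
client.publish("person/duration", json.dumps({"duration": 12.4}))

client.disconnect()
```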
You will choose a model to use and convert it with the Model Optimizer.
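For instance, a conversion command might look like the following (a hedged sketch: `frozen_inference_graph.pb` is a placeholder, the exact flags depend on the model you choose, and TensorFlow Object Detection API models typically need additional pipeline-config flags):

```
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model frozen_inference_graph.pb \
    --reverse_input_channels \
    --output_dir .
```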

## Requirements
### Hardware
* 6th to 10th generation Intel® Core™ processor with Iris® Pro graphics or Intel® HD Graphics.
* OR use of Intel® Neural Compute Stick 2 (NCS2)
* OR the Udacity classroom workspace for the related course

### Software
* Intel® Distribution of OpenVINO™ toolkit 2019 R3 release
* Node v6.17.1
* Npm v3.10.10
* CMake
* MQTT Mosca server
## Setup

### Install Intel® Distribution of OpenVINO™ toolkit
Utilize the classroom workspace, or refer to the relevant instructions for your operating system for this step.
- [Linux/Ubuntu](./linux-setup.md)
- [Mac](./mac-setup.md)
- [Windows](./windows-setup.md)

### Install Nodejs and its dependencies
Utilize the classroom workspace, or refer to the relevant instructions for your operating system for this step.
- [Linux/Ubuntu](./linux-setup.md)
- [Mac](./mac-setup.md)
- [Windows](./windows-setup.md)

### Install npm
There are three components that need to be running in separate terminals for this application to work:
- MQTT Mosca server
- Node.js* Web server
- FFmpeg server
From the main directory:

* For MQTT/Mosca server:
```
cd webservice/server
npm install
```

* For Web server:
```
cd ../ui
npm install
```
**Note:** If any configuration errors occur in the Mosca server or web server while running **npm install**, use the commands below:
```
sudo npm install npm -g
rm -rf node_modules
npm cache clean
npm config set registry "http://registry.npmjs.org"
npm install
```

## Run the application
From the main directory:
### Step 1 - Start the Mosca server
```
cd webservice/server/node-server
node ./server.js
```

You should see the following message, if successful:
```
Mosca server started.
```

### Step 2 - Start the GUI
Open a new terminal and run the commands below.
```
cd webservice/ui
npm run dev
```

You should see the following message in the terminal.
```
webpack: Compiled successfully
```

### Step 3 - FFmpeg Server
Open a new terminal and run the command below.
```
sudo ffserver -f ./ffmpeg/server.conf
```

### Step 4 - Run the code
Open a new terminal to run the code.
#### Setup the environment
You must configure the environment to use the Intel® Distribution of OpenVINO™ toolkit once per session by running the following command:
```
source /opt/intel/openvino/bin/setupvars.sh -pyver 3.5
```

You should also be able to run the application with Python 3.6, although newer versions of Python will not work with the app.
#### Running on the CPU
When running Intel® Distribution of OpenVINO™ toolkit Python applications on the CPU, the CPU extension library is required. This can be found at:
```
/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/
```

*Depending on whether you are using Linux or Mac, the filename will be either `libcpu_extension_sse4.so` or `libcpu_extension.dylib`, respectively.* (The Linux filename may differ if you are using an AVX architecture.)
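For reference, a minimal sketch of how a 2019-era Inference Engine application registers this extension before loading the model (paths and model names are placeholders, not necessarily this repo's exact code):

```python
# Sketch: registering the CPU extension with the 2019 R3 Inference Engine
# Python API before loading the IR model. Paths are illustrative.
from openvino.inference_engine import IECore, IENetwork

CPU_EXT = ("/opt/intel/openvino/deployment_tools/inference_engine/"
           "lib/intel64/libcpu_extension_sse4.so")

ie = IECore()
ie.add_extension(CPU_EXT, "CPU")  # adds kernels for otherwise-unsupported layers
net = IENetwork(model="your-model.xml", weights="your-model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
```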
Although the application runs on the CPU by default, this can also be specified explicitly with the `-d CPU` command-line argument:
```
python main.py -i resources/Pedestrian_Detect_2_1_1.mp4 -m your-model.xml -l /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so -d CPU -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - http://0.0.0.0:3004/fac.ffm
```
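The pipe works because the app writes raw BGR frames to its stdout and ffmpeg reads them via `-f rawvideo ... -i -`. A minimal sketch of that output step (illustrative, not necessarily the repo's exact code; the frame size must match `-video_size`):

```python
# Sketch: sending one processed frame to ffmpeg through the stdout pipe.
import sys
import cv2

def emit_frame(frame):
    # Match the 768x432 size passed to ffmpeg's -video_size flag.
    frame = cv2.resize(frame, (768, 432))
    sys.stdout.buffer.write(frame.tobytes())
    sys.stdout.buffer.flush()
```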
If you are in the classroom workspace, use the “Open App” button to view the output. If working locally, open [http://0.0.0.0:3004](http://0.0.0.0:3004/) in a browser to see the output on a web-based interface.

#### Running on the Intel® Neural Compute Stick
To run on the Intel® Neural Compute Stick, use the `-d MYRIAD` command-line argument:
```
python3.5 main.py -d MYRIAD -i resources/Pedestrian_Detect_2_1_1.mp4 -m your-model.xml -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - http://0.0.0.0:3004/fac.ffm
```

To see the output on a web-based interface, open [http://0.0.0.0:3004](http://0.0.0.0:3004/) in a browser.
**Note:** The Intel® Neural Compute Stick can only run FP16 models at this time. The model passed to the application through the `-m` command-line argument must be of data type FP16.
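If you produced an FP32 IR, you can usually generate an FP16 one by rerunning the Model Optimizer with `--data_type FP16` (a hedged example; `frozen_inference_graph.pb` is a placeholder):

```
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model frozen_inference_graph.pb \
    --data_type FP16 \
    --output_dir FP16/
```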
#### Using a camera stream instead of a video file
To get the input video from the camera, use the `-i CAM` command-line argument. Specify the resolution of the camera using the `-video_size` command-line argument.
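Internally, such a flag is commonly resolved by mapping `CAM` to the default webcam; a minimal sketch (an assumption for illustration, not necessarily this repo's logic):

```python
# Sketch: resolving the -i argument to an OpenCV capture source.
import cv2

def open_input(path):
    # "CAM" selects the default webcam (device 0); anything else is
    # treated as a video or image file path.
    source = 0 if path == "CAM" else path
    return cv2.VideoCapture(source)
```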
For example:
```
python main.py -i CAM -m your-model.xml -l /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so -d CPU -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - http://0.0.0.0:3004/fac.ffm
```

To see the output on a web-based interface, open [http://0.0.0.0:3004](http://0.0.0.0:3004/) in a browser.
**Note:** The `-video_size` command-line argument must match the resolution of the input video, image, or camera stream.

## A Note on Running Locally
To run on a local machine, you need to change this file:
```
webservice/ui/src/constants/constants.js
```

The `CAMERA_FEED_SERVER` and `MQTT_SERVER` both use the workspace configuration. You can change each of these as follows:

```
CAMERA_FEED_SERVER: "http://localhost:3004"
...
MQTT_SERVER: "ws://localhost:3002"
```