TensorFlow Lite classification on a bare Raspberry Pi 4 at 33 FPS
https://github.com/qengineering/tensorflow_lite_classification_rpi_32-bits
- Host: GitHub
- URL: https://github.com/qengineering/tensorflow_lite_classification_rpi_32-bits
- Owner: Qengineering
- License: bsd-3-clause
- Created: 2020-05-02T09:57:07.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2021-12-27T11:59:17.000Z (almost 4 years ago)
- Last Synced: 2025-01-26T03:45:46.199Z (8 months ago)
- Topics: armv7, bare-raspberry-pi, cpp, deep-learning, frame-rate, high-fps, inception, inceptionv2, inceptionv4, lite, mobilenet, raspberry-pi-4, tensorflow-examples, tensorflow-lite, testtensorflow-lite
- Language: C++
- Homepage: https://qengineering.eu/install-tensorflow-2-lite-on-raspberry-pi-4.html
- Size: 870 KB
- Stars: 3
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# TensorFlow_Lite_Classification_RPi_32-bits

## TensorFlow Lite classification running on a bare Raspberry Pi 32-bit OS
[License: BSD 3-Clause](https://opensource.org/licenses/BSD-3-Clause)
A fast C++ implementation of TensorFlow Lite classification on a bare Raspberry Pi 4.
Once the Pi is overclocked to 1950 MHz, the app runs at 33 FPS without any hardware accelerator.
Made especially for a bare Raspberry Pi 4; see [Q-engineering deep learning examples](https://qengineering.eu/deep-learning-examples-on-raspberry-32-64-os.html).
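The 1950 MHz figure assumes an overclocked Pi 4. As a rough sketch (not part of this repository; the voltage value is an assumption to verify against your own board and cooling), such an overclock is usually set in /boot/config.txt:
```
# /boot/config.txt -- example overclock entries (values are assumptions, adapt to your board)
over_voltage=6
arm_freq=1950
```
Reboot afterwards and keep an eye on the CPU temperature.

------------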
Paper: https://arxiv.org/pdf/1712.05877.pdf
Training set: COCO with 1000 objects
Size: 224x224

------------
## Benchmark.
| Model | Frame rate |
| :--- | :--- |
| MobileNet_V1 Lite | 33 FPS |
| MobileNet_V2 Lite | 36.2 FPS |
| Inception_V2 Lite | 8.9 FPS |
| Inception_V4 Lite | 1.6 FPS |

All figures measured on a Raspberry Pi 4 @ 1950 MHz running a 32-bit OS.
With a 64-bit OS you get higher frame rates; see https://github.com/Qengineering/TensorFlow_Lite_Classification_RPi_64-bits.

------------
## Dependencies.
To run the application, you need the following (a quick sanity check is sketched after the list):
- TensorFlow Lite framework installed. [Install TensorFlow Lite](https://qengineering.eu/install-tensorflow-2-lite-on-raspberry-pi-4.html)
- OpenCV installed. [Install OpenCV 4.5](https://qengineering.eu/install-opencv-4.5-on-raspberry-pi-4.html)
- Code::Blocks installed. (```$ sudo apt-get install codeblocks```)
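As a quick sanity check before opening the project (a hedged sketch; the OpenCV pkg-config name depends on how OpenCV was installed):
```
$ pkg-config --modversion opencv4
$ which codeblocks
```

------------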
## Installing the app.
To extract and run the network in Code::Blocks:
```
$ mkdir MyDir
$ cd MyDir
$ wget https://github.com/Qengineering/TensorFlow_Lite_Classification_RPi_32-bits/archive/refs/heads/master.zip
$ unzip -j master.zip
```
Remove master.zip and README.md as they are no longer needed:
```
$ rm master.zip
$ rm README.md
```
Your *MyDir* folder must now look like this:
```
tabby.jpeg
schoolbus.jpg
grace_hopper.bmp
Labels.txt
TensorFlow_Lite_Mobile.cpb
TensorFlow_Lite_Class.cpp
```
Next, choose your model from TensorFlow: https://www.tensorflow.org/lite/guide/hosted_models
Download a quantized model, extract the .tflite from the tarball and place it in your *MyDir*.
Your *MyDir* folder may now contain mobilenet_v1_1.0_224_quant.tflite, inception_v4_299_quant.tflite, or both.
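For example, a model tarball from the hosted-models page can be unpacked like this (a sketch with an example filename; use whichever quantized model you downloaded):
```
$ tar -xzf mobilenet_v1_1.0_224_quant.tgz
$ ls *.tflite
```
Keep only the .tflite file; the other extracted files are not needed by this app.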
Enter the .tflite file of your choice at line 54 of TensorFlow_Lite_Class.cpp.
The image to be tested is set at line 84, also in TensorFlow_Lite_Class.cpp.
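Those lines live in the repository's source; as an illustrative sketch of what model and image selection typically look like with the TensorFlow Lite C++ API and OpenCV (file names and line numbers below are examples, not copied from the repo):
```
#include <memory>
#include <opencv2/opencv.hpp>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main()
{
    // around line 54: the quantized model you placed in MyDir
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile("mobilenet_v1_1.0_224_quant.tflite");

    // build the interpreter with the built-in op resolver
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    interpreter->AllocateTensors();

    // around line 84: the image to classify (resized to the 224x224 model input)
    cv::Mat frame = cv::imread("tabby.jpeg");
    cv::resize(frame, frame, cv::Size(224, 224));

    return 0;
}
```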
------------
## Running the app.
Run TestTensorFlow_Lite.cpb with Code::Blocks. For more info, or if you want to connect a camera to the app, follow the instructions at [Hands-On](https://qengineering.eu/deep-learning-examples-on-raspberry-32-64-os.html#HandsOn).

------------
[Donate via PayPal](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=CPZTM5BB3FCYL)