https://github.com/marph91/pocket-bnn
BNN-to-FPGA framework, written in VHDL and Python
- Host: GitHub
- URL: https://github.com/marph91/pocket-bnn
- Owner: marph91
- License: mpl-2.0
- Created: 2021-05-15T08:53:24.000Z (about 4 years ago)
- Default Branch: master
- Last Pushed: 2021-07-16T11:52:30.000Z (almost 4 years ago)
- Last Synced: 2025-03-29T04:05:00.836Z (3 months ago)
- Topics: bnn, cnn, cnn-architecture, deep-learning, fpga, hardware, image-processing, larq, python, ulx3s, vhdl
- Language: Python
- Homepage:
- Size: 154 KB
- Stars: 7
- Watchers: 2
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
# pocket-bnn
pocket-bnn is a framework for mapping small Binarized Neural Networks (BNN) onto an FPGA. It is based on the experience gained in [pocket-cnn](https://github.com/marph91/pocket-cnn). It is not a processor; instead, the BNN is mapped directly onto the FPGA. No communication is needed beyond providing the image and reading back the result.
## Installation and Usage
To run a simple demo, execute the following commands:
```bash
# train a bnn
make model
# generate a vhdl toplevel from the model
# synthesize, PnR, generate bitstream
make bnn.bit
# program the board
make prog
```

The BNN will be accessible through UART. There is an example script that can be used: `python playground/06_test_uart.py`. The result should correspond to the BNN test.
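As a rough illustration of that round trip, here is a minimal sketch, not the actual `playground/06_test_uart.py`. The port name, baud rate, image size, and response format are assumptions for illustration: it sends a 28x28 single-channel image byte-wise over UART and reads back one score byte per class.

```python
# Hypothetical UART round trip with the on-board BNN.
# Assumed (not taken from the repository): port name, baud rate,
# raw byte-wise image transfer, one score byte per MNIST class.
import numpy as np
import serial  # pyserial

# placeholder input image; a real test would send an MNIST digit
image = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=5) as uart:
    uart.write(image.tobytes())  # send the raw image bytes
    result = uart.read(10)       # read one score byte per class

print("predicted class:", result.index(max(result)))
```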
A few programs and Python modules need to be installed, such as [LARQ](https://github.com/larq/larq) and the open-source toolchain to program the ULX3S. For now, they have to be installed manually.
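To give an idea of what a Larq-trained BNN looks like, here is a minimal sketch. The topology and layer sizes are illustrative assumptions, not the exact model that `make model` trains:

```python
# Illustrative only: a tiny binarized model in the Larq style.
# The layer sizes and topology are assumptions, not pocket-bnn's model.
import larq as lq
import tensorflow as tf

# binarize both activations and weights in all but the first layer
kwargs = dict(input_quantizer="ste_sign",
              kernel_quantizer="ste_sign",
              kernel_constraint="weight_clip")

model = tf.keras.models.Sequential([
    # first layer keeps real-valued inputs, so no input quantizer
    lq.layers.QuantConv2D(16, (3, 3), kernel_quantizer="ste_sign",
                          kernel_constraint="weight_clip",
                          use_bias=False, input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.BatchNormalization(scale=False),
    lq.layers.QuantConv2D(32, (3, 3), use_bias=False, **kwargs),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.BatchNormalization(scale=False),
    tf.keras.layers.Flatten(),
    lq.layers.QuantDense(10, use_bias=False, **kwargs),
    tf.keras.layers.Activation("softmax"),
])

lq.models.summary(model)
```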
A few stats for the example:
- Accuracy on MNIST: 75 %
- Resource usage: 17276/41820 (41 %) of TRELLIS_SLICE
- Frequency: 25 MHz (max. frequency: 132 MHz)

In simulation, the full BNN inference finishes in less than 10 µs at 100 MHz. More stats will follow, since this is the first example.
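For a sense of scale (plain arithmetic on the numbers above, not an extra measurement): one clock cycle at 100 MHz takes 10 ns, so an inference completing in under 10 µs takes fewer than 10 µs / 10 ns = 1000 clock cycles.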
## Documentation
For now, there is not much documentation. Some design decisions are documented in the [documentation folder](https://github.com/marph91/pocket-bnn/tree/master/doc). The [tests](https://github.com/marph91/pocket-bnn/tree/master/sim) might be useful, too.