https://github.com/19801201/SpinalHDL_CNN_Accelerator
CNN accelerator implemented with Spinal HDL
cnn fpga object-detection spinalhdl xilinx yolo
- Host: GitHub
- URL: https://github.com/19801201/SpinalHDL_CNN_Accelerator
- Owner: 19801201
- License: gpl-3.0
- Created: 2021-12-27T02:24:12.000Z (over 3 years ago)
- Default Branch: dev
- Last Pushed: 2024-01-29T12:46:19.000Z (about 1 year ago)
- Last Synced: 2024-08-02T01:20:46.832Z (9 months ago)
- Topics: cnn, fpga, object-detection, spinalhdl, xilinx, yolo
- Language: Scala
- Homepage:
- Size: 2.25 MB
- Stars: 126
- Watchers: 6
- Forks: 33
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-yolo-object-detection - 19801201/SpinalHDL_CNN_Accelerator
README
# About SpinalHDL_CNN_Accelerator

[Chinese documentation](./README_CN.md)
## An accelerator supporting various neural networks, implemented in SpinalHDL
- This repository implements common CNN operators in SpinalHDL 1.7.3. Guided by a compiler, it can generate accelerators for various neural networks.
- This repository aims to provide a general accelerator at the SpinalHDL level and a specialized accelerator at the Verilog level.
- This repository implements an Npu module that can be dropped into a project, as shown in the figure (a minimal generation sketch follows this list).
- The FPGA resource utilization is shown in the figure below:
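As a rough illustration of how the Npu top could be turned into Verilog for integration, here is a minimal SpinalHDL generation sketch. It is only a sketch: the `Npu` constructor (and any parameters it takes), the output directory, and the clock constraint are assumptions; the actual entry point lives in `src/main/scala/Npu.scala`.

```scala
import spinal.core._

// Hypothetical generation entry point; the real Npu constructor and its
// configuration live in src/main/scala/Npu.scala and may take parameters.
object GenNpuVerilog extends App {
  SpinalConfig(
    targetDirectory = "rtl",                              // output folder (assumption)
    defaultClockDomainFrequency = FixedFrequency(200 MHz) // example constraint, not from the repo
  ).generateVerilog(new Npu)                              // substitute the repository's actual constructor call
}
```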

## Description
This repository hosts a CNN accelerator implementation written in SpinalHDL. Here are some of its features:
- Easy to use
- Complete implementation of convolution, quantization, and shape operators
- Configurable parametric interface
- Selectable convolution kernel types
- FPGA resource usage is optimized at the code level
- A rich set of utility classes
- Access control of DMA and AXI
- Automatically generate TCL files for instantiating Xilinx IP
- Top-level files are easy to run
- Simulation of convolution

## RTL code generation
You can find three runnable top modules in:
- `src/main/scala/top.scala`
- `src/main/scala/Npu.scala`
- `src/test/scala/TbConv.scala`

NOTES:
- Generation may take some time when you run them.
- `top.scala` is used to implement convolution.
- `Npu.scala` is used to implement the complete project flow.
- `TbConv.scala` is used to implement the simulation of convolution.
- Before you run `TbConv.scala`, you should prepare the feature and weight files in advance (see the sketch below).
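As a rough idea of what such a simulation setup can look like, here is a minimal SpinalSim sketch. It is not the repository's `TbConv`: `DummyConv`, its IO names, the `data/feature.txt` path, and the one-hex-value-per-line format are all placeholders for illustration; the real testbench in `src/test/scala/TbConv.scala` drives the repository's convolution component with its own file layout.

```scala
import spinal.core._
import spinal.core.sim._
import scala.io.Source

// Placeholder DUT standing in for the repository's convolution component.
class DummyConv extends Component {
  val io = new Bundle {
    val dataIn  = in  UInt (8 bits)
    val dataOut = out UInt (8 bits)
  }
  io.dataOut := RegNext(io.dataIn) init 0
}

object TbConvSketch extends App {
  SimConfig.withWave.compile(new DummyConv).doSim { dut =>
    dut.clockDomain.forkStimulus(period = 10)

    // Feature stimulus prepared in advance; the path and the hex-per-line
    // format are assumptions, not the repository's actual file layout.
    val feature = Source.fromFile("data/feature.txt").getLines().map(BigInt(_, 16)).toSeq

    for (value <- feature) {
      dut.io.dataIn #= value
      dut.clockDomain.waitSampling()
    }
  }
}
```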