Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/charmve/accann
A compiler from AI model to RTL (Verilog) accelerator in FPGA hardware with auto design space exploration for *AdderNet*
- Host: GitHub
- URL: https://github.com/charmve/accann
- Owner: Charmve
- Created: 2020-12-11T02:46:54.000Z (about 4 years ago)
- Default Branch: main
- Last Pushed: 2023-02-25T02:45:44.000Z (almost 2 years ago)
- Last Synced: 2024-04-15T09:05:26.474Z (9 months ago)
- Topics: accelerator, addernet, asic, charmve, cnn, deep-learning, fpga, fpga-hardware, ghostnet, gpu-acceleration, hardware, hardware-acceleration, neurips, paper, verilog
- Homepage:
- Size: 395 KB
- Stars: 15
- Watchers: 2
- Forks: 1
- Open Issues: 3
Metadata Files:
- Readme: README.md
README
# AccANN
A compiler from AI model to RTL (Verilog) accelerator in FPGA hardware with auto design space exploration for AdderNet.
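To make the target workload concrete: AdderNet [1] replaces the multiply-accumulate at the heart of convolution with a negative L1 distance between the filter and the input patch, so the hardware needs only adders and subtractors. Below is a minimal NumPy sketch of that adder "convolution" (valid padding, stride 1); the function name and layout are illustrative and not taken from the AccANN codebase.

```python
import numpy as np

def adder_conv2d(x, w):
    """Adder-layer 'convolution' from AdderNet [1].

    x: (H, W, C) input feature map
    w: (kh, kw, C, T) filter bank with T output channels
    Returns (H-kh+1, W-kw+1, T); each output is the negative L1
    distance between the filter and the input patch -- additions
    and subtractions only, no multiplications.
    """
    H, W, C = x.shape
    kh, kw, _, T = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1, T))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            patch = x[m:m + kh, n:n + kw, :]  # (kh, kw, C)
            for t in range(T):
                # Y(m, n, t) = -sum |X(m+i, n+j, k) - F(i, j, k, t)|
                out[m, n, t] = -np.abs(patch - w[..., t]).sum()
    return out
```

Because the inner loop is a sum of absolute differences, each output element maps naturally onto an adder-tree processing element in RTL, which is what makes AdderNet attractive for FPGA/ASIC acceleration.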
Fig 1. Visualization of features in AdderNets and CNNs. [1]
Fig 2. Visualization of features in different neural networks on the MNIST dataset. [3]

## Community
- GitHub discussions or issues
- QQ Group: 697948168 (password: AccANN)
- Email: yidazhang#gmail.com

## Related Works
[1] AdderNet: Do We Really Need Multiplications in Deep Learning? Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu. CVPR, 2020. [[paper](https://arxiv.org/abs/1912.13200) | [code](https://github.com/huawei-noah/AdderNet)]
[2] AdderSR: Towards Energy Efficient Image Super-Resolution. Dehua Song, Yunhe Wang, Hanting Chen, Chang Xu, Chunjing Xu, Dacheng Tao. arXiv, 2020. [[paper](https://arxiv.org/abs/2009.08891) | code]
[3] ShiftAddNet: A Hardware-Inspired Deep Network. Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin. NeurIPS, 2020. [[paper](https://arxiv.org/abs/2010.12785) | [code](https://github.com/RICE-EIC/ShiftAddNet)]
[4] Kernel Based Progressive Distillation for Adder Neural Networks. Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang. NeurIPS, 2020. [[paper](https://arxiv.org/abs/2009.13044) | code]
[5] GhostNet: More Features from Cheap Operations. CVPR, 2020. [[paper](https://arxiv.org/abs/1911.11907) | [code](https://github.com/huawei-noah/ghostnet)]
[6] MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv, 2017. [[paper](https://arxiv.org/abs/1704.04861) | [code](https://github.com/Zehaos/MobileNet)]
[7] VarGNet: Variable Group Convolutional Neural Network for Efficient Embedded Computing. Qian Zhang, Jianjun Li, Meng Yao. arXiv, 2019. [[paper](https://arxiv.org/pdf/1907.05653v1.pdf) | [code](https://github.com/zma-c-137/VarGFaceNet)]
[8] And the Bit Goes Down: Revisiting the Quantization of Neural Networks. Pierre Stock, Armand Joulin, Rémi Gribonval. ICLR, 2020. [[paper](https://arxiv.org/pdf/1907.05686.pdf) | [code](https://github.com/facebookresearch/kill-the-bits)]
[9] DNNBuilder: An Automated Tool for Building High-Performance DNN Hardware Accelerators for FPGAs. [[paper](https://docs.wixstatic.com/ugd/c50250_77e06b7f02b44eacb76c05e8fbe01e08.pdf) | [code](https://github.com/IBM/AccDNN)]
[10] AdderNet and Its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence. Yunhe Wang, Mingqiang Huang, Kai Han, et al. arXiv, 2021. [[paper](https://arxiv.org/pdf/2101.10015.pdf) | code]
[11] PipeCNN: An OpenCL-Based Open-Source FPGA Accelerator for Convolution Neural Networks. Dong Wang, Ke Xu, Diankun Jiang. FPT, 2017. [[paper](https://arxiv.org/abs/1611.02450) | [code](https://github.com/doonny/PipeCNN)]