# Awesome EMDL

Embedded and mobile deep learning research notes.

## Papers

### Survey

1. [EfficientDNNs](https://github.com/MingSun-Tse/EfficientDNNs) [Repo]
1. [Awesome ML Model Compression](https://github.com/cedrickchee/awesome-ml-model-compression) [Repo]
1. [TinyML Papers and Projects](https://github.com/gigwegbe/tinyml-papers-and-projects) [Repo]
1. [TinyML Platforms Benchmarking](https://arxiv.org/abs/2112.01319) [arXiv '21]
1. [TinyML: A Systematic Review and Synthesis of Existing Research](https://ieeexplore.ieee.org/abstract/document/9722636) [ICAIIC '21]
1. [TinyML Meets IoT: A Comprehensive Survey](https://www.sciencedirect.com/science/article/abs/pii/S2542660521001025) [Internet of Things '21]
1. [A review on TinyML: State-of-the-art and prospects](https://www.sciencedirect.com/science/article/pii/S1319157821003335) [Journal of King Saud Univ. '21]
1. [TinyML Benchmark: Executing Fully Connected Neural Networks on Commodity Microcontrollers](https://aran.library.nuigalway.ie/handle/10379/16770) [IEEE '21]
1. [Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better](https://arxiv.org/abs/2106.08962) [arXiv '21]
1. [Benchmarking TinyML Systems: Challenges and Direction](https://arxiv.org/abs/2003.04821) [arXiv '20]
1. [Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey](https://ieeexplore.ieee.org/abstract/document/9043731) [IEEE '20]
1. [The Deep Learning Compiler: A Comprehensive Survey](https://arxiv.org/abs/2002.03794) [arXiv '20]
1. [Recent Advances in Efficient Computation of Deep Convolutional Neural Networks](https://arxiv.org/abs/1802.00939) [arXiv '18]
1. [A Survey of Model Compression and Acceleration for Deep Neural Networks](https://arxiv.org/abs/1710.09282) [arXiv '17]

### Model

1. [EtinyNet: Extremely Tiny Network for TinyML](https://www.aaai.org/AAAI22Papers/AAAI-4889.XuK.pdf) [AAAI '22]
1. [MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning](https://arxiv.org/abs/2110.15352) [NeurIPS '21, MIT]
1. [SkyNet: a Hardware-Efficient Method for Object Detection and Tracking on Embedded Systems](https://proceedings.mlsys.org/papers/2020/86) [MLSys '20, IBM]
1. [Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets](https://arxiv.org/abs/2010.14819) [NeurIPS '20, Huawei]
1. [MCUNet: Tiny Deep Learning on IoT Devices](https://arxiv.org/abs/2007.10319) [NeurIPS '20, MIT]
1. [GhostNet: More Features from Cheap Operations](https://arxiv.org/abs/1911.11907) [CVPR '20, Huawei]
1. [MicroNet for Efficient Language Modeling](https://arxiv.org/abs/2005.07877) [NeurIPS '19, MIT]
1. [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244) [ICCV '19, Google]
1. [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/pdf/1801.04381.pdf) [CVPR '18, Google]
1. [ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware](https://arxiv.org/abs/1812.00332) [arXiv '18, MIT]
1. [DeepRebirth: Accelerating Deep Neural Network Execution on Mobile Devices](https://arxiv.org/abs/1708.04728) [AAAI'18, Samsung]
1. [NASNet: Learning Transferable Architectures for Scalable Image Recognition](https://arxiv.org/pdf/1707.07012.pdf) [arXiv '17, Google]
1. [ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](https://arxiv.org/abs/1707.01083) [arXiv '17, Megvii]
1. [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) [arXiv '17, Google]
1. [CondenseNet: An Efficient DenseNet using Learned Group Convolutions](https://arxiv.org/abs/1711.09224) [arXiv '17]
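
Several of the models above (MobileNets, MobileNetV2, ShuffleNet) are built around depthwise separable convolutions. As a rough illustration of that building block, here is a minimal Keras sketch; it is a generic example rather than code from any of the listed papers, and the layer sizes are arbitrary:

```python
import tensorflow as tf

# Minimal sketch: a depthwise separable block in the MobileNet style.
# A standard KxK convolution mapping C_in -> C_out channels costs about
# K*K*C_in*C_out multiplies per output position; factoring it into a depthwise
# KxK conv (K*K*C_in) plus a pointwise 1x1 conv (C_in*C_out) cuts that cost
# by roughly a factor of K*K when C_out is large.
def depthwise_separable_block(x, out_channels, stride=1):
    x = tf.keras.layers.DepthwiseConv2D(kernel_size=3, strides=stride,
                                        padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    x = tf.keras.layers.Conv2D(out_channels, kernel_size=1, use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = depthwise_separable_block(inputs, out_channels=64, stride=2)
tf.keras.Model(inputs, outputs).summary()
```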

### System

1. [BSC: Block-based Stochastic Computing to Enable Accurate and Efficient TinyML](https://arxiv.org/pdf/2111.06686.pdf) [ASP-DAC '22]
1. [CFU Playground: Full-Stack Open-Source Framework for Tiny Machine Learning (tinyML) Acceleration on FPGAs](https://arxiv.org/abs/2201.01863) [arXiv '22, Google]
1. [UDC: Unified DNAS for Compressible TinyML Models](https://arxiv.org/abs/2201.05842) [arXiv '22, Arm]
1. [AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator](https://arxiv.org/abs/2111.06503) [arXiv '21, Arm]
1. [TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning](https://arxiv.org/abs/2007.11622) [NeurIPS '20, MIT]
1. [Once for All: Train One Network and Specialize it for Efficient Deployment](https://arxiv.org/abs/1908.09791) [ICLR '20, MIT]
1. [DeepMon: Mobile GPU-based Deep Learning Framework for Continuous Vision Applications](https://www.sigmobile.org/mobisys/2017/accepted.php) [MobiSys '17]
1. [DeepEye: Resource Efficient Local Execution of Multiple Deep Vision Models using Wearable Commodity Hardware](http://fahim-kawsar.net/papers/Mathur.MobiSys2017-Camera.pdf) [MobiSys '17]
1. [MobiRNN: Efficient Recurrent Neural Network Execution on Mobile GPU](https://arxiv.org/abs/1706.00878) [EMDL '17]
1. [fpgaConvNet: A Toolflow for Mapping Diverse Convolutional Neural Networks on Embedded FPGAs](https://arxiv.org/abs/1711.08740) [NIPS '17]
1. [DeepSense: A GPU-based deep convolutional neural network framework on commodity mobile devices](http://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=4278&context=sis_research) [WearSys '16]
1. [DeepX: A Software Accelerator for Low-Power Deep Learning Inference on Mobile Devices](http://niclane.org/pubs/deepx_ipsn.pdf) [IPSN '16]
1. [EIE: Efficient Inference Engine on Compressed Deep Neural Network](https://arxiv.org/abs/1602.01528) [ISCA '16]
1. [MCDNN: An Approximation-Based Execution Framework for Deep Stream Processing Under Resource Constraints](http://haneul.github.io/papers/mcdnn.pdf) [MobiSys '16]
1. [DXTK: Enabling Resource-efficient Deep Learning on Mobile and Embedded Devices with the DeepX Toolkit](http://niclane.org/pubs/dxtk_mobicase.pdf) [MobiCASE '16]
1. [Sparsification and Separation of Deep Learning Layers for Constrained Resource Inference on Wearables](http://niclane.org/pubs/sparsesep_sensys.pdf) [SenSys ’16]
1. [An Early Resource Characterization of Deep Learning on Wearables, Smartphones and Internet-of-Things Devices](http://niclane.org/pubs/iotapp15_early.pdf) [IoT-App ’15]
1. [CNNdroid: GPU-Accelerated Execution of Trained Deep Convolutional Neural Networks on Android](https://arxiv.org/abs/1511.07376) [MM '16]

### Quantization

1. [Quantizing deep convolutional networks for efficient inference: A whitepaper](https://arxiv.org/abs/1806.08342) [arXiv '18]
1. [LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks](https://arxiv.org/pdf/1807.10029.pdf) [ECCV'18]
1. [Training and Inference with Integers in Deep Neural Networks](https://openreview.net/forum?id=HJGXzmspb) [ICLR'18]
1. [The ZipML Framework for Training Models with End-to-End Low Precision: The Cans, the Cannots, and a Little Bit of Deep Learning](https://arxiv.org/abs/1611.05402) [ICML'17]
1. [Loss-aware Binarization of Deep Networks](https://arxiv.org/abs/1611.01600) [ICLR'17]
1. [Towards the Limit of Network Quantization](https://arxiv.org/abs/1612.01543) [ICLR'17]
1. [Deep Learning with Low Precision by Half-wave Gaussian Quantization](https://arxiv.org/abs/1702.00953) [CVPR'17]
1. [ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks](https://arxiv.org/abs/1706.02393) [arXiv'17]
1. [Quantized Convolutional Neural Networks for Mobile Devices](https://arxiv.org/abs/1512.06473) [CVPR '16]
1. [Fixed-Point Performance Analysis of Recurrent Neural Networks](https://arxiv.org/abs/1512.01322) [ICASSP'16]
1. [Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations](https://arxiv.org/abs/1609.07061) [arXiv'16]
1. [Compressing Deep Convolutional Networks using Vector Quantization](https://arxiv.org/abs/1412.6115) [arXiv'14]
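
Most of the papers above build on some form of uniform quantization. Below is a minimal NumPy sketch of symmetric 8-bit weight quantization, intended as a pedagogical toy rather than any particular paper's scheme (per-channel scales, zero points, and quantization-aware training are the usual refinements):

```python
import numpy as np

# Minimal sketch: uniform symmetric int8 weight quantization.
# Map float weights onto int8 with a single per-tensor scale,
# then dequantize to measure the approximation error.
def quantize_int8(w: np.ndarray):
    scale = np.max(np.abs(w)) / 127.0 if np.any(w) else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)   # stand-in for a layer's weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("mean abs quantization error:", np.mean(np.abs(w - w_hat)))
```

Inference frameworks such as TensorFlow Lite (see Libraries below) automate variants of this idea through post-training quantization and quantization-aware training.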

### Pruning

1. [Awesome-Pruning](https://github.com/he-y/Awesome-Pruning) [Repo]
1. [Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration](https://arxiv.org/abs/1811.00250) [CVPR'19]
1. [To prune, or not to prune: exploring the efficacy of pruning for model compression](https://arxiv.org/abs/1710.01878) [ICLR'18]
1. [Pruning Filters for Efficient ConvNets](https://arxiv.org/abs/1608.08710) [ICLR'17]
1. [Pruning Convolutional Neural Networks for Resource Efficient Inference](https://arxiv.org/abs/1611.06440) [ICLR'17]
1. [Soft Weight-Sharing for Neural Network Compression](https://arxiv.org/abs/1702.04008) [ICLR'17]
1. [Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning](https://arxiv.org/abs/1611.05128) [CVPR'17]
1. [ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression](https://arxiv.org/abs/1707.06342) [ICCV'17]
1. [Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding](https://arxiv.org/abs/1510.00149) [ICLR'16]
1. [Dynamic Network Surgery for Efficient DNNs](https://arxiv.org/abs/1608.04493) [NIPS'16]
1. [Learning both Weights and Connections for Efficient Neural Networks](https://arxiv.org/abs/1506.02626) [NIPS'15]
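
For orientation, here is a minimal NumPy sketch of unstructured magnitude pruning, the baseline idea behind several of the papers above; it is a generic illustration, and the actual methods differ in what they prune, when, and how they fine-tune:

```python
import numpy as np

# Minimal sketch: unstructured magnitude pruning.
# Zero out the smallest-magnitude weights to reach a target sparsity,
# keeping a mask so pruned weights stay zero during fine-tuning.
def magnitude_prune(w: np.ndarray, sparsity: float):
    k = int(sparsity * w.size)               # number of weights to remove
    threshold = np.sort(np.abs(w), axis=None)[k] if k > 0 else 0.0
    mask = (np.abs(w) >= threshold).astype(w.dtype)
    return w * mask, mask

w = np.random.randn(512, 512).astype(np.float32)
w_pruned, mask = magnitude_prune(w, sparsity=0.9)
print("remaining nonzero fraction:", mask.mean())
```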

### Approximation

1. [High performance ultra-low-precision convolutions on mobile devices](https://arxiv.org/abs/1712.02427) [NIPS'17]
1. [Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications](https://arxiv.org/abs/1511.06530) [ICLR'16]
1. [Efficient and Accurate Approximations of Nonlinear Convolutional Networks](https://arxiv.org/abs/1411.4229) [CVPR'15]
1. [Accelerating Very Deep Convolutional Networks for Classification and Detection](https://arxiv.org/abs/1505.06798) [arXiv'15] (extended version of the entry above)
1. [Convolutional neural networks with low-rank regularization](https://arxiv.org/abs/1511.06067) [arXiv'15]
1. [Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation](https://arxiv.org/abs/1404.0736) [NIPS'14]
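
The common core of the low-rank approaches above is replacing a large weight matrix with a product of two thin factors. Here is a minimal NumPy sketch using a truncated SVD; the listed papers add layer-wise reconstruction objectives, nonlinearity-aware fitting, and fine-tuning on top:

```python
import numpy as np

# Minimal sketch: low-rank approximation of a dense layer's weight matrix.
# Factor W (m x n) into U_r (m x r) and V_r (r x n); for r << min(m, n) this
# replaces one m*n matmul with an m*r and an r*n matmul.
def low_rank_factor(w: np.ndarray, rank: int):
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    u_r = u[:, :rank] * s[:rank]          # absorb singular values into U
    v_r = vt[:rank, :]
    return u_r, v_r

w = np.random.randn(1024, 1024).astype(np.float32)
u_r, v_r = low_rank_factor(w, rank=64)
w_hat = u_r @ v_r
print("relative error:", np.linalg.norm(w - w_hat) / np.linalg.norm(w))
print("parameter ratio:", (u_r.size + v_r.size) / w.size)
```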

### Characterization

1. [A First Look at Deep Learning Apps on Smartphones](https://arxiv.org/abs/1812.05448) [WWW'19]
1. [Machine Learning at Facebook: Understanding Inference at the Edge](https://research.fb.com/publications/machine-learning-at-facebook-understanding-inference-at-the-edge/) [HPCA'19]
1. [NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications](https://arxiv.org/abs/1804.03230) [ECCV '18]
1. [Latency and Throughput Characterization of Convolutional Neural Networks for Mobile Computer Vision](https://arxiv.org/abs/1803.09492) [MMSys’18]

## Libraries

### Inference Framework

1. [Alibaba - MNN](https://github.com/alibaba/MNN) - is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba.
1. [Apple - CoreML](https://developer.apple.com/documentation/coreml) - integrates machine learning models into your app. [BERT and GPT-2 on iPhone](https://github.com/huggingface/swift-coreml-transformers)
1. [Arm - ComputeLibrary](https://github.com/ARM-software/ComputeLibrary) - is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies. [Intro](https://developer.arm.com/technologies/compute-library)
1. [Arm - Arm NN](https://github.com/ARM-software/armnn) - is a high-performance machine learning (ML) inference engine for Android and Linux, accelerating ML on Arm Cortex-A CPUs and Arm Mali GPUs.
1. [Baidu - Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite) - is a multi-platform, high-performance deep learning inference engine.
1. [DeepLearningKit](https://github.com/DeepLearningKit/DeepLearningKit) - is an open-source deep learning framework for Apple's iOS, OS X and tvOS.
1. [Edge Impulse](https://edgeimpulse.com) - an interactive platform for generating models that can run on microcontrollers. The team is also quite active on social networks, sharing recent EdgeAI/TinyML news.
1. [Google - TensorFlow Lite](https://www.tensorflow.org/lite/performance/gpu) - is an open source deep learning framework for on-device inference.
1. [Intel - OpenVINO](https://github.com/openvinotoolkit/openvino) - a comprehensive toolkit for optimizing and deploying deep learning models for faster inference, primarily on Intel hardware.
1. [JDAI Computer Vision - dabnn](https://github.com/JDAI-CV/dabnn) - is an accelerated binary neural network inference framework for mobile platforms.
1. [Meta - PyTorch Mobile](https://pytorch.org/mobile/home) - is a new framework for helping mobile developers and machine learning engineers embed PyTorch ML models on-device.
1. [Microsoft - DeepSpeed](https://github.com/microsoft/DeepSpeed) - is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
1. [Microsoft - ELL](https://github.com/Microsoft/ELL) - allows you to design and deploy intelligent machine-learned models onto resource constrained platforms and small single-board computers, like Raspberry Pi, Arduino, and micro:bit.
1. [Microsoft - ONNX Runtime](https://github.com/microsoft/onnxruntime) - a cross-platform, high-performance ML inference and training accelerator.
1. [Nvidia - TensorRT](https://github.com/NVIDIA/TensorRT) - is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
1. [OAID - Tengine](https://github.com/OAID/Tengine) - is a lightweight, high-performance, modular inference engine for embedded devices.
1. [Qualcomm - Neural Processing SDK for AI](https://developer.qualcomm.com/software/qualcomm-neural-processing-sdk) - libraries that let developers run NN models on Snapdragon mobile platforms, taking advantage of the CPU, GPU and/or DSP.
1. [Tencent - ncnn](https://github.com/Tencent/ncnn) - is a high-performance neural network inference framework optimized for the mobile platform.
1. [uTensor](https://github.com/uTensor/uTensor) - AI inference library based on mbed (an RTOS for ARM chipsets) and TensorFlow.
1. [XiaoMi - Mace](https://github.com/XiaoMi/mace) - is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
1. [xmartlabs - Bender](https://github.com/xmartlabs/Bender) - Easily craft fast Neural Networks on iOS! Use TensorFlow models. Metal under the hood.
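
For a sense of what these runtimes look like from application code, here is a minimal ONNX Runtime sketch (ONNX Runtime is listed above); the model path, input name, and input shape are placeholders that depend on the exported model:

```python
import numpy as np
import onnxruntime as ort

# Minimal sketch: running an exported model with ONNX Runtime.
# "model.onnx" is a placeholder path; input name and shape come from the model.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_meta = session.get_inputs()[0]
x = np.random.randn(1, 3, 224, 224).astype(np.float32)  # example NCHW image batch
outputs = session.run(None, {input_meta.name: x})
print("output shapes:", [o.shape for o in outputs])
```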

### Optimization Tools

1. [Neural Network Distiller](https://github.com/NervanaSystems/distiller) - Python package for neural network compression research.
1. [PocketFlow](https://github.com/Tencent/PocketFlow) - An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.

### Research Demos

1. [RSTensorFlow](https://nesl.github.io/RSTensorFlow) - GPU Accelerated TensorFlow for Commodity Android Devices.

### Web

1. [mil-tokyo/webdnn](https://github.com/mil-tokyo/webdnn) - Fastest DNN Execution Framework on Web Browser.

## General

1. [Caffe2 AICamera](https://github.com/bwasti/AICamera)
1. [TensorFlow Android Camera Demo](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android)
1. [TensorFlow iOS Example](https://github.com/hollance/TensorFlow-iOS-Example)
1. [TensorFlow OpenMV Camera Module](https://github.com/openmv/openmv)

### Edge / Tiny MLOps

1. [Tiny-MLOps: a framework for orchestrating ML applications at the far edge of IoT systems](https://ieeexplore.ieee.org/abstract/document/9787703/authors#authors) [EAIS '22]
1. [MLOps for TinyML: Challenges & Directions in Operationalizing TinyML at Scale](https://cms.tinyml.org/wp-content/uploads/talks2022/tinyML_Talks_Vijay_Janapa_Reddi_220524.pdf) [TinyML Talks '22]
1. [TinyMLOps: Operational Challenges for Widespread Edge AI Adoption](https://arxiv.org/pdf/2203.10923.pdf) [arXiv '22]
1. [A TinyMLaaS Ecosystem for Machine Learning in IoT: Overview and Research Challenges](https://ieeexplore.ieee.org/document/9427352) [VLSI-DAT '21]
1. [SOLIS: The MLOps journey from data acquisition to actionable insights](https://arxiv.org/abs/2112.11925) [arXiv '21]
1. [Edge MLOps: An Automation Framework for AIoT Applications](https://www.computer.org/csdl/proceedings-article/ic2e/2021/497000a191/1yJZ8cHPTkQ) [IC2E '21]
1. [SensiX++: Bringing MLOPs and Multi-tenant Model Serving to Sensory Edge Devices](https://arxiv.org/abs/2109.03947) [arXiv '21, Nokia]

### Vulkan

1. [Vulkan API Examples and Demos](https://github.com/SaschaWillems/Vulkan)
1. [Neural Machine Translation on Android](https://github.com/harvardnlp/nmt-android)

### OpenCL

1. [DeepMon](https://github.com/JC1DA/DeepMon)

### RenderScript

1. [Mobile_ConvNet: RenderScript CNN for Android](https://github.com/mtmd/Mobile_ConvNet)

## Tutorials

### General

1. [Squeezing Deep Learning Into Mobile Phones](https://www.slideshare.net/anirudhkoul/squeezing-deep-learning-into-mobile-phones)
1. [Deep Learning – Tutorial and Recent Trends](https://www.dropbox.com/s/p7lvelt0aihrwtl/FPGA%2717%20tutorial%20Song%20Han.pdf?dl=0)
1. [Tutorial on Hardware Architectures for Deep Neural Networks](http://eyeriss.mit.edu/tutorial.html)
1. [Efficient Convolutional Neural Network Inference on Mobile GPUs](https://www.slideshare.net/embeddedvision/efficient-convolutional-neural-network-inference-on-mobile-gpus-a-presentation-from-imagination-technologies)

### NEON

1. [NEON™ Programmer’s Guide](https://developer.arm.com/docs/den0018/latest/neontm-version-10-programmers-guide)

### OpenCL

1. [ARM® Mali™ GPU OpenCL Developer Guide](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.100614_0303_00_en/ada1432742770595.html), [pdf](http://infocenter.arm.com/help/topic/com.arm.doc.100614_0303_00_en/arm_mali_gpu_opencl_developer_guide_100614_0303_00_en.pdf)
1. [Optimal Compute on ARM Mali™ GPUs](http://www.cs.bris.ac.uk/home/simonm/montblanc/OpenCL_on_Mali.pdf)
1. [GPU Compute for Mobile Devices](http://www.iwocl.org/wp-content/uploads/iwocl-2014-workshop-Tim-Hartley.pdf)
1. [Compute for Mobile Devices Performance focused](http://kesen.realtimerendering.com/Compute_for_Mobile_Devices5.pdf)
1. [Hands On OpenCL](https://handsonopencl.github.io/)
1. [Adreno OpenCL Programming Guide](https://developer.qualcomm.com/download/adrenosdk/adreno-opencl-programming-guide.pdf)
1. [Better OpenCL Performance on Qualcomm Adreno GPU](https://developer.qualcomm.com/blog/better-opencl-performance-qualcomm-adreno-gpu-memory-optimization)

## Courses

1. [UW Deep Learning Systems](http://dlsys.cs.washington.edu/schedule)
1. [Berkeley Machine Learning Systems](https://ucbrise.github.io/cs294-ai-sys-fa19/)

## Tools

### GPU

1. [Bifrost GPU architecture and ARM Mali-G71 GPU](https://www.hotchips.org/wp-content/uploads/hc_archives/hc28/HC28.22-Monday-Epub/HC28.22.10-GPU-HPC-Epub/HC28.22.110-Bifrost-JemDavies-ARM-v04-9.pdf)
1. [Midgard GPU Architecture](http://malideveloper.arm.com/downloads/ARM_Game_Developer_Days/PDFs/2-Mali-GPU-architecture-overview-and-tile-local-storage.pdf), [ARM Mali-T880 GPU](https://www.hotchips.org/wp-content/uploads/hc_archives/hc27/HC27.25-Tuesday-Epub/HC27.25.50-GPU-Epub/HC27.25.531-Mali-T880-Bratt-ARM-2015_08_23.pdf)
1. [Mobile GPU market share](https://hwstats.unity3d.com/mobile/gpu.html)

### Driver

1. [Adreno] [csarron/qcom_vendor_binaries: Common Proprietary Qualcomm Binaries](https://github.com/csarron/qcom_vendor_binaries)
1. [Mali] [Fevax/vendor_samsung_hero2ltexx: Blobs from s7 Edge G935F](https://github.com/Fevax/vendor_samsung_hero2ltexx)

## Related Repos
+ [EfficientDNNs](https://github.com/MingSun-Tse/EfficientDNNs) by @MingSun-Tse ![GitHub stars](https://img.shields.io/github/stars/MingSun-Tse/EfficientDNNs?style=social) ![GitHub last commit](https://img.shields.io/github/last-commit/MingSun-Tse/EfficientDNNs.svg)
+ [Awesome ML Model Compression](https://github.com/cedrickchee/awesome-ml-model-compression) by @cedrickchee ![GitHub stars](https://img.shields.io/github/stars/cedrickchee/awesome-ml-model-compression?style=social) ![GitHub last commit](https://img.shields.io/github/last-commit/cedrickchee/awesome-ml-model-compression.svg)
+ [Awesome Pruning](https://github.com/he-y/Awesome-Pruning) by @he-y ![GitHub stars](https://img.shields.io/github/stars/he-y/Awesome-Pruning?style=social) ![GitHub last commit](https://img.shields.io/github/last-commit/he-y/Awesome-Pruning.svg)
+ [Model Compression](https://github.com/j-marple-dev/model_compression) by @j-marple-dev ![GitHub stars](https://img.shields.io/github/stars/j-marple-dev/model_compression?style=social) ![GitHub last commit](https://img.shields.io/github/last-commit/j-marple-dev/model_compression.svg)
+ [awesome-AutoML-and-Lightweight-Models](https://github.com/guan-yuan/awesome-AutoML-and-Lightweight-Models) by @guan-yuan ![GitHub stars](https://img.shields.io/github/stars/guan-yuan/awesome-AutoML-and-Lightweight-Models?style=social) ![GitHub last commit](https://img.shields.io/github/last-commit/guan-yuan/awesome-AutoML-and-Lightweight-Models.svg)
+ [knowledge-distillation-papers](https://github.com/lhyfst/knowledge-distillation-papers) by @lhyfst ![GitHub stars](https://img.shields.io/github/stars/lhyfst/knowledge-distillation-papers?style=social) ![GitHub last commit](https://img.shields.io/github/last-commit/lhyfst/knowledge-distillation-papers.svg)
+ [Awesome-model-compression-and-acceleration](https://github.com/memoiry/Awesome-model-compression-and-acceleration) by @memoiry ![GitHub stars](https://img.shields.io/github/stars/memoiry/Awesome-model-compression-and-acceleration?style=social) ![GitHub last commit](https://img.shields.io/github/last-commit/memoiry/Awesome-model-compression-and-acceleration.svg)
+ [Embedded Neural Network](https://github.com/ZhishengWang/Embedded-Neural-Network) by @ZhishengWang ![GitHub stars](https://img.shields.io/github/stars/ZhishengWang/Embedded-Neural-Network?style=social) ![GitHub last commit](https://img.shields.io/github/last-commit/ZhishengWang/Embedded-Neural-Network.svg)