https://github.com/megengine/megcc

MegCC is a deep learning model compiler with an extremely lightweight runtime, high performance, and easy portability.
- Host: GitHub
- URL: https://github.com/megengine/megcc
- Owner: MegEngine
- License: apache-2.0
- Created: 2022-09-27T03:07:53.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2024-10-23T12:55:06.000Z (2 months ago)
- Last Synced: 2024-12-14T20:01:56.791Z (11 days ago)
- Topics: arm, compiler, deep-learning, high-performance, machine-learning, mlir, model-compiler
- Language: C++
- Homepage:
- Size: 63.1 MB
- Stars: 474
- Watchers: 18
- Forks: 57
- Open Issues: 9
Metadata Files:
- Readme: README.md
- License: LICENSE
[Chinese README](./README_ZH_CN.md)
## What is MegCC
MegCC is a deep-learning model compiler with the following features:
* **Extremely Lightweight Runtime**: Only the computation kernels required by your model are kept in the binary, e.g. an **81KB** runtime for MobileNet v1
* **High Performance**: Every operation is carefully optimized by experts
* **Portable**: generate nothing but computation code, easy to compile and use on Linux, Android, TEE, BareMetal
* **Low Memory Usage, Instant Boot**: Model optimization and memory planning are performed at compile time, yielding state-of-the-art memory usage with no extra CPU spent during inference

## MegCC Structure
![megcc_struct](doc/picture/megcc.png)

The MegCC compiler is built on the MLIR infrastructure. Most of the code it generates is optimized by hand. MegCC supports neural networks containing tensors of both static and dynamic shapes.
To help achieve the minimum binary size, it can also generate the necessary CV operators, so you don't need to link another giant CV library.

When compiling a model:
* MegCC generates both the kernels used by the model and user-required CV kernels
* MegCC does several optimizations, such as static memory planning and model optimization
* MegCC dumps the data above into the final model

The MegCC runtime loads the model and uses the generated kernels to perform inference. Only 81KB of binary is needed to run inference on MobileNet v1 (in fp32).
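The static memory planning mentioned above can be illustrated with a toy planner. This is only a sketch of the general idea, not MegCC's actual algorithm: given each intermediate tensor's lifetime and size, assign fixed offsets inside one arena at compile time, so tensors whose lifetimes do not overlap can share the same memory.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// One intermediate tensor: alive during ops [first_use, last_use].
struct Tensor { int first_use, last_use; size_t size; size_t offset = 0; };

// Greedy planner: place each tensor at the lowest offset that does not
// collide with any already-placed tensor whose lifetime overlaps.
// Returns the total arena size needed.
size_t plan(std::vector<Tensor>& ts) {
    size_t arena = 0;
    for (size_t i = 0; i < ts.size(); ++i) {
        // Collect occupied [offset, offset + size) ranges of live neighbours.
        std::vector<std::pair<size_t, size_t>> busy;
        for (size_t j = 0; j < i; ++j)
            if (ts[j].first_use <= ts[i].last_use &&
                ts[i].first_use <= ts[j].last_use)
                busy.emplace_back(ts[j].offset, ts[j].offset + ts[j].size);
        std::sort(busy.begin(), busy.end());
        size_t off = 0;
        for (auto& [lo, hi] : busy)
            if (off + ts[i].size > lo && off < hi) off = hi;  // bump past conflict
        ts[i].offset = off;
        arena = std::max(arena, off + ts[i].size);
    }
    return arena;
}
```

Because the plan is fixed at compile time, the runtime only needs a single allocation for the whole arena and spends no CPU on memory management during inference.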
MegCC supports the Arm64/ArmV7/X86/BareMetal backends. You may want to check the [supported operator list](doc/opr.md).
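The compile-then-run flow described above can be sketched schematically. The names below (`kKernels`, `run`) are purely illustrative and are not MegCC's real runtime API: the idea is that the compiler emits a small set of plain kernel functions plus an ordered op sequence, and the runtime simply walks that sequence.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical shape of the generated code: each kernel is a plain
// function, and the dumped model is reduced here to an ordered list of
// kernel names operating on one tensor (a float vector, for simplicity).
using Kernel = std::function<void(std::vector<float>&)>;

// Only the kernels the model actually uses would be emitted and linked,
// which is what keeps the runtime binary so small.
std::map<std::string, Kernel> kKernels = {
    {"relu",   [](std::vector<float>& t) { for (auto& v : t) if (v < 0) v = 0; }},
    {"scale2", [](std::vector<float>& t) { for (auto& v : t) v *= 2; }},
};

// The runtime's job: walk the op sequence and invoke each generated kernel.
void run(const std::vector<std::string>& ops, std::vector<float>& tensor) {
    for (const auto& name : ops) kKernels.at(name)(tensor);
}
```

The real interface is documented in [how-to-use](doc/how-to-use.md).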
## Documentation
#### Get MegCC
* Download the release compiler suite from the [release page](https://github.com/MegEngine/MegCC/releases)
* To build the compiler from source, please follow the [compiler doc](compiler/README.md)
* To build the release tar, please follow the [release doc](doc/how-to-release.md)
* For benchmarks of different models, please refer to [benchmark](benchmark/README.md)

#### How to use MegCC
* Read [how-to-use](doc/how-to-use.md) to see how to compile your models and deploy them; there is also a Chinese doc: [如何使用](doc/how-to-use-chinese.md).
* The MegCC runtime runs easily on a standard OS, or even with no OS at all ([example](runtime/example/README.md)).

## License
MegCC is licensed under the Apache License, Version 2.0.
**Thanks a lot, please enjoy it**