Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/XiaoMi/mace
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
deep-learning hvx machine-learning neon neural-network opencl
- Host: GitHub
- URL: https://github.com/XiaoMi/mace
- Owner: XiaoMi
- License: apache-2.0
- Created: 2018-06-27T03:50:12.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2024-05-14T07:28:25.000Z (6 months ago)
- Last Synced: 2024-05-19T14:32:18.056Z (6 months ago)
- Topics: deep-learning, hvx, machine-learning, neon, neural-network, opencl
- Language: C++
- Homepage:
- Size: 30.3 MB
- Stars: 4,881
- Watchers: 230
- Forks: 816
- Open Issues: 61
- Metadata Files:
- Readme: README.md
- License: LICENSE
- Roadmap: ROADMAP.md
Awesome Lists containing this project
- awesome - mace - MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. (C++)
- awesome-notes - mace `MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.`
- awesome-python-machine-learning-resources - GitHub · 7% open · ⏱️ 30.05.2022 (Machine Learning Frameworks)
- awesome-edge-machine-learning - https://github.com/XiaoMi/mace
- awesome-list - MACE - A deep learning inference framework optimized for mobile heterogeneous computing by XiaoMi. (Deep Learning Framework / High-Level DL APIs)
- AwesomeCppGameDev - mace
README
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![Build Status](https://travis-ci.org/XiaoMi/mace.svg?branch=master)](https://travis-ci.org/XiaoMi/mace)
[![pipeline status](https://gitlab.com/llhe/mace/badges/master/pipeline.svg)](https://gitlab.com/llhe/mace/pipelines)
[![doc build status](https://readthedocs.org/projects/mace/badge/?version=latest)](https://mace.readthedocs.io/en/latest/)

[Documentation](https://mace.readthedocs.io) |
[FAQ](https://mace.readthedocs.io/en/latest/faq.html) |
[Release Notes](RELEASE.md) |
[Roadmap](ROADMAP.md) |
[MACE Model Zoo](https://github.com/XiaoMi/mace-models) |
[Demo](examples/android) |
[Join Us](JOBS.md) |
[中文](README_zh.md)

**Mobile AI Compute Engine** (or **MACE** for short) is a deep learning inference framework optimized for
mobile heterogeneous computing on Android, iOS, Linux and Windows devices. The design focuses on the following
targets:
* Performance
  * The runtime is optimized with NEON, OpenCL and Hexagon, and the
    [Winograd algorithm](https://arxiv.org/abs/1509.09308) is used to
    speed up convolution operations (see the worked example after this list).
    Initialization is also optimized to be faster.
* Power consumption
  * Chip-dependent power options such as big.LITTLE scheduling and Adreno
    GPU hints are included as advanced APIs.
* Responsiveness
  * Guaranteeing UI responsiveness is sometimes mandatory when running a model.
    Mechanisms such as automatically breaking an OpenCL kernel into small units
    are introduced to allow better preemption by the UI rendering task.
* Memory usage and library footprint
  * Graph-level memory allocation optimization and buffer reuse are supported
    (a buffer-sharing sketch follows this list). The core library keeps
    external dependencies to a minimum to keep the library footprint small.
* Model protection
  * Model protection has been the highest priority since the beginning of
    the design. Various techniques are used, such as converting models to C++
    code and literal obfuscation.
* Platform coverage
  * Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based
    chips. The CPU runtime supports Android, iOS and Linux.
* Rich model format support
  * [TensorFlow](https://github.com/tensorflow/tensorflow),
    [Caffe](https://github.com/BVLC/caffe) and
    [ONNX](https://github.com/onnx/onnx) model formats are supported.
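The Winograd speedup referenced in the Performance bullet comes from minimal filtering: for the smallest 1-D case F(2,3) (two outputs of a 3-tap filter), the transforms below, taken from the referenced Lavin & Gray paper, compute the result with 4 multiplications instead of the naive 6. Nesting the same idea gives F(2×2, 3×3) with 16 multiplications instead of 36 per output tile:

```latex
% Winograd minimal filtering F(2,3): y = A^T [ (G g) \odot (B^T d) ],
% where g is the 3-tap filter, d the 4-element input tile, \odot elementwise.
B^T = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 1 & 0 & -1 \end{bmatrix},
\quad
G = \begin{bmatrix} 1 & 0 & 0 \\ \tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & -\tfrac{1}{2} & \tfrac{1}{2} \\ 0 & 0 & 1 \end{bmatrix},
\quad
A^T = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 0 & 1 & -1 & -1 \end{bmatrix}
```

The transforms only add cheap additions and shifts, which is why the trade pays off for the 3×3 convolutions that dominate many vision models.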
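The graph-level memory optimization mentioned under "Memory usage and library footprint" boils down to liveness-based buffer sharing: tensors whose live ranges do not overlap can occupy the same allocation. The sketch below illustrates that general technique only; it is not MACE's actual allocator, and all names (`Tensor`, `PlanPeakMemory`) are hypothetical.

```cpp
// Illustration of graph-level buffer reuse (hypothetical, not MACE's code):
// tensors whose live ranges [first_use, last_use] don't overlap can share
// one allocation, shrinking peak memory during inference.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Tensor { int first_use; int last_use; size_t bytes; };

// Greedy sharing: walk tensors in order of first use; reuse any pool slot
// whose previous occupant is already dead, otherwise open a new slot.
size_t PlanPeakMemory(std::vector<Tensor> tensors) {
  std::sort(tensors.begin(), tensors.end(),
            [](const Tensor& a, const Tensor& b) {
              return a.first_use < b.first_use;
            });
  struct Slot { int free_after; size_t bytes; };
  std::vector<Slot> slots;
  for (const Tensor& t : tensors) {
    bool reused = false;
    for (Slot& s : slots) {
      if (s.free_after < t.first_use) {        // previous occupant is dead
        s.free_after = t.last_use;
        s.bytes = std::max(s.bytes, t.bytes);  // slot grows to its largest user
        reused = true;
        break;
      }
    }
    if (!reused) slots.push_back({t.last_use, t.bytes});
  }
  size_t peak = 0;
  for (const Slot& s : slots) peak += s.bytes;
  return peak;  // bytes needed for the whole graph after sharing
}
```

Real allocators also handle alignment, in-place operators, and device-specific buffer/image storage; the point here is only the interval-sharing idea.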
## Getting Started

* [Introduction](https://mace.readthedocs.io/en/latest/introduction.html)
* [Installation](https://mace.readthedocs.io/en/latest/installation/env_requirement.html)
* [Basic Usage](https://mace.readthedocs.io/en/latest/user_guide/basic_usage.html)
* [Advanced Usage](https://mace.readthedocs.io/en/latest/user_guide/advanced_usage.html)
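To give a feel for the C++ API before diving into the guides above, the sketch below condenses the pattern from the Basic Usage guide. Exact signatures vary across MACE versions, and the model name, tensor names, and shapes are placeholders, so treat it as an outline rather than a drop-in implementation.

```cpp
// Outline of MACE's C++ inference flow (after the Basic Usage guide).
// NOTE: signatures vary across MACE versions; names/shapes are placeholders.
#include <map>
#include <memory>
#include <string>
#include <vector>

#include "mace/public/mace.h"

int main() {
  // 1. Configure the target runtime (CPU, GPU, or HEXAGON).
  mace::MaceEngineConfig config(mace::DeviceType::GPU);

  // 2. Declare the model's input and output tensor names.
  std::vector<std::string> input_names = {"input"};    // placeholder
  std::vector<std::string> output_names = {"output"};  // placeholder

  // 3. Create the engine from a model converted to code (the "model
  //    protection" path; a FromProto variant loads model files instead).
  std::shared_ptr<mace::MaceEngine> engine;
  mace::MaceStatus status = mace::CreateMaceEngineFromCode(
      "mobilenet_v1",  // placeholder model name chosen at conversion time
      nullptr, 0,      // weights pointer/size; unused when weights are in code
      input_names, output_names, config, &engine);
  if (status != mace::MaceStatus::MACE_SUCCESS) return 1;

  // 4. Wrap caller-owned float buffers as MaceTensor for inputs and outputs.
  std::vector<int64_t> in_shape = {1, 224, 224, 3};  // placeholder NHWC shape
  std::vector<int64_t> out_shape = {1, 1001};        // placeholder
  auto in_buf = std::shared_ptr<float>(new float[1 * 224 * 224 * 3],
                                       std::default_delete<float[]>());
  auto out_buf = std::shared_ptr<float>(new float[1001],
                                        std::default_delete<float[]>());
  std::map<std::string, mace::MaceTensor> inputs = {
      {input_names[0], mace::MaceTensor(in_shape, in_buf)}};
  std::map<std::string, mace::MaceTensor> outputs = {
      {output_names[0], mace::MaceTensor(out_shape, out_buf)}};

  // 5. Run inference; results land in out_buf.
  engine->Run(inputs, &outputs);
  return 0;
}
```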
## Performance

[MACE Model Zoo](https://github.com/XiaoMi/mace-models) contains
several common neural networks and models, which are built daily and benchmarked
against a list of mobile phones. The benchmark results can be found on
[the CI results page](https://gitlab.com/llhe/mace-models/pipelines)
(choose the latest passed pipeline and click the *release* step to see the benchmark results).
For comparison results against other frameworks, take a look at the
[MobileAIBench](https://github.com/XiaoMi/mobile-ai-bench) project.

## Communication
* GitHub issues: bug reports, usage issues, feature requests
* Slack: [mace-users.slack.com](https://join.slack.com/t/mace-users/shared_invite/enQtMzkzNjM3MzMxODYwLTAyZTAzMzQyNjc0ZGI5YjU3MjI1N2Q2OWI1ODgwZjAwOWVlNzFlMjFmMTgwYzhjNzU4MDMwZWQ1MjhiM2Y4OTE)
* QQ group: 756046893

## Contributing
Any kind of contribution is welcome. For bug reports and feature requests,
please just open an issue without any hesitation. For code contributions, it is
strongly suggested to open an issue for discussion first. For more details,
please refer to [the contribution guide](https://mace.readthedocs.io/en/latest/development/contributing.html).

## License
[Apache License 2.0](LICENSE).

## Acknowledgement
MACE depends on several open source projects located in the
[third_party](third_party) directory. In particular, we learned a lot from
the following projects during development:
* [Qualcomm Hexagon NN Offload Framework](https://developer.qualcomm.com/software/hexagon-dsp-sdk): the Hexagon DSP runtime
depends on this library.
* [TensorFlow](https://github.com/tensorflow/tensorflow),
[Caffe](https://github.com/BVLC/caffe),
[SNPE](https://developer.qualcomm.com/software/snapdragon-neural-processing-engine-ai),
[ARM ComputeLibrary](https://github.com/ARM-software/ComputeLibrary),
[ncnn](https://github.com/Tencent/ncnn),
[ONNX](https://github.com/onnx/onnx) and many others: we learned many best
practices from these projects.

Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams for
their help.