
awesome-ai-edge-computing

A curated list of awesome tools, frameworks, libraries, and resources for running AI models on edge devices, including smartphones, IoT devices, embedded systems, and hardware accelerators. Edge AI focuses on processing data locally on the device, reducing latency and enhancing privacy.
https://github.com/awesomelistsio/awesome-ai-edge-computing


  • Frameworks and Libraries

    • DeepC - A framework for deploying deep learning models on microcontrollers and edge devices with limited resources.
    • ONNX Runtime - A cross-platform, high-performance inference engine for running ONNX models on edge devices (a minimal inference sketch follows this list).
    • Apache TVM - An open-source deep learning compiler stack for running machine learning models on edge devices.
    • TensorFlow Lite - A lightweight version of TensorFlow designed for mobile and embedded devices.
    • Edge Impulse SDK - A toolkit for building, optimizing, and deploying machine learning models on edge devices.
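As a quick illustration of how lightweight these runtimes are to use, below is a minimal ONNX Runtime inference sketch in Python. The model file name and input shape are placeholders, not part of the list above; the call pattern (InferenceSession, get_inputs, run) is the standard ONNX Runtime Python API.

```python
# Minimal ONNX Runtime inference sketch for an edge device (CPU only).
# "model.onnx" and the 1x3x224x224 input shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy input standing in for a preprocessed camera frame or sensor reading.
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_input})
print("Output shape:", outputs[0].shape)
```

The same pattern works with hardware-specific execution providers where available; only the providers argument changes.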
  • Deployment Platforms

    • Google Cloud IoT Edge - A service by Google for running AI inference on edge devices using TensorFlow Lite and Edge TPU.
    • AWS IoT Greengrass - A service for running local compute, messaging, data caching, sync, and ML inference on edge devices.
    • EdgeX Foundry - An open-source platform for building interoperable edge computing solutions.
    • Balena - A platform for building, deploying, and managing containerized applications on edge devices.
    • Azure IoT Edge - A platform by Microsoft for deploying cloud intelligence on local edge devices.
  • Optimization Tools

    • TinyML - A community and set of tools focused on running machine learning models on microcontrollers and other low-power devices.
    • ONNX Quantization - Tools for optimizing ONNX models through quantization for faster inference on edge hardware (see the quantization sketch after this list).
    • NVIDIA TensorRT - A high-performance deep learning inference optimizer and runtime for NVIDIA GPUs, including Jetson devices.
    • TensorFlow Model Optimization Toolkit - Tools for model pruning, quantization, and optimization to run efficiently on edge devices.
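For example, ONNX Runtime ships a quantization module that converts float32 weights to int8 in one call. A minimal sketch, assuming an existing model.onnx exported from your training framework (file names are placeholders):

```python
# Post-training dynamic quantization of an ONNX model for edge deployment.
# "model.onnx" and "model.int8.onnx" are placeholder file names.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="model.onnx",
    model_output="model.int8.onnx",
    weight_type=QuantType.QInt8,  # quantize weights to signed 8-bit integers
)
```

Dynamic quantization only converts weights; static quantization with a calibration dataset, or pruning and quantization-aware training via the TensorFlow Model Optimization Toolkit, can shrink models further before deployment.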
  • Learning Resources

  • Hardware and Accelerators

    • Intel Movidius Neural Compute Stick - A USB-based neural compute accelerator for running AI models at the edge.
    • NVIDIA Jetson - A family of embedded AI computing platforms for edge devices, offering powerful GPU acceleration.
    • Raspberry Pi - A popular, low-cost single-board computer that can run AI models locally with the help of libraries like TensorFlow Lite.
    • Arduino Nano 33 BLE Sense - An Arduino board designed for AI and machine learning projects at the edge.
    • Xilinx Edge AI - AI-enabled FPGAs for real-time processing on edge devices.
    • Google Coral - Edge AI hardware by Google, featuring the Edge TPU for fast, efficient inference (see the Edge TPU inference sketch after this list).
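To show what running on one of these accelerators looks like in practice, here is a hedged sketch of Edge TPU inference on a Coral device using the tflite_runtime package and the Edge TPU delegate. The model file name is a placeholder, and the model must be compiled for the Edge TPU beforehand.

```python
# Edge TPU inference sketch for a Google Coral device.
# Assumes tflite_runtime and the Edge TPU runtime (libedgetpu) are installed,
# and that "model_edgetpu.tflite" was compiled with the Edge TPU compiler.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print("Output shape:", result.shape)
```

Dropping the experimental_delegates argument and pointing at a regular .tflite model runs the same code on a CPU-only board such as a Raspberry Pi.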
  • Community