# awesome-ai-edge-computing
A curated list of awesome tools, frameworks, libraries, and resources for running AI models on edge devices, including smartphones, IoT devices, embedded systems, and hardware accelerators. Edge AI focuses on processing data locally on the device, reducing latency and enhancing privacy.
https://github.com/awesomelistsio/awesome-ai-edge-computing
## Hardware and Accelerators
- Intel Movidius Neural Compute Stick - A USB-based neural compute accelerator for running AI models at the edge.
- NVIDIA Jetson - A family of embedded AI computing platforms for edge devices, offering powerful GPU acceleration.
- Raspberry Pi - A popular, low-cost single-board computer that can run AI models locally with libraries such as TensorFlow Lite (see the inference sketch after this list).
- Arduino Nano 33 BLE Sense - An Arduino board designed for AI and machine learning projects at the edge.
- Xilinx Edge AI - AI-enabled FPGAs for real-time processing on edge devices.
- Google Coral - Edge AI hardware by Google, featuring the Edge TPU for fast, efficient inference.
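As a concrete example of on-device inference on boards like the Raspberry Pi or Google Coral, here is a minimal sketch using the TensorFlow Lite interpreter. The `model.tflite` file name is a placeholder, and the sketch assumes the `tflite-runtime` package is installed; on a Coral board you would additionally load the Edge TPU delegate.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

# Hypothetical model file; any .tflite model with a single input tensor works.
interpreter = Interpreter(model_path="model.tflite")
# On Google Coral, pass experimental_delegates=[load_delegate("libedgetpu.so.1")]
# to Interpreter() so inference runs on the Edge TPU instead of the CPU.
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy tensor shaped and typed to match the model's expected input.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", scores.shape)
```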
## Learning Resources
- Google Coral Tutorials - Tutorials for running AI inference on Google Coral hardware.
- Coursera: Edge AI - Courses on deploying AI models on edge devices.
- NVIDIA Jetson Tutorials - Getting started guides and tutorials for the NVIDIA Jetson platform.
- PyTorch Mobile Documentation - Official documentation for deploying PyTorch models on mobile devices (a minimal export sketch follows this list).
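To complement the PyTorch Mobile documentation above, here is a minimal export sketch: a tiny stand-in model is TorchScripted, optimized for mobile, and saved in the lite-interpreter format that the Android/iOS runtimes load. The model and output file name are placeholders, not part of any official example.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Small stand-in model; in practice this would be your trained network.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.pool = torch.nn.AdaptiveAvgPool2d(1)
        self.fc = torch.nn.Linear(8, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x))).flatten(1)
        return self.fc(x)

model = TinyNet().eval()

# TorchScript the model, apply mobile-specific optimizations, and save it
# in the lite-interpreter format consumed by the PyTorch Mobile runtimes.
scripted = torch.jit.script(model)
mobile_model = optimize_for_mobile(scripted)
mobile_model._save_for_lite_interpreter("tinynet.ptl")
```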
## Community
- NVIDIA Developer Forums - A community for discussing NVIDIA’s edge AI hardware and software.
- Edge AI and Vision Alliance - A group dedicated to advancing the use of computer vision and AI at the edge.
- AI on the Edge Forum - A forum for discussing AI applications and deployments on edge devices.
- Reddit: r/TinyML - A subreddit for discussing TinyML and edge AI projects.
## Optimization Tools
- ONNX Quantization - Tools for optimizing ONNX models through quantization for faster inference on edge hardware (see the sketch after this list).
- TensorFlow Model Optimization Toolkit - Tools for model pruning, quantization, and optimization to run efficiently on edge devices.
- NVIDIA TensorRT - A high-performance deep learning inference optimizer and runtime for NVIDIA GPUs, including Jetson devices.
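As a small illustration of the quantization tooling listed above, the sketch below applies ONNX Runtime's post-training dynamic quantization to an existing model. The input and output paths are hypothetical.

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Post-training dynamic quantization: weights are stored as int8 and
# activations are quantized on the fly at inference time, shrinking the
# model and typically speeding up CPU inference on edge hardware.
quantize_dynamic(
    model_input="model_fp32.onnx",   # hypothetical path to the float32 model
    model_output="model_int8.onnx",  # quantized model is written here
    weight_type=QuantType.QInt8,
)
```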
## Deployment Platforms
- AWS IoT Greengrass - A service for running local compute, messaging, data caching, sync, and ML inference on edge devices (a local-messaging sketch follows this list).
- EdgeX Foundry - An open-source platform for building interoperable edge computing solutions.
- Balena - A platform for building, deploying, and managing containerized applications on edge devices.
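For a taste of what local messaging on such platforms looks like, here is a sketch of publishing to a local pub/sub topic from inside an AWS IoT Greengrass v2 component using the AWS IoT Device SDK for Python (`awsiotsdk`). It only works when run as a deployed Greengrass component, and the topic name and payload are placeholders.

```python
import awsiot.greengrasscoreipc
import awsiot.greengrasscoreipc.model as model

# Connect to the local Greengrass nucleus over IPC
# (only possible from inside a running component).
ipc_client = awsiot.greengrasscoreipc.connect()

# Publish a small binary payload to a local pub/sub topic (placeholder name).
request = model.PublishToTopicRequest(
    topic="edge/inference/results",
    publish_message=model.PublishMessage(
        binary_message=model.BinaryMessage(message=b'{"label": "cat", "score": 0.92}')
    ),
)
operation = ipc_client.new_publish_to_topic()
operation.activate(request)
operation.get_response().result(timeout=10)
```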
## Frameworks and Libraries
- ONNX Runtime - A cross-platform, high-performance inference engine for running ONNX models on edge devices (see the sketch after this list).
- Apache TVM - An open-source deep learning compiler stack for running machine learning models on edge devices.
- TensorFlow Lite - A lightweight version of TensorFlow designed for mobile and embedded devices.
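To show how little code a cross-platform runtime needs, here is a minimal ONNX Runtime inference sketch on CPU. The model path and the NCHW input shape are assumptions; substitute the shape reported by your own model.

```python
import numpy as np
import onnxruntime as ort

# Load a (hypothetical) ONNX model and use the CPU execution provider,
# which is available on virtually every edge device.
session = ort.InferenceSession("model_int8.onnx", providers=["CPUExecutionProvider"])

# Build a dummy input matching an assumed 1x3x224x224 image tensor.
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy})
print("output shape:", outputs[0].shape)
```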