https://github.com/pytorch/executorch
On-device AI across mobile, embedded and edge for PyTorch
- Host: GitHub
- URL: https://github.com/pytorch/executorch
- Owner: pytorch
- License: other
- Created: 2022-02-25T17:58:31.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2025-04-16T21:02:26.000Z (18 days ago)
- Last Synced: 2025-04-16T21:02:45.347Z (18 days ago)
- Topics: deep-learning, embedded, gpu, machine-learning, mobile, neural-network, tensor
- Language: Python
- Homepage: https://pytorch.org/executorch/
- Size: 182 MB
- Stars: 2,734
- Watchers: 67
- Forks: 519
- Open Issues: 993
Metadata Files:
- Readme: README-wheel.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Codeowners: CODEOWNERS
Awesome Lists containing this project
- AiTreasureBox - pytorch/executorch - On-device AI across mobile, embedded and edge for PyTorch (Repos)
- awesome-repositories - pytorch/executorch - On-device AI across mobile, embedded and edge for PyTorch (C++)
- Awesome-LLMs-on-device - pytorch/executorch - On-device AI across mobile, embedded and edge for PyTorch
README
**ExecuTorch** is a [PyTorch](https://pytorch.org/) platform that provides
infrastructure to run PyTorch programs everywhere from AR/VR wearables to
standard on-device iOS and Android mobile deployments. One of the main goals for
ExecuTorch is to enable wider customization and deployment capabilities for
PyTorch programs.

The `executorch` pip package is in beta.
* Supported Python versions: 3.10, 3.11, 3.12
* Compatible systems: Linux x86_64, macOS aarch64

The prebuilt `executorch.runtime` module included in this package provides a way
to run ExecuTorch `.pte` files, with some restrictions:
* Only [core ATen operators](docs/source/ir-ops-set-definition.md) are linked into the prebuilt module.
* Only the [XNNPACK backend delegate](docs/source/backends-xnnpack.md) is linked into the prebuilt module.
* \[macOS only] [Core ML](docs/source/backends-coreml.md) and [MPS](docs/source/backends-mps.md) backends
  are also linked into the prebuilt module.

A minimal export-and-run sketch appears after the starting points below.

Please visit the [ExecuTorch website](https://pytorch.org/executorch) for
tutorials and documentation. Here are some starting points:
* [Getting Started](https://pytorch.org/executorch/main/getting-started-setup)
  * Set up the ExecuTorch environment and run PyTorch models locally.
* [Working with local LLMs](docs/source/llm/getting-started.md)
  * Learn how to use ExecuTorch to export and accelerate a large language model
    from scratch.
* [Exporting to ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial)
  * Learn the fundamentals of exporting a PyTorch `nn.Module` to ExecuTorch, and
    optimizing its performance using quantization and hardware delegation.
* Running LLaMA on [iOS](docs/source/llm/llama-demo-ios.md) and [Android](docs/source/llm/llama-demo-android.md) devices.
  * Build and run LLaMA in a demo mobile app, and learn how to integrate models
    with your own apps.
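
To make the export step concrete, here is a minimal sketch, not copied from the tutorials: it assumes the `torch.export.export` → `executorch.exir.to_edge` → `to_executorch()` flow with a toy `Add` module, and it omits quantization and backend delegation. Exact APIs may differ between releases; see the export tutorial linked above.

```python
import torch

from executorch.exir import to_edge  # assumed lowering API from the export tutorial


# Toy module used purely for illustration.
class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y


model = Add().eval()
example_inputs = (torch.randn(3), torch.randn(3))

# 1. Capture the model as an exported program with torch.export.
exported_program = torch.export.export(model, example_inputs)

# 2. Lower to the Edge dialect, then to an ExecuTorch program.
#    Quantization and backend delegation (e.g. XNNPACK) would slot in here.
executorch_program = to_edge(exported_program).to_executorch()

# 3. Serialize to a .pte file that the on-device runtime can load.
with open("add.pte", "wb") as f:
    f.write(executorch_program.buffer)
```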
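
Similarly, a minimal sketch of running a `.pte` file (here the hypothetical `add.pte` from the export sketch) with the prebuilt `executorch.runtime` module shipped in the pip package; `Runtime.get()`, `load_program`, `load_method`, and `execute` are assumed from the runtime Python API and may change across versions.

```python
import torch

from executorch.runtime import Runtime  # prebuilt runtime module shipped in the wheel

runtime = Runtime.get()

# Load the serialized program and the method to invoke ("forward" for typical modules).
program = runtime.load_program("add.pte")
method = program.load_method("forward")

# Inputs are passed as a sequence of tensors; outputs come back as a list.
outputs = method.execute([torch.randn(3), torch.randn(3)])
print(outputs[0])
```

Note that, per the restrictions listed earlier, the prebuilt module only runs programs limited to core ATen operators and the XNNPACK delegate (plus Core ML and MPS on macOS).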