Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/voize-gmbh/pytorch-lite-multiplatform
A Kotlin multi-platform wrapper around the PyTorch Lite libraries on Android and iOS.
kotlin kotlin-multiplatform-mobile pytorch pytorch-lite pytorch-mobile
- Host: GitHub
- URL: https://github.com/voize-gmbh/pytorch-lite-multiplatform
- Owner: voize-gmbh
- License: apache-2.0
- Created: 2022-03-26T10:42:37.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-07-23T19:45:50.000Z (4 months ago)
- Last Synced: 2024-09-26T02:04:32.497Z (about 2 months ago)
- Topics: kotlin, kotlin-multiplatform-mobile, pytorch, pytorch-lite, pytorch-mobile
- Language: Kotlin
- Homepage:
- Size: 391 KB
- Stars: 35
- Watchers: 4
- Forks: 4
- Open Issues: 3
Metadata Files:
- Readme: README.md
README
# pytorch-lite-multiplatform
![CI](https://github.com/voize-gmbh/pytorch-lite-multiplatform/actions/workflows/test.yml/badge.svg)
![Maven Central](https://img.shields.io/maven-central/v/de.voize/pytorch-lite-multiplatform)
![Cocoapods](https://img.shields.io/cocoapods/v/PLMLibTorchWrapper)

A Kotlin multi-platform wrapper around the PyTorch Lite libraries on [Android](https://pytorch.org/mobile/android/) and [iOS](https://pytorch.org/mobile/ios/).
You can use this library in your Kotlin multi-platform project to write mobile inference code for PyTorch Lite models. The API is very close to the Android API of PyTorch Lite. A high-level introduction is available [in our blog post](https://tech.voize.de/posts/pytorch-lite-multiplatform).

## Installation
Add the following to your `shared/build.gradle.kts` as a `commonMain` dependency.
```kotlin
implementation("de.voize:pytorch-lite-multiplatform:")
```

Add the `PLMLibTorchWrapper` pod to the `cocoapods` plugin block in your `shared/build.gradle.kts`, and add `useLibraries()` because the `PLMLibTorchWrapper` pod depends on the `LibTorch-Lite` pod, which contains static libraries.
```kotlin
cocoapods {
    ...
    pod("PLMLibTorchWrapper") {
        version = "<version>"
        headers = "LibTorchWrapper.h"
    }
    useLibraries()
}
```

If you use Kotlin version < 1.8.0, the `headers` property is not available. Instead, you have to add the following to your `shared/build.gradle.kts` (see [this issue](https://youtrack.jetbrains.com/issue/KT-44155/Cocoapods-doesnt-support-pods-without-module-map-file-inside) for more information):
```kotlin
tasks.named("generateDefPLMLibTorchWrapper").configure {
    doLast {
        outputFile.writeText("""
            language = Objective-C
            headers = LibTorchWrapper.h
        """.trimIndent())
    }
}
```

Additional steps:

- make sure bitcode is disabled in your iOS Xcode project
- make sure that your iOS app's Podfile does **not** include `use_frameworks!`
- your `framework` block should probably declare `isStatic = true` (see the sketch after this list)
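As a sketch of that last point, the framework configuration in `shared/build.gradle.kts` might look like the following (the `baseName` value is an assumption; adjust it to your project):

```kotlin
// shared/build.gradle.kts — illustrative framework configuration
kotlin {
    cocoapods {
        // ... pod("PLMLibTorchWrapper") and useLibraries() as shown above ...
        framework {
            baseName = "shared" // assumption: your shared module's framework name
            isStatic = true     // static, because LibTorch-Lite ships static libraries
        }
    }
}
```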
## Usage

First, [export your PyTorch model for the lite interpreter](https://pytorch.org/tutorials/recipes/mobile_interpreter.html).
Manage in your application how the exported model file is stored on device, e.g. bundled with your app, downloaded from a server during app initialization or something else.
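As an illustration of the bundled-with-your-app option on Android: `TorchModule` takes a filesystem path, so a model bundled in `assets/` must first be copied out of the APK. A minimal, hypothetical sketch (the helper name and asset name are not part of this library):

```kotlin
import android.content.Context
import java.io.File

// Hypothetical helper: copies a bundled asset to the app's files directory
// and returns its absolute path, so it can be passed to TorchModule(path = ...).
fun copyAssetToFiles(context: Context, assetName: String = "model.ptl"): String {
    val target = File(context.filesDir, assetName)
    if (!target.exists()) {
        context.assets.open(assetName).use { input ->
            target.outputStream().use { output -> input.copyTo(output) }
        }
    }
    return target.absolutePath
}
```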
Then you can initialize the `TorchModule` with the path to the model file.

```kotlin
import de.voize.pytorch_lite_multiplatform.TorchModule

val module = TorchModule(path = "<path/to/model>")
```

Once you have initialized the model, you are ready to run inference.
Just like in the Android API of PyTorch Lite, you can use `IValue` and `Tensor` to pass input data into your model and to process the model output. To manage the memory allocated for your tensors, use `plmScoped` to specify up to which point the memory must stay allocated.
```kotlin
import de.voize.pytorch_lite_multiplatform.*

plmScoped {
    val inputTensor = Tensor.fromBlob(
        data = floatArrayOf(...),
        shape = longArrayOf(...),
        scope = this
    )

    val inputIValue = IValue.fromTensor(inputTensor)
    val output = module.forward(inputIValue)
    // you could also use
    // module.runMethod("forward", inputIValue)

    val outputTensor = output.toTensor()
    val outputData = outputTensor.getDataAsFloatArray()

    ...
}
```

`IValue`s are very flexible for constructing the input you need for your model, e.g. tensors, scalars, flags, dicts, tuples etc. Refer to the `IValue` interface for all available options and browse [PyTorch's Android Demo](https://github.com/pytorch/android-demo-app) for examples of inference using `IValue`.
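As a purely illustrative sketch, a model taking a (tensor, flag) tuple might be fed like this. The constructor names `IValue.tupleFrom` and `IValue.fromBool` are assumptions modeled on the PyTorch Android API; check the `IValue` interface for the names this library actually exposes:

```kotlin
// Assumed constructor names, modeled after PyTorch Android's IValue API.
plmScoped {
    val features = Tensor.fromBlob(
        data = floatArrayOf(0.1f, 0.2f, 0.3f),
        shape = longArrayOf(1, 3),
        scope = this
    )
    // Hypothetical: wrap a tensor and a boolean flag into a tuple input.
    val input = IValue.tupleFrom(
        IValue.fromTensor(features),
        IValue.fromBool(true)
    )
    val output = module.forward(input)
}
```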
## Memory Management
To simplify managing the resources allocated for inference across Android and iOS, we introduced `PLMScope` and the `plmScoped` utility. On Android, the JVM garbage collector and PyTorch Lite manage the allocated memory, so `plmScoped` is a no-op. On iOS, however, memory allocated in Kotlin is exchanged with native Objective-C code (and vice versa) without automatic deallocation of resources; this is where `plmScoped` comes in and frees the memory allocated for your inference. It is therefore important to properly define the scope in which resources must stay allocated, to avoid memory leaks on the one hand and deallocating memory that is still needed on the other.
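One practical pattern that follows from this: copy primitive results out of the scope before it closes, because on iOS the native tensor storage may be freed once `plmScoped` returns. A minimal sketch using only the API shown above (variable names illustrative):

```kotlin
// FloatArray is plain Kotlin memory, so it remains valid after the scope
// closes, while the native tensor storage it was copied from does not.
var logits: FloatArray? = null
plmScoped {
    val input = Tensor.fromBlob(
        data = floatArrayOf(1.0f, 2.0f),
        shape = longArrayOf(1, 2),
        scope = this
    )
    val output = module.forward(IValue.fromTensor(input))
    logits = output.toTensor().getDataAsFloatArray() // copy before the scope ends
}
// safe to use `logits` here
```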
## Running tests
### iOS
To run the tests on iOS, execute the `iosSimulatorX64Test` gradle task:
```
./gradlew iosSimulatorX64Test
```

This will automatically call `build_dummy_model.py` to create the dummy TorchScript module for testing, copy it into the simulator's files directory, and execute the tests.
Make sure to select a Python environment where the `torch` dependency is available.