Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/denissimon/distributed-model-training
Approach to implementing distributed training of an ML model: server/device training for iOS.
- Host: GitHub
- URL: https://github.com/denissimon/distributed-model-training
- Owner: denissimon
- License: mit
- Created: 2020-06-25T16:15:47.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2024-09-16T16:53:14.000Z (4 months ago)
- Last Synced: 2024-09-17T15:23:15.205Z (4 months ago)
- Topics: ai, app, colab, core-ml, distributed, edge-computing, ios, ios-app, machine-learning, ml, model-personalization, model-training, neural-network, on-device-ml, s4tf, swift, swift-for-tensorflow, tensorflow, transfer-learning, xcode
- Language: Swift
- Homepage:
- Size: 690 KB
- Stars: 5
- Watchers: 1
- Forks: 1
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- fucking-open-source-ios-apps - Distributed Model Training
README
# distributed-model-training
This project aims to show an approach and mechanisms for implementing distributed training of a machine learning model - server/device training for iOS.
[`Swift for TensorFlow`](https://github.com/tensorflow/swift) is used for creating a pre-trained [foundation](https://en.wikipedia.org/wiki/Foundation_model) ML model on [shared/proxy data](https://github.com/denissimon/distributed-model-training/blob/master/1.%20macOS%20app/S4TF/housing.csv). This training takes place on a server or a local Mac. Then `Google Colab` and `protobuf` are used to recreate the pre-trained model (by reusing part of its weights), make it `updatable`, and export it in [`.mlmodel`](https://apple.github.io/coremltools/docs-guides/source/mlmodel.html) format. The updatable pre-trained model is delivered to devices with new versions of the app. [`Core ML`](https://developer.apple.com/documentation/coreml) is used for on-device retraining on user data, so that the data never leaves the device, ensuring a high level of privacy, and also for inference (making predictions). The concepts of [`transfer learning`](https://en.wikipedia.org/wiki/Transfer_learning), [`online learning`](https://en.wikipedia.org/wiki/Online_machine_learning) and [`model personalization`](https://developer.apple.com/documentation/coreml/model-personalization) are also used in this process.
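As an illustration of the on-device retraining step, below is a minimal sketch of how an updatable `Core ML` model might be retrained on user data with `MLUpdateTask` and then saved back to disk. The feature names (`"input"`, `"target"`) and the URLs are assumptions made for the example, not values taken from this project.

```swift
import CoreML

/// Retrains the updatable model at `modelURL` on the given samples and
/// saves the personalized result to `updatedModelURL`.
/// The feature names "input" and "target" are hypothetical and must
/// match those declared in the actual .mlmodel.
func retrainOnDevice(modelURL: URL,
                     updatedModelURL: URL,
                     samples: [(input: MLMultiArray, target: MLMultiArray)]) throws {
    // Wrap the user data into the batch provider expected by Core ML.
    let featureProviders: [MLFeatureProvider] = try samples.map { sample in
        try MLDictionaryFeatureProvider(dictionary: [
            "input": MLFeatureValue(multiArray: sample.input),
            "target": MLFeatureValue(multiArray: sample.target)
        ])
    }
    let trainingData = MLArrayBatchProvider(array: featureProviders)

    // MLUpdateTask runs the update (retraining) pass entirely on-device.
    let updateTask = try MLUpdateTask(
        forModelAt: modelURL,            // compiled .mlmodelc of the updatable model
        trainingData: trainingData,
        configuration: MLModelConfiguration(),
        completionHandler: { context in
            // Persist the personalized model so it survives app restarts.
            try? context.model.write(to: updatedModelURL)
        }
    )
    updateTask.resume()
}
```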
The implementation process is structured so that a single programming language, `Swift`, is used at all stages of working with the model. This makes the approach even more convenient and reliable thanks to a single codebase, i.e. the preprocessing, featurization and validation code can be reused in different parts of the distributed system.
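For instance, a shared featurization routine could live in a common Swift module and be called both when preparing training data for the foundation model and when preparing a batch for on-device retraining. The feature set and scaling constants below are purely illustrative and not taken from this repository:

```swift
import Foundation

/// Shared featurization code: the same implementation is used when preparing
/// training data on the server/Mac and when preparing user data on the device.
struct Featurizer {
    /// Per-feature ranges fixed at pre-training time; they must stay identical
    /// everywhere so the model always sees consistently scaled inputs.
    static let ranges: [(min: Double, max: Double)] = [
        (0.0, 100.0),   // e.g. crime rate
        (1.0, 50.0)     // e.g. average number of rooms
    ]

    /// Min-max scales a raw sample into the [0, 1] range expected by the model.
    static func featurize(_ raw: [Double]) -> [Double] {
        zip(raw, ranges).map { value, range in
            (value - range.min) / (range.max - range.min)
        }
    }
}
```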
In addition, backup and restoration of the personal ML model (as a `.mlmodel` file) is implemented. This is particularly useful when the user reinstalls the app or switches devices.
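A possible shape of this backup/restore step is sketched below: the personalized model file is copied to a backup location (e.g. a folder synced to the user's cloud storage) and, after being downloaded again, is compiled with `MLModel.compileModel(at:)` so it can be loaded by `Core ML`. The type and file handling here are assumptions, not the project's actual implementation.

```swift
import CoreML

/// Backs up the personal ML model file and restores it later,
/// e.g. after the user reinstalls the app or switches devices.
struct ModelBackup {
    let fileManager = FileManager.default

    /// Copies the personal model into a backup location (local or cloud-synced).
    func backUp(personalModelURL: URL, to backupURL: URL) throws {
        if fileManager.fileExists(atPath: backupURL.path) {
            try fileManager.removeItem(at: backupURL)
        }
        try fileManager.copyItem(at: personalModelURL, to: backupURL)
    }

    /// Restores the backed-up .mlmodel and compiles it for use with Core ML.
    func restore(from backupURL: URL) throws -> MLModel {
        // compileModel(at:) produces a compiled .mlmodelc that the runtime can load.
        let compiledURL = try MLModel.compileModel(at: backupURL)
        return try MLModel(contentsOf: compiledURL)
    }
}
```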
***
Update: originally, `Swift for TensorFlow` was used (the project is now archived), but other tools such as `TensorFlow`, `PyTorch` or `Turi Create` can also be used instead. In that case, more testing will be required, since the pre-trained model will have to be written in `Python`, while the code related to data processing during on-device retraining (preprocessing, featurization and validation) will have to be written in `Swift`. The transformation of user data, from the moment it is created to the moment it is sent to `Core ML`, must be algorithmically identical to that in the `Python` code.
***
This approach can also serve as an alternative to `Federated Learning` (FL), given the significant difficulties currently involved in [production use](https://www.tensorflow.org/federated/faq) of FL, especially in combination with mobile devices.
In this case, the above process of distributed training is supplemented with `partial sharing` of user training data under certain conditions: only a part of it is sent to the server, periodically, and only in `modified`, [`pseudonymized`](https://en.wikipedia.org/wiki/Pseudonymization), [`anonymized`](https://en.wikipedia.org/wiki/Data_anonymization) (e.g. using [differential privacy](https://en.wikipedia.org/wiki/Differential_privacy) and/or [k-anonymity](https://en.wikipedia.org/wiki/K-anonymity)) or `encrypted` form (depending on how sensitive the data is and on the project requirements). With that:
1. Data storage and model training on the server occur in the same modified form
2. When an improved pre-trained model is received on the device and replaces the personal model, it is first retrained and re-personalized on all or part of the user data (unshared data only) stored in the local DB (in plain form)
3. If necessary, the data batch is transformed in real time into the same modified form before being sent to `Core ML` (see the sketch below)

The benefit of this, besides automating and speeding up data collection for the foundation model, is that both global knowledge transfer (server->devices) and on-device knowledge transfer (devices->server) occur simultaneously, ultimately resulting in cross-device knowledge transfer that will improve subsequent predictions. At the same time, as in FL, the overall high level of privacy of user data is still maintained.
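A minimal sketch of such a real-time transformation is shown below, using simple Laplace noise as a stand-in for whichever pseudonymization/anonymization scheme a project actually chooses; the function names and the noise scale are assumptions, not part of this project.

```swift
import Foundation

/// Samples Laplace(0, scale) noise via inverse-CDF sampling.
func laplaceNoise(scale: Double) -> Double {
    let u = Double.random(in: -0.5..<0.5)
    let sign: Double = u < 0 ? -1.0 : 1.0
    // The max() guard avoids log(0) in the (unlikely) edge case u == -0.5.
    return -scale * sign * log(max(1.0 - 2.0 * abs(u), .leastNonzeroMagnitude))
}

/// Applies the same modification to a feature vector that was applied to the
/// shared training data, so the batch matches the form the model was trained on.
func privatize(_ features: [Double], scale: Double = 0.5) -> [Double] {
    features.map { $0 + laplaceNoise(scale: scale) }
}
```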
This approach is production-ready and can be implemented right now using existing, established tools and technologies.