.. image:: https://img.shields.io/pypi/status/Renate
    :target: #
    :alt: PyPI - Status
.. image:: https://img.shields.io/github/v/release/awslabs/Renate
    :target: https://github.com/awslabs/Renate/releases/tag/v0.5.1
    :alt: Latest Release
.. image:: https://img.shields.io/pypi/dm/Renate
    :target: https://pypistats.org/packages/renate
    :alt: PyPI - Downloads
.. image:: https://img.shields.io/github/license/awslabs/Renate
    :target: https://github.com/awslabs/Renate/blob/main/LICENSE
    :alt: License
.. image:: https://readthedocs.org/projects/renate/badge/?version=latest
    :target: https://renate.readthedocs.io
    :alt: Documentation Status
.. image:: https://raw.githubusercontent.com/awslabs/Renate/python-coverage-comment-action-data/badge.svg
    :target: https://htmlpreview.github.io/?https://github.com/awslabs/Renate/blob/python-coverage-comment-action-data/htmlcov/index.html
    :alt: Coverage Badge

Renate: Automatic Neural Networks Retraining and Continual Learning in Python
******************************************************************************

Renate is a Python package for automatic retraining of neural network models.
It uses advanced Continual Learning and Lifelong Learning algorithms to achieve this purpose.
The implementation is based on `PyTorch <https://pytorch.org/>`_
and `Lightning <https://lightning.ai/>`_ for deep learning, and
`Syne Tune <https://github.com/awslabs/syne-tune>`_ for hyperparameter optimization.
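
For orientation, the snippet below sketches how a model update job can be launched following
Renate's config-file pattern. The exact argument names and the ``model_fn``/``data_module_fn``
convention in ``renate_config.py`` are assumptions here; verify them against the documentation
for your Renate version.

.. code-block:: python

    # Hedged sketch of launching a Renate model update; argument names are
    # assumptions to check against the docs, not a verbatim recipe.
    from renate.training import run_training_job

    if __name__ == "__main__":
        run_training_job(
            config_space={"learning_rate": 0.05, "batch_size": 32},  # hyperparameters (assumed names)
            mode="max",
            metric="val_accuracy",
            updater="ER",                    # update with Experience Replay
            max_epochs=50,
            chunk_id=1,                      # index of the new data chunk
            config_file="renate_config.py",  # defines model_fn and data_module_fn
            backend="local",                 # or launch on Amazon SageMaker
        )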

Quick links
===========
* Install Renate with ``pip install renate`` or look at `these instructions `_
* Examples for `local training `_ and `training on Amazon SageMaker `_.
* `Documentation <https://renate.readthedocs.io>`_
* `Supported Algorithms `_

Who needs Renate?
=================

In many applications, data becomes available over time, and retraining from scratch for
every new batch of data is prohibitively expensive. In these cases, we would like to use
each new batch of data to update our previous model at limited cost.
Unfortunately, since data in different chunks is not sampled from the same distribution,
simply fine-tuning the old model creates problems such as *catastrophic forgetting*.
The algorithms in Renate help mitigate the negative impact of forgetting and improve
overall model performance.

.. figure:: https://raw.githubusercontent.com/awslabs/Renate/main/doc/_images/improvement_renate.svg
    :align: center
    :alt: Renate vs Model Fine-Tuning.

    Renate's update mechanisms improve over naive fine-tuning approaches. [#]_
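
One family of methods Renate supports is Experience Replay (the baseline used in the plots
below). The following is a minimal, generic sketch of the idea in plain PyTorch, not Renate's
implementation: keep a small buffer of past examples and mix them into every new batch so the
model keeps seeing data from earlier chunks.

.. code-block:: python

    # Generic experience-replay sketch (illustrative only; Renate ships its
    # own, more sophisticated updaters).
    import random
    import torch

    def update_with_replay(model, optimizer, loss_fn, new_batches, buffer, max_buffer=1000):
        """Fine-tune on new data while replaying stored (x, y) pairs."""
        for x_new, y_new in new_batches:
            if buffer:
                # Mix a same-sized sample of old examples into the batch.
                replay = random.sample(buffer, min(len(buffer), x_new.size(0)))
                x = torch.cat([x_new, torch.stack([x for x, _ in replay])])
                y = torch.cat([y_new, torch.stack([y for _, y in replay])])
            else:
                x, y = x_new, y_new
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
            # Store new examples for future updates; real implementations use
            # more careful buffer policies (e.g., reservoir sampling).
            for xi, yi in zip(x_new, y_new):
                if len(buffer) < max_buffer:
                    buffer.append((xi.detach(), yi.detach()))
                else:
                    buffer[random.randrange(max_buffer)] = (xi.detach(), yi.detach())

Renate's built-in updaters implement this kind of strategy, with refinements, so you do not
have to manage the memory buffer yourself.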

Renate also offers hyperparameter optimization (HPO), functionality that can heavily impact
the performance of a continuously updated model. To do so, Renate employs
`Syne Tune <https://github.com/awslabs/syne-tune>`_ under the hood, and offers
advanced HPO methods such as multi-fidelity algorithms (ASHA) and transfer learning algorithms
(useful for speeding up the retuning).

.. figure:: https://raw.githubusercontent.com/awslabs/Renate/main/doc/_images/improvement_tuning.svg
    :align: center
    :alt: Impact of HPO on Renate's Updating Algorithms.

    Renate with hyperparameter tuning outperforms Renate with default settings. [#]_
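
As a standalone illustration of the multi-fidelity HPO Renate builds on, the sketch below runs
ASHA directly with Syne Tune. Renate wires this up for you; ``train_script.py`` is a
hypothetical training script that reports ``val_accuracy`` and ``epoch`` via
``syne_tune.Reporter``.

.. code-block:: python

    # Standalone Syne Tune sketch of multi-fidelity HPO with ASHA.
    # "train_script.py" is a hypothetical script reporting val_accuracy and
    # epoch through syne_tune.Reporter after every epoch.
    from syne_tune import StoppingCriterion, Tuner
    from syne_tune.backend import LocalBackend
    from syne_tune.config_space import loguniform, uniform
    from syne_tune.optimizer.baselines import ASHA

    config_space = {
        "learning_rate": loguniform(1e-4, 1e-1),
        "momentum": uniform(0.0, 0.99),
        "epochs": 50,  # maximum training budget per trial
    }

    tuner = Tuner(
        trial_backend=LocalBackend(entry_point="train_script.py"),
        scheduler=ASHA(
            config_space,
            metric="val_accuracy",
            mode="max",
            resource_attr="epoch",       # fidelity reported by the script
            max_resource_attr="epochs",  # ASHA stops weak trials early
        ),
        stop_criterion=StoppingCriterion(max_wallclock_time=3600),
        n_workers=4,
    )
    tuner.run()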

Key features
============

* Easy to scale and run in the cloud
* Designed for real-world retraining pipelines
* Advanced HPO functionalities available out-of-the-box
* Open for experimentation

Resources
=========

* (blog) `Automatically retrain neural networks with Renate `_
* (paper) `Renate: A Library for Real-World Continual Learning <https://arxiv.org/abs/2304.12067>`_

Cite Renate
===========

.. code-block:: bibtex

    @misc{renate2023,
        title = {Renate: A Library for Real-World Continual Learning},
        author = {Martin Wistuba and
                  Martin Ferianc and
                  Lukas Balles and
                  Cedric Archambeau and
                  Giovanni Zappella},
        year = {2023},
        eprint = {2304.12067},
        archivePrefix = {arXiv},
        primaryClass = {cs.LG}
    }

What are you looking for?
=========================

* `Installation Instructions `_

  .. code-block:: bash

      pip install renate

* Examples:

  * `Train an MLP locally on MNIST `_
  * `Train a ResNet on SageMaker `_

* `Documentation website with API doc and examples <https://renate.readthedocs.io>`_
* `List of the supported algorithms `_
* `How to run continual learning experiments using Renate `_
* `Guidelines for Contributors <https://github.com/awslabs/Renate/blob/main/CONTRIBUTING.md>`_

If you did not find what you were looking for, open an `issue <https://github.com/awslabs/Renate/issues>`_ and
we will do our best to improve the documentation.

.. [#] To create this plot, we simulated domain-incremental learning with `CLEAR-100 <https://clear-benchmark.github.io/>`_.
   The training data was divided by year, and we trained on the resulting partitions sequentially.
   Fine-tuning refers to the strategy of learning the first partition from scratch and
   training on each of the subsequent partitions for a few epochs only.
   We compare to Experience Replay with an infinite memory size.
   For both methods we use the same amount of training time and choose the best checkpoint
   using a validation set.
   Results reported are on the test set.

.. [#] In this experiment, we consider class-incremental learning on CIFAR-10. We compare
   Experience Replay against a version in which its hyperparameters were tuned.