Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/nengo/keras-lmu
Keras implementation of Legendre Memory Units
keras legendre lmu lstm nengo recurrent-neural-networks tensorflow
Last synced: 26 days ago
- Host: GitHub
- URL: https://github.com/nengo/keras-lmu
- Owner: nengo
- License: other
- Created: 2019-10-23T19:28:38.000Z (about 5 years ago)
- Default Branch: main
- Last Pushed: 2024-04-13T02:03:21.000Z (7 months ago)
- Last Synced: 2024-04-14T10:15:37.682Z (7 months ago)
- Topics: keras, legendre, lmu, lstm, nengo, recurrent-neural-networks, tensorflow
- Language: Python
- Homepage: https://www.nengo.ai/keras-lmu/
- Size: 15.4 MB
- Stars: 203
- Watchers: 22
- Forks: 34
- Open Issues: 17
- Metadata Files:
- Readme: README.rst
- Changelog: CHANGES.rst
- Contributing: CONTRIBUTING.rst
- License: LICENSE.rst
Awesome Lists containing this project
README
KerasLMU: Recurrent neural networks using Legendre Memory Units
---------------------------------------------------------------

Paper: *Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks*
(Voelker, Kajić, and Eliasmith, NeurIPS 2019); see the Citation section below.
This is a Keras-based implementation of the
Legendre Memory Unit (LMU). The LMU is a novel memory cell for recurrent neural
networks that dynamically maintains information across long windows of time using
relatively few resources. It has been shown to perform as well as standard LSTM or
other RNN-based models in a variety of tasks, generally with fewer internal parameters
(see the paper cited below for more details). For the Permuted Sequential MNIST (psMNIST)
task in particular, it has been demonstrated to outperform the current state-of-the-art
results. See the note below for instructions on how to access this model.

The LMU is mathematically derived to orthogonalize its continuous-time history – doing
so by solving *d* coupled ordinary differential equations (ODEs), whose phase space
linearly maps onto sliding windows of time via the Legendre polynomials up to degree
*d* − 1 (the example for *d* = 12 is shown below).

.. image:: https://i.imgur.com/Uvl6tj5.png
   :target: https://i.imgur.com/Uvl6tj5.png
   :alt: Legendre polynomials
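As a small, illustrative sketch (not code from this repository), the *d* = 12 basis
plotted above can be reproduced by evaluating the shifted Legendre polynomials on the
normalized position within the window:

.. code-block:: python

   # Sketch: the d shifted Legendre polynomials P_i(2r - 1), r in [0, 1],
   # which map the memory's phase space onto a sliding window of the input.
   import numpy as np
   from numpy.polynomial.legendre import Legendre

   d = 12
   r = np.linspace(0, 1, 1000)  # normalized position within the window [t - θ, t]
   basis = np.stack(
       [Legendre([0] * i + [1])(2 * r - 1) for i in range(d)]
   )  # shape (d, 1000); row i is the degree-i polynomial shown in the figure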
A single LMU cell expresses the following computational graph, which takes in an input
signal, **x**, and couples an optimal linear memory, **m**, with a nonlinear hidden
state, **h**. By default, this coupling is trained via backpropagation, while the
dynamics of the memory remain fixed.

.. image:: https://i.imgur.com/IJGUVg6.png
   :target: https://i.imgur.com/IJGUVg6.png
   :alt: Computational graph
The discretized **A** and **B** matrices are initialized according to the LMU's
mathematical derivation with respect to some chosen window length, **θ**.
Backpropagation can be used to learn this time-scale, or fine-tune **A** and **B**,
if necessary.
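As a rough illustration of that initialization (a sketch following the equations in the
paper, not necessarily this repository's exact code), the continuous-time **A** and **B**
matrices for a window of length ``theta`` can be built and discretized with a zero-order
hold as follows:

.. code-block:: python

   # Sketch of the LMU state-space initialization (per the paper's equations;
   # the repository's implementation details may differ).
   import numpy as np
   from scipy.signal import cont2discrete

   def lmu_matrices(order, theta, dt=1.0):
       q = np.arange(order, dtype=np.float64)
       col = (2 * q + 1)[:, None] / theta
       i, j = np.meshgrid(q, q, indexing="ij")                  # row i, column j
       A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * col   # (d, d) state matrix
       B = ((-1.0) ** q)[:, None] * col                         # (d, 1) input matrix
       # Discretize, assuming the input is held constant over each timestep (ZOH).
       C, D = np.ones((1, order)), np.zeros((1, 1))
       Ad, Bd, *_ = cont2discrete((A, B, C, D), dt=dt, method="zoh")
       return Ad, Bd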
Both the kernels, **W**, and the encoders, **e**, are learned. Intuitively, the kernels
learn to compute nonlinear functions across the memory, while the encoders learn to
project the relevant information into the memory (see the paper cited below for details).
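As a usage sketch, assuming the ``keras_lmu.LMU`` layer signature from the project
documentation (the hyperparameter values below are purely illustrative), a small model
can be built like this:

.. code-block:: python

   # Minimal usage sketch (illustrative hyperparameters, not a recommended setup).
   import tensorflow as tf
   import keras_lmu

   inputs = tf.keras.Input((None, 1))  # (timesteps, features)
   lmu = keras_lmu.LMU(
       memory_d=1,        # dimensionality of the memory vector m
       order=256,         # number of Legendre coefficients d
       theta=784,         # sliding window length θ, in timesteps
       hidden_cell=tf.keras.layers.SimpleRNNCell(212),  # nonlinear hidden state h
   )(inputs)
   outputs = tf.keras.layers.Dense(10)(lmu)
   model = tf.keras.Model(inputs, outputs)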
.. note::

   The ``paper`` branch in the ``lmu`` GitHub repository includes a pre-trained
   Keras/TensorFlow model, located at ``models/psMNIST-standard.hdf5``, which obtains
   a psMNIST result of **97.15%**. Note that the network uses fewer internal
   state-variables and neurons than there are pixels in the input sequence.
   To reproduce the results from the paper cited below, run the notebooks in the
   ``experiments`` directory within the ``paper`` branch.

Nengo Examples
--------------

* LMUs in Nengo (with online learning)
* Spiking LMUs in Nengo Loihi (with online learning)
* LMUs in NengoDL (reproducing SotA on psMNIST)

Citation
--------

.. code-block::
   @inproceedings{voelker2019lmu,
      title={Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks},
      author={Aaron R. Voelker and Ivana Kaji\'c and Chris Eliasmith},
      booktitle={Advances in Neural Information Processing Systems},
      pages={15544--15553},
      year={2019}
   }