https://github.com/hunar4321/rls-neural-net
Recursive Least Squares (RLS) with Neural Network for fast learning
- Host: GitHub
- URL: https://github.com/hunar4321/rls-neural-net
- Owner: hunar4321
- License: MIT
- Created: 2020-08-16T11:36:20.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2023-11-16T14:07:28.000Z (over 1 year ago)
- Last Synced: 2025-03-24T06:22:26.436Z (2 months ago)
- Topics: artificial-intelligence, kalman-filter, machine-learning, multiple-regression, neural-network, oneshot-learning, oneshotlearning, recursive-least-squares, regression-analysis
- Language: Jupyter Notebook
- Homepage:
- Size: 510 KB
- Stars: 53
- Watchers: 3
- Forks: 9
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
README

# 1. Recursive Least Squares by predicting errors
This is a simple, intuitive method for solving linear equations using recursive least squares. Check out the step-by-step video tutorial here: https://youtu.be/4vGaN1dTVhw
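For readers who want to see the mechanics, below is a minimal NumPy sketch of the textbook RLS recursion (the inverse-covariance formulation with a Sherman-Morrison rank-1 update). The `rls_fit` name and the regularization setup are illustrative assumptions, not code from this repo, whose error-prediction variant is organized differently:

```python
import numpy as np

def rls_fit(X, y, lam=1.0):
    """Solve y ~ X @ w one sample at a time with recursive least squares."""
    n_features = X.shape[1]
    w = np.zeros(n_features)       # running weight estimate
    P = np.eye(n_features) / lam   # running inverse of X^T X (ridge-regularized)
    for x, target in zip(X, y):
        err = target - x @ w       # prediction error for this sample
        Px = P @ x
        gain = Px / (1.0 + x @ Px) # Kalman-style gain vector
        w += gain * err            # correct the weights with the error
        P -= np.outer(gain, Px)    # Sherman-Morrison rank-1 downdate
    return w

# Noiseless toy problem: the recursion recovers the true weights
# without ever inverting the full system.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5])
print(rls_fit(X, y))               # ~ [2.0, -1.0, 0.5]
```

Each incoming sample costs O(d^2) work for d features, so the full pass never needs a direct d x d matrix inversion.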
------------
#### Illustration - RLS Error Prediction:

#### Comparison of how errors are shared among the inputs in gradient-based methods vs. RLS-based methods
Algorithmically, this method is faster than explicit matrix inversion because it requires fewer operations. In practice, however, it is hard to compare it fairly with established linear solvers, since matrix operations benefit from many optimization tricks at the hardware level. We added a simple C++ implementation using the Eigen library to compare the performance of this method against the matrix-inversion method; a rough Python analogue is sketched below.
Inspired by the following post by whuber: https://stats.stackexchange.com/q/166718
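To make that caveat concrete, a rough timing comparison in the spirit of the repo's C++/Eigen benchmark might look as follows. This is a sketch, not the repo's code: it reuses the illustrative `rls_fit` helper from the sketch above, and the problem sizes are arbitrary.

```python
import time
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features = 5000, 50
X = rng.normal(size=(n_samples, n_features))
y = X @ rng.normal(size=n_features)

t0 = time.perf_counter()
w_direct = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations, BLAS-backed
t1 = time.perf_counter()
w_rls = rls_fit(X, y, lam=1e-6)               # sample-by-sample updates, no inversion
t2 = time.perf_counter()

print(f"direct solve: {t1 - t0:.4f}s, RLS: {t2 - t1:.4f}s")
print("max weight difference:", np.abs(w_direct - w_rls).max())
```

On a typical NumPy build, the vectorized direct solve tends to win despite the extra factorization work, which is precisely why the hardware-level caveat above matters.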
# 2. Fast Learning in Neural Networks (Real time optimization)
-----------------------------------
There is an example usage at the end of *RLS_Neural_Network.py* which showcases how this network can learn XOR data in a single iteration. Run the code and see the output; a minimal sketch of the idea appears after the lists below.

**Advantages of using RLS for learning instead of gradient descent:**
1. Fast learning and sample efficiency (can learn in one-shot).
2. Online Learning (suitable for real time learning).
3. No worries about local minima.

**Disadvantages:**
1. Computationally inefficient when the input is large (quadratic complexity in the number of inputs).
2. Sensitive to overflow and underflow, which can lead to instability in some cases.
3. The current implementation works with a neural network that has a single hidden layer. It is not clear whether adding more layers would be useful, since learning only happens in the last layer.
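
For concreteness, here is a minimal sketch of the overall idea: a fixed random hidden layer feeding an RLS-trained output layer that fits XOR in a single pass. The layer size, seed, and variable names are illustrative assumptions, not the repo's actual code:

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])            # XOR targets

# Fixed random hidden layer; only the output weights are learned.
n_hidden = 16
W_hidden = rng.normal(size=(2, n_hidden))
b_hidden = rng.normal(size=n_hidden)
H = np.tanh(X @ W_hidden + b_hidden)          # hidden activations, shape (4, 16)

# One RLS pass: each of the four samples updates the output weights exactly once.
w = np.zeros(n_hidden)
P = np.eye(n_hidden) / 1e-3                   # running inverse-covariance estimate
for h, t in zip(H, y):
    Ph = P @ h
    gain = Ph / (1.0 + h @ Ph)
    w += gain * (t - h @ w)                   # correct with the prediction error
    P -= np.outer(gain, Ph)

print(np.round(H @ w, 3))                     # ~ [0, 1, 1, 0] after a single pass
```

Because only the last layer is trained and RLS tracks the exact least-squares solution seen so far, the four XOR samples are fitted after one pass, with no gradient-descent epochs.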