https://github.com/gokadin/ai-simplest-network
The simplest form of an artificial neural network explained and demonstrated.
artificial-intelligence artificial-neural-networks golang gradient-descent machine-learning tutorial
- Host: GitHub
- URL: https://github.com/gokadin/ai-simplest-network
- Owner: gokadin
- Created: 2019-07-31T01:05:31.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2020-05-31T02:49:54.000Z (over 4 years ago)
- Last Synced: 2024-08-01T00:43:13.748Z (3 months ago)
- Topics: artificial-intelligence, artificial-neural-networks, golang, gradient-descent, machine-learning, tutorial
- Language: Go
- Homepage:
- Size: 259 KB
- Stars: 397
- Watchers: 23
- Forks: 31
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Simplest artificial neural network
This is the simplest possible artificial neural network, explained and demonstrated.
## This is part 1 of a series of GitHub repos on neural networks
- part 1 - simplest network (**you are here**)
- [part 2 - backpropagation](https://github.com/gokadin/ai-backpropagation)
- [part 3 - backpropagation-continued](https://github.com/gokadin/ai-backpropagation-continued)

## Table of Contents
- [Theory](#theory)
- [Mimicking neurons](#mimicking-neurons)
- [A simple example](#a-simple-example)
- [The error](#the-error)
- [Gradient descent](#gradient-descent)
- [Code example](#code-example)
- [References](#references)

## Theory
### Mimicking neurons
Artificial neural networks are inspired by the brain: interconnected artificial neurons store patterns and communicate with each other.
The simplest form of an artificial neuron has one or more inputs, each with a specific weight, and one output.

![alt text](readme-images/perceptron.jpg)
At the simplest level, the output is the sum of its inputs times its weights: $z = \sum_i x_i w_i$.
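As a sketch of this weighted sum in Go (the repository's language; the function and variable names here are illustrative, not taken from the project's source):

```go
package main

import "fmt"

// neuronOutput returns the weighted sum of the inputs:
// z = x1*w1 + x2*w2 + ... (no activation function at this stage).
func neuronOutput(inputs, weights []float64) float64 {
	z := 0.0
	for i, x := range inputs {
		z += x * weights[i]
	}
	return z
}

func main() {
	inputs := []float64{1.0, 1.0}
	weights := []float64{0.5, 0.5}
	fmt.Println(neuronOutput(inputs, weights)) // 1
}
```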
### A simple example
The purpose of a network is to learn a certain output given certain input(s) by approximating a complex function with many parameters that we couldn't come up with ourselves.
Say we have a network with two inputs $x_1$ and $x_2$ and two weights $w_1$ and $w_2$.
The idea is to adjust the weights in such a way that the given inputs produce the desired output.
Weights are normally initialized randomly, since we can't know their optimal values ahead of time; for simplicity, however, we will initialize them both to the same value.
![alt text](readme-images/perceptron-example.jpg)
If we calculate the output of this network, we will get

$$z = x_1 w_1 + x_2 w_2$$
### The error
If the output doesn't match the expected target value, then we have an error.
For example, if we wanted to get a target value of $t$, then we would have a difference of $t - z$.

One common way to measure the error (also referred to as the cost function) is to use the mean squared error:

$$E = \frac{1}{2}(t - z)^2$$
If we had multiple associations of inputs and target values, then the error becomes the average sum of each association.
We use the mean squared error to measure how far away the results are from our desired target. The squaring removes negative signs and gives more weight to bigger differences between output and target.
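A minimal sketch of this error measure in Go (the names are illustrative, not from the project's source), using the common $\frac{1}{2}(t - z)^2$ form averaged over all associations:

```go
package main

import "fmt"

// meanSquaredError returns the average of 0.5*(t-z)^2 over all
// target/output pairs; the 1/2 factor (common in neural-network
// texts) simplifies the derivative later on.
func meanSquaredError(targets, outputs []float64) float64 {
	sum := 0.0
	for i, t := range targets {
		d := t - outputs[i]
		sum += 0.5 * d * d
	}
	return sum / float64(len(targets))
}

func main() {
	// A single association with target 0 and output 1 gives an error of 0.5.
	fmt.Println(meanSquaredError([]float64{0}, []float64{1}))
}
```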
To rectify the error, we would need to adjust the weights in a way that the output matches our target. In our example, lowering the weights until the output $z$ equals the target $t$ would do the trick, since the error would then be zero.
However, in order to adjust the weights of our neural networks for many different inputs and target values, we need a *learning algorithm* to do this for us automatically.
### Gradient descent
The idea is to use the error to understand how each weight should be adjusted so that the error is minimized, but first, we need to learn about gradients.
##### What is a gradient?
It's essentially a vector pointing in the direction of the steepest ascent of a function. The gradient is denoted with $\nabla$ and is simply the partial derivative of each variable of a function expressed as a vector.

It looks like this for a two-variable function:

$$\nabla f(x, y) = \begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{bmatrix}$$
To calculate the gradient of a concrete function $f(x, y)$, we simply take each partial derivative and evaluate it at the point of interest.

##### What is gradient descent?
The *descent* part simply means using the gradient to find the direction of steepest ascent of our function and then going in the opposite direction by a *small* amount many times, in order to find the function's *global (or sometimes local) minimum*.
We use a constant called the **learning rate**, denoted with $\eta$, to define how small of a step to take in that direction.

If $\eta$ is too large, then we risk overshooting the function minimum, but if it's too low then the network will take longer to learn and we risk getting stuck in a shallow local minimum.
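To illustrate the effect of the learning rate, here is gradient descent in Go on the simple function $f(w) = w^2$, whose gradient is $2w$ and whose minimum is at $w = 0$ (a toy function chosen for this sketch, not one from the project):

```go
package main

import "fmt"

// descend runs n steps of gradient descent on f(w) = w*w, whose
// gradient is 2*w. Each step moves w opposite the gradient,
// scaled by the learning rate eta.
func descend(w, eta float64, n int) float64 {
	for i := 0; i < n; i++ {
		grad := 2 * w
		w -= eta * grad
	}
	return w
}

func main() {
	fmt.Println(descend(1.0, 0.1, 100)) // small eta: approaches the minimum at w = 0
	fmt.Println(descend(1.0, 1.1, 10))  // eta too large: each step overshoots and w diverges
}
```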
![alt text](readme-images/gradient-descent.jpg)
##### Gradient descent applied to our example network
For our two weights $w_1$ and $w_2$, we need to find the gradient of the error function with respect to those weights:

$$\nabla E = \begin{bmatrix} \frac{\partial E}{\partial w_1} \\ \frac{\partial E}{\partial w_2} \end{bmatrix}$$

For both $w_1$ and $w_2$, we can find the gradient by using the chain rule:

$$\frac{\partial E}{\partial w_i} = \frac{\partial E}{\partial z} \cdot \frac{\partial z}{\partial w_i} = -(t - z) \cdot x_i$$

From now on we will denote $\frac{\partial E}{\partial z}$ as the $\delta$ term for simplicity.

Once we have the gradient, we can update our weights by subtracting the calculated gradient times the learning rate:

$$w_i \leftarrow w_i - \eta \cdot \delta \cdot x_i$$
And we repeat this process until the error is minimized and is close enough to zero.
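Putting the pieces together, here is a sketch of the full training loop in Go (the dataset, initial weights, and learning rate are hypothetical, chosen so a single linear neuron can fit the data exactly; the repository ships its own example):

```go
package main

import "fmt"

// train runs gradient descent on a single linear neuron z = x1*w1 + x2*w2.
// For E = 0.5*(t-z)^2 the chain rule gives dE/dw_i = -(t-z)*x_i,
// so each weight moves by +eta*(t-z)*x_i.
func train(inputs [][]float64, targets []float64, w []float64, eta float64, epochs int) []float64 {
	for e := 0; e < epochs; e++ {
		for i, x := range inputs {
			z := x[0]*w[0] + x[1]*w[1]
			delta := targets[i] - z
			w[0] += eta * delta * x[0]
			w[1] += eta * delta * x[1]
		}
	}
	return w
}

func main() {
	// Hypothetical dataset: exactly representable by a linear neuron
	// with w1 = w2 = 0.5, so training can drive the error to ~0.
	inputs := [][]float64{{0, 0}, {0, 1}, {1, 0}, {1, 1}}
	targets := []float64{0, 0.5, 0.5, 1}
	w := train(inputs, targets, []float64{0.1, 0.9}, 0.1, 1000)
	fmt.Printf("w = [%.3f %.3f]\n", w[0], w[1]) // converges toward [0.500 0.500]
}
```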
## Code example
The included example uses gradient descent to teach a small dataset to a neural network with two inputs and one output.

Once learned, the network should output a value close to the target for each pair of inputs.
### How to run
#### Online on repl.it
[![Run on Repl.it](https://repl.it/badge/github/gokadin/ai-simplest-network)](https://repl.it/github/gokadin/ai-simplest-network)
#### Docker
```bash
docker build -t simplest-network .
docker run --rm simplest-network
```

## References
1. *Artificial Intelligence Engines* by James V. Stone (2019)
2. Complete guide on deep learning: http://neuralnetworksanddeeplearning.com/chap2.html