Neural Network Decision Boundary
https://github.com/saman-nia/neural-network-decision-boundary
- Host: GitHub
- URL: https://github.com/saman-nia/neural-network-decision-boundary
- Owner: saman-nia
- Created: 2018-01-16T18:18:58.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2018-04-18T09:17:45.000Z (about 7 years ago)
- Last Synced: 2025-01-12T18:11:26.615Z (4 months ago)
- Language: Jupyter Notebook
- Homepage:
- Size: 3.35 MB
- Stars: 2
- Watchers: 1
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Image Processing Analysis
# Neural Network Decision Boundary:
I will explain the idea through a simpler representation called a Threshold Logic Unit.
1. A set of connections between neurons carries activations from other neurons.
2. In the circle of the figure above, the unit sums the inputs and then applies a non-linear activation function.
3. The output line transfers the result to other neurons.

The McCulloch-Pitts model was an extremely simple artificial neuron. The inputs could be either a zero or a one, and the output was a zero or a one. Each input could be either excitatory or inhibitory. The whole point was to sum the inputs: if an input was one and excitatory, it added one to the sum; if it was one and inhibitory, it subtracted one. This is done for all inputs, and the final sum is compared against a threshold to produce the output (see the sketch below). An arrangement of one input layer of McCulloch-Pitts neurons feeding forward to one output layer of McCulloch-Pitts neurons is known as a Perceptron.
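As a rough sketch (not code from this repository), the subtractive McCulloch-Pitts unit described above could be written in Python as follows; the function name and the threshold value are illustrative assumptions:

```python
import numpy as np

def mcculloch_pitts_unit(inputs, excitatory, threshold=1):
    """Minimal McCulloch-Pitts threshold logic unit (illustrative sketch).

    inputs     : array of 0/1 activations coming from other neurons
    excitatory : array of +1 (excitatory) / -1 (inhibitory) flags, one per input
    threshold  : the unit fires (outputs 1) when the signed sum reaches this value
    """
    inputs = np.asarray(inputs)
    excitatory = np.asarray(excitatory)
    # Excitatory inputs that are 1 add one to the sum; inhibitory inputs that are 1 subtract one.
    total = np.sum(inputs * excitatory)
    # The binary output comes from comparing the sum against the threshold.
    return 1 if total >= threshold else 0

# Example: two excitatory inputs and one inhibitory input.
print(mcculloch_pitts_unit([1, 1, 0], [+1, +1, -1], threshold=2))  # -> 1
print(mcculloch_pitts_unit([1, 1, 1], [+1, +1, -1], threshold=2))  # -> 0
```

A Perceptron in this sense is simply a layer of such units, each receiving the same input vector but with its own excitatory/inhibitory pattern and threshold.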
# Gradient Descent
The biggest advantage of the linear activation function over the unit step function is that it is differentiable.
This property allows us to define a cost function J(w) that we can minimize in order to update our weights.
In the case of the linear activation function, we can define the cost function J(w) as the sum of squared errors (SSE), which is similar to the cost function that is minimized in ordinary least squares linear regression.
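For illustration only (this sketch is not taken from the repository's notebooks), a minimal batch gradient descent loop that minimizes the SSE cost J(w) = ½ Σᵢ (yᵢ − wᵀxᵢ)² for a linear activation might look like this; the function name, learning rate, and epoch count are arbitrary choices:

```python
import numpy as np

def gradient_descent_sse(X, y, lr=0.02, epochs=500):
    """Batch gradient descent on the SSE cost with a linear activation.

    J(w)   = 0.5 * sum_i (y_i - w^T x_i)^2
    dJ/dw  = -X^T (y - X w), so each update is w <- w + lr * X^T (y - X w).
    """
    w = np.zeros(X.shape[1])
    costs = []
    for _ in range(epochs):
        errors = y - X.dot(w)          # residuals under the linear activation
        w += lr * X.T.dot(errors)      # step opposite to the gradient of J(w)
        costs.append(0.5 * np.sum(errors ** 2))
    return w, costs

# Tiny example: fit y = 2*x using a bias column plus one feature.
X = np.column_stack([np.ones(5), np.arange(5.0)])
y = 2.0 * np.arange(5.0)
w, costs = gradient_descent_sse(X, y)
print(w)  # approximately [0., 2.]
```

Because each epoch steps the weights against the gradient of J(w), the values collected in `costs` should decrease toward zero as long as the learning rate is small enough.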