https://github.com/snwfdhmp/neural-networks
Artificial neural networks that perform number recognition using the FANN library
- Host: GitHub
- URL: https://github.com/snwfdhmp/neural-networks
- Owner: snwfdhmp
- Created: 2016-12-29T12:13:49.000Z (over 8 years ago)
- Default Branch: master
- Last Pushed: 2016-12-30T16:12:54.000Z (over 8 years ago)
- Last Synced: 2025-02-26T09:08:28.547Z (3 months ago)
- Topics: ai, ann, artificial-intelligence, artificial-neural-networks, fann, intelligence, neurons, weights-possibilities
- Language: C++
- Homepage:
- Size: 65.4 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# AI repository
My first experiments with artificial intelligence.
I'm currently using the FANN library to build neural networks.
Observations:
- The "intelligence" of a trained ANN (Artificial Neural Network) depends heavily on the number of hidden layers and on the number of neurons per layer, and more isn't necessarily better.
For a simple problem like NOT (input 1, expected output 0; input 0, expected output 1), a single hidden layer with a single neuron is enough to get a very good result (no errors over 200k tests) after only 37 epochs of training.
But with 50 hidden layers of 5 neurons each, the ANN still has a 25% error rate after 2000 epochs of training.
Maybe this configuration would eventually work with a lot more training, but the job can be done with a single neuron, so... ;) (a sketch of that minimal setup follows below).
A 1-2-2-1 configuration also seems more efficient than a 1-4-1 one...
What's the reason for that? Let's try to figure it out.
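For reference, here is a minimal sketch of how such a NOT network can be set up with FANN's C API. The file name `not.data`, the activation functions, and the training parameters are my assumptions, not the repository's actual code:

```cpp
#include <fann.h>
#include <cstdio>

int main() {
    // Minimal NOT network: 3 layers (1 input, 1 hidden neuron, 1 output).
    struct fann *ann = fann_create_standard(3, 1, 1, 1);

    fann_set_activation_function_hidden(ann, FANN_SIGMOID);
    fann_set_activation_function_output(ann, FANN_SIGMOID);

    // Hypothetical training file "not.data" in FANN's plain-text format:
    //   2 1 1    (2 pairs, 1 input, 1 output)
    //   1
    //   0
    //   0
    //   1
    fann_train_on_file(ann, "not.data", 500, 10, 0.0001f);

    fann_type in0[1] = {0}, in1[1] = {1};
    std::printf("NOT(0) ~= %f\n", fann_run(ann, in0)[0]);
    std::printf("NOT(1) ~= %f\n", fann_run(ann, in1)[0]);

    fann_destroy(ann);
    return 0;
}
```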
- How many synapses does the first configuration (1-2-2-1) have?
```
first -> second:
first[0]->second[0]
first[0]->second[1]

second -> third:
second[0]->third[0]
second[0]->third[1]
second[1]->third[0]
second[1]->third[1]

third -> fourth (output):
third[0]->fourth[0]
third[1]->fourth[0]
```
So we get a total of 2 + 4 + 2 = 8 synapses (and so 8 different weight possibilities).
- What about the second configuration (1-4-1)?
```
first -> second:
first[0]->second[0]
first[0]->second[1]
first[0]->second[2]
first[0]->second[3]

second -> third (output):
second[0]->third[0]
second[1]->third[0]
second[2]->third[0]
second[3]->third[0]
```
So we get a total of 4 + 4 = 8 synapses (still 8 different weight possibilities).
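To double-check both counts, here is a quick standalone C++ sketch (it ignores the bias neurons that FANN adds to each layer, which contribute extra connections in a real network):

```cpp
#include <iostream>
#include <vector>

// Synapse count of a fully connected feed-forward net, without bias
// neurons: every neuron links to every neuron of the next layer.
unsigned count_synapses(const std::vector<unsigned>& layers) {
    unsigned total = 0;
    for (std::size_t i = 0; i + 1 < layers.size(); ++i)
        total += layers[i] * layers[i + 1];
    return total;
}

int main() {
    std::cout << count_synapses({1, 2, 2, 1}) << "\n"; // 2 + 4 + 2 = 8
    std::cout << count_synapses({1, 4, 1}) << "\n";    // 4 + 4 = 8
}
```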
The number of activation functions is also the same in both: 4.
So how can we get a significant difference in capability?
Good question.
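A place to start is inspecting the networks themselves. Here is a sketch using FANN's introspection functions; note that FANN adds a bias neuron to every layer except the output, so the reported totals will be higher than the 8 counted above:

```cpp
#include <fann.h>
#include <cstdio>

int main() {
    struct fann *deep = fann_create_standard(4, 1, 2, 2, 1); // 1-2-2-1
    struct fann *wide = fann_create_standard(3, 1, 4, 1);    // 1-4-1

    // Totals include bias connections, so both exceed the hand-counted 8.
    std::printf("1-2-2-1: %u connections\n", fann_get_total_connections(deep));
    std::printf("1-4-1:   %u connections\n", fann_get_total_connections(wide));

    // Dump each weight matrix to compare the two structures by eye.
    fann_print_connections(deep);
    fann_print_connections(wide);

    fann_destroy(deep);
    fann_destroy(wide);
    return 0;
}
```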