Neural network examples
- Host: GitHub
- URL: https://github.com/planetis-m/neuralnet-examples
- Owner: planetis-m
- Created: 2020-06-16T10:49:29.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2024-06-10T11:57:43.000Z (about 1 year ago)
- Last Synced: 2024-06-10T13:33:59.267Z (about 1 year ago)
- Language: Nim
- Size: 192 KB
- Stars: 0
- Watchers: 1
- Forks: 1
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
# Neural net examples
In this repo you will find examples of neural networks implemented from scratch using
my matrix [library](https://github.com/b3liever/manu).

## Examples included
### 1. [perceptron](perceptron.nim)
The [perceptron](https://en.wikipedia.org/wiki/Perceptron) algorithm.
The activation function is a binary step function, the Heaviside step
function, and the loss function is the hinge loss. It is capable of binary
classification; the example functions as an OR gate.

```json
{
"layers": [2, 1],
"activation_function": ["heaviside"]
}
```
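
Below is a minimal sketch of the perceptron learning rule for the OR gate, using plain Nim arrays instead of the repo's manu matrices; the learning rate and epoch count are illustrative:

```nim
proc heaviside(x: float): float =
  ## Binary step activation.
  if x >= 0.0: 1.0 else: 0.0

var
  w = [0.0, 0.0]  # weights for the [2, 1] layout
  b = 0.0         # bias
let
  inputs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
  targets = [0.0, 1.0, 1.0, 1.0]  # OR gate truth table
  lr = 0.1                        # illustrative learning rate

for epoch in 1 .. 20:
  for i, x in inputs:
    let y = heaviside(w[0] * x[0] + w[1] * x[1] + b)
    let err = targets[i] - y  # perceptron error signal
    w[0] += lr * err * x[0]
    w[1] += lr * err * x[1]
    b += lr * err

echo heaviside(w[0] * 1.0 + w[1] * 0.0 + b)  # 1.0, as an OR gate should
```
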
### 2. [neural](neural.nim)
A two-layer neural network trained with gradient descent. Both layers use the
sigmoid activation function. The example functions as an XOR gate. Weights are
initialized from a uniform distribution ``U(-sqrt(6 / (in + out)), sqrt(6 / (in + out)))``
(Xavier initialization).

```json
{
"layers": [2, 3, 1],
"activation_function": ["sigmoid", "sigmoid"]
}
```
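
As a sketch of this initialization scheme (the proc below and its use of plain seqs are illustrative, not the repo's manu-based code):

```nim
import std/[math, random]

proc sigmoid(x: float): float =
  1.0 / (1.0 + exp(-x))

# Xavier (Glorot) uniform initialization:
# U(-sqrt(6/(nIn+nOut)), sqrt(6/(nIn+nOut)))
proc xavierInit(nIn, nOut: int): seq[seq[float]] =
  let limit = sqrt(6.0 / float(nIn + nOut))
  result = newSeq[seq[float]](nOut)
  for i in 0 ..< nOut:
    result[i] = newSeq[float](nIn)
    for j in 0 ..< nIn:
      result[i][j] = rand(-limit .. limit)

randomize()
let
  w1 = xavierInit(2, 3)  # input -> hidden weights for the [2, 3, 1] layout
  w2 = xavierInit(3, 1)  # hidden -> output weights
```
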
### 3. [momentums](momentums.nim)
Same as the previous example, but implementing the momentum method, which
improves training speed and accuracy (helping to avoid getting stuck in local
minima). The example functions as an XOR gate.

```json
{
"layers": [2, 5, 1],
"activation_function": ["sigmoid", "sigmoid"]
}
```
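
The momentum method keeps a velocity term per weight that accumulates a decaying sum of past gradients. A single-weight sketch, with illustrative hyperparameters:

```nim
var
  w = 0.5  # a single weight
  v = 0.0  # its velocity (decaying sum of past gradients)
let
  lr = 0.1    # learning rate (illustrative)
  beta = 0.9  # momentum factor (illustrative)

proc momentumStep(g: float) =
  ## One update for weight `w` given its gradient `g`.
  v = beta * v + lr * g  # accumulate past gradients, decayed by beta
  w = w - v              # step against the accumulated direction
```
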
### 4. [exseimion](exseimion.nim)
Handwritten digit classification is a multi-class classification problem.
The data set used is [semeion.data](http://archive.ics.uci.edu/ml/machine-learning-databases/semeion/semeion.data).

```json
{
"layers": [256, 51, 10],
"activation_function": ["sigmoid", "softmax"]
}
```
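
The softmax output squashes the 10 output activations into a probability distribution over the digit classes. A numerically stable sketch in plain Nim (the repo operates on manu matrices instead):

```nim
import std/math

proc softmax(z: seq[float]): seq[float] =
  ## Subtracting the max keeps exp() from overflowing.
  let m = max(z)
  var total = 0.0
  result = newSeq[float](z.len)
  for i, x in z:
    result[i] = exp(x - m)
    total += result[i]
  for i in 0 ..< result.len:
    result[i] /= total

echo softmax(@[1.0, 2.0, 3.0])  # probabilities summing to 1
```
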
### 5. [minibatches](minibatches.nim)
Same as the previous example, but the data is split into small batches
(subsets). This improves memory efficiency and accuracy, at a trade-off in
compute efficiency. It uses root mean squared propagation (RMSProp) instead of
plain SGD, and L2 regularization is applied to the model's weights.

```json
{
"layers": [256, 51, 10],
"activation_function": ["leaky_relu", "softmax"]
}
```
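
A single-weight sketch of RMSProp with L2 regularization; the hyperparameter values here are illustrative assumptions, not the ones used in the example:

```nim
import std/math

var
  w = 0.5      # a single weight
  cache = 0.0  # running average of squared gradients
let
  lr = 0.001          # learning rate
  rho = 0.9           # decay rate of the squared-gradient average
  weightDecay = 1e-4  # L2 regularization strength
  eps = 1e-8          # avoids division by zero

proc rmspropStep(g: float) =
  ## One update for weight `w` given its loss gradient `g`.
  let grad = g + weightDecay * w  # gradient of loss plus L2 penalty
  cache = rho * cache + (1.0 - rho) * grad * grad
  w = w - lr * grad / (sqrt(cache) + eps)
```
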
### Bonus: [Cross Validation](cross_validation.nim)
Same as the previous example, but with cross validation. Implements the
accuracy, precision, recall and F1-score metrics.

```json
{
"layers": [256, 51, 10],
"activation_function": ["sigmoid", "softmax"]
}
```
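
A sketch of how precision, recall and F1 follow from per-class confusion counts; guards against zero denominators are omitted for brevity:

```nim
proc metrics(tp, fp, fn: int): tuple[precision, recall, f1: float] =
  ## Precision, recall and F1 from true/false positive and false
  ## negative counts (int `/` yields float in Nim).
  let p = tp / (tp + fp)
  let r = tp / (tp + fn)
  result = (p, r, 2.0 * p * r / (p + r))

echo metrics(tp = 90, fp = 10, fn = 20)  # precision 0.9, recall ~0.818, f1 ~0.857
```
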
**DISCLAIMER**: Only for learning purposes. Nim has its own machine learning
framework, [Arraymancer](https://github.com/mratsim/Arraymancer), as well as
[torch bindings](https://github.com/SciNim/flambeau).

## Acknowledgments
- [UFLDL Tutorial](http://ufldl.stanford.edu/tutorial/)
- [How to implement a neural network](https://peterroelants.github.io/posts/neural-network-implementation-part01/)
- [Machine Learning FAQ](https://sebastianraschka.com/faq/index.html)

## License
This library is distributed under the [MIT license](LICENSE).