https://github.com/rohan-varma/neuralnets
🤖 bare-bones implementation of a neural network with numpy
- Host: GitHub
- URL: https://github.com/rohan-varma/neuralnets
- Owner: rohan-varma
- Created: 2016-09-05T11:23:15.000Z (about 9 years ago)
- Default Branch: master
- Last Pushed: 2019-01-04T04:41:37.000Z (almost 7 years ago)
- Last Synced: 2025-03-31T04:01:53.697Z (6 months ago)
- Topics: hyperparameters, momentum, neural-network
- Language: Jupyter Notebook
- Homepage:
- Size: 13.1 MB
- Stars: 16
- Watchers: 5
- Forks: 8
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# neuralnets
Implementations and experiments with Neural Networks.
All of this code will be ported to Python 3 shortly.
The main file of importance is neuralnetwork.py. It contains a from-scratch implementation of a neural network with a single hidden layer, using no external libraries other than numpy. The network achieves about 96.5% accuracy on MNIST when the hyperparameters are tuned.
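For intuition, a single-hidden-layer network of this kind boils down to two matrix multiplies with a nonlinearity in between. The sketch below is illustrative only: the function name, shapes, and choice of ReLU plus softmax are assumptions, not necessarily what neuralnetwork.py actually uses.

```python
import numpy as np

def forward(X, W1, b1, W2, b2):
    """X: (batch, n_in) -> class probabilities (batch, n_classes)."""
    h = np.maximum(0.0, X @ W1 + b1)          # hidden layer (ReLU)
    logits = h @ W2 + b2                      # output layer
    # numerically stable softmax over each row
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)
```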
The implementation has a few of the bells and whistles that help neural networks learn better (sketched in code after this list):
- An option for using stochastic gradient descent with minibatch learning
- Decaying the learning rate during training
- Implementation of the momentum method for better convergence and fewer oscillations
- Nesterov momentum
- L2 regularization to promote smaller weights
- The dropout method, which randomly discards hidden-layer activations during training to prevent overfitting
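For concreteness, here is a minimal numpy sketch of how these tricks typically fit together. It follows the standard textbook formulations rather than the repository's exact code; all names and default values are illustrative.

```python
import numpy as np

def sgd_step(w, v, grad, lr, mu=0.9, l2=1e-4, nesterov=False):
    """One minibatch SGD update with momentum, optional Nesterov
    look-ahead, and an L2 penalty folded into the gradient."""
    grad = grad + l2 * w                      # L2 regularization: pull weights toward 0
    v = mu * v + grad                         # accumulate velocity
    step = grad + mu * v if nesterov else v   # Nesterov uses the look-ahead gradient
    return w - lr * step, v

def decayed_lr(lr0, epoch, decay=0.01):
    """Simple 1/t learning-rate decay schedule."""
    return lr0 / (1.0 + decay * epoch)

def dropout(h, p=0.5, train=True):
    """Inverted dropout: zero activations with probability p at train
    time and rescale the survivors, so no change is needed at test time."""
    if not train:
        return h
    mask = (np.random.rand(*h.shape) >= p) / (1.0 - p)
    return h * mask
```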
Another interesting file is utils/utils.py.
It contains several of my machine learning utilities for one-hot encoding, k-fold cross-validation, splitting datasets, and hyperparameter tuning.
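As a sketch of what two of these utilities usually look like (the function names and signatures below are assumptions for illustration, not necessarily those in utils/utils.py):

```python
import numpy as np

def one_hot(labels, num_classes):
    """labels: (n,) integer array -> (n, num_classes) one-hot matrix."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def k_fold_indices(n, k, rng=np.random):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```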