https://github.com/yell/mnist-challenge
My solution to TUM's Machine Learning MNIST challenge 2016-2017 [winner]
- Host: GitHub
- URL: https://github.com/yell/mnist-challenge
- Owner: yell
- License: mit
- Created: 2017-02-24T20:14:32.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2019-10-09T10:47:02.000Z (over 5 years ago)
- Last Synced: 2025-02-25T06:25:11.718Z (2 months ago)
- Topics: data-augmentation, deep-learning, deep-neural-networks, gaussian-processes, k-nn, kernel, logistic-regression, machine-learning, mnist, neural-network, pca, python, rbm
- Language: Jupyter Notebook
- Homepage:
- Size: 13.8 MB
- Stars: 70
- Watchers: 2
- Forks: 13
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# ML MNIST Challenge
This contest was offered within TU Munich's course Machine Learning (IN2064).
The goal was to implement k-NN, a Neural Network, Logistic Regression and a Gaussian Process Classifier in Python from scratch and to achieve the minimal average test error among these classifiers on the well-known MNIST dataset, without ensemble learning.

## Results
| Algorithm | Description | Test Error, % |
| :---: | :--- | :---: |
| ***k-NN*** | 3-NN, Euclidean distance, uniform weights.<br>*Preprocessing*: Feature vectors extracted from ***NN***. | **1.13** |
| ***k-NN2*** | 3-NN, Euclidean distance, uniform weights.<br>*Preprocessing*: Augment the training data (×9) using random rotations, shifts, Gaussian blur and pixel dropout; PCA-35 whitening and multiplying each feature vector by e^(11.6 · ***s***), where ***s*** is the normalized explained variance of the respective principal axis (equivalent to applying PCA whitening with an accordingly weighted Euclidean distance; see the sketch below the table). | **2.06** |
| ***NN*** | MLP 784-1337-D(0.05)-911-D(0.1)-666-333-128-10 (D – dropout); hidden activations – LeakyReLU(0.01), output – softmax; loss – categorical cross-entropy; 1024 batches; 42 epochs; optimizer – *Adam* (learning rate 5 · 10⁻⁵, rest – defaults from the paper).<br>*Preprocessing*: Augment the training data (×5) using random rotations, shifts, Gaussian blur. | **1.04** |
| ***LogReg*** | 32 batches; 91 epochs; L2 penalty, λ = 3.16 · 10⁻⁴; optimizer – *Adam* (learning rate 10⁻³, rest – defaults from the paper).<br>*Preprocessing*: Feature vectors extracted from ***NN***. | **1.01** |
| ***GPC*** | 794 random data points used for training; σ_n = 0; RBF kernel (σ_f = 0.4217, γ = 1/(2l²) = 0.0008511); Newton iterations for the Laplace approximation until ΔLog-Marginal-Likelihood ≤ 10⁻⁷; linear systems solved iteratively with CG to 10⁻⁷ tolerance; for prediction, 2000 samples generated per test point.<br>*Preprocessing*: Feature vectors extracted from ***NN***. | **1.59** |
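To make the ***k-NN2*** preprocessing concrete, here is a minimal NumPy-only sketch of PCA-35 whitening followed by the exponential explained-variance weighting. The function name and the choice to normalize over the retained components are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def pca_whiten_weighted(X_train, X_test, n_components=35, scale=11.6):
    """PCA whitening + per-axis weights exp(scale * s), s = normalized explained variance."""
    mean = X_train.mean(axis=0)
    cov = np.cov(X_train - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:n_components]   # keep the 35 largest
    eigvals, eigvecs = eigvals[top], eigvecs[:, top]
    s = eigvals / eigvals.sum()                      # normalized explained variance
    w = np.exp(scale * s)                            # per-axis weights
    def transform(X):
        Z = (X - mean) @ eigvecs / np.sqrt(eigvals)  # whitened coordinates
        return Z * w                                 # weighted Euclidean geometry
    return transform(X_train), transform(X_test)
```

Running a plain 3-NN with Euclidean distance on the transformed vectors is then the same as using the weighted distance mentioned in the table.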

More plots are available in `experiments/plots/`.

## How to install
```bash
git clone https://github.com/yell/mnist-challenge
cd mnist-challenge/
pip install -r requirements.txt
```
After installation, tests can be run with:
```bash
make test
```

## How to run
Check [main.py](main.py) to reproduce the training and testing of the final models:
```bash
usage: main.py [-h] [--load-nn] model

positional arguments:
  model       which model to run, {'gp', 'knn', 'knn-without-nn', 'logreg', 'nn'}

optional arguments:
  -h, --help  show this help message and exit
  --load-nn   whether to use the pretrained neural network; ignored if
              'knn-without-nn' is used (default: False)
```
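For example (model names as in the table above), invocations like the following should train and evaluate a model, or reuse an already trained network for feature extraction:
```bash
python main.py nn                 # train and test the neural network
python main.py logreg --load-nn   # pretrained NN features + logistic regression
```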
## Experiments
Also check [this notebook](experiments/cross_validations.ipynb) to see what I've tried.
**Note**: the RBM + LogReg approach gave at most `91.8%` test accuracy, since the RBM takes too long to train with the given pure-Python code and was therefore trained on only a small subset of the data (and it still underfitted). However, with an RBM properly trained on the whole training set, this approach can achieve `1.83%` test error (see my [Boltzmann machines project](https://github.com/yell/boltzmann-machines)).
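For reference only, here is a hypothetical version of that RBM + LogReg pipeline written with scikit-learn instead of the repo's from-scratch code; the hyperparameters are placeholders, not the settings used in the notebook:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

# The RBM learns binary hidden features from the raw pixels; LogReg classifies them.
rbm_logreg = Pipeline([
    ("rbm", BernoulliRBM(n_components=256, learning_rate=0.01, n_iter=20, random_state=42)),
    ("logreg", LogisticRegression(max_iter=1000)),
])
# rbm_logreg.fit(X_train / 255.0, y_train)        # pixels scaled to [0, 1]
# print(rbm_logreg.score(X_test / 255.0, y_test))
```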
## Features
* Apart from the specified algorithms, there are also PCA and RBM implementations
* Most of the classes contain doctests so they are easy to understand
* All randomness in algorithms or functions is reproducible (seeds)
* Support for simple, readable serialization (JSON)
* There is also some infrastructure for model selection, feature selection, data augmentation, metrics, plots, etc.
* Support for ***MNIST*** and ***Fashion MNIST*** (both have the same structure, thus both can be loaded using the [same routine](mnist-challenge/utils/dataset.py); see the loader sketch below); haven't tried the latter yet, though
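Both datasets ship as the same gzipped IDX files, so one loader covers both. Below is a hedged sketch of such a loader in plain NumPy; it is not the repository's actual `utils/dataset.py` code, and the file names in the comment are the standard `*-ubyte.gz` downloads:

```python
import gzip
import numpy as np

def load_idx_images(path):
    # IDX image file: 16-byte header (magic, n_images, n_rows, n_cols as big-endian int32),
    # followed by the raw uint8 pixel values.
    with gzip.open(path, "rb") as f:
        _, n, rows, cols = np.frombuffer(f.read(16), dtype=">i4")
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(n, rows * cols)

def load_idx_labels(path):
    # IDX label file: 8-byte header (magic, n_labels), followed by uint8 labels.
    with gzip.open(path, "rb") as f:
        _ = np.frombuffer(f.read(8), dtype=">i4")
        return np.frombuffer(f.read(), dtype=np.uint8)

# X, y = load_idx_images("train-images-idx3-ubyte.gz"), load_idx_labels("train-labels-idx1-ubyte.gz")
```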
## System
All computations and time measurements were made on a laptop with an `i7-5500U CPU @ 2.40GHz x 4` and `12GB RAM`.

## Possible future work
Here is a list of what else could be tried with these particular 4 ML algorithms (I either didn't have time to check it, or it was forbidden by the rules, e.g. ensemble learning):
* Model averaging for k-NN: train a group of k-NNs with different values of *k* (say, 2, 4, ..., 128) and average their predictions (a sketch is given after this list);
* More sophisticated metrics (say, from `scipy.spatial.distance`) for k-NN;
* Weighting metrics according to some other functions of explained variance from PCA;
* NCA;
* Different kernels or compound kernels for k-NN;
* Committee of MLPs, a CNN, a committee of CNNs, or more advanced NNs;
* Unsupervised pretraining for MLP/CNN;
* Different kernels or compound kernels for GPCs;
* 10 one-vs-rest GPCs;
* Use derivatives of the log-marginal-likelihood of the multiclass Laplace approximation w.r.t. kernel parameters for more efficient gradient-based optimization;
* Model averaging for GPCs: train a collection of GPCs on different parts of the data and then average their predictions (or bagging);
* IVM.
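As an illustration of the first item, here is a sketch of what k-NN model averaging could look like, written with scikit-learn purely for brevity (the challenge itself required from-scratch implementations):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_ensemble_predict(X_train, y_train, X_test, ks=(2, 4, 8, 16, 32, 64, 128)):
    # Average the class-probability estimates of several k-NN models with different k.
    proba = np.zeros((len(X_test), len(np.unique(y_train))))
    for k in ks:
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
        proba += clf.predict_proba(X_test)
    return proba.argmax(axis=1)   # class index equals the digit label for MNIST
```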