https://github.com/saikat-roy/vision-systems-lab
MLPs, DCNNs, Deep Convolutional Autoencoders, LSTM, GRU, ResNets, DCGAN - CudaVision Lab at University of Bonn (SS19)
- Host: GitHub
- URL: https://github.com/saikat-roy/vision-systems-lab
- Owner: saikat-roy
- Created: 2019-04-30T23:32:54.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2020-11-15T01:11:27.000Z (over 4 years ago)
- Last Synced: 2025-03-21T22:21:41.660Z (about 1 month ago)
- Topics: deep-learning, deep-neural-networks, jupyter-notebook, python3, pytorch, pytorch-implmention, university-of-bonn
- Language: Jupyter Notebook
- Homepage:
- Size: 14.6 MB
- Stars: 10
- Watchers: 2
- Forks: 5
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Vision Systems Lab: Learning Computer Vision on GPUs [Readme not updated regularly]
Authors: Saikat Roy, [Albert Gubaidullin](https://github.com/Olbert)
Repository of the CudaVision Lab at University of Bonn (SS19), implemented (mostly) in PyTorch using Python 3 and Jupyter notebooks. The projects begin with the basics of neural networks and continue to deeper models. The following projects are contained in the respective folders:
### Project 1: Softmax Regression (without autograd/Pytorch Tensors)
Involves using softmax regression with manual gradient calculation to classify the MNIST dataset. Training and test set accuracies after a simple 5-iteration run were `0.8931` and `0.8866` respectively.

### Project 2: Multilayer Neural Network
Involves training simple multilayer neural networks with vanilla SGD on PyTorch, using k-fold Monte Carlo cross-validation for hyperparameter search (learning rate and batch size). Classification was done on the CIFAR-10 dataset. A confusion matrix after a simple 50-iteration run on a `3072-128-128-10` architecture is given below, with training and test set accuracies of `0.6647` and `0.5117` respectively.
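The Monte Carlo cross-validation loop described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the one-epoch training loop, the hyperparameter grids, and the `make_mlp`/`monte_carlo_cv` names are assumptions; only the `3072-128-128-10` architecture and vanilla SGD come from the write-up.

```python
import torch

def make_mlp():
    # 3072-128-128-10 architecture from the project description
    return torch.nn.Sequential(
        torch.nn.Linear(3072, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, 10),
    )

def monte_carlo_cv(X, y, lrs, batch_sizes, k=3, val_frac=0.2):
    """Re-split the data randomly k times per (lr, batch_size) pair
    and average the validation accuracy (Monte Carlo cross-validation)."""
    results = {}
    n_val = int(len(X) * val_frac)
    for lr in lrs:
        for bs in batch_sizes:
            accs = []
            for _ in range(k):
                perm = torch.randperm(len(X))
                val_idx, tr_idx = perm[:n_val], perm[n_val:]
                model = make_mlp()
                opt = torch.optim.SGD(model.parameters(), lr=lr)
                for i in range(0, len(tr_idx), bs):  # one epoch of vanilla SGD
                    idx = tr_idx[i:i + bs]
                    loss = torch.nn.functional.cross_entropy(model(X[idx]), y[idx])
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
                with torch.no_grad():
                    preds = model(X[val_idx]).argmax(dim=1)
                    accs.append((preds == y[val_idx]).float().mean().item())
            results[(lr, bs)] = sum(accs) / k
    return results
```

The best (learning rate, batch size) pair is then simply `max(results, key=results.get)`.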
### Project 3: Different Optimizers for MLP
The project involved applying `SGD`, `Adam`, `RMSprop`, `Adagrad` and `Adadelta` to the CIFAR-10 dataset. A `3072-128-128-10` architecture with 0.2 Dropout between hidden units was used for all algorithms.
Different non-linearities for the hidden units were also tested.

In each case, the results were somewhat counterintuitive, as `SGD` and `sigmoid` performed best. However, the optimizers may simply have different convergence rates, and with the network being this shallow, the benefits of the non-linearities typically used in 'Deep' networks might simply not show at this scale. Moreover, if run long enough, `SGD` generally converges to a better minimum than adaptive optimizers like `Adam`.
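A comparison like the one above can be sketched as a loop over `torch.optim` classes on the same architecture. This is a hedged illustration: the step count, learning rate, and the `compare` helper are assumptions; the five optimizers, the `3072-128-128-10` shape, and the 0.2 dropout are from the description.

```python
import torch

OPTIMIZERS = {
    "SGD": torch.optim.SGD,
    "Adam": torch.optim.Adam,
    "RMSprop": torch.optim.RMSprop,
    "Adagrad": torch.optim.Adagrad,
    "Adadelta": torch.optim.Adadelta,
}

def make_mlp(act):
    # 3072-128-128-10 with 0.2 dropout between hidden units, as in the write-up
    return torch.nn.Sequential(
        torch.nn.Linear(3072, 128), act(), torch.nn.Dropout(0.2),
        torch.nn.Linear(128, 128), act(), torch.nn.Dropout(0.2),
        torch.nn.Linear(128, 10),
    )

def compare(X, y, steps=5, lr=0.01):
    """Train the same network with each optimizer and record the final loss."""
    final_losses = {}
    for name, opt_cls in OPTIMIZERS.items():
        model = make_mlp(torch.nn.Sigmoid)  # sigmoid performed best here
        opt = opt_cls(model.parameters(), lr=lr)
        for _ in range(steps):
            loss = torch.nn.functional.cross_entropy(model(X), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        final_losses[name] = loss.item()
    return final_losses
```

Swapping `torch.nn.Sigmoid` for `torch.nn.ReLU`, `torch.nn.Tanh`, etc. covers the non-linearity comparison with the same loop.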
### Project 4: (Deep) Convolutional Neural Networks
The project introduces convolutional neural networks and applies a CNN to the CIFAR-10 dataset.

### Project 5: Deep(er) Networks using Residual Connections
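The README does not show the project's network, but the residual connections it refers to follow the standard ResNet pattern; a minimal sketch of a basic two-convolution residual block might look like this (channel counts and the `ResidualBlock` name are illustrative):

```python
import torch

class ResidualBlock(torch.nn.Module):
    """Basic residual block: out = relu(F(x) + shortcut(x))."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = torch.nn.BatchNorm2d(out_ch)
        self.conv2 = torch.nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = torch.nn.BatchNorm2d(out_ch)
        # 1x1 projection when the shape changes, identity otherwise
        if stride != 1 or in_ch != out_ch:
            self.shortcut = torch.nn.Sequential(
                torch.nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                torch.nn.BatchNorm2d(out_ch),
            )
        else:
            self.shortcut = torch.nn.Identity()

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.shortcut(x))
```

The skip connection lets gradients flow past the convolutions, which is what makes the deeper networks of this project trainable.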
### Project 6: Autoencoders (Convolutional, Denoising)
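The project's own code is not reproduced in the README; a convolutional autoencoder of the kind named here can be sketched as follows (layer sizes and the `ConvAutoencoder` name are assumptions, sized for CIFAR-10-style 32x32 RGB inputs):

```python
import torch

class ConvAutoencoder(torch.nn.Module):
    """Minimal convolutional autoencoder for 32x32 RGB images.
    For the denoising variant, feed a corrupted input (e.g. x + noise)
    while computing the reconstruction loss against the clean image."""
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),   # 32 -> 16
            torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU(),  # 16 -> 8
        )
        self.decoder = torch.nn.Sequential(
            torch.nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), torch.nn.ReLU(),   # 8 -> 16
            torch.nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), torch.nn.Sigmoid(), # 16 -> 32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

Training minimizes a reconstruction loss such as `torch.nn.functional.mse_loss(model(x), x)`.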
### Project 7: LSTM and GRU (Manual Implementation using PyTorch)
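A manual implementation in the sense of this project writes the gate equations out with plain linear layers instead of using `torch.nn.GRU`/`torch.nn.LSTM`. A minimal GRU cell along those lines (the `ManualGRUCell` name is illustrative, the equations are the standard GRU update):

```python
import torch

class ManualGRUCell(torch.nn.Module):
    """GRU cell written out from the update equations (no torch.nn.GRU)."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.W = torch.nn.Linear(input_size, 3 * hidden_size)   # input -> r, z, n
        self.U = torch.nn.Linear(hidden_size, 3 * hidden_size)  # hidden -> r, z, n

    def forward(self, x, h):
        xi = self.W(x).chunk(3, dim=-1)
        hi = self.U(h).chunk(3, dim=-1)
        r = torch.sigmoid(xi[0] + hi[0])   # reset gate
        z = torch.sigmoid(xi[1] + hi[1])   # update gate
        n = torch.tanh(xi[2] + r * hi[2])  # candidate hidden state
        return (1 - z) * n + z * h         # new hidden state
```

Unrolling the cell over the time dimension in a Python loop gives the full recurrent layer; an LSTM cell is implemented the same way with four gates and an extra cell state.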
### Project 8: Deep Convolutional GAN
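A DCGAN generator follows a well-known template: a latent vector upsampled through strided transposed convolutions with BatchNorm and ReLU, ending in Tanh. The sketch below assumes a 100-dim latent and 64x64 output; the exact sizes in the project may differ.

```python
import torch

class DCGANGenerator(torch.nn.Module):
    """DCGAN-style generator: latent vector -> 64x64 image."""
    def __init__(self, latent_dim=100, channels=3, base=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.ConvTranspose2d(latent_dim, base * 8, 4, 1, 0, bias=False),  # 1 -> 4
            torch.nn.BatchNorm2d(base * 8), torch.nn.ReLU(True),
            torch.nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),    # 4 -> 8
            torch.nn.BatchNorm2d(base * 4), torch.nn.ReLU(True),
            torch.nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),    # 8 -> 16
            torch.nn.BatchNorm2d(base * 2), torch.nn.ReLU(True),
            torch.nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),        # 16 -> 32
            torch.nn.BatchNorm2d(base), torch.nn.ReLU(True),
            torch.nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),        # 32 -> 64
            torch.nn.Tanh(),
        )

    def forward(self, z):
        # z: (N, latent_dim) -> (N, latent_dim, 1, 1) for the conv stack
        return self.net(z.view(z.size(0), -1, 1, 1))
```

The discriminator mirrors this with strided `Conv2d` layers and LeakyReLU; generator and discriminator are then trained adversarially with binary cross-entropy.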
### Project 9: Humanoid Robot Body Part Detection with Pretrained Encoders