https://github.com/datasystemsgrouput/dlbench
A repository for benchmarking deep learning frameworks on different data sets
- Host: GitHub
- URL: https://github.com/datasystemsgrouput/dlbench
- Owner: DataSystemsGroupUT
- Created: 2018-11-04T16:21:57.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2023-01-06T00:52:23.000Z (about 3 years ago)
- Last Synced: 2023-08-04T05:56:20.802Z (over 2 years ago)
- Language: Jupyter Notebook
- Size: 66.9 MB
- Stars: 4
- Watchers: 7
- Forks: 2
- Open Issues: 40
Metadata Files:
- Readme: README.md
# Benchmarking-Deep-Learning-Frameworks
## The frameworks included in this benchmark are:
1. [Keras](https://keras.io/)
2. [Chainer](https://docs.chainer.org/en/stable/glance.html)
3. [TensorFlow](https://www.tensorflow.org/)
4. [PyTorch](https://pytorch.org/)
5. [Theano](http://deeplearning.net/software/theano/)
6. [MXNet](https://mxnet.apache.org/)
## The CNN experiments compare these frameworks over 4 datasets:
1. [MNIST](http://yann.lecun.com/exdb/mnist/)
2. [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html)
3. [CIFAR100](https://www.cs.toronto.edu/~kriz/cifar.html)
4. [SVHN](http://ufldl.stanford.edu/housenumbers/)
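For a sense of what one of these CNN runs involves, here is a minimal illustrative sketch in Keras 2.2.4 (the version used in this benchmark). The model shape here is illustrative only; the actual benchmarked architectures live in the per-framework source files and may differ:

```python
# Illustrative only: a minimal Keras CNN on MNIST, not the exact model
# benchmarked in this repository (see the CPU/GPU source files for that).
import keras
from keras.datasets import mnist
from keras.layers import Conv2D, Dense, Flatten, MaxPooling2D
from keras.models import Sequential

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="sgd", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=1,
          validation_data=(x_test, y_test))
```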
## The LSTM experiments compare these frameworks over 3 datasets:
1. [PTB](https://corochann.com/penn-tree-bank-ptb-dataset-introduction-1456.html)
2. [Manythings](https://www.manythings.org/anki/)
3. [IMDB](https://keras.io/api/datasets/imdb/)
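Similarly, a minimal illustrative LSTM run on the IMDB dataset in Keras could look like the following; again, the benchmark's actual models live in the source files and may differ:

```python
# Illustrative only: a minimal Keras LSTM for IMDB sentiment
# classification, not the exact model benchmarked here.
from keras.datasets import imdb
from keras.layers import LSTM, Dense, Embedding
from keras.models import Sequential
from keras.preprocessing.sequence import pad_sequences

max_features, max_len = 20000, 80
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = pad_sequences(x_train, maxlen=max_len)
x_test = pad_sequences(x_test, maxlen=max_len)

model = Sequential([
    Embedding(max_features, 128),
    LSTM(128),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=1,
          validation_data=(x_test, y_test))
```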
## The FRCNN (Faster R-CNN) experiment compares these frameworks over 1 dataset:
1. [VOC12](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/)

Every experiment comes in two variants: one runs on the CPU and the other on the GPU.
## This repository is divided into 3 folders:
### 1- CPU Experiment
* The CPU experiments are performed on a single machine running CentOS release 7.5.1804 with a 32-core Intel Xeon processor (Skylake, IBRS) @ 2.00 GHz, 64 GB DIMM memory, and a 240 GB SSD drive.
* For Keras, version 2.2.4 is used, running on TensorFlow 1.11.0.
* For Chainer, version 4.5.0 is used.
* For TensorFlow, version 1.11.0 is used.
* For PyTorch, version 0.4.1 is used.
* For Theano, version 1.0.2 is used.
* For MXNet, version 1.3.0 is used.
* The folder contains the CPU source code, the generated graphs, and the logs of the experiment.
### 2- GPU Experiment
* The GPU experiments are performed on a single machine running Debian GNU/Linux 9 (stretch) with an 8-core Intel(R) Xeon(R) CPU @ 2.00 GHz, an NVIDIA Tesla P4, 36 GB DIMM memory, and a 300 GB SSD drive.
* For Keras, version 2.2.4 is used, running on TensorFlow 1.11.0.
* For Chainer, version 4.5.0 is used.
* For TensorFlow, version 1.11.0 is used.
* For PyTorch, version 0.4.1 is used.
* For Theano, version 1.0.2 is used.
* For MXNet, version 1.3.0 is used.
* For Chainer, MXNet, and PyTorch, we used CUDA 9.2 and cuDNN 7.2.1; for TensorFlow, Keras, and Theano, we used CUDA 10.0 and cuDNN 7.3 (a quick GPU-visibility check is sketched after this list).
* The folder contains the GPU source code, the generated graphs, and the logs of the experiment.
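Before launching GPU runs, it can help to confirm that each framework actually sees the GPU under the CUDA/cuDNN pairings above. A minimal sketch assuming the TensorFlow 1.11.0 and PyTorch 0.4.1 installs listed here (this check is not part of the repository):

```python
# Sanity check that the GPU is visible. Run this inside each
# framework's environment (separate environments are recommended below),
# so missing frameworks are simply skipped.
try:
    import tensorflow as tf
    print("TensorFlow sees a GPU:", tf.test.is_gpu_available())  # TF 1.x API
except ImportError:
    pass

try:
    import torch
    print("PyTorch sees a GPU:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("CUDA device:", torch.cuda.get_device_name(0))
except ImportError:
    pass
```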
### 3- Installation Guide
* It contains a file for each framework with the commands needed for installation.
* It is recommended to create a separate environment for each framework using conda or virtualenv.
## Experiment Logging:
Three files record resource usage during each experiment: a CPU log, a GPU log, and a memory log.
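The repository's own logging scripts produce these files; as a rough sketch of how such logs can be captured (the use of psutil here is an assumption, not necessarily what the repository does):

```python
# Illustrative resource logger: samples CPU and memory usage once per
# second and appends timestamped values to CSV-style log files.
import time

import psutil

with open("cpu.log", "a") as cpu_log, open("mem.log", "a") as mem_log:
    for _ in range(60):  # sample for one minute
        # cpu_percent(interval=1) blocks for one second while measuring
        cpu_log.write("%s,%.1f\n" % (time.time(), psutil.cpu_percent(interval=1)))
        mem_log.write("%s,%.1f\n" % (time.time(), psutil.virtual_memory().percent))
        cpu_log.flush()
        mem_log.flush()
```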
## How to run?
1. Install the environment for each framework using the installation guide.
2. Clone the project.
3. To run an experiment, e.g. Keras on the MNIST dataset, open the CPU folder: it holds the source code, with one file for each framework.
4. The main function calls a method named runModel, which takes the name of the dataset and the number of epochs for the run (see the sketch after this list).
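A hedged sketch of that entry point is below. Only the runModel name, its dataset/epochs arguments, and the defaults from the "Optional hyperparameters" section come from this README; everything else is illustrative, so consult the per-framework source file for the real signature:

```python
# Hypothetical sketch of the entry point described above, not the
# repository's actual code.
def runModel(dataset, epochs, learning_rate=0.01, momentum=0.5,
             weight_decay=1e-6, batch_size=128):
    """Train the benchmark model on `dataset` for `epochs` epochs."""
    ...

if __name__ == "__main__":
    # e.g. run the MNIST experiment for 10 epochs
    runModel("mnist", 10)
```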
## Optional hyperparameters:
There are several other optional parameters with the following default values (see the example after this list):
1. Learning rate = 0.01
2. Momentum = 0.5
3. Weight decay = 1e-6
4. Batch size = 128
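As an example of where these defaults plug in, a PyTorch 0.4.1 run using plain SGD (an assumption; the scripts may configure training differently) would wire them up roughly like this:

```python
# Where the default hyperparameters plug in for a PyTorch run (sketch;
# the repository's scripts may wire them up differently).
import torch
import torch.nn as nn

model = nn.Linear(784, 10)  # stand-in for the real benchmark model
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.01,            # learning rate
                            momentum=0.5,       # momentum
                            weight_decay=1e-6)  # weight decay
# The batch size feeds the data loader, e.g.:
# loader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True)
```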
## Installation steps:
```bash
# create a new environment with conda
$ conda create -n [my-env-name]

# activate the environment you created
$ source activate [my-env-name]

# install pip in the virtual environment
$ conda install pip
```
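From here, follow the per-framework file in the Installation Guide folder. For example, a Keras environment matching the versions listed above could be finished roughly like this (the version pins come from this README; the guide files are the authoritative commands):

```bash
# e.g. for the Keras environment (runs on TensorFlow, as noted above)
$ pip install tensorflow==1.11.0 keras==2.2.4
```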