https://github.com/dnouri/cuda-convnet
My fork of Alex Krizhevsky's cuda-convnet from 2013 where I added dropout, among other features.
- Host: GitHub
- URL: https://github.com/dnouri/cuda-convnet
- Owner: dnouri
- Created: 2013-06-08T21:19:06.000Z (over 11 years ago)
- Default Branch: master
- Last Pushed: 2015-01-23T20:38:44.000Z (almost 10 years ago)
- Last Synced: 2024-12-09T04:36:38.034Z (16 days ago)
- Language: Cuda
- Homepage: http://code.google.com/p/cuda-convnet/
- Size: 1.12 MB
- Stars: 256
- Watchers: 30
- Forks: 146
- Open Issues: 7
Metadata Files:
- Readme: README.rst
README
This is my fork of the ``cuda-convnet`` convolutional neural network
implementation written by Alex Krizhevsky.

``cuda-convnet`` has quite extensive documentation itself. Find the
`MAIN DOCUMENTATION HERE <http://code.google.com/p/cuda-convnet/>`_.

**Update**: A newer version, cuda-convnet 2, has been released by
Alex. This fork is still based on the original cuda-convnet.

===================
Additional features
===================

This document will only describe the small differences between
``cuda-convnet`` as hosted on Google Code and this version.

Dropout
=======

Dropout is a relatively new regularization technique for neural
networks. See the *Improving neural networks by preventing
co-adaptation of feature detectors* and *Improving Neural Networks
with Dropout* papers for details.

To set a dropout rate for one of our layers, we use the ``dropout``
parameter in our model's ``layer-params`` configuration file. For
example, we could use dropout for the last layer in the CIFAR example
by modifying the section for the fc10 layer to look like so::
  [fc10]
  epsW=0.001
  epsB=0.002
  # ...
  dropout=0.5

In practice, you'll probably also want to double the number of
``outputs`` in that layer.
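
For illustration, this is roughly what the pairing of the two
configuration files can look like for a hypothetical hidden fully
connected layer (the layer name and all values here are made up; the
keys follow the format used in cuda-convnet's example configuration
files)::

  # layer definition file: outputs doubled from 128 to 256
  # to compensate for dropout=0.5
  [fc128]
  type=fc
  inputs=pool2
  outputs=256
  initW=0.01
  neuron=relu

  # matching section in the layer-params file
  [fc128]
  epsW=0.001
  epsB=0.002
  # ...
  dropout=0.5
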

CURAND random seeding
=====================

An environment variable ``CONVNET_RANDOM_SEED``, if set, will be used
to set the CURAND library's random seed. This is important in order
to get reproducible results.
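
As a rough illustration of the mechanism (a standalone sketch using
the CURAND host API, not this fork's actual code), seeding a generator
from such an environment variable can look like this::

  // Sketch: seed a CURAND generator from CONVNET_RANDOM_SEED.
  #include <cstdio>
  #include <cstdlib>
  #include <curand.h>

  int main() {
      unsigned long long seed = 0;  // fallback when the variable is unset
      if (const char* s = std::getenv("CONVNET_RANDOM_SEED")) {
          seed = std::strtoull(s, nullptr, 10);
      }

      curandGenerator_t gen;
      curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
      // The same seed reproduces the same random stream across runs.
      curandSetPseudoRandomGeneratorSeed(gen, seed);

      std::printf("CURAND seeded with %llu\n", seed);
      curandDestroyGenerator(gen);
      return 0;
  }
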

Updated to work with CUDA via CMake
===================================

The build configuration and code have been updated to work with CUDA
via CMake. Run ``cmake .`` and then ``make``. If you have an
alternative BLAS library, set it with, for example,
``cmake -DBLAS_LIBRARIES=/usr/lib/libcblas.so .``.
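
Putting those commands together, a typical build from the source tree
looks like this (the BLAS path is only an example)::

  cmake .   # or: cmake -DBLAS_LIBRARIES=/usr/lib/libcblas.so .
  make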