Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
JSON representation
- Host: GitHub
- URL: https://github.com/pppw/deep-learning-random-explore
- Owner: PPPW
- Created: 2018-10-20T15:02:55.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2019-08-09T03:02:15.000Z (about 5 years ago)
- Last Synced: 2024-10-23T01:13:32.286Z (14 days ago)
- Topics: backward-propagation, cnn-architecture, deep-learning, fastai, keras, lstm
- Language: Jupyter Notebook
- Size: 3.55 MB
- Stars: 194
- Watchers: 17
- Forks: 34
- Open Issues: 3
- Metadata Files:
  - Readme: README.md
README
# Deep Learning: Random Explore
A set of notebooks exploring deep-learning-related topics.
* [CNN architectures](CNN_archs/cnn_archs.ipynb): look into the structures of common CNN architectures, such as ResNet, [ResNeXt](https://arxiv.org/abs/1611.05431), [SENet](https://arxiv.org/pdf/1709.01507.pdf), [DenseNet](https://arxiv.org/pdf/1608.06993.pdf), [Inception V4](https://arxiv.org/pdf/1602.07261.pdf), [WRN](https://arxiv.org/pdf/1605.07146.pdf), [Xception](https://arxiv.org/pdf/1610.02357.pdf), [Dual Path Networks](https://arxiv.org/abs/1707.01629), [NASNet](https://arxiv.org/abs/1707.07012), [Progressive Neural Architecture Search](https://arxiv.org/abs/1712.00559), [VGG](https://arxiv.org/pdf/1409.1556.pdf), etc., and how to use them in [fastai](https://docs.fast.ai/) (see the first sketch after this list).
* [EfficientNet paper study](efficientnet/EfficientNet.ipynb): study the official implementation, including its building blocks, the mobile inverted residual (MBConv) blocks and the Squeeze-and-Excitation networks (SE block sketch below).
* [WGAN paper study](wgan/wgan.ipynb): replicate some results from the WGAN paper (critic-update sketch below).
* [An easy way to do the backward propagation math](backward_propagation_for_all/README.md): use a simple rule to derive backward propagation for all kinds of neural networks, such as LSTMs and CNNs (gradient-check sketch below).
* [Resume interrupted 1cycle policy training](divide_1cycle/README.md): split a long training run into smaller pieces and resume between them (checkpoint sketch below).
* [How does the LSTM's memory work?](LSTM_memory_cells/README.md): dig into the LSTM's internal states to see how it manages to generate valid XML text (state-inspection sketch below).
* [Parameter counts for popular CNN architectures](CNN_archs_param_counts/README.md) (counting sketch below).
* _To be continued ..._
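As a first sketch, here is one way to plug a standard architecture into fastai. This is a minimal example under stated assumptions, not the repo's code: it uses the fastai v2 API (the notebooks target fastai v1, where names differ slightly; recent fastai releases spell `cnn_learner` as `vision_learner`), a built-in torchvision ResNet instead of the custom architectures the notebook dissects, and fastai's standard PETS sample dataset.

```python
from fastai.vision.all import *

# fastai's standard Oxford-IIIT Pets sample dataset; labels are parsed
# from the filenames with a regular expression.
path = untar_data(URLs.PETS)
files = get_image_files(path/"images")
dls = ImageDataLoaders.from_name_re(
    path, files, pat=r"(.+)_\d+.jpg$", item_tfms=Resize(224))

# cnn_learner takes the architecture as a callable; swapping in resnet50,
# densenet121, etc. is a one-word change.
learn = cnn_learner(dls, resnet50, metrics=error_rate)
learn.fine_tune(1)
```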
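For the Squeeze-and-Excitation block studied in the EfficientNet notebook, a minimal PyTorch sketch (not the official TensorFlow implementation; `reduction=16` is the SENet paper's default):

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # per-channel gates in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                         # excitation: rescale each channel
```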
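For the WGAN replication, a sketch of a single critic update as in the paper's Algorithm 1: the critic maximizes E[f(x_real)] − E[f(x_fake)] and its weights are clipped to enforce the Lipschitz constraint. `critic`, `generator`, and `z_dim` are illustrative placeholders; the clip value 0.01 is the paper's default.

```python
import torch

def critic_step(critic, generator, real, opt_c, z_dim=100, clip=0.01):
    z = torch.randn(real.size(0), z_dim)
    fake = generator(z).detach()              # don't backprop into the generator
    # Negate so that minimizing ascends the Wasserstein estimate.
    loss = -(critic(real).mean() - critic(fake).mean())
    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
    for p in critic.parameters():             # weight clipping from the paper
        p.data.clamp_(-clip, clip)
    return -loss.item()                       # current Wasserstein estimate
```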
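The backward-propagation write-up derives gradients by hand with one simple rule; a practical way to sanity-check any such derivation (an addition here, not from the repo) is a numerical gradient check. For y = Wx and L = ½‖y − t‖², the chain rule gives ∂L/∂W = (y − t)xᵀ:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=(4, 1))
t = rng.normal(size=(3, 1))

dW = (W @ x - t) @ x.T                 # analytic gradient from the chain rule

# Central-difference numerical gradient, one weight at a time.
eps, num = 1e-6, np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        lp = 0.5 * np.sum((Wp @ x - t) ** 2)
        lm = 0.5 * np.sum((Wm @ x - t) ** 2)
        num[i, j] = (lp - lm) / (2 * eps)

print(np.max(np.abs(dW - num)))        # should be tiny (around 1e-9)
```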
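The resume-1cycle notes use fastai; the same idea in plain PyTorch (an assumption, not the repo's code) is to persist model, optimizer, and scheduler state so a later run continues the schedule where it stopped. With `torch.optim.lr_scheduler.OneCycleLR`, restoring the scheduler state is what keeps the learning-rate curve continuous across the interruption.

```python
import torch

def save_checkpoint(path, model, opt, sched, epoch):
    torch.save({"model": model.state_dict(), "opt": opt.state_dict(),
                "sched": sched.state_dict(), "epoch": epoch}, path)

def load_checkpoint(path, model, opt, sched):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["model"])
    opt.load_state_dict(ckpt["opt"])
    sched.load_state_dict(ckpt["sched"])  # resumes the 1cycle LR curve mid-cycle
    return ckpt["epoch"]                  # continue training from here
```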
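For the LSTM-memory investigation, the key mechanic is that PyTorch's LSTM returns its hidden and cell states, which can be inspected directly. A minimal sketch with an untrained toy model (the notebook does this with its trained XML model):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(1, 5, 8)        # one sequence of 5 time steps
out, (h, c) = lstm(x)

print(c.shape)                  # (num_layers, batch, hidden): the memory cells
print(c[0, 0, :4])              # peek at a few cell-state activations
```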
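Finally, the parameter-count table can be reproduced for the torchvision-shipped architectures with a one-liner over `model.parameters()` (a generic sketch, not the repo's script):

```python
from torchvision import models

def count_params(m):
    return sum(p.numel() for p in m.parameters() if p.requires_grad)

# Constructed without pretrained weights, so nothing is downloaded.
for name in ["resnet50", "densenet121", "vgg16"]:
    print(f"{name}: {count_params(getattr(models, name)()):,} parameters")
```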