awesome-very-deep-learning
♾ A curated list of papers and code about very deep neural networks
https://github.com/daviddao/awesome-very-deep-learning
Neural Ordinary Differential Equations
Papers
- Neural Ordinary Differential Equations (2018), introduces continuous-depth residual networks and continuous-time latent variable models: instead of specifying a discrete sequence of hidden layers, the derivative of the hidden state is parameterized with a neural network. The paper also constructs continuous normalizing flows, a generative model that can be trained by maximum likelihood without partitioning or ordering the data dimensions. For training, the authors show how to scalably backpropagate through any ODE solver without access to its internal operations, which allows end-to-end training of ODEs within larger models. NIPS 2018 best paper. A minimal sketch of the idea follows this list.
- Augmented Neural ODEs (2019), shows that there are functions Neural ODEs cannot represent and proposes augmenting the state space with extra dimensions to make the learned flows more expressive
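The sketch below is a minimal, hypothetical PyTorch ODE block illustrating the core idea: a small network parameterizes the derivative of the hidden state, and the block's output is the integral of that derivative. For simplicity it uses fixed-step Euler integration; the paper itself uses adaptive solvers and the adjoint method for constant-memory backpropagation, so treat this as a sketch of the concept, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Parameterizes the hidden-state derivative dh/dt = f(h, t) with a small MLP."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)  # this toy dynamics function ignores t


class ODEBlock(nn.Module):
    """Continuous-depth 'layer': output = h(t1), where dh/dt is learned.

    Integrated here with fixed-step Euler; the paper uses adaptive ODE
    solvers and adjoint-based backpropagation instead.
    """

    def __init__(self, func, t0=0.0, t1=1.0, steps=10):
        super().__init__()
        self.func = func
        self.t0, self.t1, self.steps = t0, t1, steps

    def forward(self, h):
        dt = (self.t1 - self.t0) / self.steps
        t = self.t0
        for _ in range(self.steps):
            h = h + dt * self.func(t, h)  # one Euler step
            t = t + dt
        return h


block = ODEBlock(ODEFunc(dim=32))
x = torch.randn(8, 32)
print(block(x).shape)  # torch.Size([8, 32])
```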
Implementations
- torchdiffeq (https://github.com/rtqichen/torchdiffeq), the first author's PyTorch library of differentiable ODE solvers with adjoint-method backpropagation
Value Iteration Networks
Papers
- Value Iteration Networks (2016), introduces a fully differentiable planning module that approximates the value iteration algorithm with a convolutional network and channel-wise max pooling. It is able to generalize better in environments where a network needs to plan. NIPS 2016 best paper. A minimal sketch of the value-iteration module follows.
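As a rough illustration, the hypothetical PyTorch module below runs k steps of approximate value iteration on a 2-D grid: a convolution over the stacked reward and value maps produces one Q-channel per action, and the value map is updated by a channel-wise max over those action channels. All names and hyperparameters here are invented for the sketch; the full model additionally couples this module with attention and a policy network.

```python
import torch
import torch.nn as nn


class ValueIterationModule(nn.Module):
    """Approximates k steps of value iteration on a grid world."""

    def __init__(self, num_actions=8, k=20):
        super().__init__()
        self.k = k
        # maps the stacked [reward, value] maps to one Q-channel per action
        self.q_conv = nn.Conv2d(2, num_actions, kernel_size=3, padding=1, bias=False)

    def forward(self, reward):
        v = torch.zeros_like(reward)  # initial value map V_0 = 0
        for _ in range(self.k):
            q = self.q_conv(torch.cat([reward, v], dim=1))  # Q-values per action
            v, _ = torch.max(q, dim=1, keepdim=True)        # channel-wise max pooling
        return v


vin = ValueIterationModule()
reward = torch.randn(1, 1, 16, 16)  # hypothetical reward map for a 16x16 grid
print(vin(reward).shape)  # torch.Size([1, 1, 16, 16])
```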
Densely Connected Convolutional Networks
Papers
- Densely Connected Convolutional Networks (2016), introduces DenseNet, in which each layer receives the concatenated feature maps of all preceding layers as input, strengthening feature reuse and gradient flow. CVPR 2017 best paper. A minimal dense block sketch follows this list.
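A minimal sketch of the dense connectivity pattern in PyTorch; the growth rate, layer count, and names are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Each layer gets the concatenated feature maps of all preceding layers
    and contributes growth_rate new channels of its own."""

    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            channels = in_channels + i * growth_rate
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse all earlier feature maps
            features.append(out)
        return torch.cat(features, dim=1)


block = DenseBlock(in_channels=16)
x = torch.randn(2, 16, 32, 32)
print(block(x).shape)  # torch.Size([2, 64, 32, 32]): 16 + 4 * 12 channels
```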
Implementations
- liuzhuang13/DenseNet (https://github.com/liuzhuang13/DenseNet), the authors' reference implementation
Deep Residual Learning
Papers
- The Reversible Residual Network: Backpropagation Without Storing Activations (2017), constructs reversible residual layers, whose activations need not be stored for backpropagation, and surprisingly finds that reversibility does not hurt final performance.
- Squeeze-and-Excitation Networks (2017), introduces the Squeeze-and-Excitation (SE) block, which adaptively recalibrates channel-wise feature responses. It achieved 1st place on ILSVRC17. A minimal SE block sketch follows this list.
- Aggregated Residual Transformations for Deep Neural Networks (2016), introduces ResNeXt, which aggregates a set of transformations with the same topology inside a residual block. It achieved 2nd place on ILSVRC16.
- Residual Networks of Residual Networks: Multilevel Residual Networks (2016), constructs multilevel hierarchical residual mappings and shows that this improves the accuracy of deep networks.
- Wide Residual Networks (2016), studies wide residual networks and shows that making residual blocks wider outperforms deeper and thinner network architectures.
- Swapout: Learning an ensemble of deep architectures (2016)
- Deep Networks with Stochastic Depth (2016)
- Identity Mappings in Deep Residual Networks (2016), improves the originally proposed residual units by reordering the batch normalization and activation layers.
- Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning (2016)
- Deep Residual Learning for Image Recognition (2015), the original paper introducing residual neural networks. A minimal residual block sketch follows this list.
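For reference, a minimal residual block sketch in PyTorch, in the post-activation form of the original 2015 paper (names and sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Learns a residual F(x) and adds it to an identity shortcut: y = F(x) + x."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # the identity shortcut lets gradients bypass the block


x = torch.randn(2, 16, 32, 32)
print(ResidualBlock(16)(x).shape)  # torch.Size([2, 16, 32, 32])
```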
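The Squeeze-and-Excitation block from the list above also fits in a few lines. A minimal sketch with the paper's default reduction ratio of 16 (other names are illustrative):

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze: global average pooling to one descriptor per channel.
    Excitation: a small gating network that rescales each channel."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))           # squeeze: (b, c) channel descriptors
        w = self.fc(s).view(b, c, 1, 1)  # excitation: per-channel weights in (0, 1)
        return x * w                     # recalibrate channel-wise responses


x = torch.randn(2, 32, 8, 8)
print(SEBlock(32)(x).shape)  # torch.Size([2, 32, 8, 8])
```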
Implementations
- KaimingHe/deep-residual-networks (https://github.com/KaimingHe/deep-residual-networks), the authors' original models
Highway Networks
Papers
- Training Very Deep Networks (2015), shows that highway networks with hundreds of layers can be optimized directly with SGD, which is not possible for plain networks of that depth
- Highway Networks (2015), introduces learned gating units that regulate how much of a layer's transformation versus its unchanged input is carried forward, letting information flow unimpeded across many layers. A minimal highway layer sketch follows this list.
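A minimal sketch of a fully connected highway layer; the negative gate-bias initialization follows the papers' recommendation, while the names and sizes are illustrative:

```python
import torch
import torch.nn as nn


class HighwayLayer(nn.Module):
    """Computes y = T(x) * H(x) + (1 - T(x)) * x: the transform gate T decides
    how much of the nonlinear transform H versus the raw input is carried on."""

    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)
        # bias the gate negative so layers initially just pass inputs through,
        # which is what makes very deep stacks trainable from the start
        nn.init.constant_(self.gate.bias, -2.0)

    def forward(self, x):
        h = torch.relu(self.transform(x))
        t = torch.sigmoid(self.gate(x))
        return t * h + (1.0 - t) * x


x = torch.randn(4, 32)
print(HighwayLayer(32)(x).shape)  # torch.Size([4, 32])
```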
Implementations
Very Deep Learning Theory
Papers
- Identity Matters in Deep Learning
- The Shattered Gradients Problem: If resnets are the answer, then what is the question?
- Skip Connections as Effective Symmetry-Breaking
- Highway and Residual Networks learn Unrolled Iterative Estimation
- Demystifying ResNet, suggests that 2-shortcuts in ResNets achieve the best results because they have non-degenerate, depth-invariant initial condition numbers (in comparison to 1- or 3-shortcuts), making it easy for the optimisation algorithm to escape from the initial point.
- Wider or Deeper? Revisiting the ResNet Model for Visual Recognition, argues that current very deep ResNets might not be effectively deepened (hence not trained end-to-end), due to the much shorter effective path length.
- Residual Networks are Exponential Ensembles of Relatively Shallow Networks
- Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex
- A Simple Way to Initialize Recurrent Networks of Rectified Linear Units, a pre-ResNet Hinton paper that suggested that the identity matrix could be useful for the initialization of deep networks
- ResNet with one-neuron hidden layers is a Universal Approximator