Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/laserkelvin/learning-neural-networks
A repository of notebooks for teaching neural networks
- Host: GitHub
- URL: https://github.com/laserkelvin/learning-neural-networks
- Owner: laserkelvin
- License: mit
- Created: 2020-03-31T16:58:48.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2021-06-02T18:07:29.000Z (over 3 years ago)
- Last Synced: 2024-04-16T04:09:28.613Z (8 months ago)
- Language: Jupyter Notebook
- Size: 1.14 MB
- Stars: 5
- Watchers: 2
- Forks: 1
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
README
# Learning Neural Networks
![splash](splash.png)
This repository contains a series of notebooks that demonstrate the fundamentals of deep learning
at varying levels of abstraction. Each notebook covers one part of deep learning, aiming to
convey the core theoretical concepts rather than applying them to the usual problems like cat
and handwriting classification.

## How to use these notebooks
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/laserkelvin/learning-neural-networks/master)
For quick viewing and experimentation, the Binder button above will launch a
Binder image. I recommend opening this GitHub repository in Google Colab
instead: we get free computational power in the form of free GPUs, so why not
use it? You can start up the notebooks [by navigating to this
link](https://colab.research.google.com/github/laserkelvin/learning-neural-networks).

If you'd like to make modifications and really play around with the notebooks,
however, I suggest you clone this repo and install the packages specified in
`requirements.txt`.

In terms of the natural progression of things, this is the general gist/summary
of each notebook:

1. Fundamentals
- Dive into _why_ we should use neural networks and deep learning
- Low-level implementation of the core mechanics of neural networks, the perceptron, using NumPy
- Effect of non-linearities on our model output
- Teaching a neural network
- How neural networks learn; cost and autograd with Jax
2. Primer On Auxiliary Functions
- Activation Functions
- Loss Functions
- Optimizers
3. PyTorch Abstraction
- Good practices in PyTorch
- GPU Models

## Acknowledgements
I created these notebooks while doing the
[deeplearning.ai](https://www.coursera.org/specializations/deep-learning)
specialization for deep learning. Andrew Ng and his team have put together a
set of great courses, and so some of the things I'm describing in my notebooks
are inspired by (but not lifted from!) his videos.
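As a taste of the low-level perceptron mechanics that the Fundamentals notebook works through, here is a minimal sketch in NumPy: a single perceptron layer with a sigmoid non-linearity, a mean-squared-error cost, and one hand-derived gradient-descent step. The toy data, shapes, and names here are illustrative assumptions, not code taken from the notebooks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Squashes any real input into (0, 1); the non-linearity
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples with 3 features each, binary targets
X = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Perceptron parameters: a weight vector and a bias
W = rng.normal(size=(3, 1))
b = np.zeros((1,))

# Forward pass: linear combination followed by the non-linearity
z = X @ W + b
y_hat = sigmoid(z)

# Mean-squared-error cost over the batch
loss = np.mean((y_hat - y) ** 2)

# Backward pass: chain rule worked out by hand
dloss = 2.0 * (y_hat - y) / y.shape[0]   # dL/dy_hat
dz = dloss * y_hat * (1.0 - y_hat)       # through the sigmoid
dW = X.T @ dz                            # dL/dW
db = dz.sum(axis=0)                      # dL/db

# One gradient-descent update
lr = 0.1
W -= lr * dW
b -= lr * db
```

Writing the backward pass by hand like this is exactly the kind of bookkeeping that autograd libraries such as Jax (covered later in the same notebook) automate away.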