Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ardawanism/advanced_deep_learning_fall_2023
This repository provides code and visualizations for the Advanced Deep Learning course.
- Host: GitHub
- URL: https://github.com/ardawanism/advanced_deep_learning_fall_2023
- Owner: Ardawanism
- Created: 2023-10-05T10:03:35.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2023-10-19T14:15:23.000Z (over 1 year ago)
- Last Synced: 2024-01-30T06:59:19.379Z (12 months ago)
- Topics: advanced-deep-learning, convolutional-neural-networks, deep-learning, deep-neural-networks, machine-learning, machine-learning-algorithms, recurrent-neural-networks
- Language: Jupyter Notebook
- Size: 5.57 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Advanced_Deep_Learning_Fall_2023
## Lecture 2:
A perceptron can be used for binary classification when the two classes are linearly separable. A 2D visualization of the perceptron is depicted below:

![Perceptron](https://github.com/Ardawanism/Advanced_Deep_Learning_Fall_2023/blob/master/Asset/pix/perceptron.gif)
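To make the idea concrete, here is a minimal NumPy sketch of the classic perceptron learning rule (illustrative only, not the course's notebook code); the toy data and hyperparameters are made up for the example. The update loop is guaranteed to stop making mistakes only when the classes are linearly separable:

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=100):
    """Perceptron learning rule for labels y in {-1, +1}.

    Stops making updates only if the two classes are linearly separable.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            # Misclassified: the prediction's sign disagrees with the label.
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:  # all points classified correctly
            break
    return w, b

# Toy linearly separable data: positive class only when both inputs are 1.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # [-1. -1. -1.  1.]
```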
A single perceptron computes a linear function of its input, so it can only perform binary classification of linearly separable data. By connecting multiple perceptrons in multiple layers and applying non-linear activation functions in between, we enlarge the hypothesis class and can learn non-linear mappings. Such a neural network is called a Feedforward Neural Network (or Multi-Layer Perceptron, MLP). The learning procedure of an MLP used to regress a function is depicted below:
![Alt Text](https://github.com/Ardawanism/Advanced_Deep_Learning_Fall_2023/blob/master/Asset/pix/regression.gif)
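As an illustration of such a network (not the course's own code), the following PyTorch sketch fits a small MLP to a toy regression target; the architecture, activation, and hyperparameters are arbitrary choices for the example:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 1-D regression target: y = sin(x).
x = torch.linspace(-3.0, 3.0, 256).unsqueeze(1)
y = torch.sin(x)

# Feedforward network (MLP): linear layers with non-linear
# activations (tanh) applied in between.
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# Full-batch gradient descent on the MSE loss.
for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final MSE: {loss.item():.4f}")
```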
## Lecture 3:
There are many techniques for improving vanilla Gradient Descent, and many variants of it such as Adam and AdamW. One of the most popular techniques is Momentum, which accumulates past gradients in a velocity term: v ← βv + ∇f(x), then x ← x − ηv. A visual comparison between vanilla Gradient Descent and Gradient Descent with Momentum is depicted below:

![Optimization](https://github.com/Ardawanism/Advanced_Deep_Learning_Fall_2023/blob/master/Asset/pix/optimization.gif)
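A minimal NumPy sketch of that comparison (illustrative, not the course's code): on an ill-conditioned quadratic, vanilla GD must keep the step size small for stability along the steep axis and therefore crawls along the shallow one, while momentum makes much faster progress with the same step size:

```python
import numpy as np

def gd(grad, x0, lr=0.01, steps=100, beta=0.0):
    """Gradient descent; beta=0.0 is vanilla GD, beta>0 adds momentum.

    Momentum update: v <- beta * v + grad(x),  x <- x - lr * v
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v + grad(x)
        x = x - lr * v
    return x

# Ill-conditioned quadratic f(x) = 0.5 * (100 * x1^2 + x2^2):
# the steep x1 axis forces a small step size, so vanilla GD
# crawls along the shallow x2 axis; momentum speeds it up.
grad = lambda x: np.array([100.0 * x[0], x[1]])
print("vanilla :", np.linalg.norm(gd(grad, [1.0, 1.0])))            # ~0.37 from optimum
print("momentum:", np.linalg.norm(gd(grad, [1.0, 1.0], beta=0.9)))  # far closer to 0
```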