https://github.com/saarbk/Introduction-to-Machine-Learning
Sharing both theoretical and programming ideas that I came across in the Introduction to Machine Learning course: notes, homework solutions, and Python assignments.
- Host: GitHub
- URL: https://github.com/saarbk/Introduction-to-Machine-Learning
- Owner: saarbk
- License: mit
- Created: 2022-05-03T18:47:07.000Z (about 3 years ago)
- Default Branch: master
- Last Pushed: 2023-10-16T18:02:25.000Z (over 1 year ago)
- Last Synced: 2024-04-12T16:33:43.445Z (about 1 year ago)
- Topics: homework-assignments, knn-algorithm, machine-learning, neural-network, sgd-classifier, student-project, svm
- Language: TeX
- Homepage: https://saarbk.github.io/iml
- Size: 1.57 MB
- Stars: 4
- Watchers: 2
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
[saarbk.github.io/iml](https://saarbk.github.io/iml)
# Introduction-to-Machine-Learning
Repository of notebooks and conceptual insights from my "Introduction to Machine Learning" course. Each section contains the corresponding PDF solutions and Python scripts.

## Section 1 (Warm-up)
\
[1.1](Section1.0/section_1.pdf) Linear Algebra
\
[1.2](Section1.0/section_1.pdf) Calculus and Probability
\
[1.3](Section1.0/section_1.pdf) Optimal Classifiers and Decision Rules
\
[1.4](Section1.0/section_1.pdf) Multivariate normal (or Gaussian) distribution

\
[Visualizing the Hoeffding bound.](Section1.0/plot1.png)
[k-NN algorithm.](Section1.0/KNN.py)
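As a rough illustration of the idea behind the k-NN assignment (this sketch is not the repository's KNN.py; the function names and toy data below are made up):

```python
# Minimal k-NN sketch: classify a query point by majority vote over its
# k nearest training points (Euclidean distance).  Illustrative only.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]

# toy usage
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.95, 0.9]), k=3))  # -> 1
```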

## [Section 2](https://github.com/saarbk/Introduction-to-Machine-Learning/blob/main/EX2/Section_2.pdf)
[2.1](Section2.0/Section2.pdf) PAC learnability of ℓ2-balls around the origin
\
[2.2](Section2.0/Section2.pdf) PAC in Expectation
\
[2.3](Section2.0/Section2.pdf) Union Of Intervals
\
[2.4](Section2.0/Section2.pdf) Prediction by polynomials
\
[2.5](Section2.0/Section2.pdf) Structural Risk Minimization
[Union Of Intervals.](EX2/union_of_intervals.py)
Study the hypothesis class of a finite union of disjoint intervals and the properties of the ERM algorithm for this class.
To review, let the sample space be $\mathcal{X} = [0,1]$ and assume we study a binary classification problem, i.e. $\mathcal{Y} = \{0,1\}$.
We will try to learn using a hypothesis class that consists of $k$ disjoint intervals $[l_1,u_1],\dots,[l_k,u_k]$, and define the corresponding hypothesis as

$$h_I(x)=\begin{cases}1 & \text{if } x\in[l_1,u_1]\cup\dots\cup[l_k,u_k]\\0 & \text{otherwise}\end{cases}$$
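A small sketch of evaluating such a hypothesis and its empirical 0–1 error (illustrative only; the actual union_of_intervals.py implements the full experiment and the ERM algorithm itself):

```python
# Sketch: a union-of-intervals hypothesis h_I and its empirical 0-1 error.
# Illustrative only -- the data generation and names here are made up.
import numpy as np

def h_intervals(intervals, x):
    """h_I(x) = 1 if x lies in some [l, u] of the union, else 0."""
    return int(any(l <= x <= u for l, u in intervals))

def empirical_error(intervals, xs, ys):
    preds = np.array([h_intervals(intervals, x) for x in xs])
    return np.mean(preds != ys)

# toy sample: points in [0, 0.2] or [0.8, 1] are labelled 1, with 10% label noise
xs = np.sort(np.random.rand(200))
true_labels = ((xs <= 0.2) | (xs >= 0.8)).astype(int)
ys = np.where(np.random.rand(200) < 0.9, true_labels, 1 - true_labels)
print(empirical_error([(0.0, 0.2), (0.8, 1.0)], xs, ys))
```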
## Section 3

\
[3.1](Section3.0/section3.pdf) Step-size Perceptron
\
[3.2](Section3.0/section3.pdf) Convex functions
\
[3.3](Section3.0/section3.pdf) GD with projection
\
[3.4](Section3.0/section3.pdf) Gradient Descent on Smooth Functions
[SGD for Hinge loss.](Section3.0/sgd.py)
The skeleton file sgd.py contains a helper function that reads the MNIST examples labelled 0 and 8 and returns them with labels −1/+1. If you are unable to read the MNIST data with the provided script, you can download the file from [here](https://github.com/amplab/datasciencesp14/blob/master/lab7/mldata/mnist-original.mat). The hinge loss is defined as

$$\ell_{\text{hinge}}(\mathbf{w};\mathbf{x}_i,y_i)=\max\left(0,\,1-y_i\,\mathbf{w}\cdot\mathbf{x}_i\right)$$
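A minimal sketch of the corresponding SGD update (not the repository's sgd.py; it assumes an ℓ2-regularized hinge objective with trade-off parameter C and step size η_t = η₀/t, which may differ from the exact assignment setup):

```python
# Illustrative SGD for a regularized hinge loss; not the assignment's sgd.py.
import numpy as np

def sgd_hinge(X, y, C=1.0, eta0=1.0, T=1000):
    """T SGD steps; at step t pick a random example and take a subgradient step."""
    w = np.zeros(X.shape[1])
    for t in range(1, T + 1):
        i = np.random.randint(len(y))
        eta = eta0 / t
        if y[i] * np.dot(w, X[i]) < 1:             # hinge active: subgradient -C*y_i*x_i
            w = (1 - eta) * w + eta * C * y[i] * X[i]
        else:                                      # only the regularizer contributes
            w = (1 - eta) * w
    return w
```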
[SGD for log-loss.](Section3.0/sgd.py)
In this exercise we will optimize the log loss, defined as

$$\ell_{\log}(\mathbf{w};\mathbf{x},y)=\log\left(1+e^{-y\,\mathbf{w}\cdot\mathbf{x}}\right)$$
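For the log loss the gradient on a single example is $\nabla_\mathbf{w}\,\ell=-y\,\mathbf{x}/(1+e^{\,y\,\mathbf{w}\cdot\mathbf{x}})$, so a (hypothetical) SGD loop looks like:

```python
# Illustrative SGD for the log loss log(1 + exp(-y w.x)); not the assignment's sgd.py.
import numpy as np

def sgd_log_loss(X, y, eta0=1.0, T=1000):
    w = np.zeros(X.shape[1])
    for t in range(1, T + 1):
        i = np.random.randint(len(y))
        eta = eta0 / t
        grad = -y[i] * X[i] / (1.0 + np.exp(y[i] * np.dot(w, X[i])))
        w -= eta * grad
    return w
```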
## Section 4

\
[4.1](Section4.0/section_4.pdf) SVM with multiple classes
\
[4.2](Section4.0/section_4.pdf) Soft-SVM bound using hard-SVM
\
[4.3](Section4.0/section_4.pdf) Separability using polynomial kernel
\
[4.4](Section4.0/section_4.pdf) Expressivity of ReLU networks
\
[4.5](Section4.0/section_4.pdf) Implementing boolean functions using ReLU networks.
[SVM](Section4.0/svm.py)
Exploring different polynomial kernel degrees for SVM. We will use an existing implementation of SVM, the `SVC` class from `sklearn.svm`.
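For example, sweeping over polynomial degrees with `SVC` might look like the following sketch (toy data; the assignment's svm.py uses its own data, parameters, and plots):

```python
# Illustrative sweep over polynomial kernel degrees with sklearn's SVC.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 0.5).astype(int)   # non-linearly separable toy labels

for degree in (1, 2, 3, 5):
    clf = SVC(kernel="poly", degree=degree, C=10.0, coef0=1.0)
    clf.fit(X, y)
    print(f"degree {degree}: train accuracy {clf.score(X, y):.3f}")
```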
[Neural Networks](Section4.0/svm.py)
We will implement the back-propagation algorithm for training a neural network, working with the MNIST data set, which consists of 60,000 28×28 grayscale images with values from 0 to 1.
Define the log-loss on a single example as

$$\ell_{\mathbf{x},\mathbf{y}}(\mathcal{W})=-\mathbf{y}\cdot\log\mathbf{z}_L(\mathbf{x};\mathcal{W})$$

and the loss we want to minimize is

$$L(\mathcal{W})=\frac{1}{n}\sum_{i=1}^{n}\ell_{\mathbf{x}_i,\mathbf{y}_i}(\mathcal{W})=-\frac{1}{n}\sum_{i=1}^{n}\mathbf{y}_i\cdot\log\mathbf{z}_L(\mathbf{x}_i;\mathcal{W})$$
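A compact sketch of a forward pass and the output-layer gradient for this loss: with a softmax output $\mathbf{z}_L$, the gradient of $\ell$ with respect to the output pre-activation is $\mathbf{z}_L-\mathbf{y}$, which is the quantity back-propagation starts from. The shapes and layer structure below are hypothetical, not the assignment's network.

```python
# Illustrative forward pass and output-layer gradient for ell(W) = -y . log z_L
# with a softmax output; not the assignment's back-propagation code.
import numpy as np

def softmax(a):
    a = a - a.max()                        # shift for numerical stability
    e = np.exp(a)
    return e / e.sum()

def forward(W1, b1, W2, b2, x):
    h = np.maximum(0, W1 @ x + b1)         # ReLU hidden layer
    z = softmax(W2 @ h + b2)               # network output z_L(x; W)
    return h, z

def loss_and_output_grad(z, y):
    """Return ell = -y . log z and d ell / d(pre-activation) = z - y (one-hot y)."""
    return -np.dot(y, np.log(z)), z - y

# tiny usage with made-up shapes (784 inputs, 32 hidden units, 10 classes)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(32, 784)), np.zeros(32)
W2, b2 = rng.normal(scale=0.1, size=(10, 32)), np.zeros(10)
x, y = rng.random(784), np.eye(10)[3]      # one "image", one-hot label
h, z = forward(W1, b1, W2, b2, x)
loss, delta = loss_and_output_grad(z, y)
```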