## 100-Days-Of-ML-Code

> This repository is part of the **100 Days of ML Code Challenge**.

---

Day 0
July 6, 2018: Simple Linear Regression

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Simple%20Linear%20Regression "Example")

Day 1
July 7, 2018: Support Vector Regression

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/SVR)
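
A minimal sketch of Support Vector Regression with an RBF kernel on illustrative data; SVR is scale-sensitive, so the feature is standardized first:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

X = np.arange(1, 11, dtype=float).reshape(-1, 1)
y = np.array([2., 4., 5., 4., 6., 8., 9., 9., 11., 12.])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_scaled, y)
print(model.predict(scaler.transform([[6.5]])))  # remember to scale new inputs too
```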

Day 2
July 9, 2018: Multiple Linear Regression

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Multiple%20Linear%20Regression)
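
A minimal sketch of multiple linear regression, i.e. the same `LinearRegression` estimator fitted on several features (synthetic data for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                       # three predictors
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(model.coef_)                                  # one coefficient per feature
print(model.score(X_test, y_test))                  # R^2 on held-out data
```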

Day 3
July 12, 2018: Logistic Regression

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Logistic%20Regression)
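
A minimal logistic regression sketch on synthetic data, including the probability estimates that make this model useful for ranking predictions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict(X_test[:5]))         # hard class labels
print(clf.predict_proba(X_test[:5]))   # class probabilities, useful for ranking
print(clf.score(X_test, y_test))       # accuracy
```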

Day 4
July 14, 2018: SVM

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/SVM)
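
A minimal sketch of a linear SVM classifier on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=4, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clf = SVC(kernel="linear", C=1.0).fit(X_train, y_train)
print(clf.score(X_test, y_test))
```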

Day 5
July 15, 2018: KNN

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/KNN)
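
A minimal K-Nearest Neighbours sketch; because KNN is distance based, the features are standardized inside a pipeline:

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```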

Day 6
July 16, 2018: Kernel SVM

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/kernel_SVM)
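
A minimal kernel SVM sketch: the same `SVC` estimator with an RBF kernel, which can separate non-linear data such as the two-moons toy set:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print(clf.score(X_test, y_test))
```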

Day 7
July 17, 2018: Naive Bayes

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Naive%20Bayes)
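
A minimal Gaussian Naive Bayes sketch on the Iris dataset; the model assumes features are conditionally independent given the class:

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
print(clf.predict_proba(X_test[:3]))   # per-class probabilities
print(clf.score(X_test, y_test))
```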

Day 8
July 18, 2018: Decision Tree

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Decision%20Tree)
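
A minimal decision tree sketch; `export_text` prints the learned rules, which is what makes trees easy to interpret:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(clf))   # human-readable if/else rules of the fitted tree
```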

Day 9
July 19, 2018: Random Forest

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Random%20Forest)
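
A minimal Random Forest sketch, an ensemble of decision trees that usually trades some interpretability for accuracy:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))
print(clf.feature_importances_)   # relative importance of each feature
```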

Day 10
July 21, 2018: K-means Clustering

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Clustering/K%20means)
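
A minimal K-means sketch on synthetic blobs; in practice the number of clusters is often chosen with the elbow method (plotting inertia against k):

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # coordinates of the 4 cluster centres
print(km.labels_[:10])       # cluster assignment of the first 10 points
```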

Day 11
July 22, 2018: Hierarchical Clustering

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Clustering/hc)
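
A minimal sketch assuming this entry covers hierarchical (agglomerative) clustering, as the linked `hc` folder suggests:

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# Ward linkage merges the pair of clusters that increases variance the least.
hc = AgglomerativeClustering(n_clusters=3, linkage="ward").fit(X)
print(hc.labels_[:10])
```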

Day 12
July 23, 2018: Association Rule Learning

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Apriori)
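
A tiny, dependency-free sketch of the core quantities behind association rule learning (support and confidence), computed on made-up transactions; the repo's Apriori folder linked above uses a full Apriori implementation:

```python
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

# Candidate rule: {bread} -> {milk}
sup_both = support({"bread", "milk"})
confidence = sup_both / support({"bread"})
print(f"support({{bread, milk}}) = {sup_both:.2f}")
print(f"confidence(bread -> milk) = {confidence:.2f}")
```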

Day 13
Upper Confidence Bound

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Apriori)
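
A minimal sketch of the UCB1 bandit algorithm on simulated Bernoulli arms; note the link above points at the Apriori folder, so this is illustrative rather than the repo's code:

```python
import math
import random

true_probs = [0.2, 0.5, 0.7]           # hidden reward probability of each arm
counts = [0] * len(true_probs)         # times each arm was pulled
values = [0.0] * len(true_probs)       # running mean reward per arm

for t in range(1, 1001):
    # Play every arm once first, then pick the arm with the highest UCB.
    if 0 in counts:
        arm = counts.index(0)
    else:
        ucb = [values[i] + math.sqrt(2 * math.log(t) / counts[i])
               for i in range(len(true_probs))]
        arm = ucb.index(max(ucb))
    reward = 1 if random.random() < true_probs[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print(counts)   # the best arm should be pulled most often
```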

Day 14
Thompson Sampling

Link to work: [Sample Example](https://github.com/nitesh009/100-Days-Of-ML-Code/tree/master/Apriori)
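
A minimal Thompson Sampling sketch with Beta posteriors on simulated Bernoulli arms (again illustrative; the link above points at the Apriori folder):

```python
import random

true_probs = [0.2, 0.5, 0.7]
wins = [0] * len(true_probs)    # observed rewards of 1 per arm
losses = [0] * len(true_probs)  # observed rewards of 0 per arm

for _ in range(1000):
    # Sample a plausible success rate for each arm from its Beta posterior.
    samples = [random.betavariate(wins[i] + 1, losses[i] + 1)
               for i in range(len(true_probs))]
    arm = samples.index(max(samples))
    if random.random() < true_probs[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1

print(wins, losses)   # the best arm accumulates the most pulls
```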

---

### FAQs
---

> How do I know which model to choose for my problem?

As with regression models, you first need to figure out whether your problem is linear or non-linear. One practical check, sketched after the list below, is to compare a linear and a non-linear model by cross-validation on your own data.

If your problem is linear, you should go for **Logistic Regression or SVM**.

If your problem is non-linear, you should go for **K-NN, Naive Bayes, Decision Tree or Random Forest**.

Then, from a business point of view, you would typically use:

* Logistic Regression or Naive Bayes when you want to rank your predictions by their probability, for example ranking customers from the most to the least likely to buy a certain product so you can target your marketing campaigns. For this type of problem, use Logistic Regression if the problem is linear and Naive Bayes if it is non-linear.

* SVM when you want to predict which segment your customers belong to. Segments can be of any kind, for example market segments you identified earlier with clustering.

* Decision Tree when you want a clear interpretation of your model results.

* Random Forest when you are just looking for high performance with less need for interpretation.
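
A minimal sketch of the cross-validation check mentioned above: fit one linear and one non-linear model on your data and compare their scores (the synthetic data and estimator choices here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

linear_clf = LogisticRegression(max_iter=1000)          # linear baseline
nonlinear_clf = RandomForestClassifier(n_estimators=200, random_state=0)  # non-linear

for name, clf in [("logistic regression", linear_clf),
                  ("random forest", nonlinear_clf)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```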

---

> NOTE: All algorithms are implemented using Python.