Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/saman-nia/multiclass-classification
Deep Learning VS. Machine learning
- Host: GitHub
- URL: https://github.com/saman-nia/multiclass-classification
- Owner: saman-nia
- Created: 2019-04-04T14:03:05.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2019-04-20T18:21:57.000Z (almost 6 years ago)
- Last Synced: 2024-11-13T09:44:53.715Z (3 months ago)
- Topics: classification, deep-learning, logistic-regression, multi-class-classification, multi-classify-with-tensorflow, one-vs-rest, scikit-learn, tensorflow, text-features
- Language: Jupyter Notebook
- Homepage:
- Size: 2.94 MB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Multi-Class Classification Features with TensorFlow and Scikit-Learn Logistic Regression
# One vs. All:
Here, you can see the performance of deep learning vs. machine learning:
![alt text](https://github.com/saman-nia/MultiClass-Classification/blob/master/data/Result.png)
Here, you can see the performance of deep multi-class classification:
![alt text](https://github.com/saman-nia/Deep-Learning-MultiClass-Classification/blob/master/data/Image_Performance.png)
Concept from: https://developers.google.com/machine-learning/crash-course/multi-class-neural-networks/one-vs-all
One vs. all provides a way to leverage binary classification. Given a classification problem with N possible solutions, a one-vs.-all solution consists of N separate binary classifiers, one for each possible outcome. During training, the model runs through a sequence of binary classifiers, training each to answer a separate classification question. For example, given a picture of a dog, five different recognizers might be trained, four seeing the image as a negative example (not a dog) and one seeing the image as a positive example (a dog). That is:
- Is this image an apple? No.
- Is this image a bear? No.
- Is this image candy? No.
- Is this image a dog? Yes.
- Is this image an egg? No.

This approach is fairly reasonable when the total number of classes is small, but becomes increasingly inefficient as the number of classes rises.
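The N-separate-classifiers scheme above can be sketched in plain NumPy: train one logistic-regression classifier per class (that class as positive, everything else as negative), then predict with the most confident classifier. The toy data, learning rate, and step count below are illustrative assumptions, not taken from the repository's notebooks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: three well-separated 2-D Gaussian blobs, 50 points each.
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([c + 0.5 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_binary(X, t, lr=0.1, steps=500):
    """Train one class-vs.-rest logistic regression by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        grad = p - t                      # gradient of the log loss
        w -= lr * (X.T @ grad) / len(t)
        b -= lr * grad.mean()
    return w, b

# One binary classifier per class: class k is positive, all other classes negative.
models = [train_binary(X, (y == k).astype(float)) for k in range(3)]

# Predict by picking the class whose classifier is most confident.
scores = np.column_stack([sigmoid(X @ w + b) for w, b in models])
pred = scores.argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

Note that each classifier is trained independently, which is exactly why the cost grows linearly with the number of classes.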
We can create a significantly more efficient one-vs.-all model with a deep neural network in which each output node represents a different class. The following figure suggests this approach:
![alt text](https://developers.google.com/machine-learning/crash-course/images/OneVsAll.svg)
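The single-network alternative can be sketched the same way: one weight matrix whose three output nodes each score a class, trained jointly with a softmax and cross-entropy loss rather than as three separate binary problems. This is a minimal NumPy sketch of the idea (a softmax output layer without hidden layers); the data and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: three 2-D Gaussian blobs, 50 points each.
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([c + 0.5 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)
Y = np.eye(3)[y]                           # one-hot targets, one column per output node

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)

# A single model: each of the 3 output nodes represents one class.
W = np.zeros((2, 3))
b = np.zeros(3)
lr = 0.1
for _ in range(500):
    P = softmax(X @ W + b)
    grad = P - Y                           # gradient of the cross-entropy loss
    W -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean(axis=0)

pred = softmax(X @ W + b).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

Because the softmax couples all output nodes, the per-class probabilities sum to 1 and the whole model is trained in one pass, instead of N separate training runs.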