# Face and Emotion Detection

## Introduction
This project aims to classify the emotion on a person's face into one of four categories, using deep convolutional neural networks. This repository is an implementation of [this](https://github.com/atulapra/Emotion-detection/blob/master/ResearchPaper.pdf) research paper, with help from [this](https://github.com/atulapra/Emotion-detection) GitHub repo.

## Dataset
The model is trained on the FER-2013 dataset, which was published at the International Conference on Machine Learning (ICML) and is available [here](https://anonfile.com/bdj3tfoeba/data_zip). The dataset consists of 35,887 grayscale, 48x48-pixel face images labeled with seven emotions: angry, disgusted, fearful, happy, neutral, sad, and surprised.
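
For reference, here is a minimal sketch of loading FER-2013 from its standard CSV release (`fer2013.csv` with `emotion` and `pixels` columns) and keeping only four of the seven labels. The file name and the particular four-label subset below are assumptions for illustration, not taken from this repository:

```python
import numpy as np
import pandas as pd

# FER-2013 label mapping in the standard release:
# 0=angry, 1=disgusted, 2=fearful, 3=happy, 4=sad, 5=surprised, 6=neutral.
KEEP = {0, 3, 4, 6}  # placeholder choice of four emotions

df = pd.read_csv("fer2013.csv")      # assumed standard CSV release
df = df[df["emotion"].isin(KEEP)]    # keep only the chosen four classes

# Each 'pixels' entry is a space-separated string of 48*48 grayscale values.
X = np.stack([np.asarray(p.split(), dtype=np.uint8).reshape(48, 48, 1)
              for p in df["pixels"]])
y = df["emotion"].to_numpy()
print(X.shape, y.shape)              # (N, 48, 48, 1) and (N,)
```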

## Dependencies
* Python 3.6
* [OpenCV](https://opencv.org/)
* [TensorFlow](https://www.tensorflow.org/)
* [Keras](https://keras.io/)

## Usage
* First, clone or download this repository, then enter the folder: `cd emotionDetector`
* Now you can either use the pretrained model by loading its weights or train your own custom model.
The code is already modified to use the pretrained model; to run it, use this command: `python edModel.py`

To train your own model, uncomment the code at lines 18-74 and place your dataset in the data/train/ and data/test/ directories.
Then use `python edModel.py --mode train` to start training, followed by `python edModel.py --mode display` (a minimal sketch of this mode switch is shown below).
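
As a rough sketch of the `--mode` switch these commands imply, the script can dispatch between training and the webcam demo as below. The helper functions are placeholders, not the actual contents of `edModel.py`:

```python
import argparse

def build_model():
    # Placeholder: construct the 48x48x1 -> 4-class ConvNet here.
    print("building model")
    return None

def train(model):
    # Placeholder: fit on data/train/ and evaluate on data/test/.
    print("training")

def display(model):
    # Placeholder: load pretrained weights and run the webcam demo.
    print("displaying")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--mode", choices=["train", "display"], default="display")
    args = parser.parse_args()

    model = build_model()
    if args.mode == "train":
        train(model)
    else:
        display(model)
```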

## Algorithm
* First, we use a **Haar cascade** classifier to detect faces in each frame of the webcam feed.
* The region of the image containing the face is resized to **48x48** and passed as input to the ConvNet.
* The network outputs a list of **softmax scores** for the seven classes (we have modified it to output scores for four emotions).
* The emotion with the maximum score is displayed on the screen.
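
A minimal sketch of this loop with OpenCV and Keras follows. The weights file name (`model.h5`), the input normalization, and the four emotion labels are placeholders, since this README does not state them:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # placeholder four-class labels
model = load_model("model.h5")                   # assumed pretrained weights file
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces with the Haar cascade, then classify each 48x48 face crop.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        scores = model.predict(face.reshape(1, 48, 48, 1) / 255.0, verbose=0)[0]
        label = EMOTIONS[int(np.argmax(scores))]  # emotion with the maximum score
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 0), 2)
    cv2.imshow("Emotion Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```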

## References

I. Goodfellow, D. Erhan, P. L. Carrier, A. Courville, M. Mirza, B. Hamner, W. Cukierski, Y. Tang, D. H. Lee, Y. Zhou, C. Ramaiah, F. Feng, R. Li, X. Wang, D. Athanasakis, J. Shawe-Taylor, M. Milakov, J. Park, R. Ionescu, M. Popescu, C. Grozea, J. Bergstra, J. Xie, L. Romaszko, B. Xu, Z. Chuang, and Y. Bengio. "Challenges in Representation Learning: A report on three machine learning contests." arXiv, 2013.