https://github.com/ajay-308/facial_expression_recognition
- Host: GitHub
- URL: https://github.com/ajay-308/facial_expression_recognition
- Owner: Ajay-308
- Created: 2023-08-07T13:27:35.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-11-29T15:48:41.000Z (almost 2 years ago)
- Last Synced: 2024-11-12T14:22:27.255Z (11 months ago)
- Topics: deep-learning, emotion, emotion-recognition, face-detection, face-recognition, machine-learning, opencv
- Language: Jupyter Notebook
- Homepage: https://face-rgh6.onrender.com/
- Size: 44.8 MB
- Stars: 1
- Watchers: 2
- Forks: 3
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# 🎭 Facial Emotion Recognition with Deep Learning

[Python](https://python.org)
[TensorFlow](https://tensorflow.org)
[Keras](https://keras.io)
[License: MIT](LICENSE)

*A deep learning solution for real-time facial emotion detection using Convolutional Neural Networks*
## 📋 Table of Contents
- [Overview](#-overview)
- [Features](#-features)
- [Architecture](#-architecture)
- [Performance](#-performance)
- [Installation](#-installation)
- [Usage](#-usage)
- [Dataset](#-dataset)
- [Results](#-results)
- [Future Improvements](#-future-improvements)
- [Contributing](#-contributing)
- [License](#-license)

## 🎯 Overview
This project implements a **Convolutional Neural Network (CNN)** for facial emotion recognition, classifying facial expressions into 7 distinct emotion categories. The model achieved **81.4% training accuracy** and **62.7% validation accuracy** after 70 epochs of training on a FER2013-style dataset.
### Supported Emotions
- 😠 **Angry**
- 🤢 **Disgust**
- 😨 **Fear**
- 😊 **Happy**
- 😐 **Neutral**
- 😢 **Sad**
- 😲 **Surprise**

## ✨ Features
- **Deep CNN Architecture**: 4-layer convolutional network with advanced regularization
- **Real-time Prediction**: Fast inference for live emotion detection
- **Robust Preprocessing**: Automated image normalization and augmentation
- **Model Persistence**: Save/load functionality for trained models
- **Comprehensive Evaluation**: Confusion matrix and performance visualization
- **Production Ready**: Modular code structure for easy deployment

## 🏗️ Architecture
### Model Structure
```
Input (48x48 Grayscale)
↓
Conv2D (128 filters) + LeakyReLU + MaxPool + Dropout
↓
Conv2D (256 filters) + LeakyReLU + MaxPool + Dropout
↓
Conv2D (512 filters) + LeakyReLU + MaxPool + Dropout
↓
Conv2D (512 filters) + LeakyReLU + MaxPool + Dropout
↓
Flatten + Dense (512) + LeakyReLU + Dropout
↓
Dense (256) + LeakyReLU + Dropout
↓
Dense (7) + Softmax → Emotion Classification
```

### Key Components
- **Activation**: LeakyReLU for better gradient flow
- **Regularization**: Dropout layers to prevent overfitting
- **Pooling**: MaxPooling2D for spatial dimension reduction
- **Optimizer**: Adam with categorical crossentropy loss
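A minimal Keras sketch of this stack, for reference only (kernel sizes, dropout rates, and the LeakyReLU slope are assumptions, not the repository's exact values):

```python
# Sketch of the described architecture; hyperparameters such as dropout
# rates and kernel sizes are assumptions, not taken from the repository.
from tensorflow.keras import layers, models

def build_emotion_cnn(num_classes: int = 7) -> models.Sequential:
    model = models.Sequential([layers.Input(shape=(48, 48, 1))])
    # Four convolutional blocks: Conv2D + LeakyReLU + MaxPool + Dropout
    for filters in (128, 256, 512, 512):
        model.add(layers.Conv2D(filters, (3, 3), padding="same"))
        model.add(layers.LeakyReLU())
        model.add(layers.MaxPooling2D(pool_size=(2, 2)))
        model.add(layers.Dropout(0.25))
    # Classifier head: two dense blocks, then a 7-way softmax
    model.add(layers.Flatten())
    for units in (512, 256):
        model.add(layers.Dense(units))
        model.add(layers.LeakyReLU())
        model.add(layers.Dropout(0.25))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```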
## 📊 Performance

### Training Metrics
| Metric | Training | Validation |
|--------|----------|------------|
| **Accuracy** | 81.4% | 62.7% |
| **Loss** | 0.52 | 1.17 |
| **Epochs** | 70 | 70 |

### Training Characteristics
- **Dataset Size**: ~28,000 training images, ~7,000 validation images
- **Batch Size**: 128
- **Convergence**: Optimal performance around epoch 45
- **Overfitting**: Observed after epoch 45 (validation plateau)

## 🚀 Installation
### Prerequisites
```bash
Python 3.8+
pip package manager
```

### Setup
```bash
# Clone the repository
git clone https://github.com/ajay-308/facial_expression_recognition.git
cd facial_expression_recognition

# Install dependencies
pip install tensorflow keras opencv-python numpy pandas matplotlib seaborn scikit-learn

# Or use requirements.txt
pip install -r requirements.txt
```

## 💻 Usage
### Training the Model
```python
from emotion_recognition import EmotionRecognizer

# Initialize the model
model = EmotionRecognizer()

# Train on your dataset
model.train(train_path='data/train',
            validation_path='data/validation',
            epochs=70,
            batch_size=128)

# Save the trained model
model.save('emotion_model.h5')
```

### Making Predictions
```python
# Load pre-trained model
model = EmotionRecognizer.load('emotion_model.h5')

# Predict emotion from image
emotion = model.predict('path/to/image.jpg')
print(f"Detected emotion: {emotion}")

# Real-time prediction
model.predict_realtime()
```
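The `predict_realtime()` call above wraps a webcam loop. A hedged sketch of what such a loop typically looks like with OpenCV's Haar cascade face detector (the cascade choice, preprocessing steps, and label order are assumptions, not the repository's exact code):

```python
# Hypothetical real-time loop: detect faces with OpenCV's Haar cascade,
# preprocess each face to 48x48 grayscale, and classify with the CNN.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Assumed label order (alphabetical, as flow_from_directory would assign).
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

model = load_model("emotion_model.h5")
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Crop the face, resize to the model's 48x48 input, scale to [0, 1]
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```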
### Evaluation

```python
# Generate confusion matrix
model.evaluate(test_path='data/test')

# Plot training history
model.plot_training_history()
```
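Under the hood, a confusion matrix for a directory-based test set can be computed with scikit-learn and plotted with seaborn. A hedged sketch (the generator setup and variable names are illustrative, not the project's exact code):

```python
# Illustrative confusion-matrix computation for a directory-based test set.
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

model = load_model("emotion_model.h5")
test_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/test", target_size=(48, 48), color_mode="grayscale",
    class_mode="categorical", batch_size=128, shuffle=False)

probs = model.predict(test_gen)
y_pred = np.argmax(probs, axis=1)
y_true = test_gen.classes

cm = confusion_matrix(y_true, y_pred)
sns.heatmap(cm, annot=True, fmt="d",
            xticklabels=list(test_gen.class_indices),
            yticklabels=list(test_gen.class_indices))
plt.xlabel("Predicted")
plt.ylabel("True")
plt.show()
```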
## 📁 Dataset

The model is trained on a FER2013-style dataset with the following structure:
```
data/
├── train/
│   ├── angry/
│   ├── disgust/
│   ├── fear/
│   ├── happy/
│   ├── neutral/
│   ├── sad/
│   └── surprise/
└── validation/
    ├── angry/
    ├── disgust/
    ├── fear/
    ├── happy/
    ├── neutral/
    ├── sad/
    └── surprise/
```

**Image Specifications:**
- **Format**: Grayscale (48x48 pixels)
- **Normalization**: Pixel values scaled to [0, 1]
- **Augmentation**: Random rotations, shifts, and flips
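A hedged sketch of a Keras input pipeline matching these specifications (the augmentation ranges are assumptions; the repository may use different values):

```python
# Illustrative data pipeline for the directory layout above:
# 48x48 grayscale input, [0, 1] scaling, light augmentation on training data.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,        # random rotations
    width_shift_range=0.1,    # random shifts
    height_shift_range=0.1,
    horizontal_flip=True)     # random flips
val_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_gen = train_datagen.flow_from_directory(
    "data/train", target_size=(48, 48), color_mode="grayscale",
    class_mode="categorical", batch_size=128)
val_gen = val_datagen.flow_from_directory(
    "data/validation", target_size=(48, 48), color_mode="grayscale",
    class_mode="categorical", batch_size=128)

# Training as described in the Performance section (70 epochs, batch size 128)
model = build_emotion_cnn()   # from the Architecture sketch above
history = model.fit(train_gen, validation_data=val_gen, epochs=70)
```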
## 📈 Results

### Training Progress
The model shows strong learning capability with steady improvement in training metrics. However, validation performance plateaus after epoch 45, indicating overfitting.

### Performance Analysis
- **Strong Training Fit**: 81.4% accuracy demonstrates good model capacity
- **Moderate Generalization**: 62.7% validation accuracy shows room for improvement
- **Overfitting Detected**: Divergence between training and validation metrics

### Recommendations for Improvement
1. **Early Stopping**: Halt training at epoch 45 (a minimal callback sketch follows this list)
2. **Data Augmentation**: Increase variety in training data
3. **Regularization**: Stronger dropout or L2 regularization
4. **Model Architecture**: Experiment with lighter architectures
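For the early-stopping recommendation, a minimal Keras callback sketch (the monitored metric and patience value are assumptions):

```python
# Stop training when validation loss stops improving and keep the best
# weights, rather than running a fixed 70 epochs.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=5,
                           restore_best_weights=True)

# model, train_gen, val_gen as defined in the Dataset sketch above
history = model.fit(train_gen, validation_data=val_gen,
                    epochs=70, callbacks=[early_stop])
```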
## 🔮 Future Improvements

### Short Term
- [ ] Implement early stopping mechanism
- [ ] Add data augmentation pipeline
- [ ] Create real-time webcam integration
- [ ] Develop REST API for deployment

### Long Term
- [ ] Experiment with transfer learning (VGG, ResNet)
- [ ] Multi-face detection and emotion recognition
- [ ] Mobile optimization with TensorFlow Lite
- [ ] Integration with cloud services (AWS, GCP)

### Research Directions
- [ ] Attention mechanisms for better feature focus
- [ ] Multi-modal emotion recognition (audio + visual)
- [ ] Federated learning for privacy-preserving training
- [ ] Emotion intensity prediction (not just classification)

## 🤝 Contributing
We welcome contributions! Here's how you can help:
1. **Fork** the repository
2. **Create** a feature branch (`git checkout -b feature/AmazingFeature`)
3. **Commit** your changes (`git commit -m 'Add some AmazingFeature'`)
4. **Push** to the branch (`git push origin feature/AmazingFeature`)
5. **Open** a Pull Request

### Areas for Contribution
- Model architecture improvements
- Data preprocessing enhancements
- Performance optimization
- Documentation improvements
- Bug fixes and testing

## 📜 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
### 🌟 If you found this project helpful, please give it a star!
**Made with ❤️ for the AI community**
[⬆ Back to Top](#-facial-emotion-recognition-with-deep-learning)