
https://github.com/akshaypatra/expression_detector

The Expression Detector is an AI-based system that identifies and classifies human facial expressions in real time. It uses computer vision and deep learning techniques to analyze facial features and determine a person's emotional state, such as happiness, sadness, anger, surprise, fear, disgust, or neutrality.

Topics: `keras-tensorflow`, `neural-network`, `numpy`, `opencv-python`, `python3`


## 🎭 Expression Detector

### Description

An Expression Detector is a system that automatically detects and identifies facial expressions from a person's face using techniques from computer vision and machine learning. Its goal is to classify human emotions such as happy, sad, angry, surprised, neutral, etc., based on facial features.

### 🔑 Key Points of Expression Detector

#### 1. Face Detection

- First, the system detects the face region in the image or video frame.
- OpenCV is used for this step (see the sketch below).
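
The repository does not reproduce its detection code here, so the following is only a minimal sketch of this step using OpenCV's bundled Haar-cascade frontal-face detector (an assumption; the project may use a different detector):

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (illustrative choice;
# the repository may rely on a different face detector).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame):
    """Return bounding boxes (x, y, w, h) of faces found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

frame = cv2.imread("sample.jpg")            # any test image
for (x, y, w, h) in detect_faces(frame):
    face_roi = frame[y:y + h, x:x + w]      # cropped face region for the next steps
```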

#### 2. Feature Extraction

- Facial landmarks (eyes, eyebrows, mouth, nose, etc.) are extracted.
- MediaPipe and deep learning models (such as CNNs) are used.
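
As a rough illustration of landmark extraction with MediaPipe's FaceMesh solution (the README does not say which MediaPipe model the project uses, so treat this as an assumption):

```python
import cv2
import mediapipe as mp

# FaceMesh returns 468 3-D landmarks (eyes, eyebrows, mouth, nose, face outline).
mp_face_mesh = mp.solutions.face_mesh

def extract_landmarks(frame):
    """Return a list of (x, y, z) landmark coordinates, or None if no face is found."""
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
        results = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    return [(p.x, p.y, p.z) for p in results.multi_face_landmarks[0].landmark]

frame = cv2.imread("sample.jpg")
points = extract_landmarks(frame)
print(len(points) if points else "no face detected")   # expect 468 landmarks
```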

#### 3. Emotion Classification

- Once features are extracted, the model classifies the expression.
- Deep learning models (e.g., custom CNNs) are used.
- Typical emotions: Happy, Sad, Angry, Fear, Disgust, Surprise, Neutral.
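
The repository's topics mention Keras/TensorFlow, but the actual network architecture is not documented here; the sketch below is an illustrative small CNN for 48x48 grayscale face crops with seven emotion classes (the input size and layer choices are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

# Illustrative architecture only; the project's actual CNN may differ.
model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),              # grayscale face crop
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=30, validation_split=0.1)  # once training data is ready
```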

#### 4. Real-time Analysis

- A webcam or camera feed is used to analyze and predict expressions in real time.
- Useful for interactive systems or monitoring applications; a combined sketch of the capture loop and label overlay follows the Output section below.

#### 5. Output

- Displays the detected emotion label predicted by the model.
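
Putting the pieces together, here is a hedged sketch of the real-time loop and on-screen label. It assumes a trained Keras model saved as `expression_model.h5` (a hypothetical file name) and reuses the Haar-cascade detector from the face-detection sketch above:

```python
import cv2
import numpy as np
from tensorflow import keras

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]
model = keras.models.load_model("expression_model.h5")   # hypothetical file name
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)                  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        # Preprocess the face crop to match the assumed 48x48 grayscale input.
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        # Draw the bounding box and the predicted emotion label on the frame.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Expression Detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```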

### Sample Output

*(Two screenshots from 2025-04-10 showing the detector's predicted emotion labels.)*

## Before you run

### Step 1: Install Python 3.10 or 3.11

### Step 2: Create a virtual environment

1. For macOS:

   ```
   python3.10 -m venv myenv
   source myenv/bin/activate
   ```

2. For Windows:

   ```
   python -m venv myenv
   myenv\Scripts\activate
   ```

### Step 3: Configure VS Code to Use Python 3.10

If you're using VS Code, follow these steps:

1. Press Ctrl+Shift+P → Type "Python: Select Interpreter".

2. Select Python 3.10 or the virtual environment myenv/bin/python.

3. Restart VS Code to apply changes.

### Step 4: Install dependencies

```
pip install mediapipe numpy tensorflow keras opencv-python
```
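
After installation, an optional import check confirms the environment is wired up; the package names come from the command above.

```python
# Quick sanity check: all five packages import and report their versions.
import cv2, keras, mediapipe, numpy, tensorflow

print("OpenCV    :", cv2.__version__)
print("Keras     :", keras.__version__)
print("MediaPipe :", mediapipe.__version__)
print("NumPy     :", numpy.__version__)
print("TensorFlow:", tensorflow.__version__)
```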