https://github.com/anshumankansana/emotion-detection-and-facial-recognition
A Python-based AI assistant that performs real-time emotion detection and facial recognition using deep learning. It remembers faces, detects emotions like happy, sad, angry, or tired, and responds with GPT-powered personality through voice. Supports multilingual voice input (Hindi & English) and always replies in English.
- Host: GitHub
- URL: https://github.com/anshumankansana/emotion-detection-and-facial-recognition
- Owner: anshumankansana
- Created: 2025-07-09T16:35:15.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-07-09T16:56:48.000Z (3 months ago)
- Last Synced: 2025-07-10T01:32:38.786Z (3 months ago)
- Topics: deepface, dotenv, emotion-recognition, facial-recognition, openai, opencv, python
- Language: Python
- Homepage:
- Size: 75.6 MB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Emotion Detection and Facial Recognition Robot Girl Assistant

This project is a smart virtual assistant with emotional intelligence, facial recognition, and voice interaction. It reacts with sarcastic or caring personality traits based on the user's emotion and identity, responding both visually and through voice.
> Built using Python, OpenCV, Dlib, FER, Text-to-Speech, and GPT-3.5 (optional).
---
## Features

- Real-time face detection and recognition
- Emotion detection using deep learning (FER)
- Tiredness detection via facial landmarks
- Voice input (English + Hindi)
- Voice replies using Text-to-Speech
- GPT-3.5 personality engine (optional: OpenAI or OpenRouter API)
- Memory of known faces and their past emotions
- Persistent memory using `.pkl` files

---
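The persistent memory in the feature list comes down to serializing the face-encoding and emotion dictionaries between runs. A minimal sketch, assuming a plain-dict layout that the repo may not actually use; only the idea of `.pkl` files comes from the project:

```python
# Sketch of the .pkl persistence mentioned above. The dict layout and
# function names are illustrative assumptions, not taken from the repo.
import os
import pickle

def save_memory(path, data):
    """Serialize the assistant's memory (e.g. name -> face encoding)."""
    with open(path, "wb") as f:
        pickle.dump(data, f)

def load_memory(path):
    """Load previously saved memory, or start empty on the first run."""
    if not os.path.exists(path):
        return {}
    with open(path, "rb") as f:
        return pickle.load(f)
```

On each run the assistant can `load_memory("face_data.pkl")` at startup and `save_memory(...)` after learning a new face, which is what makes recognition survive restarts.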
## Folder Structure

```
robot girl/
├── face_emotion_memory.py                 # Main assistant file
├── face_data.pkl                          # Saved face encodings
├── emotion_history.pkl                    # Saved emotional patterns
├── shape_predictor_68_face_landmarks.dat  # Dlib facial landmark model
├── .env                                   # (Optional) Stores your API key
├── .gitignore
└── README.md
```
---
## Requirements

Install these Python modules:

```bash
pip install -r requirements.txt
```

> If `requirements.txt` is not provided, install manually:

```bash
pip install opencv-python dlib face_recognition numpy scipy pyttsx3 fer SpeechRecognition python-dotenv
```

---
## Setup

1. **Download Dlib's Landmark Model**
   Download from [dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2](http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2) and extract it into the same folder.

2. **Create a `.env` File (Optional if using GPT)**

   ```
   OPENAI_API_KEY=your_openai_or_openrouter_key
   # Optional for OpenRouter
   OPENAI_API_BASE=https://openrouter.ai/api/v1
   ```

3. **Run the Assistant**

   ```bash
   python face_emotion_memory.py
   ```

---
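The project lists `python-dotenv` for reading the `.env` file above. As a dependency-free sketch of what that loading amounts to — the function name `load_gpt_config` and the OpenAI default base URL are assumptions, not part of the repo:

```python
# Minimal sketch of reading the .env settings above without python-dotenv.
# `load_gpt_config` and the default base URL are illustrative assumptions.
import os

def load_gpt_config(env_path=".env"):
    """Parse KEY=VALUE lines from a .env file into os.environ, then
    return the (api_key, api_base) pair the assistant would use."""
    try:
        with open(env_path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks and comments
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # .env is optional; GPT features simply stay disabled
    return (
        os.getenv("OPENAI_API_KEY"),
        os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1"),
    )
```

Setting `OPENAI_API_BASE` to the OpenRouter URL is what redirects the same OpenAI-style client at OpenRouter instead.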
## How it Works

* Detects face → recognizes user → detects emotion using FER
* Detects tiredness using Dlib facial landmarks (EAR/MAR)
* Talks back using a smart GPT-based personality
* Speaks responses aloud and listens to the user's voice

---
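The tiredness check mentioned above is based on the eye aspect ratio (EAR) computed from the six Dlib landmarks around each eye: when the eyelids narrow, the ratio drops. A minimal sketch of that computation — the 0.2 threshold, helper names, and synthetic landmark coordinates are illustrative assumptions, not values from the repo:

```python
# Eye aspect ratio (EAR) sketch for the tiredness check described above.
# The 0.2 threshold and function names are illustrative assumptions.
import math

def _dist(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    """EAR for one eye given its six Dlib landmarks p1..p6:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Low values mean a closing eye."""
    p1, p2, p3, p4, p5, p6 = eye
    return (_dist(p2, p6) + _dist(p3, p5)) / (2.0 * _dist(p1, p4))

def looks_tired(eye, threshold=0.2):
    return eye_aspect_ratio(eye) < threshold

# Synthetic landmarks: a wide-open eye vs. a nearly closed one.
open_eye   = [(0, 0), (1, -0.5), (2, -0.5), (3, 0), (2, 0.5), (1, 0.5)]
closed_eye = [(0, 0), (1, -0.05), (2, -0.05), (3, 0), (2, 0.05), (1, 0.05)]
```

In practice the per-frame EAR is averaged over both eyes and a run of consecutive low-EAR frames is required before declaring tiredness, to avoid triggering on blinks.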
## Supported Languages

* **Voice input**: Hindi (`hi-IN`) and English (`en-IN`)
* **Voice output**: Always replies in English

---
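One simple way to support both input languages is to attempt recognition in Hindi first and fall back to English. This fallback loop is a sketch of the idea only — the repo may pick languages differently — with the recognizer passed in as a callable so it can wrap e.g. `speech_recognition`'s `recognize_google(audio, language=...)`:

```python
# Sketch: try each supported input language in order and keep the first
# transcript. The fallback order and function name are assumptions.
def recognize_multilingual(recognize, audio, languages=("hi-IN", "en-IN")):
    """`recognize` is any callable invoked as recognize(audio, language=code),
    e.g. sr.Recognizer().recognize_google; exceptions mean 'no match'."""
    for code in languages:
        try:
            return code, recognize(audio, language=code)
        except Exception:
            continue  # recognizer could not transcribe in this language
    return None, None
```

With the `speech_recognition` library, `recognize_google` raises `UnknownValueError` when it cannot transcribe the audio, which this loop treats as a miss and moves on to the next language code.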
## Warnings & Tips

* If using GPT, make sure your API key is valid and not expired.
* The file `shape_predictor_68_face_landmarks.dat` is ~95 MB; GitHub recommends using Git LFS for files that large.
* Use in well-lit environments for better face detection.

---
## Author

**Anshuman Kansana**
[GitHub Profile](https://github.com/anshumankansana)

---
## π License
This project is open-source and free to use under the [MIT License](LICENSE).
---
## Future Ideas

* Add an animated 2D/3D avatar (Live2D or VTube Studio)
* Use voice cloning for more human-like responses
* Add chatbot memory for conversation history