https://github.com/lahi-ru/emotion_based_music_player
A web-based music player that selects songs based on the user's detected emotion using AI-powered emotion recognition.
- Host: GitHub
- URL: https://github.com/lahi-ru/emotion_based_music_player
- Owner: LAHI-RU
- License: MIT
- Created: 2024-11-14T17:06:35.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-05-06T18:49:28.000Z (10 months ago)
- Last Synced: 2025-05-20T18:58:26.277Z (10 months ago)
- Language: Python
- Size: 21.5 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# 🎵 Emotion-Based Music Player
A machine learning application that detects emotions from facial expressions and plays music matching your mood using the Spotify API.

## ✨ Features
- **Real-time Emotion Detection**: Uses computer vision to analyze facial expressions via webcam
- **Music Recommendation Engine**: Selects music that matches your current emotional state
- **Spotify Integration**: Plays tracks directly on your Spotify account
- **Responsive Web Interface**: Clean, intuitive design that updates in real-time
## 🧠 Emotions Detected
- 😠 Angry
- 🤢 Disgust
- 😨 Fear
- 😊 Happy
- 😢 Sad
- 😲 Surprise
- 😐 Neutral
## 🚀 Installation
### Prerequisites
- Python 3.8+
- Webcam
- Spotify account (Free or Premium)
- Spotify Developer account with registered application
### Step 1: Clone the repository
```bash
git clone https://github.com/LAHI-RU/emotion_based_music_player.git
cd emotion_based_music_player
```
### Step 2: Set up a virtual environment
```bash
python -m venv venv
```
Activate the virtual environment:
**Windows**
```bash
venv\Scripts\activate
```
**macOS/Linux**
```bash
source venv/bin/activate
```
### Step 3: Install dependencies
```bash
pip install -r requirements.txt
```
### Step 4: Configure Spotify API credentials
1. Create a Spotify Developer account at [developer.spotify.com](https://developer.spotify.com/)
2. Create a new application in the Spotify Developer Dashboard
3. Set the Redirect URI to `http://127.0.0.1:8000/callback`
4. Note your Client ID and Client Secret
5. Create a `config.py` file in the project root:
```python
# Spotify API credentials
SPOTIFY_CLIENT_ID = "your-client-id-here"
SPOTIFY_CLIENT_SECRET = "your-client-secret-here"
SPOTIFY_REDIRECT_URI = "http://127.0.0.1:8000/callback"
# Emotion detection settings
EMOTION_DETECTION_INTERVAL = 5 # Detect emotion every 5 seconds
CAMERA_INDEX = 0 # Default camera (usually the webcam)
# Flask app settings
DEBUG = True
PORT = 8000
```
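If you prefer not to commit credentials to version control, a common variant (an assumption, not the repository's actual code) reads them from environment variables, falling back to the placeholder values:

```python
# Alternative config.py sketch: environment variables override the
# placeholder defaults, so secrets need not live in the repository.
import os

SPOTIFY_CLIENT_ID = os.environ.get("SPOTIFY_CLIENT_ID", "your-client-id-here")
SPOTIFY_CLIENT_SECRET = os.environ.get(
    "SPOTIFY_CLIENT_SECRET", "your-client-secret-here"
)
SPOTIFY_REDIRECT_URI = os.environ.get(
    "SPOTIFY_REDIRECT_URI", "http://127.0.0.1:8000/callback"
)
```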
### Step 5: Create default album art
Run the following script to create a default album art image:
```bash
python generate_default_album.py
```
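The contents of `generate_default_album.py` are not shown here; as an illustration, a dependency-free version could write a solid-color placeholder PNG directly (the filename, size, and color are assumptions):

```python
# Hypothetical sketch of a default-album-art generator: writes a solid
# dark square as a minimal valid PNG, with no third-party dependencies.
import struct
import zlib

def _chunk(tag, data):
    # PNG chunk layout: 4-byte length, 4-byte type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def generate_default_album(path="default_album.png", size=300, rgb=(30, 30, 40)):
    # IHDR: width, height, bit depth 8, color type 2 (truecolor RGB).
    ihdr = struct.pack(">IIBBBBB", size, size, 8, 2, 0, 0, 0)
    # Each scanline starts with filter byte 0, then size RGB pixels.
    row = b"\x00" + bytes(rgb) * size
    idat = zlib.compress(row * size)
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n")          # PNG signature
        f.write(_chunk(b"IHDR", ihdr))
        f.write(_chunk(b"IDAT", idat))
        f.write(_chunk(b"IEND", b""))
    return path

if __name__ == "__main__":
    print(generate_default_album())
```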
## 🎮 Usage
1. Start the application:
```bash
python app.py
```
2. Open your browser and go to:
```
http://127.0.0.1:8000
```
3. Click "Start Detection" to enable the webcam and begin emotion detection
4. Open the Spotify application on your device (to act as a playback device)
5. The system will detect your emotion and play appropriate music automatically
## 🧠 How It Works
1. **Emotion Detection**: The application uses OpenCV for face detection and DeepFace for emotion classification.
2. **Music Selection**: Based on detected emotions, the application:
- First attempts to find matching songs from your personal playlists
- Falls back to your liked songs library if no playlist matches
- Uses Spotify's recommendation API as a final fallback
3. **Playback**: The selected track is played on your active Spotify device
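The three-stage fallback above can be sketched as a small pure function. The names and data shapes here are illustrative assumptions, not the repository's actual API:

```python
def pick_track(emotion, playlists, liked_songs, recommend):
    """Pick a track URI for `emotion` using the three-stage fallback:
    personal playlists, then liked songs, then the recommendation backend."""
    # 1. Personal playlists whose name mentions the emotion
    for name, tracks in playlists.items():
        if emotion in name.lower() and tracks:
            return tracks[0]
    # 2. Any liked song tagged with the emotion (tagging is assumed here)
    for uri, tags in liked_songs:
        if emotion in tags:
            return uri
    # 3. Final fallback: delegate to the recommendation backend
    return recommend(emotion)

# Example: no playlist matches "sad", but a liked song is tagged with it.
playlists = {"Happy Vibes": ["spotify:track:aaa"]}
liked = [("spotify:track:bbb", {"sad", "calm"})]
print(pick_track("sad", playlists, liked, lambda e: "spotify:track:fallback"))
# prints "spotify:track:bbb"
```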
## 🧩 Project Structure
```
emotion-music-player/
├── app.py                  # Main application file
├── emotion_detector.py     # Emotion detection module
├── music_player.py         # Music recommendation and playback
├── config.py               # Configuration settings
├── requirements.txt        # Dependencies
├── static/                 # Static files for web interface
│   ├── css/                # Stylesheets
│   ├── js/                 # JavaScript files
│   └── img/                # Images
└── templates/              # HTML templates
    └── index.html          # Main interface
```
## 🛠️ Customization
### Emotion-Music Mapping
Edit the `emotion_features` and `emotion_genres` dictionaries in `music_player.py` to customize the audio features and genres associated with each emotion.
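As a rough illustration of the shape these mappings might take (the values and keys below are assumptions, not the dictionaries actually shipped in `music_player.py`):

```python
# Illustrative only -- the real dictionaries in music_player.py may differ.
# Spotify audio features such as valence (sad -> happy) and energy
# (calm -> intense) are expressed in the 0.0-1.0 range the API uses.
emotion_features = {
    "happy":   {"valence": 0.9, "energy": 0.8},
    "sad":     {"valence": 0.2, "energy": 0.3},
    "angry":   {"valence": 0.3, "energy": 0.9},
    "neutral": {"valence": 0.5, "energy": 0.5},
}

# Candidate genres to search per emotion.
emotion_genres = {
    "happy":   ["pop", "dance"],
    "sad":     ["acoustic", "piano"],
    "angry":   ["metal", "rock"],
    "neutral": ["chill", "ambient"],
}
```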
### Interface
Modify the CSS in `static/css/style.css` to change the appearance of the web interface.
## 🤝 Contributing
Contributions are welcome! Here's how you can contribute:
1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Commit your changes: `git commit -m 'Add some feature'`
4. Push to the branch: `git push origin feature-name`
5. Submit a pull request
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgements
- [DeepFace](https://github.com/serengil/deepface) for facial emotion recognition
- [Spotipy](https://github.com/plamere/spotipy) for Spotify API integration
- [Flask](https://flask.palletsprojects.com/) for the web framework
- [OpenCV](https://opencv.org/) for computer vision capabilities
## 📧 Contact
For questions or feedback, please contact:
- Lahiru Bandara - [lahiiru.dananjaya@gmail.com](mailto:lahiiru.dananjaya@gmail.com)
- Project Link: [https://github.com/LAHI-RU/emotion_based_music_player](https://github.com/LAHI-RU/emotion_based_music_player)