# facial-emotional-detection
A lightweight FastAPI-based backend that performs face detection and emotion prediction on images using a trained TensorFlow CNN model (`.keras`). The system detects faces in an image, predicts their emotions, and stores the prediction history in a PostgreSQL database.

## 🚀 Features

- Face detection using an OpenCV Haar Cascade
- Emotion prediction using a TensorFlow CNN model
- REST API built with FastAPI
- PostgreSQL database for storing prediction results
- Environment-based configuration via `.env`
- Unit tests using `unittest`
- Docker-ready structure
- Integrated GitHub Actions workflow for CI testing
- Auto-generated Swagger documentation
- Easily extensible architecture

### 🧠 Tech Stack

- Backend: FastAPI, Python 3.11
- ML/AI: TensorFlow, OpenCV
- Database: PostgreSQL
- Database access: psycopg2 (raw SQL, no ORM)
- Testing: unittest
- Environment management: python-dotenv, pyenv
- API docs: Swagger/OpenAPI
- Version control: Git + GitHub
- CI/CD: GitHub Actions

## 📁 Project Structure

```
facial-emotional-detection/
├── backend/
│   ├── app/
│   │   ├── config.py
│   │   ├── database.py
│   │   ├── main.py
│   │   ├── models/
│   │   ├── schemas/
│   │   ├── routers/
│   │   ├── ml/
│   │   │   ├── model.keras
│   │   │   └── haarcascade_frontalface_default.xml
│   │   └── utils/
│   ├── tests/
│   └── requirements.txt
├── .env.example
├── README.md
└── docker-compose.yml (if added later)
```

## Setup Instructions

### 1- Clone the repository:

```shell
git clone git@github.com:codehass/facial-emotional-detection.git
```

### 2- Database configuration:

- Create a `.env` file in the root directory of the project.
- Copy the content of `.env.example` into the newly created `.env` file and replace the placeholder values with your real credentials:

  ```shell
  cp .env.example .env
  ```

### 3- Install the dependencies:

- This project was developed with Python 3.11.14.
- You can use pyenv to manage Python versions. Make sure to
install Python 3.11.14 with pyenv, set it as the local version, and create a virtual environment:

  ```shell
  cd facial-emotional-detection
  pyenv local 3.11.14
  ```

  ```shell
  python -m venv venv
  source venv/bin/activate
  ```

- Install the required packages:

  ```shell
  pip install -r backend/requirements.txt
  ```

### 4- Run the application:

- Start the application with the following commands (the `app` package lives inside `backend/`):

  ```shell
  cd backend
  uvicorn app.main:app --reload
  ```

### 5- Access the application:

- Open your web browser and navigate to `http://localhost:8000/docs`.

### 6- Run the tests:

- From the `backend/` directory, run:

  ```shell
  python -m unittest discover -s tests
  ```
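
## 🔍 Pipeline Sketch

As a rough illustration of the preprocessing step that sits between Haar-cascade face detection and CNN inference, the sketch below shows how a detected face crop is typically normalized into the batch shape a Keras CNN expects. The helper name and the 48×48 grayscale input size are assumptions for illustration, not part of this repository; check the actual `.keras` model's input shape.

```python
import numpy as np

def prepare_face(face_gray: np.ndarray) -> np.ndarray:
    """Scale a grayscale face crop to [0, 1] and add batch/channel axes.

    Assumes the crop was already resized to the model's input resolution
    (48x48 is assumed here; the real model may differ).
    """
    x = face_gray.astype("float32") / 255.0   # uint8 [0, 255] -> float [0, 1]
    return x[np.newaxis, ..., np.newaxis]     # (48, 48) -> (1, 48, 48, 1)

# Example with a synthetic 48x48 "face" crop:
crop = np.full((48, 48), 128, dtype=np.uint8)
batch = prepare_face(crop)
print(batch.shape)  # -> (1, 48, 48, 1)
```

The resulting array can then be passed to `model.predict(...)`, with the highest-scoring output index mapped back to an emotion label.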
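
## ⚙️ Configuration Sketch

The environment-based configuration can be sketched as follows. The variable names below are hypothetical; `.env.example` in the repository defines the authoritative names.

```python
import os

def database_dsn() -> str:
    """Build a PostgreSQL DSN from environment variables.

    Variable names are illustrative; use the ones defined in .env.example.
    """
    user = os.getenv("POSTGRES_USER", "postgres")
    password = os.getenv("POSTGRES_PASSWORD", "")
    host = os.getenv("POSTGRES_HOST", "localhost")
    port = os.getenv("POSTGRES_PORT", "5432")
    db = os.getenv("POSTGRES_DB", "emotions")
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

print(database_dsn())
```

With `python-dotenv`, calling `load_dotenv()` at startup populates these variables from the `.env` file before the DSN is built.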
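
## 🧪 Test Sketch

A minimal test in the `unittest` style used by the `tests/` directory might look like the following. The `clamp_confidence` helper is hypothetical, included only so the example is self-contained.

```python
import unittest

def clamp_confidence(p: float) -> float:
    # Hypothetical helper: clip a model score into [0, 1].
    return min(max(p, 0.0), 1.0)

class TestClampConfidence(unittest.TestCase):
    def test_clamps_out_of_range_scores(self):
        self.assertEqual(clamp_confidence(1.3), 1.0)
        self.assertEqual(clamp_confidence(-0.1), 0.0)
        self.assertEqual(clamp_confidence(0.5), 0.5)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Placing files named `test_*.py` under `tests/` lets `python -m unittest discover -s tests` pick them up automatically.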