https://github.com/margaretmz/cvnd-facial-keypoint-detection
Use a CNN to detect facial keypoints
- Host: GitHub
- URL: https://github.com/margaretmz/cvnd-facial-keypoint-detection
- Owner: margaretmz
- License: mit
- Created: 2019-08-05T17:02:36.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2022-11-22T02:24:08.000Z (almost 3 years ago)
- Last Synced: 2025-04-13T14:56:18.712Z (6 months ago)
- Language: Jupyter Notebook
- Size: 7.67 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
README
[//]: # (Image References)
[image1]: ./images/key_pts_example.png "Facial Keypoint Detection"
# Facial Keypoint Detection
## Project Overview
This project uses computer vision techniques and deep learning architectures to build a facial keypoint detection system. Facial keypoints are points around the eyes, nose, and mouth of a face, and they underpin many applications, including facial tracking, facial pose recognition, facial filters, and emotion recognition. The code can take any image, detect the faces in it, and predict the locations of facial keypoints on each face; examples of these keypoints are shown below.
![Facial Keypoint Detection][image1]
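At a high level, the pipeline finds faces with an OpenCV Haar cascade and then runs a trained CNN on each face crop to regress keypoint coordinates. The sketch below illustrates that flow; the cascade file path, saved model path, `Net` class name, 224x224 input size, and 68-keypoint output are assumptions for illustration, not necessarily the repository's exact values.

```python
import cv2
import numpy as np
import torch

from models import Net  # assumed class name exported by models.py

# 1. Detect faces with a Haar cascade (cascade path is an assumption)
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
image = cv2.imread('images/key_pts_example.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=2)

# 2. Load a trained keypoint model (file name is an assumption)
net = Net()
net.load_state_dict(torch.load('saved_model/keypoints_model.pt'))
net.eval()

# 3. Predict keypoints on each detected face crop
for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w].astype(np.float32) / 255.0   # normalize to [0, 1]
    roi = cv2.resize(roi, (224, 224))                          # assumed network input size
    tensor = torch.from_numpy(roi).unsqueeze(0).unsqueeze(0)   # shape (1, 1, 224, 224)
    with torch.no_grad():
        keypoints = net(tensor).view(-1, 2)                    # assumed 68 (x, y) pairs
```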
## Project Files
__Notebook 1__ : Loading and Visualizing the Facial Keypoint Data
__Notebook 2__ : Defining and Training a Convolutional Neural Network (CNN) to Predict Facial Keypoints
__Notebook 3__ : Facial Keypoint Detection Using Haar Cascades and trained CNN
__Notebook 4__ : Fun Filters and Keypoint Uses
__models.py__ : Defines the neural network architectures (a minimal sketch appears after this list)
__data_load.py__ : Loads and transforms the data
__data/__ : Where the training and test data are downloaded
__saved_model/__ : Where you save the trained PyTorch model
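For orientation, a minimal keypoint-regression CNN in the spirit of `models.py` might look like the following. The exact layers, layer sizes, and the 68-point output here are illustrative assumptions; consult `models.py` and Notebook 2 for the real architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """Minimal CNN mapping a 1x224x224 grayscale face to 68 (x, y) keypoints (illustrative)."""

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=5)   # 224 -> 220
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3)  # 110 -> 108
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(64 * 54 * 54, 1024)
        self.fc2 = nn.Linear(1024, 68 * 2)             # 68 keypoints, (x, y) each

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(x.size(0), -1)                      # flatten before the fully connected layers
        x = F.relu(self.fc1(x))
        return self.fc2(x)
```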
### Data
Use *Notebook 1: Loading and Visualizing Data* to download and explore the data for the project. The `data` folder contains the training and test sets of image/keypoint data, along with their respective CSV files.
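As a rough idea of how Notebook 1 inspects this data, the snippet below reads one row of a keypoint CSV and overlays the points on the corresponding image. The file name `training_frames_keypoints.csv`, the `data/training/` image folder, and the layout of one image name followed by flattened (x, y) coordinates per row are assumptions for illustration.

```python
import os
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# File and folder names are assumptions; check data/ after running Notebook 1
key_pts_frame = pd.read_csv('data/training_frames_keypoints.csv')

# Assumed layout: first column is the image name, remaining columns are flattened coordinates
image_name = key_pts_frame.iloc[0, 0]
key_pts = key_pts_frame.iloc[0, 1:].values.astype('float').reshape(-1, 2)

# Overlay the keypoints on the image
image = mpimg.imread(os.path.join('data/training/', image_name))
plt.imshow(image)
plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')
plt.show()
```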
## Project Instructions
All of the starting code and resources you'll need to complete this project are in this GitHub repository. Before you start coding, make sure you have all the libraries and dependencies required for the project. If you have already created a `cv-nd` environment for the [exercise code](https://github.com/udacity/CVND_Exercises), you can use that environment; if not, instructions for creating and activating it are below.
*Note that this project does not require the use of a GPU, so this repo does not include instructions for GPU setup.*
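The notebooks therefore run on the CPU; if you do happen to have CUDA available, a standard PyTorch device check (not part of this repo's instructions) lets the same code take advantage of it:

```python
import torch

# CPU is the default for this project; a GPU is used only if one happens to be available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Running on:', device)
```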
### Local Environment Instructions
1. Clone the repository, and navigate to the downloaded folder. This may take a minute or two to clone due to the included image data.
```
git clone https://github.com/margaretmz/CVND-Facial-Keypoint-Detection.git
cd CVND-Facial-Keypoint-Detection
```
2. Create (and activate) a new environment named `cv-nd` with Python 3.6. If prompted to proceed with the install (`Proceed [y]/n`), type y.
- __Linux__ or __Mac__:
```
conda create -n cv-nd python=3.6
source activate cv-nd
```
- __Windows__:
```
conda create --name cv-nd python=3.6
activate cv-nd
```
At this point your command line should look something like: `(cv-nd) :P1_Facial_Keypoints $`. The `(cv-nd)` indicates that your environment has been activated, and you can proceed with further package installations.
3. Install PyTorch and torchvision; this should install the latest version of PyTorch.
- __Linux__ or __Mac__:
```
conda install pytorch torchvision -c pytorch
```
- __Windows__:
```
conda install pytorch-cpu -c pytorch
pip install torchvision
```
4. Install a few required pip packages, which are specified in the requirements text file (including OpenCV).
```
pip install -r requirements.txt
```