Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/svip-lab/GazeFollowing
Code for ACCV2018 paper 'Believe It or Not, We Know What You Are Looking at!'
- Host: GitHub
- URL: https://github.com/svip-lab/GazeFollowing
- Owner: svip-lab
- License: mit
- Created: 2018-10-07T14:28:19.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2021-07-09T12:08:39.000Z (over 3 years ago)
- Last Synced: 2024-08-01T19:46:11.965Z (4 months ago)
- Topics: accv2018, gaze-follow, pytorch
- Language: Python
- Homepage:
- Size: 11.7 MB
- Stars: 101
- Watchers: 7
- Forks: 22
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-cv - Believe It or Not, We Know What You Are Looking at! (PyTorch Implementation)
README
# Gaze following
PyTorch implementation of our ACCV2018 paper: **'Believe It or Not, We Know What You Are Looking at!'** [[paper](https://arxiv.org/pdf/1907.02364.pdf)] [[poster](images/poster.pdf)]

Dongze Lian*, Zehao Yu*, Shenghua Gao (* Equal Contribution)

# Prepare training data
The GazeFollow dataset was proposed in [1]; please download it from http://gazefollow.csail.mit.edu/download.html.
Note that the downloaded testing data may have wrong labels, so we requested a corrected test set (test2) from the authors.
We do not know whether the authors have updated their testing set; if not, it is better to e-mail the authors of [1].
For your convenience, we also provide the testing set link [here](http://videogazefollow.csail.mit.edu/downloads/test_set.zip), as given to us by the authors of [1] when we made our request.
(Note that the license is in [1].)

# Download our dataset
OurData is available on [OneDrive](https://yien01-my.sharepoint.com/:u:/g/personal/doubility_z0_tn/Ea2BrlvFfQ5Dt8UjgfVnA6QB7yUAvbDDQFr1rZ_b0m9Nvw?e=jaUGWb).
Please download and unzip it. OurData contains the data described in our paper.
```
OurData/tools/extract_frame.py
```
extracts frames from clipVideo at 2 fps.
Different versions of ffmpeg may produce different results, so we also provide our extracted images.
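If you want to re-extract the frames yourself, here is a minimal sketch of the same idea built directly on ffmpeg's `fps` filter (the `OurData/clipVideo` input layout and the output directory name are assumptions for illustration, not the exact behaviour of `extract_frame.py`):
```
import subprocess
from pathlib import Path

VIDEO_DIR = Path("OurData/clipVideo")   # assumed location of the source clips
FRAME_DIR = Path("OurData/frames")      # assumed output location

for video in sorted(VIDEO_DIR.glob("*.mp4")):
    out_dir = FRAME_DIR / video.stem
    out_dir.mkdir(parents=True, exist_ok=True)
    # -vf fps=2 samples two frames per second, matching the 2 fps rate above
    subprocess.run(
        ["ffmpeg", "-i", str(video), "-vf", "fps=2", str(out_dir / "%05d.jpg")],
        check=True,
    )
```
Because the exact frame timestamps depend on the ffmpeg build, using the provided images is the safest way to reproduce our results.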
```
OurData/tools/create_video_image_list.py
```
extracts the annotations to JSON.
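As a rough illustration of what such a step produces, a minimal sketch that dumps per-frame records to JSON might look like the following (the field names and values are hypothetical, not the actual schema used by `create_video_image_list.py`):
```
import json

# Hypothetical per-frame records; the real script derives these from OurData.
annotations = [
    {"image": "frames/clip001/00001.jpg", "eye": [0.52, 0.14], "gaze": [0.40, 0.65]},
    {"image": "frames/clip001/00002.jpg", "eye": [0.51, 0.15], "gaze": [0.42, 0.63]},
]

with open("OurData/annotations.json", "w") as f:
    json.dump(annotations, f, indent=2)
```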
# Testing on gazefollow data

Please download the [pretrained model](https://drive.google.com/open?id=1eN0NysvRNsWaoyJea3w1Tdbt7iPMvjmp) manually and save it to `model/`.
```
cd code
python test_gazefollow.py
```

# Evaluation metrics
```
cd code
python cal_min_dis.py
python cal_auc.py
```
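The two scripts above compute the distance and AUC metrics reported in the paper. As a minimal sketch of how such metrics are commonly computed for gaze following (not the scripts' exact implementation; the predictions, annotations, and heatmap sizes below are made up for illustration):
```
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: one predicted gaze point and several annotator gaze
# points for the same image, all in normalized coordinates.
pred = np.array([0.40, 0.65])
annotations = np.array([[0.42, 0.63], [0.38, 0.70], [0.45, 0.60]])

# Minimum distance: distance from the prediction to the closest annotation.
min_dist = np.linalg.norm(annotations - pred, axis=1).min()

# AUC: score a predicted heatmap against a binarized ground-truth map.
pred_heatmap = np.random.rand(64, 64)              # stand-in for a model output
gt_map = np.zeros((64, 64), dtype=int)
for x, y in annotations:
    gt_map[int(y * 64), int(x * 64)] = 1           # mark annotated gaze cells
auc = roc_auc_score(gt_map.flatten(), pred_heatmap.flatten())

print(f"min distance: {min_dist:.3f}, AUC: {auc:.3f}")
```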
# Test on our data

```
cd code
python test_ourdata.py
```

# Training from scratch
```
cd code
python train.py
```

# Inference
Simply run `python inference.py image_path eye_x eye_y` to infer the gaze. Note that `eye_x` and `eye_y` are the normalized coordinates (from 0 to 1) of the eye position. The script saves the inference result to `tmp.png`.
```
cd code
python inference.py ../images/00000003.jpg 0.52 0.14
```
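If you only know the eye position in pixels, the normalized coordinates can be obtained by dividing by the image size. A minimal sketch assuming Pillow is installed (the pixel values below are made-up examples):
```
from PIL import Image

image_path = "../images/00000003.jpg"
eye_px, eye_py = 416, 86                 # hypothetical eye position in pixels

width, height = Image.open(image_path).size
eye_x = eye_px / width                   # normalize to the 0-1 range expected above
eye_y = eye_py / height

print(f"python inference.py {image_path} {eye_x:.2f} {eye_y:.2f}")
```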
# Reference:

[1] Recasens*, A., Khosla*, A., Vondrick, C., Torralba, A.: Where are they looking? In: Advances in Neural Information Processing Systems (NIPS) (2015).

# Citation
If this project is helpful to you, please cite our paper:
```
@InProceedings{Lian_2018_ACCV,
  author = {Lian, Dongze and Yu, Zehao and Gao, Shenghua},
  title = {Believe It or Not, We Know What You Are Looking at!},
  booktitle = {ACCV},
  year = {2018}
}
```