# Awesome Gaze
A curated list of awesome gaze estimation papers, codes, datasets and other awesomeness.

## Table of Contents

* [Review Papers](#review-papers)
* [Journal Papers](#journal-papers)
* [Conference Papers](#conference-papers)
* [arXiv Papers](#arxiv-papers)
* [Datasets](#datasets)
* [Contribution](#contribution)
* [License](#license)

## Review Papers

* Mohamed Khamis, Florian Alt, Andreas Bulling. **The Past, Present, and Future of Gaze-enabled Handheld Mobile Devices: Survey and Lessons Learned** [PDF](http://eprints.gla.ac.uk/170199/1/170199.pdf)

* Anuradha Kar and Peter Corcoran. **A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms** [PDF](https://ieeexplore.ieee.org/document/8003267)

* Akshay A Gawande, Gangotri Nathaney. **A Survey on Gaze Estimation Techniques in Smartphone** [PDF](https://www.irjet.net/archives/V4/i4/IRJET-V4I4651.pdf)

* Xiaomeng Wang, Kang Liu, Xu Qian. **A Survey on Gaze Estimation** [PDF](https://ieeexplore.ieee.org/document/7383057)

* M. V. Sireesha, P. A. Vijaya, K. Chellamma. **A Survey on Gaze Estimation Techniques** [PDF](https://link.springer.com/chapter/10.1007%2F978-81-322-1524-0_43)

* D.W. Hansen, Qiang Ji. **In the Eye of the Beholder: A Survey of Models for Eyes and Gaze** [PDF](https://ieeexplore.ieee.org/document/4770110)

* Abdallahi Ould Mohamed, Matthieu Perreira Da Silva, Vincent Courboulay. **A history of eye gaze tracking** [PDF](https://hal.archives-ouvertes.fr/hal-00215967/document)

* Carlos H. Morimoto, Marcio R.M. Mimica. **Eye gaze tracking techniques for interactive applications** [PDF](https://www.sciencedirect.com/science/article/pii/S1077314204001109)

## Journal Papers

## Conference Papers

### AAAI 2019

* Dongze Lian, Ziheng Zhang, Weixin Luo, Lina Hu, Minye Wu, Zechao Li, Jingyi Yu, Shenghua Gao. **RGBD Based Gaze Estimation via Multi-Task CNN** [PDF](https://www.aaai.org/ojs/index.php/AAAI/article/view/4094/3972) [Code](https://github.com/svip-lab/RGBD-Gaze)

### ICCV 2019

* Petr Kellnhofer, Adria Recasens, Simon Stent, Wojciech Matusik, Antonio Torralba. **Gaze360: Physically Unconstrained Gaze Estimation in the Wild** [PDF](http://openaccess.thecvf.com/content_ICCV_2019/papers/Kellnhofer_Gaze360_Physically_Unconstrained_Gaze_Estimation_in_the_Wild_ICCV_2019_paper.pdf) [Code](https://github.com/Erkil1452/gaze360)

* Seonwook Park, Shalini De Mello, Pavlo Molchanov, Umar Iqbal, Otmar Hilliges, Jan Kautz. **Few-Shot Adaptive Gaze Estimation** [PDF](http://openaccess.thecvf.com/content_ICCV_2019/papers/Park_Few-Shot_Adaptive_Gaze_Estimation_ICCV_2019_paper.pdf) [Code](https://github.com/NVlabs/few_shot_gaze)

### CVPR 2019

* Yunyang Xiong, Hyunwoo J. Kim, Vikas Singh. **Mixed Effects Neural Networks (MeNets) with Applications to Gaze Estimation** [PDF](http://openaccess.thecvf.com/content_CVPR_2019/papers/Xiong_Mixed_Effects_Neural_Networks_MeNets_With_Applications_to_Gaze_Estimation_CVPR_2019_paper.pdf)

* Yu Yu, Gang Liu, Jean-Marc Odobez. **Improving User-Specific Gaze Estimation via Gaze Redirection Synthesis** [PDF](https://www.idiap.ch/~odobez/publications/YuLiuOdobez-CVPR2019.pdf)

* Kang Wang, Hui Su, Qiang Ji. **Neuro-inspired Eye Tracking with Eye Movement Dynamics** [PDF](http://homepages.rpi.edu/~wangk10/papers/wang2019neural.pdf)

* Kang Wang, Rui Zhao, Hui Su, Qiang Ji. **Generalizing Eye Tracking with Bayesian Adversarial Learning** [PDF](http://homepages.rpi.edu/~wangk10/papers/wang2019generalize.pdf)

### CVPR 2018

* Brian Dolhansky, Cristian Canton Ferrer. **Eye In-Painting with Exemplar Generative Adversarial Networks** [PDF](https://arxiv.org/pdf/1712.03999.pdf) [Code](https://github.com/zhangqianhui/Exemplar-GAN-Eye-Inpainting-Tensorflow)

* Yanyu Xu, Yanbing Dong, Junru Wu, Zhengzhong Sun, Zhiru Shi, Jingyi Yu, Shenghua Gao. **Gaze Prediction in Dynamic 360° Immersive Videos** [PDF](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Gaze_Prediction_in_CVPR_2018_paper.pdf) [Code](https://github.com/xuyanyu-shh/VR-EyeTracking)

* Kang Wang, Rui Zhao, Qiang Ji. **A Hierarchical Generative Model for Eye Image Synthesis and Eye Gaze Estimation** [PDF](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_A_Hierarchical_Generative_CVPR_2018_paper.pdf)

* Arun Balajee Vasudevan, Dengxin Dai, Luc Van Gool. **Object Referring in Videos with Language and Human Gaze** [PDF](https://arxiv.org/pdf/1801.01582.pdf) [Code](https://github.com/arunbalajeev/gaze-interface)

* Lifeng Fan, Yixin Chen, Ping Wei, Wenguan Wang, Song-Chun Zhu. **Inferring Shared Attention in Social Scene Videos** [PDF](http://www.stat.ucla.edu/~pwei/items/publications/Conf_2018_CVPR_SharedAttention.pdf)

* Ping Wei, Yang Liu, Tianmin Shu, Nanning Zheng, Song-Chun Zhu. **Where and Why Are They Looking? Jointly Inferring Human Attention and Intentions in Complex Tasks** [PDF](http://www.stat.ucla.edu/~sczhu/papers/Conf_2018/CVPR_2018_Attention_Intention.pdf)

* Rajeev Ranjan, Shalini De Mello, Jan Kautz. **Light-weight Head Pose Invariant Gaze Tracking** [PDF](https://arxiv.org/pdf/1804.08572.pdf)

### ECCV 2018

* Tobias Fischer, Hyung Jin Chang, Yiannis Demiris. **RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments** [PDF](http://openaccess.thecvf.com/content_ECCV_2018/papers/Tobias_Fischer_RT-GENE_Real-Time_Eye_ECCV_2018_paper.pdf) [Code](https://github.com/Tobias-Fischer/rt_gene)

* Ernesto Brau, Jinyan Guan, Tanya Jeffries, Kobus Barnard. **Multiple-Gaze Geometry: Inferring Novel 3D Locations from Gazes Observed in Monocular Video** [PDF](http://openaccess.thecvf.com/content_ECCV_2018/papers/Ernesto_Brau_Stereo_gaze_Inferring_ECCV_2018_paper.pdf)

* Yin Li, Miao Liu, James M. Rehg. **In the Eye of Beholder: Joint Learning of Gaze and Actions in First Person Video** [PDF](http://openaccess.thecvf.com/content_ECCV_2018/papers/Yin_Li_In_the_Eye_ECCV_2018_paper.pdf)

* Yifei Huang, Minjie Cai, Zhenqiang Li, Yoichi Sato. **Predicting Gaze in Egocentric Video by Learning Task-Dependent Attention Transition** [PDF](https://arxiv.org/abs/1803.09125) [Code](https://github.com/hyf015/egocentric-gaze-prediction)

* Eunji Chong, Nataniel Ruiz, Yongxin Wang, Yun Zhang, Agata Rozga, James Rehg. **Connecting Gaze, Scene, and Attention: Generalized Attention Estimation via Joint Modeling of Gaze and Scene Saliency** [PDF](https://arxiv.org/pdf/1807.10437.pdf)

* Seonwook Park, Adrian Spurr, Otmar Hilliges. **Deep Pictorial Gaze Estimation** [PDF](https://arxiv.org/pdf/1807.10002.pdf) [Code](https://github.com/swook/GazeML)

* Yihua Cheng, Feng Lu, Xucong Zhang. **Appearance-Based Gaze Estimation via Evaluation-Guided Asymmetric Regression** [PDF](http://openaccess.thecvf.com/content_ECCV_2018/papers/Yihua_Cheng_Appearance-Based_Gaze_Estimation_ECCV_2018_paper.pdf)

* Yu Yu, Gang Liu, Jean-Marc Odobez. **Deep Multitask Gaze Estimation with a Constrained Landmark-Gaze Model** [PDF](http://openaccess.thecvf.com/content_ECCVW_2018/papers/11130/Yu_Deep_Multitask_Gaze_Estimation_with_a_Constrained_Landmark-Gaze_Model_ECCVW_2018_paper.pdf)

### BMVC 2018

* Cristina Palmero, Javier Selva, Mohammad Ali Bagheri, Sergio Escalera. **Recurrent CNN for 3D Gaze Estimation using Appearance and Shape Cues** [PDF](https://arxiv.org/pdf/1805.03064.pdf) [Code](https://github.com/crisie/RecurrentGaze)

* Gang Liu, Yu Yu, Kenneth A. Funes-Mora, Jean-Marc Odobez. **A Differential Approach for Gaze Estimation with Calibration** [PDF](https://pdfs.semanticscholar.org/192e/b550675b0f9cc69389ef2ec27efa72851253.pdf)

### ICCV 2017

* George Leifman, Dmitry Rudoy, Tristan Swedish, Eduardo Bayro-Corrochano, Ramesh Raskar. **Learning Gaze Transitions From Depth to Improve Video Saliency Estimation** [PDF](http://openaccess.thecvf.com/content_ICCV_2017/papers/Leifman_Learning_Gaze_Transitions_ICCV_2017_paper.pdf)

* Kang Wang, Qiang Ji. **Real Time Eye Gaze Tracking With 3D Deformable Eye-Face Model** [PDF](https://ieeexplore.ieee.org/document/8237376)

* Haoping Deng, Wangjiang Zhu. **Monocular Free-Head 3D Gaze Tracking with Deep Learning and Geometry Constraints** [PDF](http://openaccess.thecvf.com/content_ICCV_2017/papers/Zhu_Monocular_Free-Head_3D_ICCV_2017_paper.pdf) [Code](https://github.com/Walleclipse/Gaze_Tracking)

* Adria Recasens, Carl Vondrick, Aditya Khosla, Antonio Torralba. **Following Gaze in Video** [PDF](http://people.csail.mit.edu/recasens/docs/videogazefollow.pdf) [Code](https://github.com/recasens/Gaze-Following)

### CVPR 2017

* Nour Karessli, Zeynep Akata, Bernt Schiele, Andreas Bulling. **Gaze Embeddings for Zero-Shot Image Classification** [PDF](https://arxiv.org/pdf/1611.09309.pdf) [Code](https://github.com/Noura-kr/CVPR17)

* Mengmi Zhang, Keng Teck Ma, Joo Hwee Lim, Qi Zhao, Jiashi Feng. **Deep Future Gaze: Gaze Anticipation on Egocentric Videos Using Adversarial Networks** [PDF](http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhang_Deep_Future_Gaze_CVPR_2017_paper.pdf) [Code](https://github.com/Mengmi/deepfuturegaze_gan)

* Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling. **It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation** [PDF](https://arxiv.org/pdf/1611.08860.pdf)

### UIST 2017

* Xucong Zhang, Yusuke Sugano, Andreas Bulling. **Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery** [PDF](https://perceptual.mpi-inf.mpg.de/files/2017/05/zhang17_uist.pdf)

### ECCV 2016

* Erroll Wood, Tadas Baltrusaitis, Louis-Philippe Morency, Peter Robinson, Andreas Bulling. **A 3D Morphable Eye Region Model for Gaze Estimation** [PDF](https://perceptual.mpi-inf.mpg.de/wp-content/blogs.dir/12/files/2016/08/wood16_eccv.pdf)

* Yaroslav Ganin, Daniil Kononenko, Diana Sungatullina, Victor Lempitsky. **DeepWarp: Photorealistic Image Resynthesis for Gaze Manipulation** [PDF](https://arxiv.org/pdf/1607.07215.pdf) [Code](https://github.com/BlueWinters/DeepWarp)

### CVPR 2016

* Kyle Krafka, Aditya Khosla, Petr Kellnhofer, Harini Kannan, Suchendra Bhandarkar, Wojciech Matusik, Antonio Torralba. **Eye Tracking for Everyone** [PDF](http://gazecapture.csail.mit.edu/cvpr2016_gazecapture.pdf) [Code](https://github.com/CSAILVision/GazeCapture)

* Pei Yu, Jiahuan Zhou, Ying Wu. **Learning Reconstruction-Based Remote Gaze Estimation** [PDF](https://zpascal.net/cvpr2016/Yu_Learning_Reconstruction-Based_Remote_CVPR_2016_paper.pdf)

* Laszlo A. Jeni, Jeffrey F. Cohn. **Person-Independent 3D Gaze Estimation Using Face Frontalization** [PDF](http://www.pitt.edu/~jeffcohn/biblio/3D-Gaze.pdf)

## arXiv Papers

## Datasets

|Dataset|RGB/RGB-D|Image type|Annotation type|Images|Distance|Head pose annot.|Gaze annot.|Head pose orient.|
|---|---|---|---|---|---|---|---|---|
|[MPII Gaze](https://github.com/trakaros/MPIIGaze)|RGB|Face + eye patches|Gaze vector|213,659|40-60cm|Y|Y|Frontal|
|[BIWI]()|RGB-D|Camera frame|Head pose vector|≈ 15,500|100cm|Y|N|All|
|[CMU Multi-Pie]()|RGB|Camera frame|68 facial landmarks|755,370|≈ 300cm|Y|N|All|
|[Coffeebreak]()|RGB|Low res. face image|Head pose vector|18,117|Varying|Y|N|All|
|[Columbia]()|RGB|High res. camera image|Gaze vector|5,880|200cm|5 orient.|Y|Frontal|
|[Deep Head Pose]()|RGB-D|Camera frame|Head pose vector|68,000|≈ 200-800cm|Y|N|All|
|[EYEDIAP]()|RGB-D|Face + eye patches|Gaze vector|≈ 62,500|80-120cm|Y|Y|Frontal|
|[Gaze Capture]()|RGB|Face + eye patches|2D pos. on screen|> 2.5M|80-120cm|Y|Y|Frontal|
|[ICT 3D Head pose]()|RGB-D|Camera frame|Head pose vector|14,000|≈ 100cm|Y|N|All|
|[Rice TabletGaze]()|RGB|Tablet camera video|2D pos. on screen|≈ 100,000|30-50cm|N|Y|Frontal|
|[RT-GENE]()|RGB-D|Face + eye patches|Gaze vector|122,531|80-280cm|Y|Y|All|
|[SynthesEyes]()|RGB|Synthesized eye patches|Gaze vector|11,382|Varying|Y|Y|All|
|[UnityEyes](https://www.cl.cam.ac.uk/research/rainbow/projects/unityeyes/)|RGB|Synthesized eye patches|Gaze vector|1M|Varying|Y|Y|All|
|[UT Multi-view]()|RGB|Eye area + eye patches|Gaze vector|1,152,000|60cm|Y|Y|All|
|[Vernissage]()|RGB|(Robot) camera frame|Head pose vector|Unknown|Varying|Y|N|All|
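Many of the datasets above annotate gaze as a 3D "Gaze vector", while papers typically report accuracy as an angular error in degrees. As a minimal sketch of how these relate (axis conventions vary per dataset; the convention assumed below, camera looking along −z as in MPIIGaze-style setups, is an assumption, not something any single dataset here mandates):

```python
import math

def vector_to_pitch_yaw(g):
    """Convert a 3D gaze direction vector to (pitch, yaw) in radians.

    Assumes a camera-centered frame with the camera looking along -z;
    check each dataset's documentation for its actual axis convention.
    """
    x, y, z = g
    n = math.sqrt(x * x + y * y + z * z)  # normalize to a unit vector
    x, y, z = x / n, y / n, z / n
    pitch = math.asin(-y)                 # vertical angle
    yaw = math.atan2(-x, -z)              # horizontal angle
    return pitch, yaw

def angular_error_deg(g_pred, g_true):
    """Angle between two gaze vectors in degrees (the usual metric)."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]
    a, b = unit(g_pred), unit(g_true)
    cos = max(-1.0, min(1.0, sum(p * q for p, q in zip(a, b))))
    return math.degrees(math.acos(cos))
```

For example, a gaze vector pointing straight at the camera, `(0, 0, -1)` under this convention, maps to zero pitch and yaw, and two orthogonal vectors give a 90° error.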

## Reference

Many of the paper collections are drawn from [Awesome-Gaze-Estimation](https://github.com/cvlab-uob/Awesome-Gaze-Estimation) by the [Computer Vision Research Lab at the University of Birmingham](https://github.com/cvlab-uob).

## Contribution

If you know of anything awesome related to gaze estimation, feel free to send a [pull request](https://github.com/WuZhuoran/awesome-gaze/pulls).

## License

[![CC0](https://camo.githubusercontent.com/60561947585c982aee67ed3e3b25388184cc0aa3/687474703a2f2f6d6972726f72732e6372656174697665636f6d6d6f6e732e6f72672f70726573736b69742f627574746f6e732f38387833312f7376672f63632d7a65726f2e737667)](http://creativecommons.org/publicdomain/zero/1.0/)

To the extent possible under law, [Zhuoran Wu](https://github.com/WuZhuoran) has waived all copyright and related or neighboring rights to this work. If you want to use any items in this list, please refer to their respective licenses.