Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/guglielmocamporese/awesome-action-prediction-list
A curated list of papers and resources linked to action anticipation and early action recognition from videos.
List: awesome-action-prediction-list
action-anticipation action-prediction action-recognition awesome-list awesome-papers computer-vision deep-learning early-action-recognition
- Host: GitHub
- URL: https://github.com/guglielmocamporese/awesome-action-prediction-list
- Owner: guglielmocamporese
- Created: 2021-04-22T11:10:08.000Z (over 3 years ago)
- Default Branch: master
- Last Pushed: 2021-06-14T11:50:11.000Z (over 3 years ago)
- Last Synced: 2024-05-23T00:04:23.667Z (7 months ago)
- Topics: action-anticipation, action-prediction, action-recognition, awesome-list, awesome-papers, computer-vision, deep-learning, early-action-recognition
- Homepage:
- Size: 20.5 KB
- Stars: 7
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: readme.md
- Contributing: contributing.md
Awesome Lists containing this project
- ultimate-awesome - awesome-action-prediction-list - A curated list of papers and resources linked to action anticipation and early action recognition from videos. (Other Lists / Monkey C Lists)
README
# Awesome Action Prediction List: [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
A curated list of papers and resources linked to **Action Anticipation** and **Early Action Recognition** from videos.
```
If you find this list useful, please consider giving it a star ⭐!
```

# Papers
## Action Anticipation Papers
#### **2021**
- [Learning to Anticipate Egocentric Actions by Imagination](https://arxiv.org/pdf/2101.04924v2.pdf) - *Y. Wu et al.* - **TIP, 2021**.
- [Anticipative Video Transformer](https://arxiv.org/pdf/2106.02036.pdf) - *R. Girdhar et al.* - **arXiv, 2021**.
#### **2020**
- [Forecasting Human-Object Interaction: Joint Prediction of Motor Attention and Actions in First Person Video](https://arxiv.org/abs/1911.10967) - *M. Liu et al.* - **ECCV, 2020**.
- [Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video](https://arxiv.org/pdf/2005.02190v2.pdf) - *A. Furnari, G. M. Farinella* - **PAMI, 2020**.
- [Temporal Aggregate Representations for Long-Range Video Understanding](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123610154.pdf) - *F. Sener et al.* - **ECCV, 2020**.
- [TTPP: Temporal Transformer with Progressive Prediction for Efficient Action Anticipation](https://arxiv.org/abs/2003.03530) - *W. Wang et al.* - **arXiv, 2020**.
#### **2019**
- [Leveraging the Present to Anticipate the Future in Videos](https://research.fb.com/wp-content/uploads/2019/05/Leveraging-the-Present-to-Anticipate-the-Future-in-Videos.pdf) - *A. Miech et al.* - **CVPRW, 2019**.
- [Predicting the Future: A Jointly Learnt Model for Action Anticipation](https://openaccess.thecvf.com/content_ICCV_2019/papers/Gammulle_Predicting_the_Future_A_Jointly_Learnt_Model_for_Action_Anticipation_ICCV_2019_paper.pdf) - *H. Gammulle et al.* - **ICCV, 2019**.
- [Time-Conditioned Action Anticipation in One Shot](https://openaccess.thecvf.com/content_CVPR_2019/papers/Ke_Time-Conditioned_Action_Anticipation_in_One_Shot_CVPR_2019_paper.pdf) - *Q. Ke et al.* - **CVPR, 2019**.
- [What Would You Expect? Anticipating Egocentric Actions With Rolling-Unrolling LSTMs and Modality Attention](https://openaccess.thecvf.com/content_ICCV_2019/papers/Furnari_What_Would_You_Expect_Anticipating_Egocentric_Actions_With_Rolling-Unrolling_LSTMs_ICCV_2019_paper.pdf) - *A. Furnari and G. M. Farinella* - **ICCV, 2019**.
#### **2018**
- [Action Anticipation By Predicting Future Dynamic Images](https://arxiv.org/abs/1808.00141) - *C. Rodriguez et al.* - **ECCVW, 2018**.
#### **2017**
- [Encouraging LSTMs to Anticipate Actions Very Early](https://basurafernando.github.io/papers/ICCV17.pdf) - *M. S. Aliakbarian et al.* - **ICCV, 2017**.

## Early Action Recognition Papers
#### **2020**
- [Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video](https://arxiv.org/pdf/2005.02190v2.pdf) - *A. Furnari, G. M. Farinella* - **PAMI, 2020**.
#### **2019**
- [What Would You Expect? Anticipating Egocentric Actions With Rolling-Unrolling LSTMs and Modality Attention](https://openaccess.thecvf.com/content_ICCV_2019/papers/Furnari_What_Would_You_Expect_Anticipating_Egocentric_Actions_With_Rolling-Unrolling_LSTMs_ICCV_2019_paper.pdf) - *A. Furnari and G. M. Farinella* - **ICCV, 2019**.
- [Predicting the Future: A Jointly Learnt Model for Action Anticipation](http://openaccess.thecvf.com/content_ICCV_2019/papers/Gammulle_Predicting_the_Future_A_Jointly_Learnt_Model_for_Action_Anticipation_ICCV_2019_paper.pdf) - *Gammulle et al.* - **ICCV, 2019**.
- [Spatiotemporal Feature Residual Propagation for Action Prediction](http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhao_Spatiotemporal_Feature_Residual_Propagation_for_Action_Prediction_ICCV_2019_paper.pdf) - *Zhao et al.* - **ICCV, 2019**.
- [Relational Action Forecasting](http://openaccess.thecvf.com/content_CVPR_2019/papers/Sun_Relational_Action_Forecasting_CVPR_2019_paper.pdf) - *Sun et al.* - **CVPR, 2019**.
- [Progressive Teacher-Student Learning for Early Action Prediction](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Progressive_Teacher-Student_Learning_for_Early_Action_Prediction_CVPR_2019_paper.pdf) - *Wang et al.* - **CVPR, 2019**.
#### **2018**
- [Action Anticipation with RBF Kernelized Feature Mapping RNN](http://openaccess.thecvf.com/content_ECCV_2018/papers/Yuge_Shi_Action_Anticipation_with_ECCV_2018_paper.pdf) - *Shi et al.* - **ECCV, 2018**.
- [Part-Activated Deep Reinforcement Learning for Action Prediction](http://openaccess.thecvf.com/content_ECCV_2018/papers/Lei_Chen_Part-Activated_Deep_Reinforcement_ECCV_2018_paper.pdf) - *Chen et al.* - **ECCV, 2018**.
- [Temporal Relational Reasoning in Videos](https://github.com/metalbubble/TRN-pytorch) - *Zhou et al.* - **ECCV, 2018**.
- [Human Action Recognition and Prediction: A Survey](https://arxiv.org/pdf/1806.11230v2.pdf) - *Kong et al.* - **arXiv, 2018**.
- [SSNet: Scale Selection Network for Online 3D Action Prediction](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_SSNet_Scale_Selection_CVPR_2018_paper.pdf) - *Liu et al.* - **CVPR, 2018**.
- [Action Prediction from Videos via Memorizing Hard-to-Predict Samples](http://www1.ece.neu.edu/~yukong/papers/AAAI2018.pdf) - *Kong et al.* - **AAAI, 2018**.
- [On Encoding Temporal Evolution for Real-time Action Prediction](https://arxiv.org/ftp/arxiv/papers/1709/1709.07894.pdf) - *Rezazadegan et al.* - **arXiv, 2018**.
#### **2017**
- [Predictive Learning: Using Future Representation Learning Variantial Autoencoder for Human Action Prediction](https://arxiv.org/pdf/1711.09265v2.pdf) - *Yu et al.* - **arXiv, 2017**.
- [Encouraging LSTMs to Anticipate Actions Very Early](https://arxiv.org/pdf/1703.07023.pdf) - *Aliakbarian et al.* - **ICCV, 2017**.
- [Online Real-time Multiple Spatiotemporal Action Localisation and Prediction](https://github.com/gurkirt/realtime-action-detection) - *Singh et al.* - **ICCV, 2017**.
- [Visual Forecasting by Imitating Dynamics in Natural Sequences](http://openaccess.thecvf.com/content_ICCV_2017/papers/Zeng_Visual_Forecasting_by_ICCV_2017_paper.pdf) - *Zeng et al.* - **ICCV, 2017**.
- [Binary Coding for Partial Action Analysis with Limited Observation Ratios](http://openaccess.thecvf.com/content_cvpr_2017/papers/Qin_Binary_Coding_for_CVPR_2017_paper.pdf) - *Qin et al.* - **CVPR, 2017**.
- [Deep Sequential Context Networks for Action Prediction](http://openaccess.thecvf.com/content_cvpr_2017/papers/Kong_Deep_Sequential_Context_CVPR_2017_paper.pdf) - *Kong et al.* - **CVPR, 2017**.
- [RED: Reinforced Encoder-Decoder Networks for Action Anticipation](https://arxiv.org/pdf/1707.04818.pdf) - *Gao et al.* - **BMVC, 2017**.
#### **2016**
- [Anticipating Visual Representations from Unlabeled Video](https://arxiv.org/pdf/1504.08023.pdf) - *Vondrick et al.* - **CVPR, 2016**.
- [Learning Activity Progression in LSTMs for Activity Detection and Early Detection](http://openaccess.thecvf.com/content_cvpr_2016/papers/Ma_Learning_Activity_Progression_CVPR_2016_paper.pdf) - *Ma et al.* - **CVPR, 2016**.
- [Deep Action- and Context-Aware Sequence Learning for Activity Recognition and Anticipation](https://arxiv.org/pdf/1611.05520.pdf) - *Aliakbarian et al.* - **arXiv, 2016**.
#### **2014**
- [A Hierarchical Representation for Future Action Prediction](http://cvgl.stanford.edu/papers/lan_eccv14.pdf) - *Lan et al.* - **ECCV, 2014**.
- [A Discriminative Model with Multiple Temporal Scales for Action Prediction](https://pdfs.semanticscholar.org/e2e7/c8c47a11cca7be8c1b6a70b61efd1bfeb30b.pdf) - *Kong et al.* - **ECCV, 2014**.
#### **2011**
- [Human Activity Prediction: Early Recognition of Ongoing Activities from Streaming Videos](http://michaelryoo.com/papers/iccv11_prediction_ryoo.pdf) - *Ryoo* - **ICCV, 2011**.
# Datasets
## Action Anticipation Datasets
- EPIC-Kitchens-100 - [[website](https://epic-kitchens.github.io/2021)][[paper](https://arxiv.org/abs/2006.13256)]
- EPIC-Kitchens-55 - [[website](https://epic-kitchens.github.io/2020-55.html)][[paper]()]
- EGTEA Gaze+ - [[website](http://cbs.ic.gatech.edu/fpv/)]

## Early Action Recognition Datasets
- EPIC-Kitchens-100 - [[website](https://epic-kitchens.github.io/2021)][[paper](https://arxiv.org/abs/2006.13256)]
- EPIC-Kitchens-55 - [[website](https://epic-kitchens.github.io/2020-55.html)][[paper]()]
- EGTEA Gaze+ - [[website](http://cbs.ic.gatech.edu/fpv/)]
# Competitions
## Action Anticipation Competitions
- EPIC-Kitchens-100 - [[website](https://competitions.codalab.org/competitions/25925)]
- EPIC-Kitchens-55 - [[website](https://competitions.codalab.org/competitions/20071)]

# Contributing
Contributions are most welcome. If you have any suggestions or improvements, please open an issue or raise a pull request.

**Note that:**
- **This list is not exhaustive.**
- **The lists use alphabetical order for fairness.**