Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-video-self-supervised-learning
A curated list of awesome self-supervised learning methods in videos
https://github.com/Malitha123/awesome-video-self-supervised-learning
Last synced: 6 days ago
- Citing
- Contents
[Paper list: ~280 entries, each linking a paper and, where available, a code repository or project page. Titles and full URLs were not captured by this index; surviving link fragments include bpiyush.github.io/SEVERE-website/, A-Hassan/RSL-Pretext, mmai-research/Masked-Action-Recognition, vl-lab/video-data-aug, ViTTA, MotionMAE, CSTP, VideoMAE, CPR, CMA, VideoMoCo, google-research/tree/master/vatt, NCE_HowTo100M, robotics/tce, wang/video_repres_sts, MGMAE, PSPNet, SeCo-Sequence-Contrastive-Learning, and SIGMA. The complete list is maintained at https://github.com/Malitha123/awesome-video-self-supervised-learning.]
**Contributions are welcome via [pull requests](https://github.com/Malitha123/awesome-video-self-supervised-learning/pulls). Your contributions are highly appreciated.**
Acknowledgments

The papers in this list are grouped under the following sub-category themes:

- *Background Bias*
- *Temporal Modelling and Semantic Dependencies*
- *Drawbacks in Contrastive Learning Techniques*
- *Spatial-temporal learning*
- *Learning from Frames and Videos*
- *Utilizing spatial and temporal cues*
- *Speed and Motion*
- *Utilizing motion information*
- *Need of Domain Knowledge*
- *Learning view-invariant sensory representations*
- *Temporal Coherence*
- *Using realistic data augmentation techniques*
- *Dependence on labelled training data*
- *Novel Pretext Tasks*
- *Fine-grained Feature Learning*
- *Temporal resolution and long-short-term characteristics*
- *Supervision and Transformation*
- *Preserving Privacy*
- *Applying deep unsupervised embeddings*
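To make themes such as *Speed and Motion* and *Novel Pretext Tasks* concrete, below is a minimal sketch of one classic video self-supervised pretext task: predicting the playback speed at which a clip was sampled, so the supervisory signal comes from the video itself rather than human labels. This is an illustrative sketch assuming PyTorch; the backbone, function names, and hyperparameters are placeholders, not taken from any paper in the list.

```python
# Minimal sketch of a playback-speed-prediction pretext task.
# All names and values here are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

SPEEDS = [1, 2, 4, 8]  # candidate frame-sampling strides ("playback speeds")

def sample_clip(video: torch.Tensor, speed_idx: int, clip_len: int = 16) -> torch.Tensor:
    """Subsample clip_len frames from video (T, C, H, W) at stride SPEEDS[speed_idx].

    Assumes T >= max(SPEEDS) * clip_len.
    """
    stride = SPEEDS[speed_idx]
    start = torch.randint(0, video.shape[0] - stride * clip_len + 1, (1,)).item()
    return video[start : start + stride * clip_len : stride]  # (clip_len, C, H, W)

class SpeedPredictor(nn.Module):
    """Any 3D video backbone plus a linear head over the speed classes."""
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone  # maps (B, C, T, H, W) -> (B, feat_dim)
        self.head = nn.Linear(feat_dim, len(SPEEDS))

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(clips))

def pretext_step(model: SpeedPredictor, videos: list, optimizer: torch.optim.Optimizer) -> float:
    """One training step: the label is the sampling stride itself, so no annotation is needed."""
    speed_idx = torch.randint(0, len(SPEEDS), (len(videos),))
    clips = torch.stack([
        sample_clip(v, int(s)).permute(1, 0, 2, 3)  # (T, C, H, W) -> (C, T, H, W)
        for v, s in zip(videos, speed_idx)
    ])
    loss = F.cross_entropy(model(clips), speed_idx)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After pretext training, the backbone is typically fine-tuned or linearly probed on a downstream task such as action recognition, which is how most papers in the list evaluate the learned representations.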
Programming Languages

Categories
- Citing (279)

Sub Categories
- *Drawbacks in Contrastive Learning Techniques* (9)
- *Spatial-temporal learning* (7)
- *Novel Pretext Tasks* (4)
- *Speed and Motion* (4)
- *Utilizing spatial and temporal cues* (3)
- *Temporal Modelling and Semantic Dependencies* (3)
- *Background Bias* (2)
- *Fine-grained Feature Learning* (2)
- *Utilizing motion information* (2)
- *Learning from Frames and Videos* (1)
- *Dependence on labelled training data* (1)
- *Learning view-invariant sensory representations* (1)
- *Using realistic data augmentation techniques* (1)
- *Preserving Privacy* (1)
- *Temporal resolution and long-short-term characteristics* (1)
- *Need of Domain Knowledge* (1)
- *Temporal Coherence* (1)
- *Applying deep unsupervised embeddings* (1)
- Acknowledgments (1)
- *Supervision and Transformation* (1)