
Awesome_Few_Shot_Learning

Advances in few-shot learning, especially for NLP applications.
https://github.com/wutong8023/Awesome_Few_Shot_Learning
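
For orientation, here is a minimal sketch of the episodic, nearest-prototype classification step that many of the metric-learning entries below build on (in the style of prototypical networks). All data is synthetic: the Gaussian clusters stand in for the embeddings a real encoder would produce, so this illustrates the mechanics only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, dim = 3, 5, 16  # a 3-way 5-shot episode with toy 16-d embeddings

# Synthetic "embeddings": each class is a Gaussian cluster in R^dim.
class_means = rng.normal(scale=3.0, size=(n_way, dim))
support = np.stack([rng.normal(loc=m, size=(k_shot, dim)) for m in class_means])
query = rng.normal(loc=class_means[1])  # one query example drawn from class 1

# Prototype = mean of each class's support embeddings; classify the query
# by its nearest prototype under squared Euclidean distance.
prototypes = support.mean(axis=1)
dists = ((prototypes - query) ** 2).sum(axis=1)
print("predicted class:", int(dists.argmin()), "(true class: 1)")
```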

  • [**Prompt-free and Efficient Few-shot Learning with Language Models**](https://aclanthology.org/2022.acl-long.254), by *Karimi Mahabadi, Rabeeh, et al.* (ACL 2022)
  • [**CONTaiNER: Few-Shot Named Entity Recognition via Contrastive Learning**](https://aclanthology.org/2022.acl-long.439), by *Das, Sarkar Snigdha Sarathi, et al.* (ACL 2022)
  • [**Few-Shot Class-Incremental Learning for Named Entity Recognition**](https://aclanthology.org/2022.acl-long.43), by *Wang, Rui, et al.* (ACL 2022)
  • [**Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation**](https://aclanthology.org/2022.acl-long.198), by *Qin, Chengwei, et al.* (ACL 2022)
  • [**A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models**](https://aclanthology.org/2022.acl-long.197), by *Jin, Woojeong, et al.* (ACL 2022)
  • [**Memorisation versus Generalisation in Pre-trained Language Models**](https://aclanthology.org/2022.acl-long.521), by *Tänzer, Michael, et al.* (ACL 2022)
  • [**FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning**](https://aclanthology.org/2022.acl-long.592), by *Zhou, Jing, et al.* (ACL 2022)
  • [**Prototypical Verbalizer for Prompt-based Few-shot Tuning**](https://aclanthology.org/2022.acl-long.483), by *Cui, Ganqu, et al.* (ACL 2022)
  • [**A Rationale-Centric Framework for Human-in-the-loop Machine Learning**](https://aclanthology.org/2022.acl-long.481), by *Lu, Jinghui, et al.* (ACL 2022)
  • [**Few-Shot Learning with Siamese Networks and Label Tuning**](https://aclanthology.org/2022.acl-long.584), by *Müller, Thomas, et al.* (ACL 2022)
  • [**PPT: Pre-trained Prompt Tuning for Few-shot Learning**](https://aclanthology.org/2022.acl-long.576), by *Gu, Yuxian, et al.* (ACL 2022)
  • [**Exploiting Language Model Prompts Using Similarity Measures: A Case Study on the Word-in-Context Task**](https://aclanthology.org/2022.acl-short.36), by *Tabasi, Mohsen, et al.* (ACL 2022)
  • [**Adaptive Knowledge-Enhanced Bayesian Meta-Learning for Few-shot Event Detection**](https://scholar.google.com.hk/scholar?q=Adaptive+Knowledge-Enhanced+Bayesian+Meta-Learning+for+Few-shot+Event+Detection)
  • [**A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters**](https://aclanthology.org/2021.acl-long.447), by *Zhao, Mengjie, et al.* (ACL 2021)
  • [**Few-Shot Question Answering by Pretraining Span Selection**](https://aclanthology.org/2021.acl-long.239), by *Ram, Ori, et al.* (ACL 2021)
  • [**Few-NERD: A Few-shot Named Entity Recognition Dataset**](https://aclanthology.org/2021.acl-long.248), by *Ding, Ning, et al.* (ACL 2021)
  • [**Making Pre-trained Language Models Better Few-shot Learners**](https://aclanthology.org/2021.acl-long.295), by *Gao, Tianyu, et al.* (ACL 2021)
  • [**Distinct Label Representations for Few-Shot Text Classification**](https://aclanthology.org/2021.acl-short.105), by *Ohashi, Sora, et al.* (ACL 2021)
  • [**AugNLG: Few-shot Natural Language Generation using Self-trained Data Augmentation**](https://aclanthology.org/2021.acl-long.95), by *Xu, Xinnuo, et al.* (ACL 2021)
  • [**Multi-Label Few-Shot Learning for Aspect Category Detection**](https://aclanthology.org/2021.acl-long.495), by *Hu, Mengting, et al.* (ACL 2021)
  • [**Lexicon Learning for Few Shot Sequence Modeling**](https://aclanthology.org/2021.acl-long.382), by *Akyurek, Ekin, et al.* (ACL 2021)
  • [**Entity Concept-enhanced Few-shot Relation Extraction**](https://aclanthology.org/2021.acl-short.124), by *Yang, Shan, et al.* (ACL 2021)
  • [**Learning from Miscellaneous Other-Class Words for Few-shot Named Entity Recognition**](https://aclanthology.org/2021.acl-long.487), by *Tong, Meihan, et al.* (ACL 2021)
  • [**On Training Instance Selection for Few-Shot Neural Text Generation**](https://aclanthology.org/2021.acl-short.2), by *Chang, Ernie, et al.* (ACL 2021)
  • [**Span-ConveRT: Few-shot Span Extraction for Dialog with Pretrained Conversational Representations**](https://www.aclweb.org/anthology/2020.acl-main.11), by *Coope, Samuel, et al.* (ACL 2020)
  • [**Few-Shot NLG with Pre-Trained Language Model**](https://www.aclweb.org/anthology/2020.acl-main.18), by *Chen, Zhiyu, et al.* (ACL 2020)
  • [**Dynamic Memory Induction Networks for Few-Shot Text Classification**](https://www.aclweb.org/anthology/2020.acl-main.102), by *Geng, Ruiying, et al.* (ACL 2020)
  • [**Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network**](https://www.aclweb.org/anthology/2020.acl-main.128), by *Hou, Yutai, et al.* (ACL 2020)
  • [**Shaping Visual Representations with Language for Few-Shot Classification**](https://www.aclweb.org/anthology/2020.acl-main.436), by *Mu, Jesse, et al.* (ACL 2020)
  • [**Learning to Customize Model Structures for Few-shot Dialogue Generation Tasks**](https://www.aclweb.org/anthology/2020.acl-main.517), by *Song, Yiping, et al.* (ACL 2020)
  • [**Multi-source Meta Transfer for Low Resource Multiple-Choice Question Answering**](https://www.aclweb.org/anthology/2020.acl-main.654), by *Yan, Ming, et al.* (ACL 2020)
  • [**Discrete Latent Variable Representations for Low-Resource Text Classification**](https://www.aclweb.org/anthology/2020.acl-main.437), by *Jin, Shuning, et al.* (ACL 2020)
  • [**Improving Low-Resource Named Entity Recognition using Joint Sentence and Token Labeling**](https://www.aclweb.org/anthology/2020.acl-main.523), by *Kruengkrai, Canasai, et al.* (ACL 2020)
  • [**Soft Gazetteers for Low-Resource Named Entity Recognition**](https://www.aclweb.org/anthology/2020.acl-main.722), by *Rijhwani, Shruti, et al.* (ACL 2020)
  • [**Matching the Blanks: Distributional Similarity for Relation Learning**](https://doi.org/10.18653/v1/p19-1279), by *Livio Baldini Soares, et al.* (ACL 2019)
  • [**Multi-Level Matching and Aggregation Network for Few-Shot Relation Classification**](https://doi.org/10.18653/v1/p19-1277), by *Zhi-Xiu Ye, et al.* (ACL 2019)
  • [**MapRE: An Effective Semantic Mapping Approach for Low-resource Relation Extraction**](https://aclanthology.org/2021.emnlp-main.212), by *Dong, Manqing, et al.* (EMNLP 2021)
  • [**Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning**](https://aclanthology.org/2021.emnlp-main.144), by *Zhang, Jianguo, et al.* (EMNLP 2021)
  • [**Self-training Improves Pre-training for Few-shot Learning in Task-oriented Dialog Systems**](https://aclanthology.org/2021.emnlp-main.142), by *Mi, Fei, et al.* (EMNLP 2021)
  • [**Nearest Neighbour Few-Shot Learning for Cross-lingual Classification**](https://aclanthology.org/2021.emnlp-main.131), by *Bari, M Saiful, et al.* (EMNLP 2021)
  • [**TransPrompt: Towards an Automatic Transferable Prompting Framework for Few-shot Text Classification**](https://aclanthology.org/2021.emnlp-main.221), by *Wang, Chengyu, et al.* (EMNLP 2021)
  • [**Towards Realistic Few-Shot Relation Extraction**](https://aclanthology.org/2021.emnlp-main.433), by *Brody, Sam, et al.* (EMNLP 2021)
  • [**Exploring Task Difficulty for Few-Shot Relation Extraction**](https://aclanthology.org/2021.emnlp-main.204), by *Han, Jiale, et al.* (EMNLP 2021)
  • [**Learning Prototype Representations Across Few-Shot Tasks for Event Detection**](https://aclanthology.org/2021.emnlp-main.427), by *Lai, Viet Dac, et al.* (EMNLP 2021)
  • [**Language Models are Few-Shot Butlers**](https://aclanthology.org/2021.emnlp-main.734), by *Micheli, Vincent, et al.* (EMNLP 2021)
  • [**Honey or Poison? Solving the Trigger Curse in Few-shot Event Detection via Causal Intervention**](https://aclanthology.org/2021.emnlp-main.637), by *Chen, Jiawei, et al.* (EMNLP 2021)
  • [**CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP**](https://aclanthology.org/2021.emnlp-main.572), by *Ye, Qinyuan, et al.* (EMNLP 2021)
  • [**Constrained Language Models Yield Few-Shot Semantic Parsers**](https://aclanthology.org/2021.emnlp-main.608), by *Shin, Richard, et al.* (EMNLP 2021)
  • [**Improving and Simplifying Pattern Exploiting Training**](https://aclanthology.org/2021.emnlp-main.407), by *Tam, Derek, et al.* (EMNLP 2021)
  • [**Self-training with Few-shot Rationalization**](https://aclanthology.org/2021.emnlp-main.836), by *Bhat, Meghana Moorthy, et al.* (EMNLP 2021)
  • [**Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction**](https://aclanthology.org/2021.emnlp-main.92), by *Sainz, Oscar, et al.* (EMNLP 2021)
  • [**Continual Few-Shot Learning for Text Classification**](https://aclanthology.org/2021.emnlp-main.460), by *Pasunuru, Ramakanth, et al.* (EMNLP 2021)
  • [**Few-Shot Named Entity Recognition: An Empirical Baseline Study**](https://aclanthology.org/2021.emnlp-main.813), by *Huang, Jiaxin, et al.* (EMNLP 2021)
  • [**STraTA: Self-Training with Task Augmentation for Better Few-shot Learning**](https://aclanthology.org/2021.emnlp-main.462), by *Vu, Tu, et al.* (EMNLP 2021)
  • [**FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models**](https://aclanthology.org/2021.emnlp-main.491), by *Chada, Rakesh, et al.* (EMNLP 2021)
  • [**Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning**](https://aclanthology.org/2021.emnlp-main.713), by *Utama, Prasetya, et al.* (EMNLP 2021)
  • [**Revisiting Self-training for Few-shot Learning of Language Model**](https://aclanthology.org/2021.emnlp-main.718), by *Chen, Yiming, et al.* (EMNLP 2021)
  • [**Open Aspect Target Sentiment Classification with Natural Language Prompts**](https://aclanthology.org/2021.emnlp-main.509), by *Seoh, Ronald, et al.* (EMNLP 2021)
  • [**FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation**](https://aclanthology.org/2021.emnlp-main.301), by *Lakhotia, Kushal, et al.* (EMNLP 2021)
  • [**Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks**](https://aclanthology.org/2021.emnlp-main.549), by *Guibon, Gaël, et al.* (EMNLP 2021)
  • [**AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts**](https://www.aclweb.org/anthology/2020.emnlp-main.346), by *Shin, Taylor, et al.* (EMNLP 2020)
  • [**Self-Supervised Meta-Learning for Few-Shot Natural Language Classification Tasks**](https://www.aclweb.org/anthology/2020.emnlp-main.38), by *Bansal, Trapit, et al.* (EMNLP 2020)
  • [**Adaptive Attentional Network for Few-Shot Knowledge Graph Completion**](https://www.aclweb.org/anthology/2020.emnlp-main.131), by *Sheng, Jiawei, et al.* (EMNLP 2020)
  • [**Multi-label Few/Zero-shot Learning with Knowledge Aggregated from Multiple Label Graphs**](https://www.aclweb.org/anthology/2020.emnlp-main.235), by *Lu, Jueqing, et al.* (EMNLP 2020)
  • [**Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models**](https://www.aclweb.org/anthology/2020.emnlp-main.375), by *Wilcox, Ethan, et al.* (EMNLP 2020)
  • [**Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference**](https://www.aclweb.org/anthology/2020.emnlp-main.411), by *Zhang, Jianguo, et al.* (EMNLP 2020)
  • [**Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning**](https://www.aclweb.org/anthology/2020.emnlp-main.469), by *Hua, Yuncheng, et al.* (EMNLP 2020)
  • [**Simple and Effective Few-Shot Named Entity Recognition with Structured Nearest Neighbor Learning**](https://www.aclweb.org/anthology/2020.emnlp-main.516), by *Yang, Yi, et al.* (EMNLP 2020)
  • [**An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels**](https://www.aclweb.org/anthology/2020.emnlp-main.607), by *Chalkidis, Ilias, et al.* (EMNLP 2020)
  • [**Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start**](https://www.aclweb.org/anthology/2020.emnlp-main.660), by *Yin, Wenpeng, et al.* (EMNLP 2020)
  • [**Few-shot Natural Language Generation for Task-Oriented Dialog**](https://www.aclweb.org/anthology/2020.findings-emnlp.17), by *Peng, Baolin, et al.* (Findings of EMNLP 2020)
  • [**Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection**](https://www.aclweb.org/anthology/2020.findings-emnlp.108), by *Nguyen, Hoang, et al.* (Findings of EMNLP 2020)
  • [**Composed Variational Natural Language Generation for Few-shot Intents**](https://www.aclweb.org/anthology/2020.findings-emnlp.303), by *Xia, Congying, et al.* (Findings of EMNLP 2020)
  • [**FewRel 2.0: Towards More Challenging Few-Shot Relation Classification**](https://doi.org/10.18653/v1/D19-1649), by *Tianyu Gao, et al.* (EMNLP 2019)
  • [**FewRel: A Large-Scale Supervised Few-shot Relation Classification Dataset with State-of-the-Art Evaluation**](https://doi.org/10.18653/v1/D18-1514), by *Xu Han, et al.* (EMNLP 2018)
  • [**LEA: Meta Knowledge-Driven Self-Attentive Document Embedding for Few-Shot Text Classification**](https://aclanthology.org/2022.naacl-main.7), by *Hong, S. K., et al.* (NAACL 2022)
  • [**On the Economics of Multilingual Few-shot Learning: Modeling the Cost-Performance Trade-offs of Machine Translated and Manual Data**](https://aclanthology.org/2022.naacl-main.98), by *Ahuja, Kabir, et al.* (NAACL 2022)
  • [**Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization**](https://aclanthology.org/2022.naacl-main.39), by *Zhang, Haode, et al.* (NAACL 2022)
  • [**Improving In-Context Few-Shot Learning via Self-Supervised Training**](https://aclanthology.org/2022.naacl-main.260), by *Chen, Mingda, et al.* (NAACL 2022)
  • [**An Enhanced Span-based Decomposition Method for Few-Shot Sequence Labeling**](https://aclanthology.org/2022.naacl-main.369), by *Wang, Peiyi, et al.* (NAACL 2022)
  • [**MGIMN: Multi-Grained Interactive Matching Network for Few-shot Text Classification**](https://aclanthology.org/2022.naacl-main.141), by *Zhang, Jianhai, et al.* (NAACL 2022)
  • [**Reframing Human-AI Collaboration for Generating Free-Text Explanations**](https://aclanthology.org/2022.naacl-main.47), by *Wiegreffe, Sarah, et al.* (NAACL 2022)
  • [**Few-Shot Document-Level Relation Extraction**](https://aclanthology.org/2022.naacl-main.421), by *Popovic, Nicholas, et al.* (NAACL 2022)
  • [**Template-free Prompt Tuning for Few-shot NER**](https://aclanthology.org/2022.naacl-main.420), by *Ma, Ruotian, et al.* (NAACL 2022)
  • [**MetaICL: Learning to Learn In Context**](https://aclanthology.org/2022.naacl-main.201), by *Min, Sewon, et al.* (NAACL 2022)
  • [**Contrastive Learning for Prompt-based Few-shot Language Learners**](https://aclanthology.org/2022.naacl-main.408), by *Jian, Yiren, et al.* (NAACL 2022)
  • [**Embedding Hallucination for Few-shot Language Fine-tuning**](https://aclanthology.org/2022.naacl-main.404), by *Jian, Yiren, et al.* (NAACL 2022)
  • [**Automatic Multi-Label Prompting: Simple and Interpretable Few-Shot Classification**](https://aclanthology.org/2022.naacl-main.401), by *Wang, Han, et al.* (NAACL 2022)
  • [**DReCa: A General Task Augmentation Strategy for Few-Shot Natural Language Inference**](https://www.aclweb.org/anthology/2021.naacl-main.88), by *Murty, Shikhar, et al.* (NAACL 2021)
  • [**Learning How to Ask: Querying LMs with Mixtures of Soft Prompts**](https://www.aclweb.org/anthology/2021.naacl-main.410), by *Qin, Guanghui, et al.* (NAACL 2021)
  • [**Factual Probing Is [MASK]: Learning vs. Learning to Recall**](https://www.aclweb.org/anthology/2021.naacl-main.398), by *Zhong, Zexuan, et al.* (NAACL 2021)
  • [**It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners**](https://www.aclweb.org/anthology/2021.naacl-main.185), by *Schick, Timo, et al.* (NAACL 2021)
  • [**Few-shot Intent Classification and Slot Filling with Retrieved Examples**](https://www.aclweb.org/anthology/2021.naacl-main.59), by *Yu, Dian, et al.* (NAACL 2021)
  • [**Incremental Few-shot Text Classification with Multi-round New Classes: Formulation, Dataset and System**](https://www.aclweb.org/anthology/2021.naacl-main.106), by *Xia, Congying, et al.* (NAACL 2021)
  • [**Towards Few-shot Fact-Checking via Perplexity**](https://www.aclweb.org/anthology/2021.naacl-main.158), by *Lee, Nayeon, et al.* (NAACL 2021)
  • [**Knowledge Guided Metric Learning for Few-Shot Text Classification**](https://www.aclweb.org/anthology/2021.naacl-main.261), by *Sui, Dianbo, et al.* (NAACL 2021)
  • [**ConVEx: Data-Efficient and Few-Shot Slot Labeling**](https://www.aclweb.org/anthology/2021.naacl-main.264), by *Henderson, Matthew, et al.* (NAACL 2021)
  • [**Few-Shot Text Classification with Triplet Networks, Data Augmentation, and Curriculum Learning**](https://www.aclweb.org/anthology/2021.naacl-main.434), by *Wei, Jason, et al.* (NAACL 2021)
  • [**Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction**](https://aclanthology.org/2020.coling-main.563) (COLING 2020)
  • [**Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference**](https://www.aclweb.org/anthology/2021.eacl-main.20), by *Schick, Timo, et al.* (EACL 2021)
  • [**SCHA-VAE: Hierarchical Context Aggregation for Few-Shot Generation**](https://proceedings.mlr.press/v162/giannone22a.html), by *Giannone, Giorgio and Winther, Ole* (ICML 2022) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L3-L10)
  • [**Channel Importance Matters in Few-Shot Image Classification**](https://proceedings.mlr.press/v162/luo22c.html), by *Luo, Xu, Xu, Jing and Xu, Zenglin* (ICML 2022) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L12-L19)
  • [**Content Addressable Memory Without Catastrophic Forgetting by Heteroassociation with a Fixed Scaffold**](https://proceedings.mlr.press/v162/sharma22b.html), by *Sharma, Sugandha, Chandra, Sarthak and Fiete, Ila* (ICML 2022) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L22-L29)
  • [**Prompting Decision Transformer for Few-Shot Policy Generalization**](https://proceedings.mlr.press/v162/xu22g.html), by *Xu, Mengdi, Shen, Yikang, Zhang, Shun, Lu, Yuchen, Zhao, Ding, Tenenbaum, Joshua and Gan, Chuang* (ICML 2022) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L33-L40)
  • [**HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning**](https://proceedings.mlr.press/v162/zhmoginov22a.html), by *Zhmoginov, Andrey, Sandler, Mark and Vladymyrov, Maksym* (ICML 2022) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L43-L50)
  • [**Attentional Meta-learners for Few-shot Polythetic Classification**](https://proceedings.mlr.press/v162/day22a.html), by *Day, Ben J, Torné, Ramon Viñas, Simidjievski, Nikola and Lió, Pietro* (ICML 2022) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L54-L61)
  • [**Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification**](http://proceedings.mlr.press/v139/lee21d.html), by *Lee, Dong Hoon and Chung, Sae-Young* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1413-L1420)
  • [**Large-Scale Meta-Learning with Continual Trajectory Shifting**](http://proceedings.mlr.press/v139/shin21a.html), by *Shin, Jaewoong, Lee, Hae Beom, Gong, Boqing and Hwang, Sung Ju* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1586-L1593)
  • [**Few-shot Language Coordination by Modeling Theory of Mind**](http://proceedings.mlr.press/v139/zhu21d.html), by *Zhu, Hao, Neubig, Graham and Bisk, Yonatan* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1596-L1603)
  • [**Calibrate Before Use: Improving Few-shot Performance of Language Models**](http://proceedings.mlr.press/v139/zhao21c.html), by *Zhao, Zihao, Wallace, Eric, Feng, Shi, Klein, Dan and Singh, Sameer* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1607-L1614)
  • [**Few-Shot Neural Architecture Search**](http://proceedings.mlr.press/v139/zhao21d.html), by *Zhao, Yiyang, Wang, Linnan, Tian, Yuandong, Fonseca, Rodrigo and Guo, Tian* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1617-L1624)
  • [**Learning a Universal Template for Few-shot Dataset Generalization**](http://proceedings.mlr.press/v139/triantafillou21a.html), by *Triantafillou, Eleni, Larochelle, Hugo, Zemel, Richard and Dumoulin, Vincent* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1627-L1634)
  • [**Parameterless Transductive Feature Re-representation for Few-Shot Learning**](http://proceedings.mlr.press/v139/cui21a.html), by *Cui, Wentao and Guo, Yuhong* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1637-L1644)
  • [**How Important is the Train-Validation Split in Meta-Learning?**](http://proceedings.mlr.press/v139/bai21a.html), by *Bai, Yu, Chen, Minshuo, Zhou, Pan, Zhao, Tuo, Lee, Jason, Kakade, Sham, Wang, Huan and Xiong, Caiming* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1647-L1654)
  • [**Few-Shot Conformal Prediction with Auxiliary Tasks**](http://proceedings.mlr.press/v139/fisch21a.html), by *Fisch, Adam, Schuster, Tal, Jaakkola, Tommi and Barzilay, Regina* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1657-L1664)
  • [**A Distribution-dependent Analysis of Meta Learning**](http://proceedings.mlr.press/v139/konobeev21a.html), by *Konobeev, Mikhail, Kuzborskij, Ilja and Szepesvari, Csaba* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1667-L1674)
  • [**Data Augmentation for Meta-Learning**](http://proceedings.mlr.press/v139/ni21a.html), by *Ni, Renkun, Goldblum, Micah, Sharaf, Amr, Kong, Kezhi and Goldstein, Tom* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1677-L1684)
  • [**Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation**](http://proceedings.mlr.press/v139/wang21ad.html), by *Wang, Haoxiang, Zhao, Han and Li, Bo* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1687-L1694)
  • [**CURI: A Benchmark for Productive Concept Learning Under Uncertainty**](http://proceedings.mlr.press/v139/vedantam21a.html), by *Vedantam, Ramakrishna, Szlam, Arthur, Nickel, Maximillian, Morcos, Ari and Lake, Brenden M* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1697-L1704)
  • [**A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning**](http://proceedings.mlr.press/v139/saunshi21a.html), by *Saunshi, Nikunj, Gupta, Arushi and Hu, Wei* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1707-L1714)
  • [**Memory Efficient Online Meta Learning**](http://proceedings.mlr.press/v139/acar21b.html), by *Acar, Durmus Alp Emre, Zhu, Ruizhao and Saligrama, Venkatesh* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1717-L1724)
  • [**Addressing Catastrophic Forgetting in Few-Shot Problems**](http://proceedings.mlr.press/v139/yap21a.html), by *Yap, Pauching, Ritter, Hippolyt and Barber, David* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1726-L1733)
  • [**GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning**](http://proceedings.mlr.press/v139/achituve21a.html), by *Achituve, Idan, Navon, Aviv, Yemini, Yochai, Chechik, Gal and Fetaya, Ethan* (ICML 2021) [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1735-L1742)
  • ![ - Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**TaskNorm: Rethinking Batch Normalization for Meta-Learning**](http://proceedings.mlr.press/v119/bronskill20a.html) , <br> by *Bronskill, John, Gordon, Jonathan, Requeima, James, Nowozin, Sebastian and Turner, Richard* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2096-L2103) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v119-bronskill20a```
  • ![ - Learning:+Understanding+Feature+Representations+for+Few-Shot+Tasks"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks**](http://proceedings.mlr.press/v119/goldblum20a.html) , <br> by *Goldblum, Micah, Reich, Steven, Fowl, Liam, Ni, Renkun, Cherepanova, Valeriia and Goldstein, Tom* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2106-L2113) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v119-goldblum20a```
  • ![ - Learning+with+Shared+Amortized+Variational+Inference"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-Learning with Shared Amortized Variational Inference**](http://proceedings.mlr.press/v119/iakovleva20a.html) , <br> by *Iakovleva, Ekaterina, Verbeek, Jakob and Alahari, Karteek* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2116-L2123) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v119-iakovleva20a```
  • ![ - green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta Variance Transfer: Learning to Augment from the Others**](http://proceedings.mlr.press/v119/park20b.html) , <br> by *Park, Seong-Jin, Han, Seungju, Baek, Ji-Won, Kim, Insoo, Song, Juhwan, Lee, Hae Beom, Han, Jae-Joon and Hwang, Sung Ju* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2126-L2133) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v119-park20b```
  • ![ - shot+Relation+Extraction+via+Bayesian+Meta-learning+on+Relation+Graphs"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-shot Relation Extraction via Bayesian Meta-learning on Relation Graphs**](http://proceedings.mlr.press/v119/qu20a.html) , <br> by *Qu, Meng, Gao, Tianyu, Xhonneux, Louis-Pascal and Tang, Jian* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2136-L2143) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v119-qu20a```
  • ![ - shot+Domain+Adaptation+by+Causal+Mechanism+Transfer"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-shot Domain Adaptation by Causal Mechanism Transfer**](http://proceedings.mlr.press/v119/teshima20a.html) , <br> by *Teshima, Takeshi, Sato, Issei and Sugiyama, Masashi* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2146-L2153) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v119-teshima20a```
  • ![ - Shot+Object+Detection"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Frustratingly Simple Few-Shot Object Detection**](http://proceedings.mlr.press/v119/wang20j.html) , <br> by *Wang, Xin, Huang, Thomas, Gonzalez, Joseph, Darrell, Trevor and Yu, Fisher* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2155-L2162) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v119-wang20j```
  • ![ - Adaptive+Representation+for+Incremental+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning**](http://proceedings.mlr.press/v119/yoon20b.html) , <br> by *Yoon, Sung Whan, Kim, Do-Yeon, Seo, Jun and Moon, Jaekyun* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2165-L2172) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v119-yoon20b```
  • ![ - shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Infinite Mixture Prototypes for Few-shot Learning**]( http://proceedings.mlr.press/v97/allen19b.html ) , <br> by *Allen, Kelsey, Shelhamer, Evan, Shin, Hanul and Tenenbaum, Joshua* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2175-L2182) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v97-allen19b```
  • ![ - Net:+Learning+to+Generate+Matching+Networks+for+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning**](http://proceedings.mlr.press/v97/li19c.html) , <br> by *Li, Huaiyu, Dong, Weiming, Mei, Xing, Ma, Chongyang, Huang, Feiyue and Hu, Bao-Gang* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2185-L2192) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v97-li19c```
  • ![ - Adaptive+Projection+for+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning**](http://proceedings.mlr.press/v97/yoon19a.html) , <br> by *Yoon, Sung Whan, Seo, Jun and Moon, Jaekyun* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2194-L2201) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```pmlr-v97-yoon19a```
  • [**MSplit LBI: Realizing Feature Selection and Dense Estimation Simultaneously in Few-shot and Zero-shot Learning**](http://proceedings.mlr.press/v80/zhao18c.html), by *Zhao, Bo, Sun, Xinwei, Fu, Yanwei, Yao, Yuan and Wang, Yizhou* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2226-L2233) (`pmlr-v80-zhao18c`)
  • [**Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory**](http://proceedings.mlr.press/v80/amit18a.html), by *Amit, Ron and Meir, Ron* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2236-L2243) (`pmlr-v80-amit18a`)
  • [**Bilevel Programming for Hyperparameter Optimization and Meta-Learning**](http://proceedings.mlr.press/v80/franceschi18a.html), by *Franceschi, Luca, Frasconi, Paolo, Salzo, Saverio, Grazzi, Riccardo and Pontil, Massimiliano* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2246-L2253) (`pmlr-v80-franceschi18a`)
  • [**Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace**](http://proceedings.mlr.press/v80/lee18a.html), by *Lee, Yoonho and Choi, Seungjin* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2256-L2263) (`pmlr-v80-lee18a`)
  • [**Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks**](http://proceedings.mlr.press/v70/finn17a.html), by *Chelsea Finn, Pieter Abbeel and Sergey Levine* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2266-L2273) (`pmlr-v70-finn17a`); a minimal inner/outer-loop sketch of MAML appears after this list
  • [**Meta Networks**](http://proceedings.mlr.press/v70/munkhdalai17a.html), by *Tsendsuren Munkhdalai and Hong Yu* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2275-L2282) (`pmlr-v70-munkhdalai17a`)
  • [**Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners**](https://openreview.net/forum?id=ek9a0qIafW), by *Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang and Huajun Chen* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L592-L598) (`zhang2022differentiable`)
  • [**Exploring the Limits of Large Scale Pre-training**](https://openreview.net/forum?id=V3C8p78sDa), by *Samira Abnar, Mostafa Dehghani, Behnam Neyshabur and Hanie Sedghi* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L601-L607) (`abnar2022exploring`)
  • [**Subspace Regularizers for Few-Shot Class Incremental Learning**](https://openreview.net/forum?id=boJy41J-tnQ), by *Afra Feyza Akyürek, Ekin Akyürek, Derry Wijaya and Jacob Andreas* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L610-L616) (`akyurek2022subspace`)
  • [**Task Affinity with Maximum Bipartite Matching in Few-Shot Learning**](https://openreview.net/forum?id=u2GZOiUTbt), by *Cat Phuoc Le, Juncheng Dong, Mohammadreza Soltani and Vahid Tarokh* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L619-L625) (`le2022task`)
  • [**On the Importance of Firth Bias Reduction in Few-Shot Classification**](https://openreview.net/forum?id=DNRADop4ksB), by *Saba Ghaffari, Ehsan Saleh, David Forsyth and Yu-Xiong Wang* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L628-L634) (`ghaffari2022on`)
  • [**Switch to Generalize: Domain-Switch Learning for Cross-Domain Few-Shot Classification**](https://openreview.net/forum?id=H-iABMvzIc), by *Zhengdong Hu, Yifan Sun and Yi Yang* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L637-L643) (`hu2022switch`)
  • [**LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5**](https://openreview.net/forum?id=HCRVf71PMF), by *Chengwei Qin and Shafiq Joty* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L646-L652) (`qin2022lfpt`)
  • [**Hierarchical Few-Shot Imitation with Skill Transition Models**](https://openreview.net/forum?id=xKZ4K0lTj_), by *Kourosh Hakhamaneshi, Ruihan Zhao, Albert Zhan, Pieter Abbeel and Michael Laskin* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L655-L661) (`hakhamaneshi2022hierarchical`)
  • [**ConFeSS: A Framework for Single Source Cross-Domain Few-Shot Learning**](https://openreview.net/forum?id=zRJu6mU2BaE), by *Debasmit Das, Sungrack Yun and Fatih Porikli* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L664-L670) (`das2022confess`)
  • [**Hierarchical Variational Memory for Few-shot Learning Across Domains**](https://openreview.net/forum?id=i3RI65sR7N), by *Yingjun Du, Xiantong Zhen, Ling Shao and Cees G. M. Snoek* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L673-L679) (`du2022hierarchical`)
  • [**Temporal Alignment Prediction for Supervised Representation Learning and Few-Shot Sequence Classification**](https://openreview.net/forum?id=p3DKPQ7uaAi), by *Bing Su and Ji-Rong Wen* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L682-L688) (`su2022temporal`)
  • [**Generalizing Few-Shot NAS with Gradient Matching**](https://openreview.net/forum?id=_jMtny3sMKU), by *Shoukang Hu, Ruochen Wang, Lanqing Hong, Zhenguo Li, Cho-Jui Hsieh and Jiashi Feng* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L691-L697) (`hu2022generalizing`)
  • [**Few-shot Learning via Dirichlet Tessellation Ensemble**](https://openreview.net/forum?id=6kCiVaoQdx9), by *Chunwei Ma, Ziyun Huang, Mingchen Gao and Jinhui Xu* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L700-L706) (`ma2022fewshot`)
  • [**How to Train Your MAML to Excel in Few-Shot Classification**](https://openreview.net/forum?id=49h_IkpJtaE), by *Han-Jia Ye and Wei-Lun Chao* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L709-L715) (`ye2022how`)
  • [**Free Lunch for Few-shot Learning: Distribution Calibration**](https://openreview.net/forum?id=JWOiYxMG92s), by *Shuo Yang, Lu Liu and Min Xu* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1953-L1959) (`yang2021free`)
  • [**Self-training For Few-shot Transfer Across Extreme Task Differences**](https://openreview.net/forum?id=O3Y56aqpChA), by *Cheng Perng Phoo and Bharath Hariharan* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1962-L1968) (`phoo2021selftraining`)
  • [**Wandering within a world: Online contextualized few-shot learning**](https://openreview.net/forum?id=oZIvHV04XgC), by *Mengye Ren, Michael Louis Iuzzolino, Michael Curtis Mozer and Richard Zemel* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1971-L1977) (`ren2021wandering`)
  • [**Few-Shot Learning via Learning the Representation, Provably**](https://openreview.net/forum?id=pW2Q2xLwIMD), by *Simon Shaolei Du, Wei Hu, Sham M. Kakade, Jason D. Lee and Qi Lei* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1980-L1986) (`du2021fewshot`)
  • [**A Universal Representation Transformer Layer for Few-Shot Image Classification**](https://openreview.net/forum?id=04cII6MumYV), by *Lu Liu, William L. Hamilton, Guodong Long, Jing Jiang and Hugo Larochelle* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1989-L1995) (`liu2021a`)
  • [**Revisiting Few-sample BERT Fine-tuning**](https://openreview.net/forum?id=cO1IH43yUF), by *Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger and Yoav Artzi* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L1998-L2004) (`zhang2021revisiting`)
  • [**Concept Learners for Few-Shot Learning**](https://openreview.net/forum?id=eJIJF3-LoZO), by *Kaidi Cao, Maria Brbic and Jure Leskovec* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2007-L2013) (`cao2021concept`)
  • [**Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data**](https://openreview.net/forum?id=de11dbHzAMF), by *Jonathan Pilault, Amine El hattami and Christopher Pal* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2016-L2022) (`pilault2021conditionally`)
  • [**Incremental few-shot learning via vector quantization in deep embedded space**](https://openreview.net/forum?id=3SV-ZePhnZM), by *Kuilin Chen and Chi-Guhn Lee* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2025-L2031) (`chen2021incremental`)
  • [**Repurposing Pretrained Models for Robust Out-of-domain Few-Shot Learning**](https://openreview.net/forum?id=qkLMTphG5-h), by *Namyeong Kwon, Hwidong Na, Gabriel Huang and Simon Lacoste-Julien* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2034-L2040) (`kwon2021repurposing`)
  • [**MELR: Meta-Learning via Modeling Episode-Level Relationships for Few-Shot Learning**](https://openreview.net/forum?id=D3PcGLdMx0), by *Nanyi Fei, Zhiwu Lu, Tao Xiang and Songfang Huang* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2043-L2049) (`fei2021melr`)
  • [**Disentangling 3D Prototypical Networks for Few-Shot Concept Learning**](https://openreview.net/forum?id=-Lr-u0b42he), by *Mihir Prabhudesai, Shamit Lal, Darshan Patil, Hsiao-Yu Tung, Adam W Harley and Katerina Fragkiadaki* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2052-L2058) (`prabhudesai2021disentangling`)
  • [**Attentional Constellation Nets for Few-Shot Learning**](https://openreview.net/forum?id=vujTf_I8Kmc), by *Weijian Xu, Yifan Xu, Huaijin Wang and Zhuowen Tu* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2061-L2067) (`xu2021attentional`)
  • [**BOIL: Towards Representation Change for Few-shot Learning**](https://openreview.net/forum?id=umIdUL8rMH), by *Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim and Se-Young Yun* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2070-L2076) (`oh2021boil`)
  • [**Theoretical bounds on estimation error for meta-learning**](https://openreview.net/forum?id=SZ3wtsXfzQR), by *James Lucas, Mengye Ren, Irene Raissa KAMENI KAMENI, Toniann Pitassi and Richard Zemel* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2079-L2085) (`lucas2021theoretical`)
  • [**Meta-Learning of Structured Task Distributions in Humans and Machines**](https://openreview.net/forum?id=--gvHfE3Xf5), by *Sreejan Kumar, Ishita Dasgupta, Jonathan Cohen, Nathaniel Daw and Thomas Griffiths* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/bibtex.bib#L2088-L2094) (`kumar2021metalearning`)
  • ![ - learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Automated Relational Meta-learning**](https://openreview.net/forum?id=rklp93EtwH) , <br> by *Huaxiu Yao, Xian Wu, Zhiqiang Tao, Yaliang Li, Bolin Ding, Ruirui Li and Zhenhui Li* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2285-L2291) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Yao2020Automated```
  • ![ - Dataset:+A+Dataset+of+Datasets+for+Learning+to+Learn+from+Few+Examples"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples**](https://openreview.net/forum?id=rkgAGAVKPr) , <br> by *Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol and Hugo Larochelle* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2304-L2310) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Triantafillou2020Meta-Dataset```
  • ![ - green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Towards Fast Adaptation of Neural Architectures with Meta Learning**](https://openreview.net/forum?id=r1eowANFvr) , <br> by *Dongze Lian, Yin Zheng, Yintao Xu, Yanxiong Lu, Leyu Lin, Peilin Zhao, Junzhou Huang and Shenghua Gao* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2314-L2320) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Lian2020Towards```
  • ![ - green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Bayesian Meta Sampling for Fast Uncertainty Adaptation**](https://openreview.net/forum?id=Bkxv90EKPB) , <br> by *Zhenyi Wang, Yang Zhao, Ping Yu, Ruiyi Zhang and Changyou Chen* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2324-L2330) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Wang2020Bayesian```
  • ![ - Learning+without+Memorization"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-Learning without Memorization**](https://openreview.net/forum?id=BklEFpEYwS) , <br> by *Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine and Chelsea Finn* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2334-L2340) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Yin2020Meta-Learning```
  • ![ - Learning+Acquisition+Functions+for+Transfer+Learning+in+Bayesian+Optimization"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization**](https://openreview.net/forum?id=ryeYpJSKwr) , <br> by *Michael Volpp, Lukas P. Fröhlich, Kirsten Fischer, Andreas Doerr, Stefan Falkner, Frank Hutter and Christian Daniel* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2343-L2349) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Volpp2020Meta-Learning```
  • ![ - Learning+for+Imbalanced+and+Out-of-distribution+Tasks"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks**](https://openreview.net/forum?id=rkeZIJBYvr) , <br> by *Hae Beom Lee, Hayeon Lee, Donghyun Na, Saehoon Kim, Minseop Park, Eunho Yang and Sung Ju Hwang* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2352-L2358) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Lee2020Learning```
  • ![ - shot+Text+Classification+with+Distributional+Signatures"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-shot Text Classification with Distributional Signatures**](https://openreview.net/forum?id=H1emfT4twB) , <br> by *Yujia Bao, Menghua Wu, Shiyu Chang and Regina Barzilay* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2362-L2368) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Bao2020Few-shot```
  • ![ - green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Disentangling Factors of Variations Using Few Labels**](https://openreview.net/forum?id=SygagpEKwB) , <br> by *Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf and Olivier Bachem* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2371-L2377) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Locatello2020Disentangling```
  • ![ - SHOT+LEARNING+ON+GRAPHS+VIA+SUPER-CLASSES+BASED+ON+GRAPH+SPECTRAL+MEASURES"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**FEW-SHOT LEARNING ON GRAPHS VIA SUPER-CLASSES BASED ON GRAPH SPECTRAL MEASURES**](https://openreview.net/forum?id=Bkeeca4Kvr) , <br> by *Jatin Chauhan, Deepak Nathani and Manohar Kaul* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2380-L2386) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Chauhan2020FEW-SHOT```
  • ![ - Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**A Theoretical Analysis of the Number of Shots in Few-Shot Learning**](https://openreview.net/forum?id=HkgB2TNYPS) , <br> by *Tianshi Cao, Marc T Law and Sanja Fidler* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2389-L2395) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Cao2020A```
  • ![ - Shot+Image+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**A Baseline for Few-Shot Image Classification**](https://openreview.net/forum?id=rylXBkrYDS) , <br> by *Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran and Stefano Soatto* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2398-L2404) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Dhillon2020A```
  • ![ - Domain+Few-Shot+Classification+via+Learned+Feature-Wise+Transformation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation**](https://openreview.net/forum?id=SJl5Np4tPr) , <br> by *Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang and Ming-Hsuan Yang* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2407-L2413) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```Tseng2020Cross-Domain```
  • ![ - shot+learning+with+a+surprise-based+memory+module"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Adaptive Posterior Learning: few-shot learning with a surprise-based memory module**](https://openreview.net/forum?id=ByeSdsC9Km) , <br> by *Tiago Ramalho and Marta Garnelo* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2416-L2422) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```ramalho2018adaptive```
  • ![ - shot+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**A Closer Look at Few-shot Classification**](https://openreview.net/forum?id=HkxLXnAcFQ) , <br> by *Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang and Jia-Bin Huang* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2425-L2431) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```chen2018a```
  • ![ - SHOT+LEARNING"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**LEARNING TO PROPAGATE LABELS: TRANSDUCTIVE PROPAGATION NETWORK FOR FEW-SHOT LEARNING**](https://openreview.net/forum?id=SyVuRiC5K7) , <br> by *Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sungju Hwang and Yi Yang* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2434-L2440) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```liu2018learning```
  • ![ - green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Transferring Knowledge across Learning Processes**](https://openreview.net/forum?id=HygBZnRctX) , <br> by *Sebastian Flennerhag, Pablo Garcia Moreno, Neil Lawrence and Andreas Damianou* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2443-L2449) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```flennerhag2018transferring```
  • ![ - Learning+Probabilistic+Inference+for+Prediction"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-Learning Probabilistic Inference for Prediction**](https://openreview.net/forum?id=HkxStoC5F7) , <br> by *Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin and Richard Turner* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2452-L2458) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```gordon2018metalearning```
  • ![ - green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Learning to Learn with Conditional Class Dependencies**](https://openreview.net/forum?id=BJfOXnActQ) , <br> by *Xiang Jiang, Mohammad Havaei, Farshid Varno, Gabriel Chartrand, Nicolas Chapados and Stan Matwin* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2461-L2467) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```jiang2018learning```
  • ![ - shot+Autoregressive+Density+Estimation:+Towards+Learning+to+Learn+Distributions"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions**](https://openreview.net/forum?id=r1wEFyWCW) , <br> by *Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals and Nando de Freitas* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2470-L2476) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```reed2018fewshot```
  • ![ - Shot+Learning+with+Graph+Neural+Networks"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Learning with Graph Neural Networks**](https://openreview.net/forum?id=BJj6qGbRW) , <br> by *Victor Garcia Satorras and Joan Bruna Estrach* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2479-L2485) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```garcia2018fewshot```
  • ![ - CZ)<a href="https://scholar.google.com.hk/scholar?q=Meta-Learning+for+Semi-Supervised+Few-Shot+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-Learning for Semi-Supervised Few-Shot Classification**](https://openreview.net/forum?id=HJcSzz-CZ) , <br> by *Mengye Ren, Sachin Ravi, Eleni Triantafillou, Jake Snell, Kevin Swersky, Josh B. Tenenbaum, Hugo Larochelle and Richard S. Zemel* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2488-L2494) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```ren2018metalearning```
  • ![ - Learning+and+Universality:+Deep+Representations+and+Gradient+Descent+can+Approximate+any+Learning+Algorithm"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm**](https://openreview.net/forum?id=HyjC5yWCW) , <br> by *Chelsea Finn and Sergey Levine* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2497-L2503) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```finn2018metalearning```
  • ![ - green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**META LEARNING SHARED HIERARCHIES**](https://openreview.net/forum?id=SyX0IeWAW) , <br> by *Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel and John Schulman* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L2506-L2512) <br></details><details><summary><img src=https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/scripts/svg/copy_icon.png height="20" align="bottom"></summary><pre>```frans2018meta```
  • ![ - Kcll)<a href="https://scholar.google.com.hk/scholar?q=Optimization+as+a+Model+for+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Optimization as a Model for Few-Shot Learning**](https://openreview.net/forum?id=rJY0-Kcll) , <br> by *Sachin Ravi and
  • [**Realistic evaluation of transductive few-shot learning**](https://proceedings.neurips.cc/paper/2021/hash/4d7a968bb636e25818ff2a3941db08c1-Abstract.html), by *Olivier Veilleux et al.*
  • [**Re-ranking for image retrieval and transductive few-shot classification**](https://proceedings.neurips.cc/paper/2021/hash/d9fc0cdb67638d50f411432d0d41d0ba-Abstract.html), by *Xi Shen et al.*
  • [**True Few-Shot Learning with Language Models**](https://proceedings.neurips.cc/paper/2021/hash/5c04925674920eb58467fb52ce4ef728-Abstract.html), by *Ethan Perez et al.*
  • **Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation**
  • [**D2C: Diffusion-Decoding Models for Few-Shot Conditional Generation**](https://proceedings.neurips.cc/paper/2021/hash/682e0e796084e163c5ca053dd8573b0c-Abstract.html), by *Abhishek Sinha et al.*
  • [**TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation**](https://proceedings.neurips.cc/paper/2021/hash/af5d5ef24881f3c3049a7b9bfe74d58b-Abstract.html), by *Haoang Chi et al.*
  • **The Role of Global Labels in Few-Shot Classification and How to Infer Them**
  • **Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data**
  • [**Learning to Learn Dense Gaussian Processes for Few-Shot Learning**](https://proceedings.neurips.cc/paper/2021/hash/6e2713a6efee97bacb63e52c54f0ada0-Abstract.html), by *Ze Wang et al.*
  • [**Rectifying the Shortcut Learning of Background for Few-Shot Learning**](https://proceedings.neurips.cc/paper/2021/hash/6cfe0e6127fa25df2a0ef2ae1067d915-Abstract.html), by *Xu Luo et al.*
  • [**FLEX: Unifying Evaluation for Few-Shot NLP**](https://proceedings.neurips.cc/paper/2021/hash/8493eeaccb772c0878f99d60a0bd2bb3-Abstract.html), by *Jonathan Bragg et al.*
  • [**Multimodal Few-Shot Learning with Frozen Language Models**](https://proceedings.neurips.cc/paper/2021/hash/01b7575c38dac42f3cfb7d500438b875-Abstract.html), by *Maria Tsimpoukelli et al.*
  • [**On Episodes, Prototypical Networks, and Few-Shot Learning**](https://proceedings.neurips.cc/paper/2021/hash/cdfa4c42f465a5a66871587c69fcfa34-Abstract.html), by *Steinar Laenen et al.*
  • **POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples**
  • **Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima**
  • [**Few-Shot Learning Evaluation in Natural Language Understanding**](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/3644a684f98ea8fe223c713b77189a77-Abstract-round2.html), by *Subhabrata Mukherjee et al.*
  • **Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction**
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=Information+Maximization+for+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Information Maximization for Few-Shot Learning**](https://proceedings.neurips.cc/paper/2020/hash/196f5641aa9dc87067da4ff90fd81e7b-Abstract.html) , <br> by *Malik Boudiaf and
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=Interventional+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Interventional Few-Shot Learning**](https://proceedings.neurips.cc/paper/2020/hash/1cc8a8ea51cd0adddf5dab504a285915-Abstract.html) , <br> by *Zhongqi Yue and
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=Restoring+Negative+Information+in+Few-Shot+Object+Detection"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Restoring Negative Information in Few-Shot Object Detection**](https://proceedings.neurips.cc/paper/2020/hash/240ac9371ec2671ae99847c3ae2e6384-Abstract.html) , <br> by *Yukuan Yang and
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=OOD-MAML:+Meta-Learning+for+Few-Shot+Out-of-Distribution+Detection+and+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**OOD-MAML: Meta-Learning for Few-Shot Out-of-Distribution Detection
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=Few-shot+Image+Generation+with+Elastic+Weight+Consolidation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-shot Image Generation with Elastic Weight Consolidation**](https://proceedings.neurips.cc/paper/2020/hash/b6d767d2f8ed5d21a44b0e5886680cb9-Abstract.html) , <br> by *Yijun Li and
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=Node+Classification+on+Graphs+with+Few-Shot+Novel+Labels+via+Meta+Transformed+Network+Embedding"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Node Classification on Graphs with Few-Shot Novel Labels via Meta
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=Few-shot+Visual+Reasoning+with+Meta-Analogical+Contrastive+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-shot Visual Reasoning with Meta-Analogical Contrastive Learning**](https://proceedings.neurips.cc/paper/2020/hash/c39e1a03859f9ee215bc49131d0caf33-Abstract.html) , <br> by *Youngsung Kim and
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=Adversarially+Robust+Few-Shot+Learning:+A+Meta-Learning+Approach"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Adversarially Robust Few-Shot Learning: A Meta-Learning Approach**](https://proceedings.neurips.cc/paper/2020/hash/cfee398643cbc3dc5eefc89334cacdc1-Abstract.html) , <br> by *Micah Goldblum and
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=Uncertainty-aware+Self-training+for+Few-shot+Text+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Uncertainty-aware Self-training for Few-shot Text Classification**](https://proceedings.neurips.cc/paper/2020/hash/f23d125da1e29e34c552f448610ff25f-Abstract.html) , <br> by *Subhabrata Mukherjee and
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=A+Closer+Look+at+the+Training+Strategy+for+Modern+Meta-Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**A Closer Look at the Training Strategy for Modern Meta-Learning**](https://proceedings.neurips.cc/paper/2020/hash/0415740eaa4d9decbc8da001d3fd805f-Abstract.html) , <br> by *Jiaxin Chen and
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=The+Advantage+of+Conditional+Meta-Learning+for+Biased+Regularization+and+Fine+Tuning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**The Advantage of Conditional Meta-Learning for Biased Regularization
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=Structured+Prediction+for+Conditional+Meta-Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Structured Prediction for Conditional Meta-Learning**](https://proceedings.neurips.cc/paper/2020/hash/1b69ebedb522700034547abc5652ffac-Abstract.html) , <br> by *Ruohan Wang and
  • ![ - Abstract.html)<a href="https://scholar.google.com.hk/scholar?q=Balanced+Meta-Softmax+for+Long-Tailed+Visual+Recognition"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Balanced Meta-Softmax for Long-Tailed Visual Recognition**](https://proceedings.neurips.cc/paper/2020/hash/2ba61cc3a8f44143e1f2f13b2b729ab3-Abstract.html) , <br> by *Jiawei Ren and
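The core adjustment behind Balanced (Meta-)Softmax is compact enough to show inline: shift each logit by the log of its training-set class count before the usual cross-entropy, which corrects the softmax for a long-tailed label distribution. A minimal PyTorch sketch of that adjustment; the meta-learned sample re-weighting from the paper is omitted, and the function name and toy counts below are invented for illustration:

```python
import torch
import torch.nn.functional as F

def balanced_softmax_loss(logits, labels, class_counts):
    # Balanced softmax: p_j is proportional to n_j * exp(z_j), i.e. add
    # log(n_j) to logit j before cross-entropy to compensate for imbalance.
    adjusted = logits + torch.log(class_counts.float().clamp(min=1))
    return F.cross_entropy(adjusted, labels)

# Toy long-tailed setup: 3 classes with 1000 / 100 / 10 training examples.
logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
loss = balanced_softmax_loss(logits, labels, torch.tensor([1000, 100, 10]))
```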
  • <a href="https://scholar.google.com.hk/scholar?q=Meta-Learning+Requires+Meta-Augmentation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-Learning Requires Meta-Augmentation**](https://proceedings.neurips.cc/paper/2020/hash/3e5190eeb51ebe6c5bbc54ee8950c548-Abstract.html), <br> by *Janarthanan Rajendran et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Meta-learning+from+Tasks+with+Heterogeneous+Attribute+Spaces"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-learning from Tasks with Heterogeneous Attribute Spaces**](https://proceedings.neurips.cc/paper/2020/hash/438124b4c06f3a5caffab2c07863b617-Abstract.html), <br> by *Tomoharu Iwata et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Online+Structured+Meta-learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Online Structured Meta-learning**](https://proceedings.neurips.cc/paper/2020/hash/4b86ca48d90bd5f0978afa3a012503a4-Abstract.html), <br> by *Huaxiu Yao et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Modeling+and+Optimization+Trade-off+in+Meta-learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Modeling and Optimization Trade-off in Meta-learning**](https://proceedings.neurips.cc/paper/2020/hash/7fc63ff01769c4fa7d9279e97e307829-Abstract.html), <br> by *Katelyn Gao et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Convergence+of+Meta-Learning+with+Task-Specific+Adaptation+over+Partial+Parameters"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> **Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters**
  • <a href="https://scholar.google.com.hk/scholar?q=MATE:+Plugging+in+Model+Awareness+to+Task+Embedding+for+Meta+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**MATE: Plugging in Model Awareness to Task Embedding for Meta Learning**](https://proceedings.neurips.cc/paper/2020/hash/8989e07fc124e7a9bcbdebcc8ace2bc0-Abstract.html), <br> by *Xiaohan Chen et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Continuous+Meta-Learning+without+Tasks"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Continuous Meta-Learning without Tasks**](https://proceedings.neurips.cc/paper/2020/hash/cc3f5463bc4d26bc38eadc8bcffbc654-Abstract.html), <br> by *James Harrison et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Task-Robust+Model-Agnostic+Meta-Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Task-Robust Model-Agnostic Meta-Learning**](https://proceedings.neurips.cc/paper/2020/hash/da8ce53cf0240070ce6c69c48cd588ee-Abstract.html), <br> by *Liam Collins et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Meta-Learning+with+Adaptive+Hyperparameters"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-Learning with Adaptive Hyperparameters**](https://proceedings.neurips.cc/paper/2020/hash/ee89223a2b625b5152132ed77abbcc79-Abstract.html), <br> by *Sungyong Baik et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Probabilistic+Active+Meta-Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Probabilistic Active Meta-Learning**](https://proceedings.neurips.cc/paper/2020/hash/ef0d17b3bdb4ee2aa741ba28c7255c53-Abstract.html), <br> by *Jean Kaddour et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Learning+to+Learn+Variational+Semantic+Memory"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Learning to Learn Variational Semantic Memory**](https://proceedings.neurips.cc/paper/2020/hash/67d16d00201083a2b118dd5128dd6f59-Abstract.html), <br> by *Xiantong Zhen et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Language+Models+are+Few-Shot+Learners"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Language Models are Few-Shot Learners**](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html), <br> by *Tom B. Brown et al.* (an in-context prompting sketch follows below)
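GPT-3-style few-shot learning updates no weights: the labelled examples simply become part of the prompt and the frozen model continues the pattern. A minimal, model-agnostic sketch of that prompt construction; the instruction string and demonstrations are invented, and the resulting string could be fed to any autoregressive LM:

```python
def build_few_shot_prompt(demos, query, instruction="Classify the sentiment."):
    """Concatenate k labelled demonstrations and one unlabelled query into
    a single prompt; the LM is expected to complete the final label."""
    blocks = [instruction]
    for text, label in demos:
        blocks.append(f"Input: {text}\nLabel: {label}")
    blocks.append(f"Input: {query}\nLabel:")
    return "\n\n".join(blocks)

demos = [("great movie, loved it", "positive"),
         ("boring and far too long", "negative")]
print(build_few_shot_prompt(demos, "what a waste of time"))
```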
  • <a href="https://scholar.google.com.hk/scholar?q=Cross+Attention+Network+for+Few-shot+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Cross Attention Network for Few-shot Classification**](https://proceedings.neurips.cc/paper/2019/hash/01894d6f048493d2cacde3c579c315a3-Abstract.html), <br> by *Ruibing Hou et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Adaptive+Cross-Modal+Few-shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Adaptive Cross-Modal Few-shot Learning**](https://proceedings.neurips.cc/paper/2019/hash/d790c9e6c0b5e02c87b375e782ac01bc-Abstract.html), <br> by *Chen Xing et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Unsupervised+Meta-Learning+for+Few-Shot+Image+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Unsupervised Meta-Learning for Few-Shot Image Classification**](https://proceedings.neurips.cc/paper/2019/hash/fd0a5a5e367a0955d81278062ef37429-Abstract.html), <br> by *Siavash Khodadadeh et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Learning+to+Self-Train+for+Semi-Supervised+Few-Shot+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Learning to Self-Train for Semi-Supervised Few-Shot Classification**](https://proceedings.neurips.cc/paper/2019/hash/bf25356fd2a6e038f1a3a59c26687e80-Abstract.html), <br> by *Xinzhe Li et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Multimodal+Model-Agnostic+Meta-Learning+via+Task-Aware+Modulation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation**](https://proceedings.neurips.cc/paper/2019/hash/e4da3b7fbbce2345d7772b0674a318d5-Abstract.html), <br> by *Risto Vuorio et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Self-Supervised+Generalisation+with+Meta+Auxiliary+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Self-Supervised Generalisation with Meta Auxiliary Learning**](https://proceedings.neurips.cc/paper/2019/hash/92262bf907af914b95a0fc33c3f33bf6-Abstract.html), <br> by *Shikun Liu et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Adaptive+Gradient-Based+Meta-Learning+Methods"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Adaptive Gradient-Based Meta-Learning Methods**](https://proceedings.neurips.cc/paper/2019/hash/f4aa0dd960521e045ae2f20621fb4ee9-Abstract.html), <br> by *Mikhail Khodak et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Meta+Learning+with+Relational+Information+for+Short+Sequences"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta Learning with Relational Information for Short Sequences**](https://proceedings.neurips.cc/paper/2019/hash/6fe43269967adbb64ec6149852b5cc3e-Abstract.html), <br> by *Yujia Xie et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=TADAM:+Task+dependent+adaptive+metric+for+improved+few-shot+learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**TADAM: Task dependent adaptive metric for improved few-shot learning**](https://proceedings.neurips.cc/paper/2018/hash/66808e327dc79d135ba18e051673d906-Abstract.html), <br> by *Boris N. Oreshkin et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Learning+To+Learn+Around+A+Common+Mean"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Learning To Learn Around A Common Mean**](https://proceedings.neurips.cc/paper/2018/hash/b9a25e422ba96f7572089a00b838c3f8-Abstract.html), <br> by *Giulia Denevi et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Low-shot+Learning+via+Covariance-Preserving+Adversarial+Augmentation+Networks"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> **Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks**
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Learning+Through+an+Information+Retrieval+Lens"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Learning Through an Information Retrieval Lens**](https://proceedings.neurips.cc/paper/2017/hash/01e9565cecc4e989123f9620c1d09c09-Abstract.html), <br> by *Eleni Triantafillou et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Prototypical+Networks+for+Few-shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Prototypical Networks for Few-shot Learning**](https://proceedings.neurips.cc/paper/2017/hash/cb8da6767461f2812ae4290eac7cbc42-Abstract.html), <br> by *Jake Snell et al.* (a prototype-classification sketch follows below)
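Prototypical Networks reduce a few-shot episode to nearest-centroid classification in embedding space: average each class's support embeddings into a prototype, then score queries by negative squared Euclidean distance to the prototypes. A minimal PyTorch sketch, assuming embeddings come from some upstream encoder; the function name and the toy 5-way 1-shot shapes are illustrative:

```python
import torch

def prototypical_logits(support, support_labels, query, n_classes):
    # Class prototype = mean of that class's embedded support examples;
    # query logits = negative squared distance to each prototype.
    protos = torch.stack([support[support_labels == c].mean(dim=0)
                          for c in range(n_classes)])   # (C, D)
    return -torch.cdist(query, protos) ** 2             # (Q, C)

support = torch.randn(5, 64)          # stand-in embeddings, 5-way 1-shot
support_labels = torch.arange(5)
query = torch.randn(10, 64)
preds = prototypical_logits(support, support_labels, query, 5).argmax(dim=1)
```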
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Adversarial+Domain+Adaptation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Adversarial Domain Adaptation**](https://proceedings.neurips.cc/paper/2017/hash/21c5bba1dd6aed9ab48c2b34c1a0adde-Abstract.html), <br> by *Saeid Motiian et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Matching+Networks+for+One+Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Matching Networks for One Shot Learning**](https://proceedings.neurips.cc/paper/2016/hash/90e1357833654983612fb05e3ec9148c-Abstract.html), <br> by *Oriol Vinyals et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=FL-MSRE:+A+Few-Shot+Learning+based+Approach+to+Multimodal+Social+Relation+Extraction"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> **FL-MSRE: A Few-Shot Learning based Approach to Multimodal Social Relation Extraction**
  • <a href="https://scholar.google.com.hk/scholar?q=Neural+Snowball+for+Few-Shot+Relation+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Neural Snowball for Few-Shot Relation Learning**](https://aaai.org/ojs/index.php/AAAI/article/view/6281), <br> by *Tianyu Gao et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Hybrid+Attention-Based+Prototypical+Networks+for+Noisy+Few-Shot+Relation+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> **Hybrid Attention-Based Prototypical Networks for Noisy Few-Shot Relation Classification**
  • <a href="https://scholar.google.com.hk/scholar?q=Cross-Domain+Few-Shot+Classification+via+Adversarial+Task+Augmentation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Cross-Domain Few-Shot Classification via Adversarial Task Augmentation**](https://doi.org/10.24963/ijcai.2021/149), <br> by *Wang, Haoqing and Deng, Zhi-Hong* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1191-L1198) `ijcai2021-149`
  • <a href="https://scholar.google.com.hk/scholar?q=Self-supervised+Network+Evolution+for+Few-shot+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Self-supervised Network Evolution for Few-shot Classification**](https://doi.org/10.24963/ijcai.2021/419), <br> by *Tang, Xuwen, Teng, Zhu, Zhang, Baopeng and Fan, Jianping* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1200-L1207) `ijcai2021-419`
  • <a href="https://scholar.google.com.hk/scholar?q=Conditional+Self-Supervised+Learning+for+Few-Shot+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Conditional Self-Supervised Learning for Few-Shot Classification**](https://doi.org/10.24963/ijcai.2021/295), <br> by *An, Yuexuan, Xue, Hui, Zhao, Xingyu and Zhang, Lu* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1209-L1216) `ijcai2021-295`
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Partial-Label+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Partial-Label Learning**](https://doi.org/10.24963/ijcai.2021/475), <br> by *Zhao, Yunfeng, Yu, Guoxian, Liu, Lei, Yan, Zhongmin, Cui, Lizhen and Domeniconi, Carlotta* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1218-L1225) `ijcai2021-475`
  • <a href="https://scholar.google.com.hk/scholar?q=Uncertainty-Aware+Few-Shot+Image+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Uncertainty-Aware Few-Shot Image Classification**](https://doi.org/10.24963/ijcai.2021/471), <br> by *Zhang, Zhizheng, Lan, Cuiling, Zeng, Wenjun, Chen, Zhibo and Chang, Shih-Fu* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1227-L1234) `ijcai2021-471`
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Learning+with+Part+Discovery+and+Augmentation+from+Unlabeled+Images"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images**](https://doi.org/10.24963/ijcai.2021/313), <br> by *Chen, Wentao, Si, Chenyang, Wang, Wei, Wang, Liang, Wang, Zilei and Tan, Tieniu* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1238-L1245) `ijcai2021-313`
  • <a href="https://scholar.google.com.hk/scholar?q=Graph+Learning+Regularization+and+Transfer+Learning+for+Few-Shot+Event+Detection"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Graph Learning Regularization and Transfer Learning for Few-Shot Event Detection**](https://doi.org/10.1145/3404835.3463054), <br> by *Lai, Viet Dac, Nguyen, Minh Van, Nguyen, Thien Huu and Dernoncourt, Franck* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1422-L1429) `3404835.3463054`
  • <a href="https://scholar.google.com.hk/scholar?q=Pseudo+Siamese+Network+for+Few-Shot+Intent+Generation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Pseudo Siamese Network for Few-Shot Intent Generation**](https://doi.org/10.1145/3404835.3462995), <br> by *Xia, Congying, Xiong, Caiming and Yu, Philip* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1431-L1438) `3404835.3462995`
  • <a href="https://scholar.google.com.hk/scholar?q=Knowledge-Enhanced+Domain+Adaptation+in+Few-Shot+Relation+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Knowledge-Enhanced Domain Adaptation in Few-Shot Relation Classification**](https://doi.org/10.1145/3447548.3467438), <br> by *Zhang, Jiawen, Zhu, Jiaqi, Yang, Yi, Shi, Wandong, Zhang, Congcong and Wang, Hongan* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1395-L1402) `3447548.3467438`
  • <a href="https://scholar.google.com.hk/scholar?q=Meta+Self-Training+for+Few-Shot+Neural+Sequence+Labeling"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta Self-Training for Few-Shot Neural Sequence Labeling**](https://doi.org/10.1145/3447548.3467235), <br> by *Wang, Yaqing, Mukherjee, Subhabrata, Chu, Haoda, Tu, Yuancheng, Wu, Ming, Gao, Jing and Awadallah, Ahmed Hassan* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1404-L1411) `3447548.3467235`
  • <a href="https://scholar.google.com.hk/scholar?q=Prototypical+Cross-Domain+Self-Supervised+Learning+for+Few-Shot+Unsupervised+Domain+Adaptation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Prototypical Cross-Domain Self-Supervised Learning for Few-Shot Unsupervised Domain Adaptation**](https://openaccess.thecvf.com/content/CVPR2021/html/Yue_Prototypical_Cross-Domain_Self-Supervised_Learning_for_Few-Shot_Unsupervised_Domain_Adaptation_CVPR_2021_paper.html), <br> by *Yue, Xiangyu, Zheng, Zangwei, Zhang, Shanghang, Gao, Yang, Darrell, Trevor, Keutzer, Kurt and Vincentelli, Alberto Sangiovanni* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1745-L1752) `Yue_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Accurate+Few-Shot+Object+Detection+With+Support-Query+Mutual+Guidance+and+Hybrid+Loss"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Accurate Few-Shot Object Detection With Support-Query Mutual Guidance and Hybrid Loss**](https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Accurate_Few-Shot_Object_Detection_With_Support-Query_Mutual_Guidance_and_Hybrid_CVPR_2021_paper.html), <br> by *Zhang, Lu, Zhou, Shuigeng, Guan, Jihong and Zhang, Ji* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1754-L1762) `Zhang_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Generalized+Few-Shot+Object+Detection+Without+Forgetting"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Generalized Few-Shot Object Detection Without Forgetting**](https://openaccess.thecvf.com/content/CVPR2021/html/Fan_Generalized_Few-Shot_Object_Detection_Without_Forgetting_CVPR_2021_paper.html), <br> by *Fan, Zhibo, Ma, Yuchen, Li, Zeming and Sun, Jian* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1764-L1771) `Fan_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Hallucination+Improves+Few-Shot+Object+Detection"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Hallucination Improves Few-Shot Object Detection**](https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Hallucination_Improves_Few-Shot_Object_Detection_CVPR_2021_paper.html), <br> by *Zhang, Weilin and Wang, Yu-Xiong* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1754-L1762) `Zhang_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Incremental+Learning+With+Continually+Evolved+Classifiers"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Incremental Learning With Continually Evolved Classifiers**](https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Few-Shot_Incremental_Learning_With_Continually_Evolved_Classifiers_CVPR_2021_paper.html), <br> by *Zhang, Chi, Song, Nan, Lin, Guosheng, Zheng, Yun, Pan, Pan and Xu, Yinghui* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1754-L1762) `Zhang_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Rethinking+Class+Relations:+Absolute-Relative+Supervised+and+Unsupervised+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Rethinking Class Relations: Absolute-Relative Supervised and Unsupervised Few-Shot Learning**](https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Rethinking_Class_Relations_Absolute-Relative_Supervised_and_Unsupervised_Few-Shot_Learning_CVPR_2021_paper.html), <br> by *Zhang, Hongguang, Koniusz, Piotr, Jian, Songlei, Li, Hongdong and Torr, Philip H. S.* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1754-L1762) `Zhang_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Prototype+Completion+With+Primitive+Knowledge+for+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Prototype Completion With Primitive Knowledge for Few-Shot Learning**](https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_Prototype_Completion_With_Primitive_Knowledge_for_Few-Shot_Learning_CVPR_2021_paper.html), <br> by *Zhang, Baoquan, Li, Xutao, Ye, Yunming, Huang, Zhichao and Zhang, Lisai* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1754-L1762) `Zhang_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Incremental+Few-Shot+Instance+Segmentation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Incremental Few-Shot Instance Segmentation**](https://openaccess.thecvf.com/content/CVPR2021/html/Ganea_Incremental_Few-Shot_Instance_Segmentation_CVPR_2021_paper.html), <br> by *Ganea, Dan Andrei, Boom, Bas and Poppe, Ronald* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1812-L1820) `Ganea_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Segmentation+Without+Meta-Learning:+A+Good+Transductive+Inference+Is+All+You+Need?"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Segmentation Without Meta-Learning: A Good Transductive Inference Is All You Need?**](https://openaccess.thecvf.com/content/CVPR2021/html/Boudiaf_Few-Shot_Segmentation_Without_Meta-Learning_A_Good_Transductive_Inference_Is_All_CVPR_2021_paper.html), <br> by *Boudiaf, Malik, Kervadec, Hoel, Masud, Ziko Imtiaz, Piantanida, Pablo, Ben Ayed, Ismail and Dolz, Jose* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1822-L1830) `Boudiaf_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Semantic+Relation+Reasoning+for+Shot-Stable+Few-Shot+Object+Detection"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Semantic Relation Reasoning for Shot-Stable Few-Shot Object Detection**](https://openaccess.thecvf.com/content/CVPR2021/html/Zhu_Semantic_Relation_Reasoning_for_Shot-Stable_Few-Shot_Object_Detection_CVPR_2021_paper.html), <br> by *Zhu, Chenchen, Chen, Fangyi, Ahmed, Uzair, Shen, Zhiqiang and Savvides, Marios* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1832-L1840) `Zhu_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Self-Promoted+Prototype+Refinement+for+Few-Shot+Class-Incremental+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning**](https://openaccess.thecvf.com/content/CVPR2021/html/Zhu_Self-Promoted_Prototype_Refinement_for_Few-Shot_Class-Incremental_Learning_CVPR_2021_paper.html), <br> by *Zhu, Kai, Cao, Yang, Zhai, Wei, Cheng, Jie and Zha, Zheng-Jun* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1832-L1840) `Zhu_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Classification+With+Feature+Map+Reconstruction+Networks"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Classification With Feature Map Reconstruction Networks**](https://openaccess.thecvf.com/content/CVPR2021/html/Wertheimer_Few-Shot_Classification_With_Feature_Map_Reconstruction_Networks_CVPR_2021_paper.html), <br> by *Wertheimer, Davis, Tang, Luming and Hariharan, Bharath* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1852-L1860) `Wertheimer_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=FAPIS:+A+Few-Shot+Anchor-Free+Part-Based+Instance+Segmenter"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**FAPIS: A Few-Shot Anchor-Free Part-Based Instance Segmenter**](https://openaccess.thecvf.com/content/CVPR2021/html/Nguyen_FAPIS_A_Few-Shot_Anchor-Free_Part-Based_Instance_Segmenter_CVPR_2021_paper.html), <br> by *Nguyen, Khoi and Todorovic, Sinisa* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1862-L1870) `Nguyen_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Reinforced+Attention+for+Few-Shot+Learning+and+Beyond"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Reinforced Attention for Few-Shot Learning and Beyond**](https://openaccess.thecvf.com/content/CVPR2021/html/Hong_Reinforced_Attention_for_Few-Shot_Learning_and_Beyond_CVPR_2021_paper.html), <br> by *Hong, Jie, Fang, Pengfei, Li, Weihao, Zhang, Tong, Simon, Christian, Harandi, Mehrtash and Petersson, Lars* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1872-L1880) `Hong_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Dense+Relation+Distillation+With+Context-Aware+Aggregation+for+Few-Shot+Object+Detection"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Dense Relation Distillation With Context-Aware Aggregation for Few-Shot Object Detection**](https://openaccess.thecvf.com/content/CVPR2021/html/Hu_Dense_Relation_Distillation_With_Context-Aware_Aggregation_for_Few-Shot_Object_Detection_CVPR_2021_paper.html), <br> by *Hu, Hanzhe, Bai, Shuai, Li, Aoxue, Cui, Jinshi and Wang, Liwei* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1882-L1890) `Hu_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Open-Set+Recognition+by+Transformation+Consistency"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Open-Set Recognition by Transformation Consistency**](https://openaccess.thecvf.com/content/CVPR2021/html/Jeong_Few-Shot_Open-Set_Recognition_by_Transformation_Consistency_CVPR_2021_paper.html), <br> by *Jeong, Minki, Choi, Seokeon and Kim, Changick* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1892-L1900) `Jeong_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Learning+Dynamic+Alignment+via+Meta-Filter+for+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Learning Dynamic Alignment via Meta-Filter for Few-Shot Learning**](https://openaccess.thecvf.com/content/CVPR2021/html/Xu_Learning_Dynamic_Alignment_via_Meta-Filter_for_Few-Shot_Learning_CVPR_2021_paper.html), <br> by *Xu, Chengming, Fu, Yanwei, Liu, Chen, Wang, Chengjie, Li, Jilin, Huang, Feiyue, Zhang, Li and Xue, Xiangyang* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1902-L1910) `Xu_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Exploring+Complementary+Strengths+of+Invariant+and+Equivariant+Representations+for+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning**](https://openaccess.thecvf.com/content/CVPR2021/html/Rizve_Exploring_Complementary_Strengths_of_Invariant_and_Equivariant_Representations_for_Few-Shot_CVPR_2021_paper.html), <br> by *Rizve, Mamshad Nayeem, Khan, Salman, Khan, Fahad Shahbaz and Shah, Mubarak* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1912-L1920) `Rizve_2021_CVPR`
  • <a href="https://scholar.google.com.hk/scholar?q=Boosting+Few-Shot+Learning+With+Adaptive+Margin+Loss"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Boosting Few-Shot Learning With Adaptive Margin Loss**](https://doi.org/10.1109/CVPR42600.2020.01259), <br> by *Aoxue Li et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Image+Classification:+Just+Use+a+Library+of+Pre-Trained+Feature+Extractors+and+a+Simple+Classifier"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Image Classification: Just Use a Library of Pre-Trained Feature Extractors and a Simple Classifier**](https://openaccess.thecvf.com/content/ICCV2021/html/Chowdhury_Few-Shot_Image_Classification_Just_Use_a_Library_of_Pre-Trained_Feature_ICCV_2021_paper.html), <br> by *Chowdhury, Arkabandhu, Jiang, Mingchao, Chaudhuri, Swarat and Jermaine, Chris* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1120-L1127) `Chowdhury_2021_ICCV`
  • <a href="https://scholar.google.com.hk/scholar?q=Iterative+Label+Cleaning+for+Transductive+and+Semi-Supervised+Few-Shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Iterative Label Cleaning for Transductive and Semi-Supervised Few-Shot Learning**](https://openaccess.thecvf.com/content/ICCV2021/html/Lazarou_Iterative_Label_Cleaning_for_Transductive_and_Semi-Supervised_Few-Shot_Learning_ICCV_2021_paper.html), <br> by *Lazarou, Michalis, Stathaki, Tania and Avrithis, Yannis* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1129-L1136) `Lazarou_2021_ICCV`
  • <a href="https://scholar.google.com.hk/scholar?q=On+the+Importance+of+Distractors+for+Few-Shot+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**On the Importance of Distractors for Few-Shot Classification**](https://openaccess.thecvf.com/content/ICCV2021/html/Das_On_the_Importance_of_Distractors_for_Few-Shot_Classification_ICCV_2021_paper.html), <br> by *Das, Rajshekhar, Wang, Yu-Xiong and Moura, José M. F.* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L1138-L1145) `Das_2021_ICCV`
  • <a href="https://scholar.google.com.hk/scholar?q=Revisiting+Few-shot+Relation+Classification:+Evaluation+Data+and+Classification+Schemes"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Revisiting Few-shot Relation Classification: Evaluation Data and Classification Schemes**](https://aclanthology.org/2021.tacl-1.42), <br> by *Sabo, Ofer et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=How+Can+We+Know+What+Language+Models+Know"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**How Can We Know What Language Models Know?**](https://transacl.org/ojs/index.php/tacl/article/view/1983), <br> by *Zhengbao Jiang et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Generalizing+from+a+Few+Examples:+A+Survey+on+Few-shot+Learning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Generalizing from a Few Examples: A Survey on Few-shot Learning**](https://doi.org/10.1145/3386252), <br> by *Yaqing Wang et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=AdaPrompt:+Adaptive+Prompt-based+Finetuning+for+Relation+Extraction"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**AdaPrompt: Adaptive Prompt-based Finetuning for Relation Extraction**](https://arxiv.org/abs/2104.07650), <br> by *Xiang Chen et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=GPT+Understands,+Too"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**GPT Understands, Too**](https://arxiv.org/abs/2103.10385), <br> by *Xiao Liu et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Prefix-Tuning:+Optimizing+Continuous+Prompts+for+Generation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Prefix-Tuning: Optimizing Continuous Prompts for Generation**](https://arxiv.org/abs/2101.00190), <br> by *Xiang Lisa Li et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Natural+Instructions:+Benchmarking+Generalization+to+New+Tasks"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> **Natural Instructions: Benchmarking Generalization to New Tasks from …**
  • <a href="https://scholar.google.com.hk/scholar?q=PTR:+Prompt+Tuning+with+Rules+for+Text+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**PTR: Prompt Tuning with Rules for Text Classification**](https://arxiv.org/abs/2105.11259), <br> by *Xu Han et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=The+Power+of+Scale+for+Parameter-Efficient+Prompt+Tuning"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**The Power of Scale for Parameter-Efficient Prompt Tuning**](https://arxiv.org/abs/2104.08691), <br> by *Brian Lester et al.* (a soft-prompt sketch follows below)
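Soft prompt tuning keeps the language model frozen and trains only a small matrix of "virtual token" embeddings prepended to the input. A minimal PyTorch sketch of that mechanism; the class name, 20-token prompt length, and 768-dim embeddings are illustrative choices, not values fixed by the paper:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    # Learnable virtual-token embeddings; the backbone LM stays frozen
    # and only self.prompt receives gradient updates.
    def __init__(self, n_tokens=20, d_model=768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds):                       # (B, T, D)
        prompt = self.prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)    # (B, P+T, D)

soft_prompt = SoftPrompt()
extended = soft_prompt(torch.randn(2, 16, 768))  # feed to the frozen LM
```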
  • <a href="https://scholar.google.com.hk/scholar?q=Zero-Shot+Controlled+Generation+with+Encoder-Decoder+Transformers"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Zero-Shot Controlled Generation with Encoder-Decoder Transformers**](https://arxiv.org/abs/2106.06411), <br> by *Devamanyu Hazarika, Mahdi Namazifar and Dilek Hakkani-Tür* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L3560-L3567) `hazarika2021zeroshot`
  • <a href="https://scholar.google.com.hk/scholar?q=GPT3Mix:+Leveraging+Large-scale+Language+Models+for+Text+Augmentation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation**](https://arxiv.org/abs/2104.08826), <br> by *Kang Min Yoo et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Generating+Datasets+with+Pretrained+Language+Models"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Generating Datasets with Pretrained Language Models**](https://arxiv.org/abs/2104.07540), <br> by *Timo Schick et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Neural+Data+Augmentation+via+Example+Extrapolation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Neural Data Augmentation via Example Extrapolation**](https://arxiv.org/abs/2102.01335), <br> by *Kenton Lee et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Entailment+as+Few-Shot+Learner"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Entailment as Few-Shot Learner**](https://arxiv.org/abs/2104.14690), <br> by *Sinong Wang et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Fantastically+Ordered+Prompts+and+Where+to+Find+Them:+Overcoming+Few-Shot+Prompt+Order+Sensitivity"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> **Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity**
  • <a href="https://scholar.google.com.hk/scholar?q=An+Empirical+Survey+of+Data+Augmentation+for+Limited+Data+Learning+in+NLP"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**An Empirical Survey of Data Augmentation for Limited Data Learning in NLP**](https://arxiv.org/abs/2106.07499), <br> by *Jiaao Chen, Derek Tam, Colin Raffel, Mohit Bansal and Diyi Yang* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L3633-L3640) `chen2021empirical`
  • <a href="https://scholar.google.com.hk/scholar?q=Meta-tuning+Language+Models+to+Answer+Prompts+Better"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-tuning Language Models to Answer Prompts Better**](https://arxiv.org/abs/2104.04670), <br> by *Ruiqi Zhong et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Meta-Learning+with+Fewer+Tasks+through+Task+Interpolation"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Meta-Learning with Fewer Tasks through Task Interpolation**](https://arxiv.org/abs/2106.02695), <br> by *Huaxiu Yao, Linjun Zhang and Chelsea Finn* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L3654-L3661) (NeurIPS, under review)
  • <a href="https://scholar.google.com.hk/scholar?q=Learning+to+Bridge+Metric+Spaces:+Few-shot+Joint+Learning+of+Intent+Detection+and+Slot+Filling"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling**](https://arxiv.org/abs/2106.07343), <br> by *Yutai Hou, Yongkui Lai, Cheng Chen, Wanxiang Che and Ting Liu* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L3665-L3672) (ACL Findings 2021 preprint)
  • <a href="https://scholar.google.com.hk/scholar?q=Pre-train,+Prompt,+and+Predict:+A+Systematic+Survey+of+Prompting+Methods+in+Natural+Language+Processing"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing**](https://arxiv.org/abs/2107.13586), <br> by *Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi and Graham Neubig* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L3707-L3715) (survey of prompt-based learning)
  • <a href="https://scholar.google.com.hk/scholar?q=Knowledgeable+Prompt-tuning:+Incorporating+Knowledge+into+Prompt+Verbalizer+for+Text+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> **Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification**
  • <a href="https://scholar.google.com.hk/scholar?q=Noisy+Channel+Language+Model+Prompting+for+Few-Shot+Text+Classification"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Noisy Channel Language Model Prompting for Few-Shot Text Classification**](https://arxiv.org/abs/2108.04106), <br> by *Sewon Min et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Do+Prompt-Based+Models+Really+Understand+the+Meaning+of+their+Prompts?"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Do Prompt-Based Models Really Understand the Meaning of their Prompts?**](https://arxiv.org/abs/2109.01247), <br> by *Albert Webson and Ellie Pavlick* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L3769-L3776) `webson2021promptbased`
  • <a href="https://scholar.google.com.hk/scholar?q=Prompt-Learning+for+Fine-Grained+Entity+Typing"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Prompt-Learning for Fine-Grained Entity Typing**](https://arxiv.org/abs/2108.10604), <br> by *Ning Ding et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Want+To+Reduce+Labeling+Cost?+GPT-3+Can+Help"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Want To Reduce Labeling Cost? GPT-3 Can Help**](https://arxiv.org/abs/2108.13487), <br> by *Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu and Michael Zeng* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L3798-L3808) (EMNLP Findings 2021; adopts GPT-3 for label generation)
  • <a href="https://scholar.google.com.hk/scholar?q=Discrete+and+Soft+Prompting+for+Multilingual+Models"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Discrete and Soft Prompting for Multilingual Models**](https://arxiv.org/abs/2109.03630), <br> by *Mengjie Zhao and Hinrich Schütze* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L3812-L3822) (EMNLP 2021)
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Text+Generation+with+Pattern-Exploiting+Training"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Few-Shot Text Generation with Pattern-Exploiting Training**](https://arxiv.org/abs/2012.11926), <br> by *Timo Schick et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Few-Shot+Event+Detection+with+Prototypical+Amortized+Conditional+Random+Field"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> **Few-Shot Event Detection with Prototypical Amortized Conditional Random Field**
  • <a href="https://scholar.google.com.hk/scholar?q=A+Closer+Look+at+Few-Shot+Crosslingual+Transfer:+Variance,+Benchmarks+and+Baselines"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> **A Closer Look at Few-Shot Crosslingual Transfer: Variance, Benchmarks and Baselines**
  • <a href="https://scholar.google.com.hk/scholar?q=Learning+from+Very+Few+Samples:+A+Survey"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Learning from Very Few Samples: A Survey**](https://arxiv.org/abs/2009.02653), <br> by *Jiang Lu et al.*
  • <a href="https://scholar.google.com.hk/scholar?q=Cutting+Down+on+Prompts+and+Parameters:+Simple+Few-Shot+Learning+with+Language+Models"><img src="https://img.shields.io/badge/-green.svg?&logo=google-scholar&logoColor=white" height="18" align="bottom"></a> [**Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models**](https://arxiv.org/abs/2106.13353), <br> by *Robert L. Logan IV, Ivana Balažević, Eric Wallace, Fabio Petroni, Sameer Singh and Sebastian Riedel* [[bib]](https://github.com/wutong8023/Awesome_Few_Shot_Learning/blob/master/./bibtex.bib#L3732-L3739) `logan2021cutting`