# Awesome Few-shot Learning
A list of awesome few-shot learning resources, inspired by [AWESOME](https://github.com/sindresorhus/awesome).
Entity Format in Markdown:
```
[n] **Paper Name.**
Author 1, Author 2, ..., Author n.
In conference/journal, year.
[[paper](url)]
[[code](url)]
```
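
Each entry follows the format above, so the list can also be consumed programmatically. Below is a minimal, illustrative Python sketch (not part of the original list) that extracts each entry's fields from a local copy of this README; the file name `README.md`, the regex, and the `parse_entries` helper are assumptions for illustration only.

```python
# Illustrative sketch: parse entries that follow the format above from a local
# copy of this README. File name and regex are assumptions, not part of the list.
import re
from pathlib import Path

ENTRY_RE = re.compile(
    r"\[(?P<index>\d+)\]\s*\*\*(?P<title>.+?)\*\*\s*"   # [n] **Paper Name.**
    r"(?P<authors>.+?)\s*"                              # Author 1, ..., Author n.
    r"In\s+(?P<venue>[^,]+),\s*(?P<year>\d{4})\.\s*"    # In conference/journal, year.
    r"\[\[paper\]\((?P<paper>[^)]*)\)\]\s*"             # [[paper](url)]
    r"\[\[code\]\((?P<code>[^)]*)\)\]",                 # [[code](url)]
    re.DOTALL,
)

def parse_entries(markdown_text: str):
    """Yield one dict per entry with index, title, authors, venue, year, and URLs."""
    for match in ENTRY_RE.finditer(markdown_text):
        yield match.groupdict()

if __name__ == "__main__":
    text = Path("README.md").read_text(encoding="utf-8")
    for entry in parse_entries(text):
        print(f"{entry['year']} {entry['venue']}: {entry['title']}")
```

Filtering the yielded dicts on `venue` or `year` then gives a quick per-conference or per-year view of the list.
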
## YEAR 2024

[1] **Adversarially Robust Few-shot Learning via Parameter Co-distillation of Similarity and Class Concept Learners.**
Junhao Dong, Piotr Koniusz, Junxi Chen, Xiaohua Xie, Yew-Soon Ong.
In CVPR, 2024.
[[paper](https://openaccess.thecvf.com/content/CVPR2024/html/Dong_Adversarially_Robust_Few-shot_Learning_via_Parameter_Co-distillation_of_Similarity_and_CVPR_2024_paper.html)]
[[code]()][2] **Simple Semantic-Aided Few-Shot Learning.**
Hai Zhang, Junzhe Xu, Shanlin Jiang, Zhenan He.
In CVPR, 2024.
[[paper](https://arxiv.org/abs/2311.18649)]
[[code]()][3] **Frozen Feature Augmentation for Few-Shot Image Classification.**
Andreas Bär, Neil Houlsby, Mostafa Dehghani, Manoj Kumar.
In CVPR, 2024.
[[paper](https://arxiv.org/abs/2403.10519)]
[[code]()][4] **Flatten Long-Range Loss Landscapes for Cross-Domain Few-Shot Learning.**
Yixiong Zou, Yicong Liu, Yiman Hu, Yuhua Li, Ruixuan Li.
In CVPR, 2024.
[[paper](https://arxiv.org/abs/2403.00567)]
[[code]()][5] **AMU-Tuning: Effective Logit Bias for CLIP-based Few-shot Learning.**
Yuwei Tang, Zhenyi Lin, Qilong Wang, Pengfei Zhu, Qinghua Hu.
In CVPR, 2024.
[[paper](https://arxiv.org/abs/2404.08958)]
[[code]()][6] **Towards Generalizing to Unseen Domains with Few Labels.**
Chamuditha Jayanga Galappaththige, Sanoojan Baliah, Malitha Gunawardhana, Muhammad Haris Khan.
In CVPR, 2024.
[[paper](https://arxiv.org/abs/2403.11674)]
[[code]()][7] **Instance-based Max-margin for Practical Few-shot Recognition.**
Minghao Fu, Ke Zhu, Jianxin Wu.
In CVPR, 2024.
[[paper](https://arxiv.org/abs/2305.17368)]
[[code]()][8] **OrCo: Towards Better Generalization via Orthogonality and Contrast for Few-Shot Class-Incremental Learning.**
Noor Ahmed, Anna Kukleva, Bernt Schiele.
In CVPR, 2024.
[[paper](https://arxiv.org/abs/2403.18550)]
[[code]()][9] **Discriminative Sample-Guided and Parameter-Efficient Feature Space Adaptation for Cross-Domain Few-Shot Learning.**
Rashindrie Perera, Saman Halgamuge.
In CVPR, 2024.
[[paper](https://arxiv.org/abs/2403.04492)]
[[code]()][10] **Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners.**
Keon-Hee Park, Kyungwoo Song, Gyeong-Moon Park.
In CVPR, 2024.
[[paper](https://arxiv.org/abs/2404.02117)]
[[code]()][11] **DeIL: Direct-and-Inverse CLIP for Open-World Few-Shot Learning.**
Shuai Shao, Yu Bai, Yan Wang, Baodi Liu, Yicong Zhou.
In CVPR, 2024.
[[paper](https://openaccess.thecvf.com/content/CVPR2024/html/Shao_DeIL_Direct-and-Inverse_CLIP_for_Open-World_Few-Shot_Learning_CVPR_2024_paper.html)]
[[code]()][12] **A Closer Look at the Few-Shot Adaptation of Large Vision-Language Models.**
Julio Silva-Rodríguez, Sina Hajimiri, Ismail Ben Ayed, Jose Dolz.
In CVPR, 2024.
[[paper](https://arxiv.org/abs/2312.12730)]
[[code]()][13] **Neural Fine-Tuning Search for Few-Shot Learning.**
Panagiotis Eustratiadis, Łukasz Dudziak, Da Li, Timothy Hospedales.
In ICLR, 2024.
[[paper](https://arxiv.org/abs/2306.09295)]
[[code]()][14] **A Hierarchical Bayesian Model for Deep Few-Shot Meta Learning.**
Minyoung Kim, Timothy Hospedales.
In ICLR, 2024.
[[paper](https://arxiv.org/abs/2306.09702)]
[[code]()][15] **BECLR: Batch Enhanced Contrastive Few-Shot Learning.**
Stylianos Poulakakis-Daktylidis, Hadi Jamali-Rad.
In ICLR, 2024.
[[paper](https://arxiv.org/abs/2402.02444)]
[[code]()][16] **MetaCoCo: A New Few-Shot Classification Benchmark with Spurious Correlation.**
Min Zhang, Haoxuan Li, Fei Wu, Kun Kuang.
In ICLR, 2024.
[[paper](https://arxiv.org/abs/2404.19644)]
[[code]()][17] **Towards Few-Shot Adaptation of Foundation Models via Multitask Finetuning.**
Zhuoyan Xu, Zhenmei Shi, Junyi Wei, Fangzhou Mu, Yin Li, Yingyu Liang.
In ICLR, 2024.
[[paper](https://arxiv.org/abs/2402.15017)]
[[code]()][18] **Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation.**
Yuan Yuan, Chenyang Shao, Jingtao Ding, Depeng Jin, Yong Li.
In ICLR, 2024.
[[paper](https://arxiv.org/abs/2402.11922)]
[[code]()]

## YEAR 2023
[1] **Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners.**
Renrui Zhang, Xiangfei Hu, Bohao Li, Siyuan Huang, Hanqiu Deng, Hongsheng Li, Yu Qiao, Peng Gao.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2303.02151)]
[[code]()][2] **Revisiting Prototypical Network for Cross Domain Few-Shot Learning.**
Fei Zhou, Peng Wang, Lei Zhang, Wei Wei, Yanning Zhang.
In CVPR, 2023.
[[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhou_Revisiting_Prototypical_Network_for_Cross_Domain_Few-Shot_Learning_CVPR_2023_paper.pdf)]
[[code]()][3] **Glocal Energy-based Learning for Few-Shot Open-Set Recognition.**
Haoyu Wang, Guansong Pang, Peng Wang, Lei Zhang, Wei Wei, Yanning Zhang.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2304.11855)]
[[code]()][4] **Multimodality Helps Unimodality: Cross-Modal Few-Shot Learning with Multimodal Models.**
Zhiqiu Lin, Samuel Yu, Zhiyi Kuang, Deepak Pathak, Deva Ramanan.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2301.06267)]
[[code]()][5] **Semantic Prompt for Few-Shot Image Recognition.**
Wentao Chen, Chenyang Si, Zhang Zhang, Liang Wang, Zilei Wang, Tieniu Tan.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2303.14123)]
[[code]()][6] **Transductive Few-shot Learning with Prototype-based Label Propagation by Iterative Graph Refinement.**
Hao Zhu, Piotr Koniusz.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2304.11598)]
[[code]()][7] **GKEAL: Gaussian Kernel Embedded Analytic Learning for Few-Shot Class Incremental Task.**
Huiping Zhuang, Zhenyu Weng, Run He, Zhiping Lin, Ziqian Zeng.
In CVPR, 2023.
[[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhuang_GKEAL_Gaussian_Kernel_Embedded_Analytic_Learning_for_Few-Shot_Class_Incremental_CVPR_2023_paper.pdf)]
[[code]()][8] **Few-Shot Class-Incremental Learning via Class-Aware Bilateral Distillation.**
Linglan Zhao, Jing Lu, Yunlu Xu, Zhanzhan Cheng, Dashan Guo, Yi Niu, Xiangzhong Fang.
In CVPR, 2023.
[[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhao_Few-Shot_Class-Incremental_Learning_via_Class-Aware_Bilateral_Distillation_CVPR_2023_paper.pdf)]
[[code]()][9] **Open-Set Likelihood Maximization for Few-Shot Learning.**
Malik Boudiaf, Etienne Bennequin, Myriam Tami, Antoine Toubhans, Pablo Piantanida, Céline Hudelot, Ismail Ben Ayed.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2301.08390)]
[[code]()][10] **Distilling Self-Supervised Vision Transformers for Weakly-Supervised Few-Shot Classification & Segmentation.**
Dahyun Kang, Piotr Koniusz, Minsu Cho, Naila Murray.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2307.03407)]
[[code]()][11] **Hubs and Hyperspheres: Reducing Hubness and Improving Transductive Few-shot Learning with Hyperspherical Embeddings.**
Daniel J. Trosten, Rwiddhi Chakraborty, Sigurd Løkse, Kristoffer Knutsen Wickstrøm, Robert Jenssen, Michael C. Kampffmeyer.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2303.09352)]
[[code]()][12] **Supervised Masked Knowledge Distillation for Few-Shot Transformers.**
Han Lin, Guangxing Han, Jiawei Ma, Shiyuan Huang, Xudong Lin, Shih-Fu Chang.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2303.15466)]
[[code]()][13] **Bi-Level Meta-Learning for Few-Shot Domain Generalization.**
Xiaorong Qin, Xinhang Song, Shuqiang Jiang.
In CVPR, 2023.
[[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Qin_Bi-Level_Meta-Learning_for_Few-Shot_Domain_Generalization_CVPR_2023_paper.pdf)]
[[code]()][14] **ProD: Prompting-To-Disentangle Domain Knowledge for Cross-Domain Few-Shot Image Classification.**
Tianyi Ma, Yifan Sun, Zongxin Yang, Yi Yang.
In CVPR, 2023.
[[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Ma_ProD_Prompting-To-Disentangle_Domain_Knowledge_for_Cross-Domain_Few-Shot_Image_Classification_CVPR_2023_paper.pdf)]
[[code]()][15] **Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning.**
Zeyin Song, Yifan Zhao, Yujun Shi, Peixi Peng, Li Yuan, Yonghong Tian.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2304.00426)]
[[code]()][16] **Boosting Transductive Few-Shot Fine-Tuning With Margin-Based Uncertainty Weighting and Probability Regularization.**
Ran Tao, Hao Chen, Marios Savvides.
In CVPR, 2023.
[[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Tao_Boosting_Transductive_Few-Shot_Fine-Tuning_With_Margin-Based_Uncertainty_Weighting_and_Probability_CVPR_2023_paper.pdf)]
[[code]()][17] **StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning.**
Yuqian Fu, Yu Xie, Yanwei Fu, Yu-Gang Jiang.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2302.09309)]
[[code]()][18] **WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation.**
Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, Onkar Dabeer.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2303.14814)]
[[code]()][19] **Few-Shot Learning with Visual Distribution Calibration and Cross-Modal Distribution Alignment.**
Runqi Wang, Hao Zheng, Xiaoyue Duan, Jianzhuang Liu, Yuning Lu, Tian Wang, Songcen Xu, Baochang Zhang.
In CVPR, 2023.
[[paper](https://arxiv.org/abs/2305.11439)]
[[code]()][20] **Domain Adaptive Few-Shot Open-Set Learning.**
Debabrata Pal, Deeptej More, Sai Bhargav, Dipesh Tamboli, Vaneet Aggarwal, Biplab Banerjee.
In ICCV, 2023.
[[paper](https://arxiv.org/abs/2309.12814)]
[[code]()][21] **StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation.**
Aibek Alanov, Vadim Titov, Maksim Nakhodnov, Dmitry Vetrov.
In ICCV, 2023.
[[paper](https://arxiv.org/abs/2212.10229)]
[[code]()][22] **Prototypes-oriented Transductive Few-shot Learning with Conditional Transport.**
Long Tian, Jingyi Feng, Wenchao Chen, Xiaoqiang Chai, Liming Wang, Xiyang Liu, Bo Chen.
In ICCV, 2023.
[[paper](https://arxiv.org/abs/2308.03047)]
[[code]()][23] **Few-shot Continual Infomax Learning.**
Ziqi Gu, Chunyan Xu, Jian Yang, Zhen Cui.
In ICCV, 2023.
[[paper](https://openaccess.thecvf.com/content/ICCV2023/html/Gu_Few-shot_Continual_Infomax_Learning_ICCV_2023_paper.html)]
[[code]()][24] **Task-aware Adaptive Learning for Cross-domain Few-shot Learning.**
Yurong Guo, Ruoyi Du, Yuan Dong, Timothy Hospedales, Yi-Zhe Song, Zhanyu Ma.
In ICCV, 2023.
[[paper](https://openaccess.thecvf.com/content/ICCV2023/html/Guo_Task-aware_Adaptive_Learning_for_Cross-domain_Few-shot_Learning_ICCV_2023_paper.html)]
[[code]()][25] **DETA: Denoised Task Adaptation for Few-Shot Learning.**
Ji Zhang, Lianli Gao, Xu Luo, Hengtao Shen, Jingkuan Song.
In ICCV, 2023.
[[paper](https://arxiv.org/abs/2303.06315)]
[[code]()][26] **Class-Aware Patch Embedding Adaptation for Few-Shot Image Classification.**
Fusheng Hao, Fengxiang He, Liu Liu, Fuxiang Wu, Dacheng Tao, Jun Cheng.
In ICCV, 2023.
[[paper](https://openaccess.thecvf.com/content/ICCV2023/html/Hao_Class-Aware_Patch_Embedding_Adaptation_for_Few-Shot_Image_Classification_ICCV_2023_paper.html)]
[[code]()][27] **Frequency Guidance Matters in Few-Shot Learning.**
Hao Cheng, Siyuan Yang, Joey Tianyi Zhou, Lanqing Guo, Bihan Wen.
In ICCV, 2023.
[[paper](https://openaccess.thecvf.com/content/ICCV2023/html/Cheng_Frequency_Guidance_Matters_in_Few-Shot_Learning_ICCV_2023_paper.html)]
[[code]()][28] **Unsupervised Meta-learning via Few-shot Pseudo-supervised Contrastive Learning.**
Huiwon Jang, Hankook Lee, Jinwoo Shin.
In ICLR, 2023.
[[paper](https://openreview.net/pdf?id=TdTGGj7fYYJ)]
[[code]()][29] **Warping the Space: Weight Space Rotation for Class-Incremental Few-Shot Learning.**
Do-Yeon Kim, Dong-Jun Han, Jun Seo, Jaekyun Moon.
In ICLR, 2023.
[[paper](https://openreview.net/pdf?id=kPLzOfPfA2l)]
[[code]()][30] **Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class Incremental Learning.**
Yibo Yang, Haobo Yuan, Xiangtai Li, Zhouchen Lin, Philip Torr, Dacheng Tao.
In ICLR, 2023.
[[paper](https://arxiv.org/abs/2302.03004)]
[[code]()][31] **Progressive Mix-Up for Few-Shot Supervised Multi-Source Domain Transfer.**
Ronghang Zhu, Ronghang Zhu, Xiang Yu, Sheng Li.
In ICLR, 2023.
[[paper](https://openreview.net/pdf?id=H7M_5K5qKJV)]
[[code]()][32] **Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning.**
Ivona Najdenkoska, Xiantong Zhen, Marcel Worring.
In ICLR, 2023.
[[paper](https://openreview.net/pdf?id=3oWo92cQyxL)]
[[code]()][33] **Contrastive Meta-Learning for Partially Observable Few-Shot Learning.**
Adam Jelley, Amos Storkey, Antreas Antoniou, Sam Devlin.
In ICLR, 2023.
[[paper](https://arxiv.org/abs/2301.13136)]
[[code]()][34] **Revisit Finetuning strategy for Few-Shot Learning to Strengthen the Equivariance of Emdeddings.**
Heng Wang, Tan Yue, Xiang Ye, Zihang He, Bohan Li, Yong Li.
In ICLR, 2023.
[[paper](https://openreview.net/pdf?id=tXc-riXhmx)]
[[code]()][35] **On the Soft-Subnetwork for Few-shot Class Incremental Learning.**
Haeyong Kang, Jaehong Yoon, Sultan Rizky Hikmawan Madjid, Sung Ju Hwang, Chang D. Yoo.
In ICLR, 2023.
[[paper](https://arxiv.org/abs/2209.07529)]
[[code]()][36] **Hard-Meta-Dataset++: Towards Understanding Few-Shot Performance on Difficult Tasks.**
Samyadeep Basu, Megan Stanley, John F Bronskill, Soheil Feizi, Daniela Massiceti.
In ICLR, 2023.
[[paper](https://openreview.net/pdf?id=wq0luyH3m4)]
[[code]()][37] **Context-enriched molecule representations improve few-shot drug discovery.**
Johannes Schimunek, Philipp Seidl, Lukas Friedrich, Daniel Kuhn, Friedrich Rippmann, Sepp Hochreiter, Günter Klambauer.
In ICLR, 2023.
[[paper](https://openreview.net/pdf?id=XrMWUuEevr)]
[[code]()][38] **FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification.**
Aliaksandra Shysheya, John Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, Richard E Turner.
In ICLR, 2023.
[[paper](https://arxiv.org/abs/2206.08671)]
[[code]()][39] **Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot Classification.**
Hao ZHENG, Runqi Wang, Jianzhuang Liu, Asako Kanezaki.
In ICLR, 2023.
[[paper](https://openreview.net/pdf?id=Kn-HA8DFik)]
[[code]()][40] **Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration.**
Qi-Wei Wang, Da-Wei Zhou, Yi-Kai Zhang, De-Chuan Zhan, Han-Jia Ye.
In NeurIPS, 2023.
[[paper](https://arxiv.org/abs/2312.05229)]
[[code]()][41] **Understanding Few-Shot Learning: Measuring Task Relatedness and Adaptation Difficulty via Attributes.**
Minyang Hu, Hong Chang, Zong Guo, Bingpeng MA, Shiguang Shan, Xilin Chen.
In NeurIPS, 2023.
[[paper](https://papers.nips.cc/paper_files/paper/2023/hash/3df38ca67befaed9c03b95ffee07d9f8-Abstract-Conference.html)]
[[code]()][42] **Revisiting Logistic-softmax Likelihood in Bayesian Meta-Learning for Few-Shot Classification.**
Tianjun Ke, Haoqun Cao, Zenan Ling, Feng Zhou.
In NeurIPS, 2023.
[[paper](https://arxiv.org/abs/2310.10379)]
[[code]()][43] **FD-Align: Feature Discrimination Alignment for Fine-tuning Pre-Trained Models in Few-Shot Learning.**
Kun Song, Huimin Ma, Bochao Zou, Huishuai Zhang, Weiran Huang.
In NeurIPS, 2023.
[[paper](https://arxiv.org/abs/2310.15105)]
[[code]()][44] **DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation.**
Kaipeng Zheng, Huishuai Zhang, Weiran Huang.
In NeurIPS, 2023.
[[paper](https://arxiv.org/abs/2307.15317)]
[[code]()][45] **Meta-Adapter: An Online Few-shot Learner for Vision-Language Model.**
Cheng Cheng, Lin Song, Ruoyi Xue, Hang Wang, Hongbin Sun, Yixiao Ge, Ying Shan.
In NeurIPS, 2023.
[[paper](https://arxiv.org/abs/2311.03774)]
[[code]()][46] **Focus Your Attention when Few-Shot Classification.**
Haoqing Wang, Shibo Jie, Zhihong Deng.
In NeurIPS, 2023.
[[paper](https://papers.nips.cc/paper_files/paper/2023/hash/bbb7506579431a85861a05fff048d3e1-Abstract-Conference.html)]
[[code]()][47] **Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification.**
Neel Guha, Mayee F. Chen, Kush Bhatia, Azalia Mirhoseini, Frederic Sala, Christopher Ré.
In NeurIPS, 2023.
[[paper](https://arxiv.org/abs/2307.11031)]
[[code]()][48] **Meta-AdaM: An Meta-Learned Adaptive Optimizer with Momentum for Few-Shot Learning.**
Siyuan Sun, Hongyang Gao.
In NeurIPS, 2023.
[[paper](https://papers.nips.cc/paper_files/paper/2023/hash/ce26d21662c979d515164b416d4571fe-Abstract-Conference.html)]
[[code]()][49] **Alignment with human representations supports robust few-shot learning.**
Ilia Sucholutsky, Thomas L. Griffiths.
In NeurIPS, 2023.
[[paper](https://arxiv.org/abs/2301.11990)]
[[code]()]

## YEAR 2022
[1] **Learning to Affiliate: Mutual Centralized Learning for Few-shot Classification.**
Yang Liu, Weifeng Zhang, Chao Xiang, Tu Zheng, Deng Cai, Xiaofei He.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2106.05517)]
[[code]()][2] **Revisiting Learnable Affines for Batch Norm in Few-Shot Transfer Learning.**
Moslem Yazdanpanah, Aamer Abdul Rahman, Muawiz Chaudhary, Christian Desrosiers, Mohammad Havaei, Eugene Belilovsky, Samira Ebrahimi Kahou.
In CVPR, 2022.
[[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Yazdanpanah_Revisiting_Learnable_Affines_for_Batch_Norm_in_Few-Shot_Transfer_Learning_CVPR_2022_paper.pdf)]
[[code]()][3] **Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference.**
Shell Xu Hu, Da Li, Jan Stühmer, Minyoung Kim, Timothy M. Hospedales.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2204.07305)]
[[code]()][4] **Uni-Perceiver: Pre-training Unified Architecture for Generic Perception for Zero-shot and Few-shot Tasks.**
Xizhou Zhu, Jinguo Zhu, Hao Li, Xiaoshi Wu, Xiaogang Wang, Hongsheng Li, Xiaohua Wang, Jifeng Dai.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2112.01522)]
[[code]()][5] **Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning.**
Yangji He, Weihan Liang, Dongyang Zhao, Hong-Yu Zhou, Weifeng Ge, Yizhou Yu, Wenqiang Zhang.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2203.09064)]
[[code]()][6] **Task-Adaptive Negative Envision for Few-Shot Open-Set Recognition.**
Shiyuan Huang, Jiawei Ma, Guangxing Han, Shih-Fu Chang.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2012.13073)]
[[code]()][7] **CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification.**
Philip Chikontwe, Soopil Kim, Sang Hyun Park.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2203.13465)]
[[code]()][8] **Forward Compatible Few-Shot Class-Incremental Learning.**
Da-Wei Zhou, Fu-Yun Wang, Han-Jia Ye, Liang Ma, Shiliang Pu, De-Chuan Zhan.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2203.06953)]
[[code]()][9] **Ranking Distance Calibration for Cross-Domain Few-Shot Learning.**
Pan Li, Shaogang Gong, Chengjie Wang, Yanwei Fu.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2112.00260)]
[[code]()][10] **Improving Adversarially Robust Few-Shot Image Classification With Generalizable Representations.**
Junhao Dong, Yuan Wang, Jian-Huang Lai, Xiaohua Xie.
In CVPR, 2022.
[[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Dong_Improving_Adversarially_Robust_Few-Shot_Image_Classification_With_Generalizable_Representations_CVPR_2022_paper.pdf)]
[[code]()][11] **Generating Representative Samples for Few-Shot Classification.**
Jingyi Xu, Hieu Le.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2205.02918)]
[[code]()][12] **Task Discrepancy Maximization for Fine-grained Few-Shot Classification.**
SuBeen Lee, WonJun Moon, Jae-Pil Heo.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2207.01376)]
[[code]()][13] **EASE: Unsupervised Discriminant Subspace Learning for Transductive Few-Shot Learning.**
Hao Zhu, Piotr Koniusz.
In CVPR, 2022.
[[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhu_EASE_Unsupervised_Discriminant_Subspace_Learning_for_Transductive_Few-Shot_Learning_CVPR_2022_paper.pdf)]
[[code]()][14] **Integrative Few-Shot Learning for Classification and Segmentation.**
Dahyun Kang, Minsu Cho.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2203.15712)]
[[code]()][15] **Constrained Few-shot Class-incremental Learning.**
Michael Hersche, Geethan Karunaratne, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2203.16588)]
[[code]()][16] **MetaFSCIL: A Meta-Learning Approach for Few-Shot Class Incremental Learning.**
Zhixiang Chi, Li Gu, Huan Liu, Yang Wang, Yuanhao Yu, Jin Tang.
In CVPR, 2022.
[[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Chi_MetaFSCIL_A_Meta-Learning_Approach_for_Few-Shot_Class_Incremental_Learning_CVPR_2022_paper.pdf)]
[[code]()][17] **Semi-Supervised Few-Shot Learning via Multi-Factor Clustering.**
Jie Ling, Lei Liao, Meng Yang, Jia Shuai.
In CVPR, 2022.
[[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Ling_Semi-Supervised_Few-Shot_Learning_via_Multi-Factor_Clustering_CVPR_2022_paper.pdf)]
[[code]()][18] **Cross-domain Few-shot Learning with Task-specific Adapters.**
Wei-Hong Li, Xialei Liu, Hakan Bilen.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2107.00358)]
[[code]()][19] **Global Convergence of MAML and Theory-Inspired Neural Architecture Search for Few-Shot Learning.**
Haoxiang Wang, Yite Wang, Ruoyu Sun, Bo Li.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2203.09137)]
[[code]()][20] **Few-shot Learning with Noisy Labels.**
Kevin J Liang, Samrudhdhi B. Rangrej, Vladan Petrovic, Tal Hassner.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2204.05494)]
[[code]()][21] **Few-Shot Incremental Learning for Label-to-Image Translation.**
Pei Chen, Yangkang Zhang, Zejian Li, Lingyun Sun.
In CVPR, 2022.
[[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_Few-Shot_Incremental_Learning_for_Label-to-Image_Translation_CVPR_2022_paper.pdf)]
[[code]()][22] **Matching Feature Sets for Few-Shot Image Classification.**
Arman Afrasiyabi, Hugo Larochelle, Jean-François Lalonde, Christian Gagné.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2204.00949)]
[[code]()][23] **Joint Distribution Matters: Deep Brownian Distance Covariance for Few-Shot Classification.**
Jiangtao Xie, Fei Long, Jiaming Lv, Qilong Wang, Peihua Li.
In CVPR, 2022.
[[paper](https://arxiv.org/abs/2204.04567)]
[[code]()][24] **Meta-Learning with Fewer Tasks through Task Interpolation.**
Huaxiu Yao, Linjun Zhang, Chelsea Finn.
In ICLR, 2022.
[[paper](https://arxiv.org/abs/2106.02695)]
[[code]()][25] **On the Importance of Firth Bias Reduction in Few-Shot Classification.**
Saba Ghaffari, Ehsan Saleh, David Forsyth, Yu-xiong Wang.
In ICLR, 2022.
[[paper](https://arxiv.org/abs/2110.02529)]
[[code]()][26] **ConFeSS: A Framework for Single Source Cross-Domain Few-Shot Learning.**
Debasmit Das, Sungrack Yun, Fatih Porikli.
In ICLR, 2022.
[[paper](https://openreview.net/pdf?id=zRJu6mU2BaE)]
[[code]()][27] **Hierarchical Variational Memory for Few-shot Learning Across Domains.**
Yingjun Du, Xiantong Zhen, Ling Shao, Cees G. M. Snoek.
In ICLR, 2022.
[[paper](https://arxiv.org/abs/2112.08181)]
[[code]()][28] **Subspace Regularizers for Few-Shot Class Incremental Learning.**
Afra Feyza Akyürek, Ekin Akyürek, Derry Tanti Wijaya, Jacob Andreas.
In ICLR, 2022.
[[paper](https://arxiv.org/abs/2110.07059)]
[[code]()][29] **How to Train Your MAML to Excel in Few-Shot Classification.**
Han-Jia Ye, Wei-Lun Chao.
In ICLR, 2022.
[[paper](https://arxiv.org/abs/2106.16245)]
[[code]()][30] **Task Affinity with Maximum Bipartite Matching in Few-Shot Learning.**
Cat P. Le, Juncheng Dong, Mohammadreza Soltani, Vahid Tarokh.
In ICLR, 2022.
[[paper](https://arxiv.org/abs/2110.02399)]
[[code]()][31] **Temporal Alignment Prediction for Supervised Representation Learning and Few-Shot Sequence Classification.**
Bing Su, Ji-Rong Wen.
In ICLR, 2022.
[[paper](https://openreview.net/pdf?id=p3DKPQ7uaAi)]
[[code]()][32] **Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners.**
Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen.
In ICLR, 2022.
[[paper](https://arxiv.org/abs/2108.13161)]
[[code]()][33] **Few-shot Learning via Dirichlet Tessellation Ensemble.**
Chunwei Ma, Ziyun Huang, Mingchen Gao, Jinhui Xu.
In ICLR, 2022.
[[paper](https://openreview.net/pdf?id=6kCiVaoQdx9)]
[[code]()][34] **Switch to Generalize: Domain-Switch Learning for Cross-Domain Few-Shot Classification.**
Zhengdong Hu, Yifan Sun, Yi Yang.
In ICLR, 2022.
[[paper](https://openreview.net/pdf?id=H-iABMvzIc)]
[[code]()][35] **Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification.**
Renrui Zhang, Zhang Wei, Rongyao Fang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, Hongsheng Li.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2207.09519)]
[[code]()][36] **Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay.**
Huan Liu, Li Gu, Zhixiang Chi, Yang Wang, Yuanhao Yu, Jun Chen, Jin Tang.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2207.11213)]
[[code]()][37] **Self-Supervision Can Be a Good Few-Shot Learner.**
Yuning Lu, Liangjian Wen, Jianzhuang Liu, Yajing Liu, Xinmei Tian.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2207.09176)]
[[code]()][38] **tSF: Transformer-based Semantic Filter for Few-Shot Learning.**
Jinxiang Lai, Siqian Yang, Wenlong Liu, Yi Zeng, Zhongyi Huang, Wenlong Wu, Jun Liu, Bin-Bin Gao, Chengjie Wang.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2211.00868)]
[[code]()][39] **Adversarial Feature Augmentation for Cross-domain Few-shot Classification.**
Yanxu Hu, Andy J. Ma.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2208.11021)]
[[code]()][40] **Worst Case Matters for Few-Shot Recognition.**
Minghao Fu, Yun-Hao Cao, Jianxin Wu.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2203.06574)]
[[code]()][41] **DNA: Improving Few-Shot Transfer Learning with Low-Rank Decomposition and Alignment.**
Ziyu Jiang, Tianlong Chen, Xuxi Chen, Yu Cheng, Luowei Zhou, Lu Yuan, Ahmed Awadallah, Zhangyang Wang.
In ECCV, 2022.
[[paper](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136800229.pdf)]
[[code]()][42] **Learning Instance and Task-Aware Dynamic Kernels for Few Shot Learning.**
Rongkai Ma, Pengfei Fang, Gil Avraham, Yan Zuo, Tianyu Zhu, Tom Drummond, Mehrtash Harandi.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2112.03494)]
[[code]()][43] **Few-Shot Classification with Contrastive Learning.**
Zhanyuan Yang, Jinghua Wang, Yingying Zhu.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2209.08224)]
[[code]()][44] **Coarse-To-Fine Incremental Few-Shot Learning.**
Xiang Xiang, Yuwen Tan, Qian Wan, Jing Ma.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2111.14806)]
[[code]()][45] **Cross-Domain Cross-Set Few-Shot Learning via Learning Compact and Aligned Representations.**
Wentao Chen, Zhang Zhang, Wei Wang, Liang Wang, Zilei Wang, Tieniu Tan.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2207.07826)]
[[code]()][46] **Improving Few-Shot Learning through Multi-task Representation Learning Theory.**
Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2010.01992)]
[[code]()][47] **Few-Shot Class-Incremental Learning from an Open-Set Perspective.**
Can Peng, Kun Zhao, Tianren Wang, Meng Li, Brian C. Lovell.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2208.00147)]
[[code]()][48] **Tree Structure-Aware Few-Shot Image Classification via Hierarchical Aggregation.**
Min Zhang, Siteng Huang, Wenbin Li, Donglin Wang.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2207.06989)]
[[code]()][49] **S3C: Self-Supervised Stochastic Classifiers for Few-Shot Class-Incremental Learning.**
Jayateja Kalla, Soma Biswas.
In ECCV, 2022.
[[paper](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850427.pdf)]
[[code]()][50] **Unsupervised Few-Shot Image Classification by Learning Features into Clustering Space.**
Shuo Li, Fang Liu, Zehua Hao, Kaibo Zhao, Licheng Jiao.
In ECCV, 2022.
[[paper](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136910406.pdf)]
[[code]()][51] **TransVLAD: Focusing on Locally Aggregated Descriptors for Few-Shot Learning.**
Haoquan Li, Laoming Zhang, Daoan Zhang, Lang Fu, Peng Yang, Jianguo Zhang.
In ECCV, 2022.
[[paper](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136800509.pdf)]
[[code]()][52] **Kernel Relative-prototype Spectral Filtering for Few-shot Learning.**
Tao Zhang, Wu Huang.
In ECCV, 2022.
[[paper](https://arxiv.org/abs/2207.11685)]
[[code]()][53] **Towards Practical Few-Shot Query Sets: Transductive Minimum Description Length Inference.**
Ségolène Tiffany Martin, Malik Boudiaf, Emilie Chouzenoux, Jean-Christophe Pesquet, Ismail Ben Ayed.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2210.14545)]
[[code]()][54] **Learning from Few Samples: Transformation-Invariant SVMs with Composition and Locality at Multiple Scales.**
Tao Liu, P. R. Kumar, Ruida Zhou, Xi Liu.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2109.12784)]
[[code]()][55] **Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks.**
Daiki Chijiwa, Shin'ya Yamaguchi, Atsutoshi Kumagai, Yasutoshi Ida.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2205.15619)]
[[code]()][56] **FeLMi : Few shot Learning with hard Mixup.**
Aniket Roy, Anshul Shah, Ketul Shah, Prithviraj Dhar, Anoop Cherian, Rama Chellappa.
In NeurIPS, 2022.
[[paper](https://openreview.net/pdf?id=xpdaDM_B4D)]
[[code]()][57] **Smoothed Embeddings for Certified Few-Shot Learning.**
Mikhail Pautov, Olesya Kuznetsova, Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2202.01186)]
[[code]()][58] **Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty.**
Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2202.01339)]
[[code]()][59] **Few-shot Relational Reasoning via Connection Subgraph Pretraining.**
Qian Huang, Hongyu Ren, Jure Leskovec.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2210.06722)]
[[code]()][60] **Graph Few-shot Learning with Task-specific Structures.**
Song Wang, Chen Chen, Jundong Li.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2210.12130)]
[[code]()][61] **Few-shot Learning for Feature Selection with Hilbert-Schmidt Independence Criterion.**
Atsutoshi Kumagai, Tomoharu Iwata, Yasutoshi Ida, Yasuhiro Fujiwara.
In NeurIPS, 2022.
[[paper](https://openreview.net/pdf?id=eJM0aA5Qhhk)]
[[code]()][62] **Few-Shot Non-Parametric Learning with Deep Latent Variable Model.**
Zhiying Jiang, Yiqin Dai, Ji Xin, Ming Li, Jimmy Lin.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2206.11573)]
[[code]()][63] **Learning to Sample and Aggregate: Few-shot Reasoning over Temporal Knowledge Graphs.**
Ruijie Wang, Zheng Li, Dachun Sun, Shengzhong Liu, Jinning Li, Bing Yin, Tarek Abdelzaher.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2210.08654)]
[[code]()][64] **Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport.**
Dandan Guo, Long Tian, He Zhao, Mingyuan Zhou, Hongyuan Zha.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2210.04144)]
[[code]()][65] **Flamingo: a Visual Language Model for Few-Shot Learning.**
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, Karen Simonyan.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2204.14198)]
[[code]()][66] **Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid.**
Jing Xu, Xu Luo, Xinglin Pan, Wenjie Pei, Yanan Li, Zenglin Xu.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2210.16834)]
[[code]()][67] **Improving Task-Specific Generalization in Few-Shot Learning via Adaptive Vicinal Risk Minimization.**
Long-Kai Huang, Ying Wei.
In NeurIPS, 2022.
[[paper](https://openreview.net/pdf?id=fHUBa3gQno)]
[[code]()][68] **Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning.**
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2205.05638)]
[[code]()][69] **Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification.**
Massimiliano Patacchiola, John Bronskill, Aliaksandra Shysheya, Katja Hofmann, Sebastian Nowozin, Richard E. Turner.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2206.09843)]
[[code]()][70] **Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation.**
Yixiong Zou, Shanghang Zhang, Yuhua Li, Ruixuan Li.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2210.04524)]
[[code]()][71] **Set-based Meta-Interpolation for Few-Task Meta-Learning.**
Seanie Lee, Bruno Andreis, Kenji Kawaguchi, Juho Lee, Sung Ju Hwang.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2205.09990)]
[[code]()][72] **A Closer Look at Prototype Classifier for Few-shot Image Classification.**
Mingcheng Hou, Issei Sato.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2110.05076)]
[[code]()][73] **An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning.**
Xiu-Shen Wei, He-Yang Xu, Faen Zhang, Yuxin Peng, Wei Zhou.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2209.13777)]
[[code]()][74] **Rethinking Generalization in Few-Shot Classification.**
Markus Hiller, Rongkai Ma, Mehrtash Harandi, Tom Drummond.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2206.07267)]
[[code]()][75] **On Enforcing Better Conditioned Meta-Learning for Rapid Few-Shot Adaptation.**
Markus Hiller, Mehrtash Harandi, Tom Drummond.
In NeurIPS, 2022.
[[paper](https://arxiv.org/abs/2206.07260)]
[[code]()]

## YEAR 2021
[1] **ECKPN: Explicit Class Knowledge Propagation Network for Transductive Few-shot Learning.**
Chaofan Chen, Xiaoshan Yang, Changsheng Xu, Xuhui Huang, Zhe Ma.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2106.08523)]
[[code]()][2] **Few-Shot Incremental Learning with Continually Evolved Classifiers.**
Chi Zhang, Nan Song, Guosheng Lin, Yun Zheng, Pan Pan, Yinghui Xu.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2104.03047)]
[[code]()][3] **Mutual CRF-GNN for Few-Shot Learning.**
Shixiang Tang, Dapeng Chen, Lei Bai, Kaijian Liu, Yixiao Ge, Wanli Ouyang.
In CVPR, 2021.
[[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Tang_Mutual_CRF-GNN_for_Few-Shot_Learning_CVPR_2021_paper.pdf)]
[[code]()][4] **Rethinking Class Relations: Absolute-relative Supervised and Unsupervised Few-shot Learning.**
Hongguang Zhang, Piotr Koniusz, Songlei Jian, Hongdong Li, Philip H. S. Torr.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2001.03919)]
[[code]()][5] **Learning Dynamic Alignment via Meta-filter for Few-shot Learning.**
Chengming Xu, Chen Liu, Li Zhang, Chengjie Wang, Jilin Li, Feiyue Huang, Xiangyang Xue, Yanwei Fu.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2103.13582)]
[[code]()][6] **Pareto Self-Supervised Training for Few-Shot Learning.**
Zhengyu Chen, Jixie Ge, Heshen Zhan, Siteng Huang, Donglin Wang.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2104.07841)]
[[code]()][7] **Reinforced Attention for Few-Shot Learning and Beyond.**
Jie Hong, Pengfei Fang, Weihao Li, Tong Zhang, Christian Simon, Mehrtash Harandi, Lars Petersson.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2104.04192)]
[[code]()][8] **Few-shot Open-set Recognition by Transformation Consistency.**
Minki Jeong, Seokeon Choi, Changick Kim.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2103.01537)]
[[code]()][9] **Few-Shot Classification with Feature Map Reconstruction Networks.**
Davis Wertheimer, Luming Tang, Bharath Hariharan.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2012.01506)]
[[code]()][10] **Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning.**
Kai Zhu, Yang Cao, Wei Zhai, Jie Cheng, Zheng-Jun Zha.
In CVPR, 2021.
[[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhu_Self-Promoted_Prototype_Refinement_for_Few-Shot_Class-Incremental_Learning_CVPR_2021_paper.pdf)]
[[code]()][11] **Semantic-aware Knowledge Distillation for Few-Shot Class-Incremental Learning.**
Ali Cheraghian, Shafin Rahman, Pengfei Fang, Soumava Kumar Roy, Lars Petersson, Mehrtash Harandi.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2103.04059)]
[[code]()][12] **Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning.**
Mamshad Nayeem Rizve, Salman Khan, Fahad Shahbaz Khan, Mubarak Shah.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2103.01315)]
[[code]()][13] **Prototype Completion with Primitive Knowledge for Few-Shot Learning.**
Baoquan Zhang, Xutao Li, Yunming Ye, Zhichao Huang, Lisai Zhang.
In CVPR, 2021.
[[paper](https://arxiv.org/abs/2009.04960)]
[[code]()][14] **Few-shot Image Classification: Just Use a Library of Pre-trained Feature Extractors and a Simple Classifier.**
Arkabandhu Chowdhury, Mingchao Jiang, Swarat Chaudhuri, Chris Jermaine.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2101.00562)]
[[code]()][15] **Z-Score Normalization, Hubness, and Few-Shot Learning.**
Nanyi Fei, Yizhao Gao, Zhiwu Lu, Tao Xiang.
In ICCV, 2021.
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Fei_Z-Score_Normalization_Hubness_and_Few-Shot_Learning_ICCV_2021_paper.pdf)]
[[code]()][16] **Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting.**
Anna Kukleva, Hilde Kuehne, Bernt Schiele.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2108.08165)]
[[code]()][17] **On the Importance of Distractors for Few-Shot Classification.**
Rajshekhar Das, Yu-Xiong Wang, José M. F. Moura.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2109.09883)]
[[code]()][18] **Universal Representation Learning from Multiple Domains for Few-shot Classification.**
Wei-Hong Li, Xialei Liu, Hakan Bilen.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2103.13841)]
[[code]()][19] **Pseudo-Loss Confidence Metric for Semi-Supervised Few-Shot Learning.**
Kai Huang, Jie Geng, Wen Jiang, Xinyang Deng, Zhe Xu.
In ICCV, 2021.
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Huang_Pseudo-Loss_Confidence_Metric_for_Semi-Supervised_Few-Shot_Learning_ICCV_2021_paper.pdf)]
[[code]()][20] **ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition.**
Daniela Massiceti, Luisa Zintgraf, John Bronskill, Lida Theodorou, Matthew Tobias Harris, Edward Cutrell, Cecily Morrison, Katja Hofmann, Simone Stumpf.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2104.03841)]
[[code]()][21] **Partner-Assisted Learning for Few-Shot Image Classification.**
Jiawei Ma, Hanchen Xie, Guangxing Han, Shih-Fu Chang, Aram Galstyan, Wael Abd-Almageed.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2109.07607)]
[[code]()][22] **Hierarchical Graph Attention Network for Few-Shot Visual-Semantic Learning.**
Chengxiang Yin, Kun Wu, Zhengping Che, Bo Jiang, Zhiyuan Xu, Jian Tang.
In ICCV, 2021.
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Yin_Hierarchical_Graph_Attention_Network_for_Few-Shot_Visual-Semantic_Learning_ICCV_2021_paper.pdf)]
[[code]()][23] **Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning.**
Sungyong Baik, Janghoon Choi, Heewon Kim, Dohee Cho, Jaesik Min, Kyoung Mu Lee.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2110.03909)]
[[code]()][24] **Curvature Generation in Curved Spaces for Few-Shot Learning.**
Zhi Gao, Yuwei Wu, Yunde Jia, Mehrtash Harandi.
In ICCV, 2021.
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Gao_Curvature_Generation_in_Curved_Spaces_for_Few-Shot_Learning_ICCV_2021_paper.pdf)]
[[code]()][25] **A Multi-Mode Modulator for Multi-Domain Few-Shot Classification.**
Yanbin Liu, Juho Lee, Linchao Zhu, Ling Chen, Humphrey Shi, Yi Yang.
In ICCV, 2021.
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Liu_A_Multi-Mode_Modulator_for_Multi-Domain_Few-Shot_Classification_ICCV_2021_paper.pdf)]
[[code]()][26] **Variational Feature Disentangling for Fine-Grained Few-Shot Classification.**
Jingyi Xu, Hieu Le, Mingzhen Huang, ShahRukh Athar, Dimitris Samaras.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2010.03255)]
[[code]()][27] **Task-Aware Part Mining Network for Few-Shot Learning.**
Jiamin Wu, Tianzhu Zhang, Yongdong Zhang, Feng Wu.
In ICCV, 2021.
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Wu_Task-Aware_Part_Mining_Network_for_Few-Shot_Learning_ICCV_2021_paper.pdf)]
[[code]()][28] **Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning.**
Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, Xiaolong Wang.
In ICCV, 2021.
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Chen_Meta-Baseline_Exploring_Simple_Meta-Learning_for_Few-Shot_Learning_ICCV_2021_paper.pdf)]
[[code]()][29] **Relational Embedding for Few-Shot Classification.**
Dahyun Kang, Heeseung Kwon, Juhong Min, Minsu Cho.
In ICCV, 2021.
[[paper](http://arxiv.org/abs/2108.09666)]
[[code]()][30] **Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder.**
Hanwen Liang, Qiong Zhang, Peng Dai, Juwei Lu.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2108.05028)]
[[code]()][31] **Binocular Mutual Learning for Improving Few-shot Classification.**
Ziqi Zhou, Xi Qiu, Jiangtao Xie, Jianan Wu, Chi Zhang.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2108.12104)]
[[code]()][32] **Shallow Bayesian Meta Learning for Real-World Few-Shot Recognition.**
Xueting Zhang, Debin Meng, Henry Gouk, Timothy Hospedales.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2101.02833)]
[[code]()][33] **Meta Navigator: Search for a Good Adaptation Policy for Few-shot Learning.**
Chi Zhang, Henghui Ding, Guosheng Lin, Ruibo Li, Changhu Wang, Chunhua Shen.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2109.05749)]
[[code]()][34] **Iterative label cleaning for transductive and semi-supervised few-shot learning.**
Michalis Lazarou, Tania Stathaki, Yannis Avrithis.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2012.07962)]
[[code]()][35] **Few-Shot and Continual Learning With Attentive Independent Mechanisms.**
Eugene Lee, Cheng-Han Huang, Chen-Yi Lee.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2107.14053)]
[[code]()][36] **Coarsely-Labeled Data for Better Few-Shot Transfer.**
Cheng Perng Phoo, Bharath Hariharan.
In ICCV, 2021.
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Phoo_Coarsely-Labeled_Data_for_Better_Few-Shot_Transfer_ICCV_2021_paper.pdf)]
[[code]()][37] **Synthesized Feature Based Few-Shot Class-Incremental Learning on a Mixture of Subspaces.**
Ali Cheraghian, Shafin Rahman, Sameera Ramasinghe, Pengfei Fang, Christian Simon, Lars Petersson, Mehrtash Harandi.
In ICCV, 2021.
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Cheraghian_Synthesized_Feature_Based_Few-Shot_Class-Incremental_Learning_on_a_Mixture_of_ICCV_2021_paper.pdf)]
[[code]()][38] **Mixture-Based Feature Space Learning for Few-Shot Image Classification.**
Arman Afrasiyabi, Jean-François Lalonde, Christian Gagné.
In ICCV, 2021.
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Afrasiyabi_Mixture-Based_Feature_Space_Learning_for_Few-Shot_Image_Classification_ICCV_2021_paper.pdf)]
[[code]()][39] **Transductive Few-Shot Classification on the Oblique Manifold.**
Guodong Qi, Huimin Yu, Zhaohui Lu, Shuzhao Li.
In ICCV, 2021.
[[paper](https://arxiv.org/abs/2108.04009)]
[[code]()][40] **Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data.**
Ashraful Islam, Chun-Fu Chen, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Richard J. Radke.
In NeurIPS, 2021.
[[paper](https://arxiv.org/abs/2106.07807)]
[[code]()][41] **Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima.**
Guangyuan Shi, Jiaxin Chen, Wenlong Zhang, Li-Ming Zhan, Xiao-Ming Wu.
In NeurIPS, 2021.
[[paper](https://arxiv.org/abs/2111.01549)]
[[code]()][42] **Realistic Evaluation of Transductive Few-Shot Learning.**
Olivier Veilleux, Malik Boudiaf, Pablo Piantanida, Ismail Ben Ayed.
In NeurIPS, 2021.
[[paper](https://arxiv.org/abs/2204.11181)]
[[code]()][43] **Rectifying the Shortcut Learning of Background for Few-Shot Learning.**
Xu Luo, Longhui Wei, Liangjian Wen, Jinrong Yang, Lingxi Xie, Zenglin Xu, Qi Tian.
In NeurIPS, 2021.
[[paper](https://arxiv.org/abs/2107.07746)]
[[code]()][44] **Learning to Learn Dense Gaussian Processes for Few-Shot Learning.**
Ze Wang, Zichen Miao, Xiantong Zhen, Qiang Qiu.
In NeurIPS, 2021.
[[paper](https://papers.nips.cc/paper/2021/file/6e2713a6efee97bacb63e52c54f0ada0-Paper.pdf)]
[[code]()][45] **DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples.**
Yi Xu, Jiandong Ding, Lu Zhang, Shuigeng Zhou.
In NeurIPS, 2021.
[[paper](https://arxiv.org/abs/2110.13740)]
[[code]()][46] **POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples.**
Duong H. Le, Khoi D. Nguyen, Khoi Nguyen, Quoc-Huy Tran, Rang Nguyen, Binh-Son Hua.
In NeurIPS, 2021.
[[paper](https://arxiv.org/abs/2206.04679)]
[[code]()][47] **On Episodes, Prototypical Networks, and Few-shot Learning.**
Steinar Laenen, Luca Bertinetto.
In NeurIPS, 2021.
[[paper](https://arxiv.org/abs/2012.09831)]
[[code]()][48] **Re-ranking for image retrieval and transductive few-shot classification.**
Xi SHEN, Yang Xiao, Shell Xu Hu, Othman Sbai, Mathieu Aubry.
In NeurIPS, 2021.
[[paper](https://papers.nips.cc/paper/2021/file/d9fc0cdb67638d50f411432d0d41d0ba-Paper.pdf)]
[[code]()][49] **The Role of Global Labels in Few-Shot Classification and How to Infer Them.**
Ruohan Wang, Massimiliano Pontil, Carlo Ciliberto.
In NeurIPS, 2021.
[[paper](https://arxiv.org/abs/2108.04055)]
[[code]()][50] **Bridging the Gap Between Practice and PAC-Bayes Theory in Few-Shot Meta-Learning.**
Nan Ding, Xi Chen, Tomer Levinboim, Sebastian Goodman, Radu Soricut.
In NeurIPS, 2021.
[[paper](https://arxiv.org/abs/2105.14099)]
[[code]()][51] **Free Lunch for Few-shot Learning: Distribution Calibration.**
Shuo Yang, Lu Liu, Min Xu.
In ICLR, 2021.
[[paper](https://arxiv.org/abs/2101.06395)]
[[code](https://github.com/ShuoYang-1998/ICLR2021-Oral_Distribution_Calibration)][52] **Self-training for Few-shot Transfer Across Extreme Task Differences.**
Cheng Perng Phoo, Bharath Hariharan.
In ICLR, 2021.
[[paper](https://arxiv.org/abs/2010.07734)]
[[code]()][53] **Wandering Within a World: Online Contextualized Few-Shot Learning.**
Mengye Ren, Michael L. Iuzzolino, Michael C. Mozer, Richard S. Zemel.
In ICLR, 2021.
[[paper](https://arxiv.org/abs/2007.04546)]
[[code]()][54] **Few-Shot Learning via Learning the Representation, Provably.**
Simon S. Du, Wei Hu, Sham M. Kakade, Jason D. Lee, Qi Lei.
In ICLR, 2021.
[[paper](https://arxiv.org/abs/2002.09434)]
[[code]()][55] **A Universal Representation Transformer Layer for Few-Shot Image Classification.**
Lu Liu, William Hamilton, Guodong Long, Jing Jiang, Hugo Larochelle.
In ICLR, 2021.
[[paper](https://arxiv.org/abs/2006.11702)]
[[code]()][56] **Concept Learners for Few-Shot Learning.**
Kaidi Cao, Maria Brbic, Jure Leskovec.
In ICLR, 2021.
[[paper](https://arxiv.org/abs/2007.07375)]
[[code]()][57] **IEPT: Instance-Level and Episode-Level Pretext Tasks for Few-Shot Learning.**
Manli Zhang, Jianhong Zhang, Zhiwu Lu, Tao Xiang, Mingyu Ding, Songfang Huang.
In ICLR, 2021.
[[paper](https://openreview.net/forum?id=xzqLpqRzxLq)]
[[code]()][58] **Incremental few-shot learning via vector quantization in deep embedded space.**
Kuilin Chen, Chi-Guhn Lee.
In ICLR, 2021.
[[paper](https://openreview.net/forum?id=3SV-ZePhnZM)]
[[code]()][59] **Few-Shot Bayesian Optimization with Deep Kernel Surrogates.**
Martin Wistuba, Josif Grabocka.
In ICLR, 2021.
[[paper](https://arxiv.org/abs/2101.07667)]
[[code]()][60] **Repurposing Pretrained Models for Robust Out-of-domain Few-Shot Learning.**
Namyeong Kwon, Hwidong Na, Gabriel Huang, Simon Lacoste-Julien.
In ICLR, 2021.
[[paper](https://openreview.net/forum?id=qkLMTphG5-h)]
[[code]()][61] **MELR: Meta-Learning via Modeling Episode-Level Relationships for Few-Shot Learning.**
Nanyi Fei, Zhiwu Lu, Tao Xiang, Songfang Huang.
In ICLR, 2021.
[[paper](https://openreview.net/forum?id=D3PcGLdMx0)]
[[code]()][62] **MetaNorm: Learning to Normalize Few-Shot Batches Across Domains.**
Yingjun Du, Xiantong Zhen, Ling Shao, Cees G. M. Snoek.
In ICLR, 2021.
[[paper](https://openreview.net/forum?id=9z_dNsC4B5t)]
[[code]()][63] **Constellation Nets for Few-Shot Learning.**
Weijian Xu, Yifan Xu, Huaijin Wang, Zhuowen Tu.
In ICLR, 2021.
[[paper](https://openreview.net/forum?id=vujTf_I8Kmc)]
[[code]()][64] **Bayesian Few-Shot Classification with One-vs-Each Pólya-Gamma Augmented Gaussian Processes.**
Jake Snell, Richard Zemel.
In ICLR, 2021.
[[paper](https://arxiv.org/abs/2007.10417)]
[[code]()][65] **BOIL: Towards Representation Change for Few-shot Learning.**
Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim, Se-Young Yun.
In ICLR, 2021.
[[paper](https://openreview.net/forum?id=umIdUL8rMH)]
[[code]()][66] **Bowtie Networks: Generative Modeling for Joint Few-Shot Recognition and Novel-View Synthesis.**
Zhipeng Bao, Yu-Xiong Wang, Martial Hebert.
In ICLR, 2021.
[[paper](https://arxiv.org/abs/2008.06981)]
[[code]()]

## YEAR 2020
[1] **Adaptive Subspaces for Few-Shot Learning.**
Christian Simon, Piotr Koniusz, Richard Nock, Mehrtash Harandi.
In CVPR, 2020.
[[paper](http://openaccess.thecvf.com/content_CVPR_2020/html/Simon_Adaptive_Subspaces_for_Few-Shot_Learning_CVPR_2020_paper.html)]
[[code](https://github.com/chrysts/dsn_fewshot)][2] **Learning to Select Base Classes for Few-shot Classification.**
Linjun Zhou, Peng Cui, Xu Jia, Shiqiang Yang, Qi Tian.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/2004.00315)]
[[code]()][3] **Few-Shot Open-Set Recognition using Meta-Learning.**
Bo Liu, Hao Kang, Haoxiang Li, Gang Hua, Nuno Vasconcelos.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/2005.13713)]
[[code]()][4] **Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions.**
Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, Fei Sha.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/1812.03664)]
[[code]()][5] **Few-Shot Pill Recognition.**
Suiyi Ling, Andreas Pastor, Jing Li, Zhaohui Che, Junle Wang, Jieun Kim, Patrick Le Callet.
In CVPR, 2020.
[[paper](http://openaccess.thecvf.com/content_CVPR_2020/html/Ling_Few-Shot_Pill_Recognition_CVPR_2020_paper.html)]
[[code]()][6] **Few-Shot Class-Incremental Learning.**
Xiaoyu Tao, Xiaopeng Hong, Xinyuan Chang, Songlin Dong, Xing Wei, Yihong Gong.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/2004.10956)]
[[code]()][7] **DeepEMD: Differentiable Earth Mover's Distance for Few-Shot Learning.**
Chi Zhang, Yujun Cai, Guosheng Lin, Chunhua Shen.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/2003.06777)]
[[code]()][8] **Meta-Learning of Neural Architectures for Few-Shot Learning.**
Thomas Elsken, Benedikt Staffler, Jan Hendrik Metzen, Frank Hutter.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/1911.11090)]
[[code]()][9] **Boosting Few-Shot Learning With Adaptive Margin Loss.**
Aoxue Li, Weiran Huang, Xu Lan, Jiashi Feng, Zhenguo Li, Liwei Wang.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/2005.13826)]
[[code]()][10] **Instance Credibility Inference for Few-Shot Learning.**
Yikai Wang, Chengming Xu, Chen Liu, Li Zhang, Yanwei Fu.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/2003.11853)]
[[code]()][11] **TransMatch: A Transfer-Learning Scheme for Semi-Supervised Few-Shot Learning.**
Zhongjie Yu, Lin Chen, Zhongwei Cheng, Jiebo Luo.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/1912.09033)]
[[code]()][12] **DPGN: Distribution Propagation Graph Network for Few-shot Learning.**
Ling Yang, Liangliang Li, Zilun Zhang, Xinyu Zhou, Erjin Zhou, Yu Liu.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/2003.14247)]
[[code]()][13] **Adversarial Feature Hallucination Networks for Few-Shot Learning.**
Kai Li, Yulun Zhang, Kunpeng Li, Yun Fu.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/2003.13193)]
[[code]()][14] **Attentive Weights Generation for Few Shot Learning via Information Maximization.**
Yiluan Guo, Ngai-Man Cheung.
In CVPR, 2020.
[[paper](http://openaccess.thecvf.com/content_CVPR_2020/html/Guo_Attentive_Weights_Generation_for_Few_Shot_Learning_via_Information_Maximization_CVPR_2020_paper.html)]
[[code]()][15] **Revisiting Pose-Normalization for Fine-Grained Few-Shot Recognition.**
Luming Tang, Davis Wertheimer, Bharath Hariharan.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/2004.00705)]
[[code]()][16] **Improved Few-Shot Visual Classification.**
Peyman Bateni, Raghav Goyal, Vaden Masrani, Frank Wood, Leonid Sigal.
In CVPR, 2020.
[[paper](https://arxiv.org/abs/1912.03432)]
[[code]()][17] **Model-Agnostic Boundary-Adversarial Sampling for Test-Time Generalization in Few-Shot learning.**
Jaekyeom Kim, Hyoungseok Kim, Gunhee Kim.
In ECCV, 2020.
[[paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123460579.pdf)]
[[code]()][18] **Prototype Rectification for Few-Shot Learning.**
Jinlu Liu, Liang Song, Yongqiang Qin.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/1911.10713)]
[[code]()][19] **Negative Margin Matters: Understanding Margin in Few-shot Classification.**
Bin Liu, Yue Cao, Yutong Lin, Qi Li, Zheng Zhang, Mingsheng Long, Han Hu.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/2003.12060)]
[[code]()][20] **Associative Alignment for Few-shot Image Classification.**
Arman Afrasiyabi, Jean-François Lalonde, Christian Gagné.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/1912.05094)]
[[code]()][21] **TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot classification.**
Moshe Lichtenstein, Prasanna Sattigeri, Rogerio Feris, Raja Giryes, Leonid Karlinsky.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/2003.06670)]
[[code]()][22] **When Does Self-supervision Improve Few-shot Learning?**
Jong-Chyi Su, Subhransu Maji, Bharath Hariharan.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/1910.03560)]
[[code]()][23] **Incremental Meta-Learning via Indirect Discriminant Alignment.**
Qing Liu, Orchid Majumder, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/2002.04162)]
[[code]()][24] **Large-Scale Few-Shot Learning via Multi-Modal Knowledge Discovery.**
Shuo Wang, Jun Yue, Jianzhuang Liu, Qi Tian, Meng Wang.
In ECCV, 2020.
[[paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123550715.pdf)]
[[code]()][25] **Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification.**
Nikita Dvornik, Cordelia Schmid, Julien Mairal.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/2003.09338)]
[[code]()][26] **Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?**
Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, Phillip Isola.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/2003.11539)]
[[code]()][27] **An Ensemble of Epoch-wise Empirical Bayes for Few-shot Learning.**
Yaoyao Liu, Bernt Schiele, Qianru Sun.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/1904.08479)]
[[code]()][28] **Impact of base dataset design on few-shot image classification.**
Othman Sbai, Camille Couprie, Mathieu Aubry.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/2007.08872)]
[[code]()][29] **SEN: A Novel Feature Normalization Dissimilarity Measure for Prototypical Few-Shot Learning Networks.**
Van Nhan Nguyen, Sigurd Løkse, Kristoffer Wickstrøm, Michael Kampffmeyer, Davide Roverso, Robert Jenssen.
In ECCV, 2020.
[[paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123680120.pdf)]
[[code]()][30] **Embedding Propagation: Smoother Manifold for Few-Shot Classification.**
Pau Rodríguez, Issam Laradji, Alexandre Drouin, Alexandre Lacoste.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/2003.04151)]
[[code]()][31] **A Broader Study of Cross-Domain Few-Shot Learning.**
Yunhui Guo, Noel C. Codella, Leonid Karlinsky, James V. Codella, John R. Smith, Kate Saenko, Tajana Rosing, Rogerio Feris.
In ECCV, 2020.
[[paper](https://arxiv.org/abs/1912.07200)]
[[code]()][32] **Attentive Prototype Few-shot Learning with Capsule Network-based Embedding.**
Fangyu Wu, Jeremy S. Smith, Wenjin Lu, Chaoyi Pang, Bailing Zhang.
In ECCV, 2020.
[[paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123730239.pdf)]
[[code]()][33] **Transductive Information Maximization For Few-Shot Learning.**
Malik Boudiaf, Ziko Imtiaz Masud, Jérôme Rony, José Dolz, Pablo Piantanida, Ismail Ben Ayed.
In NeurIPS, 2020.
[[paper](https://arxiv.org/abs/2008.11297)]
[[code]()][34] **Interventional Few-Shot Learning.**
Zhongqi Yue, Hanwang Zhang, Qianru Sun, Xian-Sheng Hua.
In NeurIPS, 2020.
[[paper](https://arxiv.org/abs/2009.13000)]
[[code]()][35] **OOD-MAML: Meta-Learning for Few-Shot Out-of-Distribution Detection and Classification.**
Taewon Jeong, Heeyoung Kim.
In NeurIPS, 2020.
[[paper](https://papers.nips.cc/paper/2020/file/28e209b61a52482a0ae1cb9f5959c792-Paper.pdf)]
[[code]()][36] **Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels.**
Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey.
In NeurIPS, 2020.
[[paper](https://arxiv.org/abs/1910.05199)]
[[code]()][37] **Adversarially Robust Few-Shot Learning: A Meta-Learning Approach.**
Micah Goldblum, Liam Fowl, Tom Goldstein.
In NeurIPS, 2020.
[[paper](https://arxiv.org/abs/1910.00982)]
[[code]()][38] **CrossTransformers: spatially-aware few-shot transfer.**
Carl Doersch, Ankush Gupta, Andrew Zisserman.
In NeurIPS, 2020.
[[paper](https://arxiv.org/abs/2007.11498)]
[[code]()][39] **Few-Shot Learning on Graphs via Super-Classes based on Graph Spectral Measures.**
Jatin Chauhan, Deepak Nathani, Manohar Kaul.
In ICLR, 2020.
[[paper](https://arxiv.org/abs/2002.12815)]
[[code]()][40] **A Theoretical Analysis of the Number of Shots in Few-Shot Learning.**
Tianshi Cao, Marc Law, Sanja Fidler.
In ICLR, 2020.
[[paper](https://arxiv.org/abs/1909.11722)]
[[code]()][41] **A Baseline for Few-Shot Image Classification.**
Guneet S. Dhillon, Pratik Chaudhari, Avinash Ravichandran, Stefano Soatto.
In ICLR, 2020.
[[paper](https://arxiv.org/abs/1909.02729)]
[[code]()][42] **Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples.**
Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, Hugo Larochelle.
In ICLR, 2020.
[[paper](https://arxiv.org/abs/1903.03096)]
[[code]()][43] **Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation.**
Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang, Ming-Hsuan Yang.
In ICLR, 2020.
[[paper](https://arxiv.org/abs/2001.08735)]
[[code]()]

## YEAR 2019
[1] **Finding Task-Relevant Features for Few-Shot Learning by Category Traversal.**
Hongyang Li, David Eigen, Samuel Dodge, Matthew Zeiler, Xiaogang Wang.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1905.11116)]
[[code]()][2] **Edge-labeling Graph Neural Network for Few-shot Learning.**
Jongmin Kim, Taesup Kim, Sungwoong Kim, Chang D. Yoo.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1905.01436)]
[[code]()][3] **Generating Classification Weights with GNN Denoising Autoencoders for Few-Shot Learning.**
Spyros Gidaris, Nikos Komodakis.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1905.01102)]
[[code]()][4] **Meta-Transfer Learning for Few-Shot Learning.**
Qianru Sun, Yaoyao Liu, Tat-Seng Chua, Bernt Schiele.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1812.02391)]
[[code]()][5] **Few-Shot Learning via Saliency-guided Hallucination of Samples.**
Hongguang Zhang, Jing Zhang, Piotr Koniusz.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1904.03472)]
[[code]()][6] **RepMet: Representative-based metric learning for classification and one-shot object detection.**
Leonid Karlinsky, Joseph Shtok, Sivan Harary, Eli Schwartz, Amit Aides, Rogerio Feris, Raja Giryes, Alex M. Bronstein.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1806.04728)]
[[code]()][7] **Spot and Learn: A Maximum-Entropy Patch Sampler for Few-Shot Image Classification.**
Wen-Hsuan Chu, Yu-Jhe Li, Jing-Cheng Chang, Yu-Chiang Frank Wang.
In CVPR, 2019.
[[paper](http://openaccess.thecvf.com/content_CVPR_2019/html/Chu_Spot_and_Learn_A_Maximum-Entropy_Patch_Sampler_for_Few-Shot_Image_CVPR_2019_paper.html)]
[[code]()][8] **LaSO: Label-Set Operations networks for multi-label few-shot learning.**
Amit Alfassy, Leonid Karlinsky, Amit Aides, Joseph Shtok, Sivan Harary, Rogerio Feris, Raja Giryes, Alex M. Bronstein.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1902.09811)]
[[code]()][9] **Few-Shot Learning with Localization in Realistic Settings.**
Davis Wertheimer, Bharath Hariharan.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1904.08502)]
[[code]()][10] **Large-Scale Few-Shot Learning: Knowledge Transfer With Class Hierarchy.**
Aoxue Li, Tiange Luo, Zhiwu Lu, Tao Xiang, Liwei Wang.
In CVPR, 2019.
[[paper](http://openaccess.thecvf.com/content_CVPR_2019/html/Li_Large-Scale_Few-Shot_Learning_Knowledge_Transfer_With_Class_Hierarchy_CVPR_2019_paper.html)]
[[code]()][11] **Revisiting Local Descriptor based Image-to-Class Measure for Few-shot Learning.**
Wenbin Li, Lei Wang, Jinglin Xu, Jing Huo, Yang Gao, Jiebo Luo.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1903.12290)]
[[code]()][12] **Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders.**
Edgar Schönfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1812.01784)]
[[code]()][13] **Dense Classification and Implanting for Few-Shot Learning.**
Yann Lifchitz, Yannis Avrithis, Sylvaine Picard, Andrei Bursuc.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1903.05050)]
[[code]()][14] **Task-Agnostic Meta-Learning for Few-shot Learning.**
Muhammad Abdullah Jamal, Guo-Jun Qi, Mubarak Shah.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1805.07722)]
[[code]()][15] **TAFE-Net: Task-Aware Feature Embeddings for Low Shot Learning.**
Xin Wang, Fisher Yu, Ruth Wang, Trevor Darrell, Joseph E. Gonzalez.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1904.05967)]
[[code]()][16] **Image Deformation Meta-Networks for One-Shot Learning.**
Zitian Chen, Yanwei Fu, Yu-Xiong Wang, Lin Ma, Wei Liu, Martial Hebert.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1905.11641)]
[[code]()][17] **Variational Prototyping-Encoder: One-Shot Learning with Prototypical Images.**
Junsik Kim, Tae-Hyun Oh, Seokju Lee, Fei Pan, In So Kweon.
In CVPR, 2019.
[[paper](https://arxiv.org/abs/1904.08482)]
[[code]()][18] **Few-Shot Learning with Embedded Class Models and Shot-Free Meta Training.**
Avinash Ravichandran, Rahul Bhotika, Stefano Soatto.
In ICCV, 2019.
[[paper](https://arxiv.org/abs/1905.04398)]
[[code]()][19] **Few-Shot Image Recognition With Knowledge Transfer.**
Zhimao Peng, Zechao Li, Junge Zhang, Yan Li, Guo-Jun Qi, Jinhui Tang.
In ICCV, 2019.
[[paper](http://openaccess.thecvf.com/content_ICCV_2019/html/Peng_Few-Shot_Image_Recognition_With_Knowledge_Transfer_ICCV_2019_paper.html)]
[[code]()][20] **Variational Few-Shot Learning.**
Jian Zhang, Chenglong Zhao, Bingbing Ni, Minghao Xu, Xiaokang Yang.
In ICCV, 2019.
[[paper](http://openaccess.thecvf.com/content_ICCV_2019/html/Zhang_Variational_Few-Shot_Learning_ICCV_2019_paper.html)]
[[code]()][21] **Transductive Episodic-Wise Adaptive Metric for Few-Shot Learning.**
Limeng Qiao, Yemin Shi, Jia Li, Yaowei Wang, Tiejun Huang, Yonghong Tian.
In ICCV, 2019.
[[paper](https://arxiv.org/abs/1910.02224)]
[[code]()][22] **Diversity with Cooperation: Ensemble Methods for Few-Shot Classification.**
Nikita Dvornik, Cordelia Schmid, Julien Mairal.
In ICCV, 2019.
[[paper](https://arxiv.org/abs/1903.11341)]
[[code](https://github.com/dvornikita/fewshot_ensemble)][23] **Learning Compositional Representations for Few-Shot Recognition.**
Pavel Tokmakov, Yu-Xiong Wang, Martial Hebert.
In ICCV, 2019.
[[paper](https://arxiv.org/abs/1812.09213)]
[[code]()][24] **PARN: Position-Aware Relation Networks for Few-Shot Learning.**
Ziyang Wu, Yuwei Li, Lihua Guo, Kui Jia.
In ICCV, 2019.
[[paper](https://arxiv.org/abs/1909.04332)]
[[code]()][25] **Boosting Few-Shot Visual Learning with Self-Supervision.**
Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick Pérez, Matthieu Cord.
In ICCV, 2019.
[[paper](https://arxiv.org/abs/1906.05186)]
[[code](https://github.com/valeoai/BF3S)][26] **Collect and Select: Semantic Alignment Metric Learning for Few-Shot Learning.**
Fusheng Hao, Fengxiang He, Jun Cheng, Lei Wang, Jianzhong Cao, Dacheng Tao.
In ICCV, 2019.
[[paper](http://openaccess.thecvf.com/content_ICCV_2019/html/Hao_Collect_and_Select_Semantic_Alignment_Metric_Learning_for_Few-Shot_Learning_ICCV_2019_paper.html)]
[[code](https://github.com/haofusheng/SAML)][27] **Few-Shot Learning with Global Class Representations.**
Tiange Luo, Aoxue Li, Tao Xiang, Weiran Huang, Liwei Wang.
In ICCV, 2019.
[[paper](https://arxiv.org/abs/1908.05257)]
[[code]()][28] **Cross Attention Network for Few-shot Classification.**
Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, Xilin Chen.
In NeurIPS, 2019.
[[paper](https://arxiv.org/abs/1910.07677)]
[[code]()][29] **Adaptive Cross-Modal Few-Shot Learning.**
Chen Xing, Negar Rostamzadeh, Boris N. Oreshkin, Pedro O. Pinheiro.
In NeurIPS, 2019.
[[paper](https://arxiv.org/abs/1902.07104)]
[[code]()][30] **Incremental Few-Shot Learning with Attention Attractor Networks.**
Mengye Ren, Renjie Liao, Ethan Fetaya, Richard S. Zemel.
In NeurIPS, 2019.
[[paper](https://arxiv.org/abs/1810.07218)]
[[code]()][31] **Unsupervised Meta-Learning For Few-Shot Image Classification.**
Siavash Khodadadeh, Ladislau Bölöni, Mubarak Shah.
In NeurIPS, 2019.
[[paper](https://arxiv.org/abs/1811.11819)]
[[code]()][32] **Learning to Self-Train for Semi-Supervised Few-Shot Classification.**
Xinzhe Li, Qianru Sun, Yaoyao Liu, Shibao Zheng, Qin Zhou, Tat-Seng Chua, Bernt Schiele.
In NeurIPS, 2019.
[[paper](https://arxiv.org/abs/1906.00562)]
[[code]()][33] **Adaptive Posterior Learning: few-shot learning with a surprise-based memory module.**
Tiago Ramalho, Marta Garnelo.
In ICLR, 2019.
[[paper](https://arxiv.org/abs/1902.02527)]
[[code]()][34] **A Closer Look at Few-shot Classification.**
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, Jia-Bin Huang.
In ICLR, 2019.
[[paper](https://arxiv.org/abs/1904.04232)]
[[code]()][35] **Learning to Propagate Labels: Transductive Propagation Network for Few-shot Learning.**
Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, Yi Yang.
In ICLR, 2019.
[[paper](https://arxiv.org/abs/1805.10002)]
[[code]()]

## YEAR 2018
[1] **Learning to Compare: Relation Network for Few-Shot Learning.**
Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H.S. Torr, Timothy M. Hospedales.
In CVPR, 2018.
[[paper](https://arxiv.org/abs/1711.06025)]
[[code]()][2] **Dynamic Few-Shot Visual Learning without Forgetting.**
Spyros Gidaris, Nikos Komodakis.
In CVPR, 2018.
[[paper](https://arxiv.org/abs/1804.09458)]
[[code](https://github.com/gidariss/FewShotWithoutForgetting)][3] **Few-Shot Image Recognition by Predicting Parameters from Activations.**
Siyuan Qiao, Chenxi Liu, Wei Shen, Alan Yuille.
In CVPR, 2018.
[[paper](https://arxiv.org/abs/1706.03466)]
[[code]()][4] **CLEAR: Cumulative LEARning for One-Shot One-Class Image Recognition.**
Jedrzej Kozerawski, Matthew Turk.
In CVPR, 2018.
[[paper](http://openaccess.thecvf.com/content_cvpr_2018/html/Kozerawski_CLEAR_Cumulative_LEARning_CVPR_2018_paper.html)]
[[code]()][5] **Memory Matching Networks for One-Shot Image Recognition.**
Qi Cai, Yingwei Pan, Ting Yao, Chenggang Yan, Tao Mei.
In CVPR, 2018.
[[paper](https://arxiv.org/abs/1804.08281)]
[[code]()][6] **Low-shot learning with large-scale diffusion.**
Matthijs Douze, Arthur Szlam, Bharath Hariharan, Hervé Jégou.
In CVPR, 2018.
[[paper](https://arxiv.org/abs/1706.02332)]
[[code]()][7] **Low-Shot Learning with Imprinted Weights.**
Hang Qi, Matthew Brown, David G. Lowe.
In CVPR, 2018.
[[paper](https://arxiv.org/abs/1712.07136)]
[[code]()][8] **Low-Shot Learning from Imaginary Data.**
Yu-Xiong Wang, Ross Girshick, Martial Hebert, Bharath Hariharan.
In CVPR, 2018.
[[paper](https://arxiv.org/abs/1801.05401)]
[[code]()][9] **TADAM: Task dependent adaptive metric for improved few-shot learning.**
Boris N. Oreshkin, Pau Rodriguez, Alexandre Lacoste.
In NeurIPS, 2018.
[[paper](https://arxiv.org/abs/1805.10123)]
[[code]()][10] **MetaGAN: An Adversarial Approach to Few-Shot Learning.**
Ruixiang Zhang, Tong Che, Zoubin Ghahramani, Yoshua Bengio, Yangqiu Song.
In NeurIPS, 2018.
[[paper](https://papers.nips.cc/paper/7504-metagan-an-adversarial-approach-to-few-shot-learning)]
[[code]()][11] **Delta-encoder: an effective sample synthesis method for few-shot object recognition.**
Eli Schwartz, Leonid Karlinsky, Joseph Shtok, Sivan Harary, Mattias Marder, Rogerio Feris, Abhishek Kumar, Raja Giryes, Alex M. Bronstein.
In NeurIPS, 2018.
[[paper](https://arxiv.org/abs/1806.04734)]
[[code]()][12] **Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks.**
Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang.
In NeurIPS, 2018.
[[paper](https://arxiv.org/abs/1810.11730)]
[[code]()][13] **Few-Shot Learning with Graph Neural Networks.**
Victor Garcia, Joan Bruna.
In ICLR, 2018.
[[paper](https://arxiv.org/abs/1711.04043)]
[[code]()][14] **Meta-Learning for Semi-Supervised Few-Shot Classification.**
Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle, Richard S. Zemel.
In ICLR, 2018.
[[paper](https://arxiv.org/abs/1803.00676)]
[[code]()]

## YEAR 2017
[1] **Few-Shot Object Recognition from Machine-Labeled Web Images.**
Zhongwen Xu, Linchao Zhu, Yi Yang.
In CVPR, 2017.
[[paper](https://arxiv.org/abs/1612.06152)]
[[code]()][2] **Multi-Attention Network for One Shot Learning.**
Peng Wang, Lingqiao Liu, Chunhua Shen, Zi Huang, Anton van den Hengel, Heng Tao Shen.
In CVPR, 2017.
[[paper](http://openaccess.thecvf.com/content_cvpr_2017/html/Wang_Multi-Attention_Network_for_CVPR_2017_paper.html)]
[[code]()][3] **Low-shot Visual Recognition by Shrinking and Hallucinating Features.**
Bharath Hariharan, Ross Girshick.
In ICCV, 2017.
[[paper](https://arxiv.org/abs/1606.02819)]
[[code]()][4] **Few-Shot Learning Through an Information Retrieval Lens.**
Eleni Triantafillou, Richard Zemel, Raquel Urtasun.
In NIPS, 2017.
[[paper](https://arxiv.org/abs/1707.02610)]
[[code]()][5] **Prototypical Networks for Few-shot Learning.**
Jake Snell, Kevin Swersky, Richard S. Zemel.
In NIPS, 2017.
[[paper](https://arxiv.org/abs/1703.05175)]
[[code]()][6] **Optimization as a Model for Few-Shot Learning.**
Sachin Ravi, Hugo Larochelle.
In ICLR, 2017.
[[paper](https://openreview.net/forum?id=rJY0-Kcll)]
[[code]()]

## YEAR 2016
[1] **Learning feed-forward one-shot learners.**
Luca Bertinetto, João F. Henriques, Jack Valmadre, Philip H. S. Torr, Andrea Vedaldi.
In NIPS, 2016.
[[paper](https://arxiv.org/abs/1606.05233)]
[[code]()][2] **Matching Networks for One Shot Learning.**
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra.
In NIPS, 2016.
[[paper](https://arxiv.org/abs/1606.04080)]
[[code]()]

## YEAR 2015
[1] **One Shot Learning via Compositions of Meaningful Patches.**
Alex Wong, Alan L. Yuille.
In ICCV, 2015.
[[paper](http://openaccess.thecvf.com/content_iccv_2015/html/Wong_One_Shot_Learning_ICCV_2015_paper.html)]
[[code]()]
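
As a quick orientation for newcomers, below is a minimal sketch of the episodic, metric-based formulation that many of the papers in this list build on, following the idea of Prototypical Networks (entry [5] under YEAR 2017): embed the support set, average the embeddings per class to form prototypes, and classify queries by distance to those prototypes. The toy embedding network, tensor shapes, and names (`embed`, `prototypical_loss`) are illustrative placeholders, not taken from any official implementation.

```python
# Minimal, illustrative sketch of a prototypical-network-style episode
# (after Snell et al., NIPS 2017). The embedding network and all names
# here are placeholders, not an official implementation.
import torch
import torch.nn.functional as F


def prototypical_loss(embed, support_x, support_y, query_x, query_y, n_way):
    """Loss and accuracy for one N-way, K-shot episode.

    support_x: [N*K, ...] support images, support_y: [N*K] labels in 0..N-1
    query_x:   [Q, ...]   query images,   query_y:   [Q]   labels in 0..N-1
    """
    z_support = embed(support_x)                     # [N*K, D]
    z_query = embed(query_x)                         # [Q, D]

    # One prototype per class: the mean of its embedded support examples.
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_way)]
    )                                                # [N, D]

    # Classify queries by (negative) squared Euclidean distance to prototypes.
    dists = torch.cdist(z_query, prototypes) ** 2    # [Q, N]
    log_p = F.log_softmax(-dists, dim=1)
    loss = F.nll_loss(log_p, query_y)
    acc = (log_p.argmax(dim=1) == query_y).float().mean()
    return loss, acc


if __name__ == "__main__":
    # Toy 5-way, 1-shot episode with a random linear embedding and fake data.
    embed = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 84 * 84, 64))
    support_x = torch.randn(5, 3, 84, 84)
    support_y = torch.arange(5)
    query_x = torch.randn(15, 3, 84, 84)
    query_y = torch.randint(0, 5, (15,))
    loss, acc = prototypical_loss(embed, support_x, support_y, query_x, query_y, n_way=5)
    print(f"episode loss: {loss.item():.3f}, accuracy: {acc.item():.3f}")
```

Most metric-based papers above keep this episodic structure and instead vary the embedding network, the distance or relation function, or how prototypes are formed and refined.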