# Best Incremental Learning
Incremental Learning Repository: A collection of documents, papers, source code, and talks for incremental learning.
**Keywords:** Incremental Learning, Continual Learning, Continuous Learning, Lifelong Learning, Catastrophic Forgetting
> **CATALOGUE**
>
> [Quick Start](#quick-start) :sparkles: [Survey](#survey) :sparkles: [Papers by Categories](#papers-by-categories) :sparkles: [Datasets](#datasets) :sparkles: [Tutorial, Workshop, & Talks](#workshop)
>
> [Competitions](#competitions) :sparkles: [Awesome Reference](#awesome-reference) :sparkles: [Full Paper List](#paper-list)

## 1 Quick Start
[Continual Learning | Papers With Code](https://paperswithcode.com/task/continual-learning)
[Incremental Learning | Papers With Code](https://paperswithcode.com/task/incremental-learning)
[Class Incremental Learning from the Past to Present by 思悥 | 知乎 ](https://zhuanlan.zhihu.com/p/490308909?utm_source=wechat_session&utm_medium=social&utm_oi=1162267494193799168&utm_campaign=shareopn) (In Chinese)
[A Little Survey of Incremental Learning | 知乎](https://zhuanlan.zhihu.com/p/301117945) (In Chinese)
**Origin of the Study**
+ Catastrophic Forgetting, Rehearsal and Pseudorehearsal(1995)[[paper]](https://www.tandfonline.com/doi/abs/10.1080/09540099550039318)
+ Catastrophic forgetting in connectionist networks(1999)[[paper]](https://www.sciencedirect.com/science/article/pii/S1364661399012942)
+ Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem(1989)[[paper]](https://www.sciencedirect.com/science/article/abs/pii/S0079742108605368)
**Toolbox & Framework**
+ **[CLHive]** [[code]](https://github.com/naderAsadi/CLHive)
+ **[PTIL]** Prompt-based Incremental Learning Toolbox [[code]](https://github.com/Vision-Intelligence-and-Robots-Group/Prompt-based-CL-Toolbox)
+ **[LAMDA-PILOT]** PILOT: A Pre-Trained Model-Based Continual Learning Toolbox(arXiv 2023)[[paper]](https://arxiv.org/abs/2309.07117)[[code]](https://github.com/sun-hailong/LAMDA-PILOT)
+ **[FACIL]** Class-incremental learning: survey and performance evaluation on image classification(TPAMI 2022)[[paper]](https://arxiv.org/abs/2010.15277)[[code]](https://github.com/mmasana/FACIL)
+ **[Avalanche]** Avalanche: An End-to-End Library for Continual Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021W/CLVision/html/Lomonaco_Avalanche_An_End-to-End_Library_for_Continual_Learning_CVPRW_2021_paper.html)[[code]](https://github.com/ContinualAI/avalanche)
+ **[PyCIL]** PyCIL: A Python Toolbox for Class-Incremental Learning(arXiv 2021)[[paper]](https://arxiv.org/abs/2112.12533)[[code]](https://github.com/G-U-N/PyCIL)
+ **[Mammoth]** An Extendible (General) Continual Learning Framework for Pytorch [[code]](https://github.com/aimagelab/mammoth)
+ **[PyContinual]** An Easy and Extendible Framework for Continual Learning[[code]](https://github.com/ZixuanKe/PyContinual)
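
Most of these toolboxes implement variants of the same basic class-incremental training loop: learn one task at a time while replaying a small memory of earlier examples. As a rough, framework-agnostic orientation (a PyTorch-style sketch; the `tasks` format, batch size, and reservoir buffer size are illustrative placeholders, not the defaults of any listed toolbox):

```python
# Minimal sketch of rehearsal-based class-incremental training (illustrative only).
# `tasks` is assumed to be a list of (inputs, labels) tensor pairs, one per task;
# buffer size, batch size, and optimizer settings are placeholders.
import random
import torch
import torch.nn.functional as F

def train_incrementally(model, tasks, buffer_size=200, epochs=1, lr=0.01):
    buffer, seen = [], 0                      # replay memory of (x, y) pairs
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for xs, ys in tasks:                      # tasks arrive sequentially
        for _ in range(epochs):
            for x, y in zip(xs.split(32), ys.split(32)):
                if buffer:                    # mix replayed samples into the batch
                    bx, by = zip(*random.sample(buffer, min(32, len(buffer))))
                    x = torch.cat([x, torch.stack(bx)])
                    y = torch.cat([y, torch.stack(by)])
                loss = F.cross_entropy(model(x), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        for x, y in zip(xs, ys):              # reservoir sampling keeps the buffer bounded
            seen += 1
            if len(buffer) < buffer_size:
                buffer.append((x, y))
            elif random.random() < buffer_size / seen:
                buffer[random.randrange(buffer_size)] = (x, y)
    return model
```

Toolboxes such as PyCIL, FACIL, and Avalanche wrap this kind of loop together with benchmark splitting, exemplar management, and standard evaluation metrics; the sketch only shows the shape of the problem.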
**Books**
+ Lifelong Machine Learning [[Link]](https://www.cs.uic.edu/~liub/lifelong-machine-learning.html)
## 2 Survey
### 2.1 Surveys
+ A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning(arXiv 2023)[[github]](https://github.com/EnnengYang/Awesome-Forgetting-in-Deep-Learning)
+ Deep Class-Incremental Learning: A Survey(arXiv 2023)[[paper]](https://arxiv.org/pdf/2302.03648.pdf)[[code]](https://github.com/zhoudw-zdw/CIL_Survey/)
+ A Comprehensive Survey of Continual Learning: Theory, Method and Application(arxiv 2023)[[paper]](https://arxiv.org/abs/2302.00487)
+ **[FACIL]** Class-incremental learning: survey and performance evaluation on image classification(TPAMI 2022)[[paper]](https://arxiv.org/abs/2010.15277)[[code]](https://github.com/mmasana/FACIL)
+ Online Continual Learning in Image Classification: An Empirical Survey (Neurocomputing 2021)[[paper]](https://arxiv.org/abs/2101.10423)
+ A continual learning survey: Defying forgetting in classification tasks (TPAMI 2021) [[paper]](https://ieeexplore.ieee.org/abstract/document/9349197)
+ Rehearsal revealed: The limits and merits of revisiting samples in continual learning (ICCV 2021)[[paper]](https://arxiv.org/abs/2104.07446)
+ Continual Lifelong Learning in Natural Language Processing: A Survey (COLING 2020) [[paper]](https://www.aclweb.org/anthology/2020.coling-main.574/)
+ A Comprehensive Study of Class Incremental Learning Algorithms for Visual Tasks (Neural Networks 2020) [[paper]](https://arxiv.org/abs/2011.01844)
+ Embracing Change: Continual Learning in Deep Neural Networks(Trends in Cognitive Sciences 2020)[[paper]](https://www.sciencedirect.com/science/article/pii/S1364661320302199)
+ Towards Continual Reinforcement Learning: A Review and Perspectives(arXiv 2020)[[paper]](https://arxiv.org/abs/2012.13490)
+ Class-incremental learning: survey and performance evaluation(arXiv 2020) [[paper]](https://arxiv.org/abs/2010.15277)
+ A comprehensive, application-oriented study of catastrophic forgetting in DNNs (ICLR 2019) [[paper]](https://openreview.net/forum?id=BkloRs0qK7)
+ Three scenarios for continual learning (arXiv 2019) [[paper]](https://arxiv.org/abs/1904.07734v1)
+ Continual lifelong learning with neural networks: A review(arXiv 2019)[[paper]](https://arxiv.org/abs/1802.07569)
+ Research Progress and Performance Evaluation of Class-Incremental Learning (Acta Automatica Sinica 2023)[[paper]](http://www.aas.net.cn/cn/article/doi/10.16383/j.aas.c220588) (In Chinese)
### 2.2 Analysis & Study
+ How Well Do Unsupervised Learning Algorithms Model Human Real-time and Life-long Learning?(NeurIPS 2022)[[paper]](https://openreview.net/forum?id=c0l2YolqD2T)
+ **[WPTP]** A Theoretical Study on Solving Continual Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2211.02633)[[code]](https://github.com/k-gyuhak/WPTP)
+ The Challenges of Continuous Self-Supervised Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2203.12710)
+ Continual learning: a feature extraction formalization, an efficient algorithm, and fundamental obstructions(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2203.14383)
+ A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.13917)[[code]](https://github.com/YaqianZhang/RepeatedAugmentedRehearsal)
+ Exploring Example Influence in Continual Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.12241)
+ Biological underpinnings for lifelong learning machines(Nat. Mach. Intell. 2022)[[paper]](https://www.nature.com/articles/s42256-022-00452-0)
+ Probing Representation Forgetting in Supervised and Unsupervised Continual Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Davari_Probing_Representation_Forgetting_in_Supervised_and_Unsupervised_Continual_Learning_CVPR_2022_paper.html)[[code]](https://github.com/rezazzr/Probing-Representation-Forgetting)
+ **[OpenLORIS-Object]** Towards Lifelong Object Recognition: A Dataset and Benchmark(Pattern Recognit 2022)[[paper]](https://www.sciencedirect.com/science/article/abs/pii/S0031320322003004)
+ Learngene: From Open-World to Your Learning Task (AAAI 2022) [[paper]](https://arxiv.org/pdf/2106.06788.pdf)
+ Continual Normalization: Rethinking Batch Normalization for Online Continual Learning (ICLR 2022) [[paper]](https://openreview.net/forum?id=vwLLQ-HwqhZ)
+ **[CLEVA-Compass]** CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability (ICLR 2022) [[paper]](https://openreview.net/pdf?id=rHMaBYbkkRJ)[[code]](https://github.com/ml-research/CLEVA-Compass)
+ Learning curves for continual learning in neural networks: Self-knowledge transfer and forgetting (ICLR 2022) [[paper]](https://openreview.net/pdf?id=tFgdrQbbaa)
+ **[CKL]** Towards Continual Knowledge Learning of Language Models (ICLR 2022) [[paper]](https://openreview.net/pdf?id=vfsRB5MImo9)
+ Pretrained Language Model in Continual Learning: A Comparative Study (ICLR 2022) [[paper]](https://openreview.net/pdf?id=figzpGMrdD)
+ Effect of scale on catastrophic forgetting in neural networks (ICLR 2022) [[paper]](https://openreview.net/pdf?id=GhVS8_yPeEa)
+ LifeLonger: A Benchmark for Continual Disease Classification(arXiv 2022)[[paper]](https://arxiv.org/abs/2204.05737)
+ **[CDDB]** A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials(arXiv 2022)[[paper]](https://arxiv.org/abs/2205.05467)
+ **[BN Tricks]** Diagnosing Batch Normalization in Class Incremental Learning(arXiv 2022)[[paper]](https://arxiv.org/abs/2202.08025)
+ Architecture Matters in Continual Learning(arXiv 2022)[[paper]](https://arxiv.org/abs/2202.00275)
+ Learning where to learn: Gradient sparsity in meta and continual learning(NeurIPS 2021) [[paper]](https://proceedings.neurips.cc/paper/2021/hash/2a10665525774fa2501c2c8c4985ce61-Abstract.html)
+ Continuous Coordination As a Realistic Scenario for Lifelong Learning(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/nekoei21a.html)
+ Understanding the Role of Training Regimes in Continual Learning (NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/518a38cc9a0173d0b2dc088166981cf8-Abstract.html)
+ Optimal Continual Learning has Perfect Memory and is NP-HARD (ICML 2020)[[paper]](https://proceedings.mlr.press/v119/knoblauch20a.html)
### 2.3 Settings
+ **[FSCIL]** Few-shot Class Incremental Learning [[Link]](https://github.com/xyutao/fscil)
+ **[DCIL]** Decentralized Class Incremental Learning [[paper]](https://ieeexplore.ieee.org/document/9932643)[[Setting]](https://github.com/Vision-Intelligence-and-Robots-Group/DCIL)
## 3 Papers by Categories
**Tip:** use Ctrl+F to match an abbreviation to its paper, or browse the [paper list](#paper-list) below.
### 3.1 From an Algorithm Perspective
| Year | Network Structure | Rehearsal |
| :--: | :----------------------------------------------------------: | :----------------------------------------------------------: |
|2024|**SEED**(ICLR 2024)[[paper]](https://openreview.net/forum?id=sSyytcewxe)
**CAMA**(ICLR 2024)[[paper]](https://openreview.net/forum?id=7M0EzjugaN)[[code]](https://github.com/snumprlab/cl-alfred)
**SFR**(ICLR 2024)[[paper]](https://openreview.net/attachment?id=2dhxxIKhqz&name=pdf)[[code]](https://aaltoml.github.io/sfr/)
**HLOP**(ICLR 2024)[[paper]](https://openreview.net/forum?id=MeB86edZ1P)
**TPL**(ICLR 2024)[[paper]](https://openreview.net/forum?id=8QfK9Dq4q0)[[code]](https://github.com/linhaowei1/TPL)
**EFC**(ICLR 2024)[[paper]](https://openreview.net/forum?id=7D9X2cFnt1)
**PICLE**(ICLR 2024)[[paper]](https://openreview.net/forum?id=MVe2dnWPCu)
**OVOR**(ICLR 2024)[[paper]](https://openreview.net/forum?id=FbuyDzZTPt)[[code]](https://github.com/jpmorganchase/ovor)
**PEC**(ICLR 2024)[[paper]](https://openreview.net/forum?id=DJZDgMOLXQ)[[code]](https://github.com/michalzajac-ml/pec)
**refresh learning**(ICLR 2024)[[paper]](https://openreview.net/forum?id=BE5aK0ETbp)
**POCON**(WACV 2024)[[paper]](https://arxiv.org/pdf/2309.06086.pdf)
**CLTA**(WACV 2024)[[paper]](https://arxiv.org/abs/2308.09544)[[code]](https://github.com/fszatkowski/cl-teacher-adaptation)
**FG-KSR**(AAAI 2024)[[paper]](https://arxiv.org/abs/2312.12722)[[code]](https://github.com/scok30/vit-cil) | **MOSE**(CVPR 2024)[[paper]](https://arxiv.org/abs/2404.00417)[[code]](https://github.com/AnAppleCore/MOSE)
**AISEOCL**(Pattern Recognition 2024)[[paper]](https://www.sciencedirect.com/science/article/abs/pii/S0031320323009354)
**AF-FCL**(ICLR 2024)[[paper]](https://openreview.net/forum?id=ShQrnAsbPI)[[code]](https://anonymous.4open.science/r/AF-FCL-7D65)
**DietCL**(ICLR 2024)[[paper]](https://openreview.net/forum?id=Xvfz8NHmCj)
**BGS**(ICLR 2024)[[paper]](https://openreview.net/forum?id=3Y7r6xueJJ)
**DMU**(WACV 2024)[[paper]](https://openaccess.thecvf.com/content/WACV2024/papers/Raghavan_Online_Class-Incremental_Learning_for_Real-World_Food_Image_Classification_WACV_2024_paper.pdf)[[code]](https://gitlab.com/viper-purdue/OCIL-real-world-food-image-classification) |
| 2023 |**A-Prompts** (arXiv 2023)[[paper]](https://arxiv.org/abs/2303.13898)
**ESN**(AAAI 2023)[[paper]](https://arxiv.org/abs/2211.15969)[[code]](https://github.com/iamwangyabin/ESN)
**RevisitingCIL**(arXiv 2023)[[paper]](https://arxiv.org/abs/2303.07338)[[code]](https://github.com/zhoudw-zdw/RevisitingCIL)
**LwP**(ICLR 2023)[[paper]](https://openreview.net/forum?id=gfPUokHsW-)
**SDMLP**(ICLR 2023)[[paper]](https://openreview.net/forum?id=JknGeelZJpHP)
**SaLinA**(ICLR 2023)[[paper]](https://openreview.net/forum?id=ZloanUtG4a)[[code]](https://github.com/facebookresearch/salina/tree/main/salina_cl)
**BEEF**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=iP77_axu0h3)[[code]](https://github.com/G-U-N/ICLR23-BEEF)
**WaRP**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=kPLzOfPfA2l)
**OBC**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=18XzeuYZh_)
**NC-FSCIL**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=y5W8tpojhtJ)[[code]](https://github.com/NeuralCollapseApplications/FSCIL)
**iVoro**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=zJXg_Wmob03)
**DAS**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=m_GDIItaI3o)
**Progressive Prompts**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=UJTgQBc91_)
**SDP**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=qco4ekz2Epm)[[code]](https://github.com/yonseivnl/sdp)
**iLDR**(ICLR 2023)[[paper]](https://arxiv.org/pdf/2202.05411.pdf)
**SoftNet-FSCIL**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=z57WK5lGeHd)[[code]](https://github.com/ihaeyong/SoftNet-FSCIL)
**PAR**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.05288.pdf)
**PETAL**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2212.09713.pdf)[[code]](https://github.com/dhanajitb/petal)
**SAVC**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.00426.pdf)[[code]](https://github.com/zysong0113/SAVC)
**CODA-Prompt**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2211.13218.pdf)[[code]](https://github.com/GT-RIPL/CODA-Prompt) | **FeTrIL**(WACV 2023)[[paper]](https://arxiv.org/abs/2211.13131)[[code]](https://github.com/GregoirePetit/FeTrIL)
**ESMER**(ICLR 2023)[[paper]](https://openreview.net/forum?id=zlbci7019Z3)[[code]](https://github.com/NeurAI-Lab/ESMER)
**MEMO**(ICLR 2023)[[paper]](https://arxiv.org/abs/2205.13218)[[code]](https://github.com/wangkiw/ICLR23-MEMO)
**CUDOS**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=ih0uFRFhaZZ)
**ACGAN**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=cRxYWKiTan)[[code]](https://github.com/daiqing98/FedCIL)
**TAMiL**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=-M0TNnyWFT5)[[code]](https://github.com/NeurAI-Lab/TAMiL)
**RSOI**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.10177.pdf)[[code]](https://github.com/feifeiobama/InfluenceCL)
**TBBN**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2201.12559.pdf)
**AMSS**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.05015.pdf)
**DGCL**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.03931.pdf)
**PCR**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.04408.pdf)[[code]](https://github.com/FelixHuiweiLin/PCR)
**FMWISS**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2302.14250.pdf)
**CL-DETR**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.03110.pdf)[[code]](https://github.com/yaoyao-liu/CL-DETR)
**PIVOT**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2212.04842.pdf)
**CIM-CIL**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2303.14042.pdf)[[code]](https://github.com/xfflzl/CIM-CIL)
**DNE**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2303.12696.pdf) |
| 2022 | **RD-IOD**(ACM Trans 2022)[[paper]](https://dl.acm.org/doi/abs/10.1145/3472393)
**NCM**(arXiv 2022)[[paper]](https://arxiv.org/abs/2202.05491)
**IPP**(arXiv 2022)[[paper]](https://arxiv.org/abs/2204.03410)
**Incremental-DETR**(arXiv 2022)[[paper]](https://arxiv.org/abs/2205.04042)
**ELI**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Joseph_Energy-Based_Latent_Aligner_for_Incremental_Learning_CVPR_2022_paper.html)
**CASSLE**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Fini_Self-Supervised_Models_Are_Continual_Learners_CVPR_2022_paper.html)[[code]](https://github.com/DonkeyShot21/cassle)
**iFS-RCNN**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Nguyen_iFS-RCNN_An_Incremental_Few-Shot_Instance_Segmenter_CVPR_2022_paper.html)
**WILSON**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Cermelli_Incremental_Learning_in_Semantic_Segmentation_From_Image_Labels_CVPR_2022_paper.html)[[code]](https://github.com/fcdl94/WILSON)
**Connector**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Lin_Towards_Better_Plasticity-Stability_Trade-Off_in_Incremental_Learning_A_Simple_Linear_CVPR_2022_paper.html)[[code]](https://github.com/lingl1024/Connector)
**PAD**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.13167)
**ERD**(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.02136)[[code]](https://github.com/Hi-FT/ERD)
**AFC**(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.00895)[[code]](https://github.com/kminsoo/AFC)
**FACT**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.06953)[[code]](https://github.com/zhoudw-zdw/CVPR22-Fact)
**L2P**(CVPR 2022)[[paper]](https://arxiv.org/abs/2112.08654)[[code]](https://github.com/google-research/l2p)
**MEAT**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.11684)[[code]](https://github.com/zju-vipa/MEAT-TIL)
**RCIL**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.05402)[[code]](https://github.com/zhangchbin/RCIL)
**ZITS**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.00867)[[code]](https://github.com/DQiaole/ZITS_inpainting)
**MTPSL**(CVPR 2022)[[paper]](https://arxiv.org/abs/2111.14893)[[code]](https://github.com/VICO-UoE/MTPSL)
**MMA**(CVPR-Workshop 2022)[[paper]](https://arxiv.org/abs/2204.08766)
**CoSCL**(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136860249.pdf)[[code]](https://github.com/lywang3081/CoSCL)
**AdNS**(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.12061)
**ProCA**(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.10856)[[code]](https://github.com/Hongbin98/ProCA)
**R-DFCIL**(ECCV 2022)[[paper]](https://arxiv.org/abs/2203.13104)[[code]](https://github.com/jianzhangcs/R-DFCIL)
**S3C**(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850427.pdf)[[code]](https://github.com/JAYATEJAK/S3C)
**H^2^**(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136710518.pdf)
**DualPrompt**(ECCV 2022)[[paper]](https://arxiv.org/abs/2204.04799)
**ALICE**(ECCV 2022)[[paper]](https://arxiv.org/pdf/2208.00147.pdf)[[code]](https://github.com/CanPeng123/FSCIL_ALICE)
**RU-TIL**(ECCV 2022)[[paper]](https://arxiv.org/pdf/2207.09074.pdf)[[code]](https://github.com/CSIPlab/task-increment-rank-update)
**FOSTER**(ECCV 2022)[[paper]](https://arxiv.org/abs/2204.04662)
**SSR**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=boJy41J-tnQ)[[code]](https://github.com/feyzaakyurek/subspace-reg)
**RGO**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=7YDLgf9_zgm)
**TRGP**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=iEvAf8i6JjO)
**AGCN**(ICME 2022)[[paper]](https://arxiv.org/abs/2203.05534)[[code]](https://github.com/Kaile-Du/AGCN)
**WSN**(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/kang22b/kang22b.pdf)[[code]](https://github.com/ihaeyong/WSN)
**NISPA**(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/gurbuz22a/gurbuz22a.pdf)[[code]](https://github.com/BurakGurbuz97/NISPA)
**S-FSVI**(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/rudner22a/rudner22a.pdf)[[code]](https://github.com/timrudner/S-FSVI)
**CUBER**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2211.00789)
**ADA**(NeurIPS 2022)[[paper]](https://www.amazon.science/publications/memory-efficient-continual-learning-with-transformers)
**CLOM**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.04524)
**S-Prompt**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2207.12819)
**ALIFE**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06816)
**PMT**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2112.07066)
**STCISS**(TNNLS 2022)[[paper]](https://arxiv.org/abs/2012.03362)
**DSN**(TPAMI 2022)[[paper]](https://ieeexplore.ieee.org/document/9779071)
**MgSvF**(TPAMI 2022)[[paper]](https://arxiv.org/abs/2006.15524)
**TransIL**(WACV 2022)[[paper]](https://arxiv.org/pdf/2110.08421.pdf) | **NER-FSCIL**(ACL 2022)[[paper]](https://aclanthology.org/2022.acl-long.43/)
**LIMIT**(arXiv 2022)[[paper]](https://arxiv.org/abs/2203.17030)
**EMP**(arXiv 2022)[[paper]](https://arxiv.org/abs/2204.07275)
**SPTM**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wu_Class-Incremental_Learning_With_Strong_Pre-Trained_Models_CVPR_2022_paper.html)
**BER**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Toldo_Bring_Evanescent_Representations_to_Life_in_Lifelong_Class_Incremental_Learning_CVPR_2022_paper.html)
**Sylph**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Yin_Sylph_A_Hypernetwork_Framework_for_Incremental_Few-Shot_Object_Detection_CVPR_2022_paper.html)
**MetaFSCIL**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chi_MetaFSCIL_A_Meta-Learning_Approach_for_Few-Shot_Class_Incremental_Learning_CVPR_2022_paper.html)
**FCIL**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Dong_Federated_Class-Incremental_Learning_CVPR_2022_paper.html)[[code]](https://github.com/conditionWang/FCIL)
**FILIT**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Few-Shot_Incremental_Learning_for_Label-to-Image_Translation_CVPR_2022_paper.html)
**PuriDivER**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Bang_Online_Continual_Learning_on_a_Contaminated_Data_Stream_With_Blurry_CVPR_2022_paper.html)[[code]](https://github.com/clovaai/puridiver)
**SNCL**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Yan_Learning_Bayesian_Sparse_Networks_With_Full_Experience_Replay_for_Continual_CVPR_2022_paper.html)
**DVC**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Gu_Not_Just_Selection_but_Exploration_Online_Class-Incremental_Continual_Learning_via_CVPR_2022_paper.html)[[code]](https://github.com/YananGu/DVC)
**CVS**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wan_Continual_Learning_for_Visual_Search_With_Backward_Consistent_Feature_Embedding_CVPR_2022_paper.html)
**CPL**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Continual_Predictive_Learning_From_Videos_CVPR_2022_paper.html)
**GCR**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Tiwari_GCR_Gradient_Coreset_Based_Replay_Buffer_Selection_for_Continual_Learning_CVPR_2022_paper.html)
**LVT**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Continual_Learning_With_Lifelong_Vision_Transformer_CVPR_2022_paper.html)
**vCLIMB**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Villa_vCLIMB_A_Novel_Video_Class_Incremental_Learning_Benchmark_CVPR_2022_paper.html)[[code]](https://vclimb.netlify.app/)
**Learn-to-Imagine**(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.08932)[[code]](https://github.com/TOM-tym/Learn-to-Imagine)
**DCR**(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.04078)
**DIY-FSCIL**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.14843)
**C-FSCIL**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.16588)[[code]](https://github.com/IBM/constrained-FSCIL)
**SSRE**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.06359)
**CwD**(CVPR 2022)[[paper]](https://arxiv.org/abs/2112.04731)[[code]](https://github.com/Yujun-Shi/CwD)
**MSL**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.03970)
**DyTox**(CVPR 2022)[[paper]](https://arxiv.org/abs/2111.11326)[[code]](https://github.com/arthurdouillard/dytox)
**X-DER**(ECCV 2022)[[paper]](https://arxiv.org/abs/2201.00766)
**class-iNCD**(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.08605)[[code]](https://github.com/OatmealLiu/class-iNCD)
**ARI**(ECCV 2022)[[paper]](https://arxiv.org/abs/2208.12967)[[code]](https://github.com/bhrqw/ARI)
**Long-Tailed-CIL**(ECCV 2022)[[paper]](https://arxiv.org/abs/2210.00266)[[code]](https://github.com/xialeiliu/Long-Tailed-CIL)
**LIRF**(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.08224)
**DSDM**(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850721.pdf)[[code]](https://github.com/Julien-pour/Dynamic-Sparse-Distributed-Memory)
**CVT**(ECCV 2022)[[paper]](https://arxiv.org/pdf/2207.13516.pdf)
**TwF**(ECCV 2022)[[paper]](https://arxiv.org/abs/2206.00388)[[code]](https://github.com/mbosc/twf)
**CSCCT**(ECCV 2022)[[paper]](https://cscct.github.io)[[code]](https://github.com/ashok-arjun/CSCCT)
**DLCFT**(ECCV 2022)[[paper]](https://arxiv.org/abs/2208.08112)
**ERDR**(ECCV2022)[[paper]](https://arxiv.org/pdf/2207.11213.pdf)
**NCDwF**(ECCV2022)[[paper]](https://arxiv.org/abs/2207.10659)
**CoMPS**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=PVJ6j87gOHz)
**i-fuzzy**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=nrGGfMbY_qK)[[code]](https://github.com/naver-ai/i-Blurry)
**CLS-ER**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=uxxFrDwrE7Y)[[code]](https://github.com/NeurAI-Lab/CLS-ER)
**MRDC**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=a7H7OucbWaU)[[code]](https://github.com/andrearosasco/DistilledReplay)
**OCS**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=f9D-5WNG4Nv)
**InfoRS**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=IpctgL7khPp)
**ER-AML**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=N8MaByOzUfb)[[code]](https://github.com/pclucas14/aml)
**FAS**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=metRpM4Zrcb)
**LUMP**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=9Hrka5PA7LW)
**CF-IL**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=RxplU3vmBx)[[code]](https://github.com/MozhganPourKeshavarz/Cost-Free-Incremental-Learning)
**LFPT5**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=HCRVf71PMF)[[code]](https://github.com/qcwthu/Lifelong-Fewshot-Language-Learning)
**Model Zoo**(ICLR 2022)[[paper]](https://arxiv.org/abs/2106.03027)
**OCM**(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/guo22g/guo22g.pdf)[[code]](https://github.com/gydpku/OCM)
**DRO**(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/wang22v/wang22v.pdf)[[code]](https://github.com/joey-wang123/DRO-Task-free)
**EAK**(ICPR 2022)[[paper]](https://arxiv.org/abs/2206.02577)
**RAR**(NeurIPS 2022)[[paper]](https://openreview.net/forum?id=XEoih0EwCwL)
**LiDER**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06443)
**SparCL**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.09476)
**ClonEx-SAC**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.13900)
**ODDL**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06579)
**CSSL**(PRL 2022)[[paper]](https://arxiv.org/abs/2108.06552)
**MBP**(TNNLS 2022)[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9705128)
**CandVot**(WACV 2022)[[paper]](https://openaccess.thecvf.com/content/WACV2022/papers/He_Online_Continual_Learning_via_Candidates_Voting_WACV_2022_paper.pdf)
**FlashCards**(WACV 2022)[[paper]](https://openaccess.thecvf.com/content/WACV2022/papers/Gopalakrishnan_Knowledge_Capture_and_Replay_for_Continual_Learning_WACV_2022_paper.pdf) |
| 2021 | **Meta-DR**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/html/Volpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.html)
**continual cross-modal retrieval**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021W/CLVision/html/Wang_Continual_Learning_in_Cross-Modal_Retrieval_CVPRW_2021_paper.html)
**DER**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Yan_DER_Dynamically_Expandable_Representation_for_Class_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/Rhyssiyan/DER-ClassIL.pytorch)
**EFT**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Verma_Efficient_Feature_Transformations_for_Discriminative_and_Generative_Continual_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/vkverma01/EFT)
**PASS**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhu_Prototype_Augmentation_and_Self-Supervision_for_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/Impression2805/CVPR21_PASS)
**GeoDL**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Simon_On_Learning_the_Geodesic_Path_for_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/chrysts/geodesic_continual_learning)
**IL-ReduNet**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wu_Incremental_Learning_via_Rate_Reduction_CVPR_2021_paper.pdf)
**PIGWM**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhou_Image_De-Raining_via_Continual_Learning_CVPR_2021_paper.pdf)
**BLIP**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Shi_Continual_Learning_via_Bit-Level_Information_Preserving_CVPR_2021_paper.pdf)[[code]](https://github.com/Yujun-Shi/BLIP)
**Adam-NSCL**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Training_Networks_in_Null_Space_of_Feature_Covariance_for_Continual_CVPR_2021_paper.pdf)[[code]](https://github.com/ShipengWang/Adam-NSCL)
**PLOP**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Douillard_PLOP_Learning_Without_Forgetting_for_Continual_Semantic_Segmentation_CVPR_2021_paper.pdf)[[code]](https://github.com/arthurdouillard/CVPR2021_PLOP)
**SDR**(CVPR 2021)[[paper]](https://lttm.dei.unipd.it/paper_data/SDR/)[[code]](https://github.com/LTTM/SDR)
**SKD**(CVPR 2021)[[paper]](https://arxiv.org/abs/2103.04059)
**Always Be Dreaming**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Smith_Always_Be_Dreaming_A_New_Approach_for_Data-Free_Class-Incremental_Learning_ICCV_2021_paper.html)[[code]](https://github.com/GT-RIPL/AlwaysBeDreaming-DFCIL)
**SPB**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Wu_Striking_a_Balance_Between_Stability_and_Plasticity_for_Class-Incremental_Learning_ICCV_2021_paper.pdf)
**Else-Net**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Li_Else-Net_Elastic_Semantic_Network_for_Continual_Action_Recognition_From_Skeleton_ICCV_2021_paper.pdf)
**LCwoF-Framework**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Kukleva_Generalized_and_Incremental_Few-Shot_Learning_by_Explicit_Learning_and_Calibration_ICCV_2021_paper.pdf)
**AFEC**(NeurIPS 2021)[[paper]](https://openreview.net/pdf/72a18fad6fce88ef0286e9c7582229cf1c8d9f93.pdf)[[code]](https://github.com/lywang3081/AFEC)
**F2M**(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=ALvt7nXa2q)[[code]](https://github.com/moukamisama/F2M)
**NCL**(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=W9250bXDgpK)[[code]](https://github.com/tachukao/ncl)
**BCL**(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=u1XV9BPAB9)[[code]](https://github.com/krm9c/Balanced-Continual-Learning)
**Posterior Meta-Replay**(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/hash/761b42cfff120aac30045f7a110d0256-Abstract.html)
**MARK**(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=hHTctAv9Lvh)[[code]](https://github.com/JuliousHurtado/meta-training-setup)
**Co-occur**(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/hash/ffc58105bf6f8a91aba0fa2d99e6f106-Abstract.html)[[code]](https://github.com/dongnana777/bridging-non-co-occurrence)
**LINC**(AAAI 2021)[[paper]](https://www.cs.uic.edu/~liub/publications/LINC_paper_AAAI_2021_camera_ready.pdf)
**CLNER**(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-7791.MonaikulN.pdf)
**CLIS**(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-2989.ZhengE.pdf)
**PCL**(AAAI 2021)[[paper]](https://www.cs.uic.edu/~liub/publications/AAAI2021_PCL.pdf)
**MAS3**(AAAI 2021)[[paper]](https://arxiv.org/abs/2009.12518)
**FSLL**(AAAI 2021)[[paper]](https://arxiv.org/pdf/2103.00991.pdf)
**VAR-GPs**(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/kapoor21b.html)
**BSA**(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/kumar21a.html)
**GPM**(ICLR 2021)[[paper]](https://arxiv.org/abs/2103.09762)[[code]](https://github.com/sahagobinda/GPM) | **TMN**(TNNLS 2021)[[paper]](https://ieeexplore.ieee.org/document/9540230)
**RKD**(AAAI 2021)[[paper]](https://ojs.aaai.org/index.php/AAAI/article/view/16213)
**AANets**(CVPR 2021)[[paper]](https://class-il.mpi-inf.mpg.de/)[[code]](https://github.com/yaoyao-liu/class-incremental-learning)
**ORDisCo**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_ORDisCo_Effective_and_Efficient_Usage_of_Incremental_Unlabeled_Data_for_CVPR_2021_paper.pdf)
**DDE**(CVPR 2021)[[paper]](https://arxiv.org/abs/2103.01737)[[code]](https://github.com/JoyHuYY1412/DDE_CIL)
**IIRC**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Abdelsalam_IIRC_Incremental_Implicitly-Refined_Classification_CVPR_2021_paper.pdf)
**Hyper-LifelongGAN**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhai_Hyper-LifelongGAN_Scalable_Lifelong_Learning_for_Image_Conditioned_Generation_CVPR_2021_paper.pdf)
**CEC**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Few-Shot_Incremental_Learning_With_Continually_Evolved_Classifiers_CVPR_2021_paper.pdf)
**iMTFA**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Ganea_Incremental_Few-Shot_Instance_Segmentation_CVPR_2021_paper.pdf)
**RM**(CVPR 2021)[[paper]](https://ieeexplore.ieee.org/document/9577808)
**LOGD**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Tang_Layerwise_Optimization_by_Gradient_Decomposition_for_Continual_Learning_CVPR_2021_paper.pdf)
**SPPR**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/html/Zhu_Self-Promoted_Prototype_Refinement_for_Few-Shot_Class-Incremental_Learning_CVPR_2021_paper.html)
**LReID**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Pu_Lifelong_Person_Re-Identification_via_Adaptive_Knowledge_Accumulation_CVPR_2021_paper.pdf)[[code]](https://github.com/TPCD/LifelongReID)
**SS-IL**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Ahn_SS-IL_Separated_Softmax_for_Incremental_Learning_ICCV_2021_paper.pdf)
**TCD**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Park_Class-Incremental_Learning_for_Action_Recognition_in_Videos_ICCV_2021_paper.pdf)
**CLOC**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Cai_Online_Continual_Learning_With_Natural_Distribution_Shifts_An_Empirical_Study_ICCV_2021_paper.html)[[code]](https://github.com/IntelLabs/continuallearning)
**CoPE**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/De_Lange_Continual_Prototype_Evolution_Learning_Online_From_Non-Stationary_Data_Streams_ICCV_2021_paper.pdf)[[code]](https://github.com/Mattdl/ContinualPrototypeEvolution)
**Co2L**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Cha_Co2L_Contrastive_Continual_Learning_ICCV_2021_paper.pdf)[[code]](https://github.com/chaht01/co2l)
**SPR**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Kim_Continual_Learning_on_Noisy_Data_Streams_via_Self-Purified_Replay_ICCV_2021_paper.pdf)
**NACL**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Rostami_Detection_and_Continual_Learning_of_Novel_Face_Presentation_Attacks_ICCV_2021_paper.html)
**CL-HSCNet**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Wang_Continual_Learning_for_Image-Based_Camera_Localization_ICCV_2021_paper.html)[[code]](https://github.com/AaltoVision/CL_HSCNet)
**RECALL**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Maracani_RECALL_Replay-Based_Continual_Learning_in_Semantic_Segmentation_ICCV_2021_paper.html)[[code]](https://github.com/lttm/recall)
**VAE**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Cheraghian_Synthesized_Feature_Based_Few-Shot_Class-Incremental_Learning_on_a_Mixture_of_ICCV_2021_paper.pdf)
**ERT**(ICPR 2021)[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9412614)[[code]](https://github.com/hastings24/rethinking_er)
**KCL**(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/derakhshani21a.html)[[code]](https://github.com/mmderakhshani/KCL)
**MLIOD**(TPAMI 2021)[[paper]](https://arxiv.org/abs/2003.08798)[[code]](https://github.com/JosephKJ/iOD)
**BNS**(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/ac64504cc249b070772848642cffe6ff-Abstract.html)
**FS-DGPM**(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=q1eCa1kMfDd)
**SSUL**(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/file/5a9542c773018268fc6271f7afeea969-Paper.pdf)
**DualNet**(NeurIPS 2021)[[paper]](https://openreview.net/pdf?id=eQ7Kh-QeWnO)
**classAug**(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/file/77ee3bc58ce560b86c2b59363281e914-Paper.pdf)
**GMED**(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/f45a1078feb35de77d26b3f7a52ef502-Abstract.html)
**BooVAE**(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/952285b9b7e7a1be5aa7849f32ffff05-Abstract.html)[[code]](https://github.com/AKuzina/BooVAE)
**GeMCL**(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/b4e267d84075f66ebd967d95331fcc03-Abstract.html)
**RMM**(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/file/1cbcaa5abbb6b70f378a3a03d0c26386-Paper.pdf)[[code]](https://github.com/aminbana/gemcl)
**LSF**(IJCAI 2021)[[paper]](https://www.ijcai.org/proceedings/2021/0137.pdf)
**ASER**(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-9988.ShimD.pdf)[[code]](https://github.com/RaptorMai/online-continual-learning)
**CML**(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-4847.WuT.pdf)[[code]](https://github.com/wutong8023/AAAI-CML)
**HAL**(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-9700.ChaudhryA.pdf)
**MDMT**(AAAI 2021)[[paper]](https://arxiv.org/abs/2012.07236)
**AU**(WACV 2021)[[paper]](https://openaccess.thecvf.com/content/WACV2021/html/Kurmi_Do_Not_Forget_to_Attend_to_Uncertainty_While_Mitigating_Catastrophic_WACV_2021_paper.html)
**IDBR**(NAACL 2021)[[paper]](https://www.aclweb.org/anthology/2021.naacl-main.218.pdf)[[code]](https://github.com/GT-SALT/IDBR)
**COIL**(ACM MM 2021)[[paper]](https://arxiv.org/pdf/2107.12654.pdf)
|
| 2020 | **CWR\***(CVPR 2020)[[paper]](https://arxiv.org/abs/1907.03799v3)
**MiB**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Cermelli_Modeling_the_Background_for_Incremental_Learning_in_Semantic_Segmentation_CVPR_2020_paper.pdf)[[code]](https://github.com/fcdl94/MiB)
**K-FAC**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Lee_Continual_Learning_With_Extended_Kronecker-Factored_Approximate_Curvature_CVPR_2020_paper.html)
**SDC**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Semantic_Drift_Compensation_for_Class-Incremental_Learning_CVPR_2020_paper.html)[[code]](https://github.com/yulu0724/SDC-IL)
**NLTF**(AAAI 2020) [[paper]](https://ojs.aaai.org//index.php/AAAI/article/view/6617)
**CLCL**(ICLR 2020)[[paper]](https://openreview.net/forum?id=rklnDgHtDS)[[code]](https://github.com/yli1/CLCL)
**APD**(ICLR 2020)[[paper]](https://arxiv.org/pdf/1902.09432.pdf)
**HYPERCL**(ICLR 2020)[[paper]](https://openreview.net/forum?id=SJgwNerKvB)[[code]](https://github.com/chrhenning/hypercl)
**CN-DPM**(ICLR 2020)[[paper]](https://arxiv.org/pdf/2001.00689.pdf)
**UCB**(ICLR 2020)[[paper]](https://openreview.net/forum?id=HklUCCVKDB)[[code]](https://github.com/SaynaEbrahimi/UCB)
**CLAW**(ICLR 2020)[[paper]](https://openreview.net/forum?id=Hklso24Kwr)
**CAT**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/d7488039246a405baf6a7cbc3613a56f-Paper.pdf)[[code]](https://github.com/ZixuanKe/CAT)
**AGS-CL**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/258be18e31c8188555c2ff05b4d542c3-Abstract.html)
**MERLIN**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/a5585a4d4b12277fee5cad0880611bc6-Paper.pdf)[[code]](https://github.com/mattriemer/mer)
**OSAKA**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/c0a271bc0ecb776a094786474322cb82-Paper.pdf)[[code]](https://github.com/ElementAI/osaka)
**RATT**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/c2964caac096f26db222cb325aa267cb-Paper.pdf)
**CCLL**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/b3b43aeeacb258365cc69cdaf42a68af-Abstract.html)
**CIDA**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58601-0_4)
**GraphSAIL**(CIKM 2020)[[paper]](https://dl.acm.org/doi/abs/10.1145/3340531.3412754)
**ANML**(ECAI 2020)[[paper]](https://arxiv.org/abs/2002.09571)[[code]](https://github.com/uvm-neurobotics-lab/ANML)
**ICWR**(BMVC 2020)[[paper]](https://arxiv.org/pdf/2008.13710.pdf)
**DAM**(TPAMI 2020)[[paper]](https://openreview.net/pdf?id=7YDLgf9_zgm)
**OGD**(PMLR 2020)[[paper]](http://proceedings.mlr.press/v108/farajtabar20a.html)
**MC-OCL**(ECCV2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58604-1_43)[[code]](https://github.com/DonkeyShot21/batch-level-distillation)
**RCM**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58565-5_41)[[code]](https://github.com/menelaoskanakis/RCM)
**OvA-INN**(IJCNN 2020)[[paper]](https://ieeexplore.ieee.org/abstract/document/9206766)
**XtarNet**(ICML 2020)[[paper]](http://proceedings.mlr.press/v119/yoon20b/yoon20b.pdf)[[code]](https://github.com/EdwinKim3069/XtarNet)
**DMC**(WACV 2020)[[paper]](https://openaccess.thecvf.com/content_WACV_2020/html/Zhang_Class-incremental_Learning_via_Deep_Model_Consolidation_WACV_2020_paper.html)
| **iTAML**(CVPR 2020)[[paper]](https://arxiv.org/pdf/2003.11652.pdf)[[code]](https://github.com/brjathu/iTAML)
**FSCIL**(CVPR 2020)[[paper]](https://arxiv.org/pdf/2004.10956.pdf)[[code]](https://github.com/xyutao/fscil)
**GFR**(CVPR 2020)[[paper]](https://ieeexplore.ieee.org/document/9150851/)[[code]](https://github.com/xialeiliu/GFR-IL)
**OSIL**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/He_Incremental_Learning_in_Online_Scenario_CVPR_2020_paper.html)
**ONCE**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Perez-Rua_Incremental_Few-Shot_Object_Detection_CVPR_2020_paper.html)
**WA**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhao_Maintaining_Discrimination_and_Fairness_in_Class_Incremental_Learning_CVPR_2020_paper.pdf)[[code]](https://github.com/hugoycj/Incremental-Learning-with-Weight-Aligning)
**CGATE**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Abati_Conditional_Channel_Gated_Networks_for_Task-Aware_Continual_Learning_CVPR_2020_paper.html)[[code]](https://github.com/lit-leo/cgate)
**Mnemonics Training**(CVPR 2020)[[paper]](https://class-il.mpi-inf.mpg.de/mnemonics-training/)[[code]](https://github.com/yaoyao-liu/class-incremental-learning)
**MEGA**(NeurIPS 2020)[[paper]](https://par.nsf.gov/servlets/purl/10233158)
**GAN Memory**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/bf201d5407a6509fa536afc4b380577e-Abstract.html)[[code]](https://github.com/MiaoyunZhao/GANmemory_LifelongLearning)
**Coreset**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/aa2a77371374094fe9e0bc1de3f94ed9-Paper.pdf)
**FROMP**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/2f3bbb9730639e9ea48f309d9a79ff01-Paper.pdf)[[code]](https://github.com/team-approx-bayes/fromp)
**DER**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/b704ea2c39778f07c617f6b7ce480e9e-Paper.pdf)[[code]](https://github.com/aimagelab/mammoth)
**InstAParam**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/ca4b5656b7e193e6bb9064c672ac8dce-Paper.pdf)
**BOCL**(AAAI 2020)[[paper]](https://ojs.aaai.org//index.php/AAAI/article/view/6060)
**REMIND**(ECCV 2020)[[paper]](https://arxiv.org/pdf/1910.02509v3)[[code]](https://github.com/tyler-hayes/REMIND)
**ACL**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58621-8_23)[[code]](https://github.com/facebookresearch/Adversarial-Continual-Learning)
**TPCIL**(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123640256.pdf)
**GDumb**(ECCV 2020)[[paper]](https://www.robots.ox.ac.uk/~tvg/publications/2020/gdumb.pdf)[[code]](https://github.com/drimpossible/GDumb)
**PRS**(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123580409.pdf)
**PODNet**(ECCV 2020)[[paper]](https://arxiv.org/abs/2004.13513)[[code]](https://github.com/arthurdouillard/incremental_learning.pytorch)
**FA**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58517-4_41)
**L-VAEGAN**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58565-5_46)
**Piggyback GAN**(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123660392.pdf)[[code]](https://github.com/arunmallya/piggyback)
**IDA**(ECCV 2020)[[paper]](https://arxiv.org/abs/2002.04162)
**RCM**(ECCV 2020)[[paper]](https://arxiv.org/abs/2007.12540)
**LAMOL**(ICLR 2020)[[paper]](https://openreview.net/forum?id=Skgxcn4YDS)[[code]](https://github.com/chho33/LAMOL)
**FRCL**(ICLR 2020)[[paper]](https://arxiv.org/abs/1901.11356)[[code]](https://github.com/AndreevP/FRCL)
**GRS**(ICLR 2020)[[paper]](https://openreview.net/forum?id=SJlsFpVtDB)
**Brain-inspired replay**(Nature Communications 2020)[[paper]](https://www.nature.com/articles/s41467-020-17866-2)[[code]](https://github.com/GMvandeVen/brain-inspired-replay)
**CLIFER**(FG 2020)[[paper]](https://ieeexplore.ieee.org/document/9320226)
**ScaIL**(WACV 2020)[[paper]](https://openaccess.thecvf.com/content_WACV_2020/html/Belouadah_ScaIL_Classifier_Weights_Scaling_for_Class_Incremental_Learning_WACV_2020_paper.html)[[code]](https://github.com/EdenBelouadah/class-incremental-learning)
**ARPER**(EMNLP 2020)[[paper]](https://arxiv.org/abs/2010.00910)
**DnR**(COLING 2020)[[paper]](https://www.aclweb.org/anthology/2020.coling-main.318.pdf)
**ADER**(RecSys 2020)[[paper]](https://arxiv.org/abs/2007.12000)[[code]](https://github.com/DoubleMuL/ADER)
**MUC**(ECCV 2020)[[paper]](http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123710698.pdf)[[code]](https://github.com/liuyudut/MUC)
|
| 2019 | **LwM**(CVPR 2019)[[paper]](https://ieeexplore.ieee.org/document/8953962)
**CPG**(NeurIPS 2019)[[paper]](https://arxiv.org/pdf/1910.06562v1.pdf)[[code]](https://github.com/ivclab/CPG)
**UCL**(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/file/2c3ddf4bf13852db711dd1901fb517fa-Paper.pdf)
**OML**(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/hash/f4dd765c12f2ef67f98f3558c282a9cd-Abstract.html)[[code]](https://github.com/Khurramjaved96/mrcl)
**ALASSO**(ICCV 2019)[[paper]](https://openaccess.thecvf.com/content_ICCV_2019/papers/Park_Continual_Learning_by_Asymmetric_Loss_Approximation_With_Single-Side_Overestimation_ICCV_2019_paper.pdf)
**Learn-to-Grow**(PMLR 2019)[[paper]](http://proceedings.mlr.press/v97/li19m/li19m.pdf)
**OWM**(Nature Machine Intelligence 2019)[[paper]](https://www.nature.com/articles/s42256-019-0080-x#Sec2)[[code]](https://github.com/beijixiong3510/OWM)
| **LUCIR**(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/html/Hou_Learning_a_Unified_Classifier_Incrementally_via_Rebalancing_CVPR_2019_paper.html)[[code]](https://github.com/hshustc/CVPR19_Incremental_Learning)
**TFCL**(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Aljundi_Task-Free_Continual_Learning_CVPR_2019_paper.pdf)
**GD**(CVPR 2019)[[paper]](https://ieeexplore.ieee.org/document/9010368)[[code]](https://github.com/kibok90/iccv2019-inc)
**DGM**(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Ostapenko_Learning_to_Remember_A_Synaptic_Plasticity_Driven_Framework_for_Continual_CVPR_2019_paper.pdf)
**BiC**(CVPR 2019)[[paper]](https://arxiv.org/abs/1905.13260)[[code]](https://github.com/wuyuebupt/LargeScaleIncrementalLearning)
**MER**(ICLR 2019)[[paper]](https://openreview.net/pdf?id=B1gTShAct7)[[code]](https://github.com/mattriemer/mer)
**PGMA**(ICLR 2019)[[paper]](https://openreview.net/forum?id=ryGvcoA5YX)
**A-GEM**(ICLR 2019)[[paper]](https://arxiv.org/pdf/1812.00420.pdf)[[code]](https://github.com/facebookresearch/agem)
**IL2M**(ICCV 2019)[[paper]](https://ieeexplore.ieee.org/document/9009019)
**ILCAN**(ICCV 2019)[[paper]](https://ieeexplore.ieee.org/document/9009031)
**Lifelong GAN**(ICCV 2019)[[paper]](https://openaccess.thecvf.com/content_ICCV_2019/html/Zhai_Lifelong_GAN_Continual_Learning_for_Conditional_Image_Generation_ICCV_2019_paper.html)
**GSS**(NIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/file/e562cd9c0768d5464b64cf61da7fc6bb-Paper.pdf)
**ER**(NIPS 2019)[[paper]](https://arxiv.org/abs/1811.11682)
**MIR**(NIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/hash/15825aee15eb335cc13f9b559f166ee8-Abstract.html)[[code]](https://github.com/optimass/Maximally_Interfered_Retrieval)
**RPS-Net**(NIPS 2019)[[paper]](https://www.researchgate.net/profile/Salman-Khan-62/publication/333617650_Random_Path_Selection_for_Incremental_Learning/links/5d04905ea6fdcc39f11b7355/Random-Path-Selection-for-Incremental-Learning.pdf)
**CLEER**(IJCAI 2019)[[paper]](https://arxiv.org/abs/1903.04566)
**PAE**(ICMR 2019)[[paper]](https://dl.acm.org/doi/10.1145/3323873.3325053)[[code]](https://github.com/ivclab/PAE)
|
| 2018 | **PackNet**(CVPR 2018)[[paper]](https://openaccess.thecvf.com/content_cvpr_2018/html/Mallya_PackNet_Adding_Multiple_CVPR_2018_paper.html)[[code]](https://github.com/arunmallya/packnet)
**OLA**(NIPS 2018)[[paper]](https://proceedings.neurips.cc/paper/2018/hash/f31b20466ae89669f9741e047487eb37-Abstract.html)
**RCL**(NIPS 2018)[[paper]](http://papers.nips.cc/paper/7369-reinforced-continual-learning.pdf)[[code]](https://github.com/xujinfan/Reinforced-Continual-Learning)
**MARL**(ICLR 2018)[[paper]](https://openreview.net/forum?id=ry8dvM-R-)
**DEN**(ICLR 2018)[[paper]](https://openreview.net/forum?id=Sk7KsfW0-)[[code]](https://github.com/jaehong31/DEN)
**P&C**(ICML 2018)[[paper]](https://arxiv.org/abs/1805.06370)
**Piggyback**(ECCV 2018)[[paper]](https://openaccess.thecvf.com/content_ECCV_2018/papers/Arun_Mallya_Piggyback_Adapting_a_ECCV_2018_paper.pdf)[[code]](https://github.com/arunmallya/piggyback)
**RWalk**(ECCV 2018)[[paper]](https://openaccess.thecvf.com/content_ECCV_2018/html/Arslan_Chaudhry__Riemannian_Walk_ECCV_2018_paper.html)
**MAS**(ECCV 2018)[[paper]](https://arxiv.org/pdf/1711.09601.pdf)[[code]](https://github.com/rahafaljundi/MAS-Memory-Aware-Synapses)
**R-EWC**(ICPR 2018)[[paper]](https://ieeexplore.ieee.org/abstract/document/8545895)[[code]](https://github.com/xialeiliu/RotateNetworks)
**HAT**(PMLR 2018)[[paper]](http://proceedings.mlr.press/v80/serra18a.html)[[code]](https://github.com/joansj/hat)
| **MeRGANs**(NIPS 2018)[[paper]](https://arxiv.org/abs/1809.02058)[[code]](https://github.com/WuChenshen/MeRGAN)
**EEIL**(ECCV 2018)[[paper]](https://arxiv.org/abs/1807.09536)[[code]](https://github.com/fmcp/EndToEndIncrementalLearning)
**Adaptation by Distillation**(ECCV 2018)[[paper]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Saihui_Hou_Progressive_Lifelong_Learning_ECCV_2018_paper.pdf)
**ESGR**(BMVC 2018)[[paper]](http://bmvc2018.org/contents/papers/0325.pdf)[[code]](https://github.com/TonyPod/ESGR)
**VCL**(ICLR 2018)[[paper]](https://arxiv.org/pdf/1710.10628.pdf#page=13&zoom=100,110,890)
**FearNet**(ICLR 2018)[[paper]](https://openreview.net/forum?id=SJ1Xmf-Rb)
**DGDMN**(ICLR 2018)[[paper]](https://openreview.net/forum?id=BkVsWbbAW)
|
| 2017 | **Expert Gate**(CVPR 2017)[[paper]](https://openaccess.thecvf.com/content_cvpr_2017/papers/Aljundi_Expert_Gate_Lifelong_CVPR_2017_paper.pdf)[[code]](https://github.com/wannabeOG/ExpertNet-Pytorch)
**ILOD**(ICCV 2017)[[paper]](https://openaccess.thecvf.com/content_ICCV_2017/papers/Shmelkov_Incremental_Learning_of_ICCV_2017_paper.pdf)[[code]](https://github.com/kshmelkov/incremental_detectors)
**EBLL**(ICCV2017)[[paper]](https://arxiv.org/abs/1704.01920)
**IMM**(NIPS 2017)[[paper]](https://arxiv.org/abs/1703.08475)[[code]](https://github.com/btjhjeon/IMM_tensorflow)
**SI**(ICML 2017)[[paper]](http://proceedings.mlr.press/v70/zenke17a/zenke17a.pdf)[[code]](https://github.com/ganguli-lab/pathint)
**EWC**(PNAS 2017)[[paper]](https://arxiv.org/abs/1612.00796)[[code]](https://github.com/stokesj/EWC)
| **iCARL**(CVPR 2017)[[paper]](https://arxiv.org/abs/1611.07725)[[code]](https://github.com/srebuffi/iCaRL)
**GEM**(NIPS 2017)[[paper]](https://proceedings.neurips.cc/paper/2017/hash/f87522788a2be2d171666752f97ddebb-Abstract.html)[[code]](https://github.com/facebookresearch/GradientEpisodicMemory)
**DGR**(NIPS 2017)[[paper]](https://proceedings.neurips.cc/paper/2017/file/0efbe98067c6c73dba1250d2beaa81f9-Paper.pdf)[[code]](https://github.com/kuc2477/pytorch-deep-generative-replay)
|
| 2016 | **LwF**(ECCV 2016)[[paper]](https://link.springer.com/chapter/10.1007/978-3-319-46493-0_37)[[code]](https://github.com/lizhitwo/LearningWithoutForgetting)
| |

### 3.2 From a Data Deployment Perspective
**Data decentralized incremental learning**
+ **[DCID]** Deep Class Incremental Learning from Decentralized Data(TNNLS 2022)[[paper]](https://ieeexplore.ieee.org/document/9932643)[[code]](https://github.com/Vision-Intelligence-and-Robots-Group/DCIL)
+ **[GLFC]** Federated Class-Incremental Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.11473)[[code]](https://github.com/conditionWang/FCIL)
+ **[FedWeIT]** Federated Continual Learning with Weighted Inter-client Transfer(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/yoon21b.html)[[code]](https://github.com/wyjeong/FedWeIT)

**Data centralized incremental learning**
All of the other studies listed above, i.e., those not in the decentralized category.
## 4 Datasets
| Dataset | Description |
| :----------------------------------------------------------- | :----------------------------------------------------------- |
| [ImageNet](https://image-net.org) | 1.28 million training images and 50,000 validation images covering 1,000 categories; images are usually cropped to 224×224 color images. |
| [TinyImageNet](https://www.kaggle.com/c/tiny-imagenet) | 200 categories of 64×64 color images; each class has 500 training images, 50 validation images, and 50 test images. |
| [MiniImageNet](https://lyy.mpi-inf.mpg.de/mtl/download/Lmzjm9tX.html) | A subset of ImageNet used for few-shot learning: 60,000 color images of size 84×84 from 100 classes, with 600 examples per class. |
| [SubImageNet](https://openaccess.thecvf.com/content_CVPR_2019/html/Hou_Learning_a_Unified_Classifier_Incrementally_via_Rebalancing_CVPR_2019_paper.html) | A randomly sampled 100-class subset of ImageNet, containing approximately 130,000 training images and 5,000 test images. |
| [CIFAR-10/100](https://www.cs.toronto.edu/~kriz/cifar.html) | Both datasets contain 60,000 natural RGB images of size 32×32, split into 50,000 training and 10,000 test images. CIFAR-10 has 10 classes, while CIFAR-100 has 100 classes. |
| [CORe50](https://vlomonaco.github.io/core50/) | This dataset consists of 164,866 128×128 RGB-D images: 11 sessions × 50 objects × (around 300) frames per session. <br> [Github](https://github.com/vlomonaco/core50) <br> [CORe50: a New Dataset and Benchmark for Continuous Object Recognition](http://proceedings.mlr.press/v78/lomonaco17a.html) |
| [OpenLORIS-Object](https://www.sciencedirect.com/science/article/pii/S0031320322003004?via%3Dihub) | The first real-world dataset for robotic vision with independent and quantifiable environmental factors. Compared with other lifelong learning datasets, it contains 186 instances, 63 categories, and 2,138,050 images. |
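Most class-incremental benchmarks are built by slicing one of the datasets above into a sequence of tasks with disjoint class sets. Below is a minimal sketch for CIFAR-100 with torchvision; the task count and class order are arbitrary illustrative choices, not a prescribed protocol.

```python
import numpy as np
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

NUM_TASKS = 10  # 100 classes -> 10 tasks of 10 new classes each

train_set = datasets.CIFAR100(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())
targets = np.array(train_set.targets)
class_order = np.random.default_rng(0).permutation(100)
class_groups = np.array_split(class_order, NUM_TASKS)

task_loaders = []
for group in class_groups:
    idx = np.where(np.isin(targets, group))[0]
    task_loaders.append(DataLoader(Subset(train_set, idx.tolist()),
                                   batch_size=128, shuffle=True))
# task_loaders[t] yields only the classes introduced at task t;
# a class-incremental learner visits them strictly in order.
```

## 5 Lecture, Tutorial, Workshop, & Talks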
**Lifelong Learning | Hung-yi Lee (李宏毅)**
Life-long Learning: [[ppt]](https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/life_v2.pptx) [[pdf]](https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/life_v2.pdf)
Catastrophic Forgetting [[Chinese]](https://youtu.be/rWF9sg5w6Zk) [[English]](https://youtu.be/yAX8Ydfek_I)
Mitigating Catastrophic Forgetting [[Chinese]](https://youtu.be/Y9Jay_vxOsM) [[English]](https://youtu.be/-2r4cqDP4BY)
Meta Learning: Learn to Learn [[Chinese]](https://www.youtube.com/watch?v=xoastiYx9JU)
**Continual AI Lecture**
[Open World Lifelong Learning | A Continual Machine Learning Course](http://owll-lab.com/teaching/cl_lecture/)
[Prompting-based Continual Learning | Continual AI](https://www.youtube.com/watch?v=19bylhGhfAw)
**VALSE Webinar** (In Chinese)
[20211215【Learning Never Ends: Deep Continual Learning】Xiaopeng Hong (洪晓鹏): Memory Topology-Preserving Deep Incremental Learning Methods](https://www.bilibili.com/video/BV1Qi4y197uf?spm_id_from=333.999.0.0)
[20211215【Learning Never Ends: Deep Continual Learning】Xi Li (李玺): Theory and Methods of Continual Learning Based on Deep Neural Networks](https://www.bilibili.com/video/BV1XR4y1W7mr?spm_id_from=333.999.0.0)
**ACM MULTIMEDIA**
[ACM MM 2021 Few-shot Learning for Multi-Modality Tasks](https://ingrid725.github.io/ACM-Multimedia-2021/)
**CVPR Workshop**
[CVPR 2022 Workshop on Continual Learning in Computer Vision](https://sites.google.com/view/clvision2022/overview)
[CVPR2021 Workshop on Continual Learning in Computer Vision](https://sites.google.com/view/clvision2021)
[CVPR2020 Workshop on Continual Learning in Computer Vision](https://sites.google.com/view/clvision2020/overview)
[CVPR2017 Continuous and Open-Set Learning Workshop](https://erodner.github.io/continuouslearningcvpr2017/)

**ICML Tutorial/Workshop**
[ICML 2021 Workshop on Theory and Foundation of Continual Learning](https://sites.google.com/view/cl-theory-icml2021)
[ICML 2021 Tutorial on Continual Learning with Deep Architectures](https://sites.google.com/view/cltutorial-icml2021)
[ICML2020 Workshop on Continual Learning](https://sites.google.com/view/cl-icml/)
**NeurIPS Workshop**
[NeurIPS2021 4th Robot Learning Workshop: Self-Supervised and Lifelong Learning](http://www.robot-learning.ml/2021/)
[NeurIPS2018 Continual learning Workshop](https://sites.google.com/view/continual2018/home)
[NeurIPS2016 Continual Learning and Deep Networks Workshop](https://sites.google.com/site/cldlnips2016/)
**IJCAI Workshop**
[IJCAI 2021 International Workshop on Continual Semi-Supervised Learning](https://sites.google.com/view/sscl-workshop-ijcai-2021/overview)
**ContinualAI wiki**
[A Non-profit Research Organization and Open Community on Continual Learning for AI](https://www.continualai.org/)
**CoLLAs**
[Conference on Lifelong Learning Agents - CoLLAs 2022](https://lifelong-ml.cc/)
## 6 Competitions
**Archived**
[3rd CLVISION CVPR Workshop Challenge 2022](https://sites.google.com/view/clvision2022/challenge)
[IJCAI 2021 - International Workshop on Continual Semi-Supervised Learning](https://sites.google.com/view/sscl-workshop-ijcai-2021/)
[2nd CLVISION CVPR Workshop Challenge 2021](https://eval.ai/web/challenges/challenge-page/829/overview)
[1st CLVISION CVPR Workshop Challenge 2020](https://sites.google.com/view/clvision2020/challenge)
## 7 Awesome Reference
[1] https://github.com/xialeiliu/Awesome-Incremental-Learning
## 8 Contact Us
Should there be any concerns on this page, please don't hesitate to let us know via [hongxiaopeng@ieee.org](mailto:hongxiaopeng@ieee.org) or [xl330@126.com](mailto:xl330@126.com).
# Full Paper List
## arXiv (corrections are welcome if any of these papers have since been accepted)
+ Continual Instruction Tuning for Large Multimodal Models [[paper]](https://arxiv.org/abs/2311.16206)
+ Continual Adversarial Defense [[paper]](https://arxiv.org/abs/2312.09481)[[code]](https://github.com/cc13qq/CAD)
+ Class-Prototype Conditional Diffusion Model for Continual Learning with Generative Replay [[paper]](https://arxiv.org/abs/2312.06710)[[code]](https://github.com/dnkhanh45/cpdm)
+ Class Incremental Learning for Adversarial Robustness [[paper]](https://arxiv.org/pdf/2312.03289.pdf)
+ KOPPA: Improving Prompt-based Continual Learning with Key-Query Orthogonal Projection and Prototype-based One-Versus-All [[paper]](https://arxiv.org/abs/2311.15414)
+ Prompt Gradient Projection for Continual Learning [[paper]](https://openreview.net/forum?id=EH2O3h7sBI)

## 2024
+ **[MOSE]** Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation(CVPR 2024) [[paper]](https://arxiv.org/abs/2404.00417)[[code]](https://github.com/AnAppleCore/MOSE)
+ **[AISEOCL]** Adaptive instance similarity embedding for online continual learning (Pattern Recognition 2024) [[paper]](https://www.sciencedirect.com/science/article/abs/pii/S0031320323009354#:~:text=We%20propose%20a%20novel%20adaptive,the%20same%20class%20or%20not)
+ **[SEED]** Divide and not forget: Ensemble of selectively trained experts in Continual Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=sSyytcewxe)
+ **[CAMA]** Online Continual Learning for Interactive Instruction Following Agents(ICLR 2024) [[paper]](https://openreview.net/forum?id=7M0EzjugaN)[[code]](https://github.com/snumprlab/cl-alfred)
+ **[SFR]** Function-space Parameterization of Neural Networks for Sequential Learning(ICLR 2024) [[paper]](https://openreview.net/attachment?id=2dhxxIKhqz&name=pdf)[[code]](https://aaltoml.github.io/sfr/)
+ **[HLOP]** Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks(ICLR 2024) [[paper]](https://openreview.net/forum?id=MeB86edZ1P)
+ **[TPL]** Class Incremental Learning via Likelihood Ratio Based Task Prediction(ICLR 2024) [[paper]](https://openreview.net/forum?id=8QfK9Dq4q0)[[code]](https://github.com/linhaowei1/TPL)
+ **[AF-FCL]** Accurate Forgetting for Heterogeneous Federated Continual Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=ShQrnAsbPI)[[code]](https://anonymous.4open.science/r/AF-FCL-7D65)
+ **[EFC]** Elastic Feature Consolidation For Cold Start Exemplar-Free Incremental Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=7D9X2cFnt1)
+ **[DietCL]** Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation(ICLR 2024) [[paper]](https://openreview.net/forum?id=Xvfz8NHmCj)
+ **[PICLE]** A Probabilistic Framework for Modular Continual Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=MVe2dnWPCu)
+ **[OVOR]** OVOR: OnePrompt with Virtual Outlier Regularization for Rehearsal-Free Class-Incremental Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=FbuyDzZTPt)[[code]](https://github.com/jpmorganchase/ovor)
+ **[BGS]** Continual Learning in the Presence of Spurious Correlations: Analyses and a Simple Baseline(ICLR 2024) [[paper]](https://openreview.net/forum?id=3Y7r6xueJJ)
+ **[PEC]** Prediction Error-based Classification for Class-Incremental Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=DJZDgMOLXQ)[[code]](https://github.com/michalzajac-ml/pec)
+ **[refresh learning]** A Unified and General Framework for Continual Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=BE5aK0ETbp)
+ **[CPPO]** CPPO: Continual Learning for Reinforcement Learning with Human Feedback(ICLR 2024) [[paper]](https://openreview.net/forum?id=86zAUE80pP)
+ **[JARe]** Scalable Language Model with Generalized Continual Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=mz8owj4DXu)
+ **[POCON]** Plasticity-Optimized Complementary Networks for Unsupervised Continual Learning(WACV 2024) [[paper]](https://arxiv.org/pdf/2309.06086.pdf)
+ **[DMU]** Online Class-Incremental Learning For Real-World Food Image Classification(WACV 2024) [[paper]](https://openaccess.thecvf.com/content/WACV2024/papers/Raghavan_Online_Class-Incremental_Learning_for_Real-World_Food_Image_Classification_WACV_2024_paper.pdf)[[code]](https://gitlab.com/viper-purdue/OCIL-real-world-food-image-classification)
+ **[CLTA]** Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning(WACV 2024) [[paper]](https://arxiv.org/abs/2308.09544)[[code]](https://github.com/fszatkowski/cl-teacher-adaptation)
+ **[FG-KSR]** Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning(AAAI 2024) [[paper]](https://arxiv.org/abs/2312.12722)[[code]](https://github.com/scok30/vit-cil)

## 2023
+ **[PRD]** Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning(ICML 2023) [[paper]](https://arxiv.org/pdf/2303.14771.pdf)
+ A Unified Continual Learning Framework with General Parameter-Efficient Tuning(ICCV 2023) [[paper]](https://arxiv.org/abs/2303.10070)[[code]](https://github.com/gqk/LAE?tab=readme-ov-file)
+ Cross-Modal Alternating Learning with Task-Aware Representations for Continual Learning(TMM 2023) [[paper]](https://ieeexplore.ieee.org/abstract/document/10347466)[[code]](https://csgaobb.github.io/)
+ Semantic Knowledge Guided Class-Incremental Learning(TCSVT 2023) [[paper]](https://ieeexplore.ieee.org/document/10083158)
+ Non-Exemplar Class-Incremental Learning via Adaptive Old Class Reconstruction(ACM MM 2023) [[paper]](https://dl.acm.org/doi/10.1145/3581783.3611926)[[code]](https://github.com/Mysteriousplayer/POLO-NECIL)
+ **[HiDe-Prompt]** Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality(NeurIPS 2023)[[paper]](https://arxiv.org/abs/2310.07234)[[code]](https://github.com/thu-ml/HiDe-Prompt)
+ TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion(NeurIPS 2023)[[paper]](https://arxiv.org/abs/2310.08217)
+ **[AdaB2N]** Overcoming Recency Bias of Normalization Statistics in Continual Learning: Balance and Adaptation(NeurIPS 2023)[[paper]](https://arxiv.org/abs/2310.08855)[[code]](https://github.com/lvyilin/AdaB2N)
+ Online Class Incremental Learning on Stochastic Blurry Task Boundary via Mask and Visual Prompt Tuning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Moon_Online_Class_Incremental_Learning_on_Stochastic_Blurry_Task_Boundary_via_ICCV_2023_paper.pdf)
+ Decouple Before Interact: Multi-Modal Prompt Learning for Continual Visual Question Answering(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Qian_Decouple_Before_Interact_Multi-Modal_Prompt_Learning_for_Continual_Visual_Question_ICCV_2023_paper.pdf)
+ Prototype Reminiscence and Augmented Asymmetric Knowledge Aggregation for Non-Exemplar Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Shi_Prototype_Reminiscence_and_Augmented_Asymmetric_Knowledge_Aggregation_for_Non-Exemplar_Class-Incremental_ICCV_2023_paper.pdf)
+ When Prompt-based Incremental Learning Does Not Meet Strong Pretraining(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Tang_When_Prompt-based_Incremental_Learning_Does_Not_Meet_Strong_Pretraining_ICCV_2023_paper.pdf)
+ Class-incremental Continual Learning for Instance Segmentation with Image-level Weak Supervision(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Hsieh_Class-incremental_Continual_Learning_for_Instance_Segmentation_with_Image-level_Weak_Supervision_ICCV_2023_paper.pdf)
+ Dynamic Residual Classifier for Class Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Chen_Dynamic_Residual_Classifier_for_Class_Incremental_Learning_ICCV_2023_paper.pdf)
+ Audio-Visual Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Pian_Audio-Visual_Class-Incremental_Learning_ICCV_2023_paper.pdf)
+ First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Panos_First_Session_Adaptation_A_Strong_Replay-Free_Baseline_for_Class-Incremental_Learning_ICCV_2023_paper.pdf)
+ Self-Organizing Pathway Expansion for Non-Exemplar Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhu_Self-Organizing_Pathway_Expansion_for_Non-Exemplar_Class-Incremental_Learning_ICCV_2023_paper.pdf)
+ Heterogeneous Forgetting Compensation for Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Dong_Heterogeneous_Forgetting_Compensation_for_Class-Incremental_Learning_ICCV_2023_paper.pdf)
+ Masked Autoencoders are Efficient Class Incremental Learners(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhai_Masked_Autoencoders_are_Efficient_Class_Incremental_Learners_ICCV_2023_paper.pdf)
+ Knowledge Restore and Transfer for Multi-Label Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Dong_Knowledge_Restore_and_Transfer_for_Multi-Label_Class-Incremental_Learning_ICCV_2023_paper.pdf)
+ Space-time Prompting for Video Class-incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Pei_Space-time_Prompting_for_Video_Class-incremental_Learning_ICCV_2023_paper.pdf)
+ CLNeRF: Continual Learning Meets NeRF(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Cai_CLNeRF_Continual_Learning_Meets_NeRF_ICCV_2023_paper.pdf)
+ Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right?(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Al_Kader_Hammoud_Rapid_Adaptation_in_Online_Continual_Learning_Are_We_Evaluating_It_ICCV_2023_paper.pdf)
+ Exemplar-Free Continual Transformer with Convolutions(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Roy_Exemplar-Free_Continual_Transformer_with_Convolutions_ICCV_2023_paper.pdf)
+ Self-Evolved Dynamic Expansion Model for Task-Free Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Ye_Self-Evolved_Dynamic_Expansion_Model_for_Task-Free_Continual_Learning_ICCV_2023_paper.pdf)
+ Contrastive Continuity on Augmentation Stability Rehearsal for Continual Self-Supervised Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Cheng_Contrastive_Continuity_on_Augmentation_Stability_Rehearsal_for_Continual_Self-Supervised_Learning_ICCV_2023_paper.pdf)
+ Measuring Asymmetric Gradient Discrepancy in Parallel Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Lyu_Measuring_Asymmetric_Gradient_Discrepancy_in_Parallel_Continual_Learning_ICCV_2023_paper.pdf)
+ Wasserstein Expansible Variational Autoencoder for Discriminative and Generative Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Ye_Wasserstein_Expansible_Variational_Autoencoder_for_Discriminative_and_Generative_Continual_Learning_ICCV_2023_paper.pdf)
+ Data Augmented Flatness-aware Gradient Projection for Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Yang_Data_Augmented_Flatness-aware_Gradient_Projection_for_Continual_Learning_ICCV_2023_paper.pdf)
+ A Unified Continual Learning Framework with General Parameter-Efficient Tuning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Gao_A_Unified_Continual_Learning_Framework_with_General_Parameter-Efficient_Tuning_ICCV_2023_paper.pdf)
+ Introducing Language Guidance in Prompt-based Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Khan_Introducing_Language_Guidance_in_Prompt-based_Continual_Learning_ICCV_2023_paper.pdf)
+ Continual Learning for Personalized Co-speech Gesture Generation(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Ahuja_Continual_Learning_for_Personalized_Co-speech_Gesture_Generation_ICCV_2023_paper.pdf)
+ Growing a Brain with Sparsity-Inducing Generation for Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Jin_Growing_a_Brain_with_Sparsity-Inducing_Generation_for_Continual_Learning_ICCV_2023_paper.pdf)
+ Towards Realistic Evaluation of Industrial Continual Learning Scenarios with an Emphasis on Energy Consumption and Computational Footprint(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Chavan_Towards_Realistic_Evaluation_of_Industrial_Continual_Learning_Scenarios_with_an_ICCV_2023_paper.pdf)
+ Class-Incremental Grouping Network for Continual Audio-Visual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Mo_Class-Incremental_Grouping_Network_for_Continual_Audio-Visual_Learning_ICCV_2023_paper.pdf)
+ ICICLE: Interpretable Class Incremental Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Rymarczyk_ICICLE_Interpretable_Class_Incremental_Continual_Learning_ICCV_2023_paper.pdf)
+ Online Prototype Learning for Online Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Wei_Online_Prototype_Learning_for_Online_Continual_Learning_ICCV_2023_paper.pdf)
+ NAPA-VQ: Neighborhood-Aware Prototype Augmentation with Vector Quantization for Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Malepathirana_NAPA-VQ_Neighborhood-Aware_Prototype_Augmentation_with_Vector_Quantization_for_Continual_Learning_ICCV_2023_paper.pdf)
+ Few-shot Continual Infomax Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Gu_Few-shot_Continual_Infomax_Learning_ICCV_2023_paper.pdf)
+ SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_SLCA_Slow_Learner_with_Classifier_Alignment_for_Continual_Learning_on_ICCV_2023_paper.pdf)
+ Instance and Category Supervision are Alternate Learners for Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Tian_Instance_and_Category_Supervision_are_Alternate_Learners_for_Continual_Learning_ICCV_2023_paper.pdf)
+ Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zheng_Preventing_Zero-Shot_Transfer_Degradation_in_Continual_Learning_of_Vision-Language_Models_ICCV_2023_paper.pdf)
+ CLR: Channel-wise Lightweight Reprogramming for Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Ge_CLR_Channel-wise_Lightweight_Reprogramming_for_Continual_Learning_ICCV_2023_paper.pdf)
+ Complementary Domain Adaptation and Generalization for Unsupervised Continual Domain Shift Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Cho_Complementary_Domain_Adaptation_and_Generalization_for_Unsupervised_Continual_Domain_Shift_ICCV_2023_paper.pdf)
+ TARGET: Federated Class-Continual Learning via Exemplar-Free Distillation(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_TARGET_Federated_Class-Continual_Learning_via_Exemplar-Free_Distillation_ICCV_2023_paper.pdf)
+ CBA: Improving Online Continual Learning via Continual Bias Adaptor(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Wang_CBA_Improving_Online_Continual_Learning_via_Continual_Bias_Adaptor_ICCV_2023_paper.pdf)
+ Continual Zero-Shot Learning through Semantically Guided Generative Random Walks(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_Continual_Zero-Shot_Learning_through_Semantically_Guided_Generative_Random_Walks_ICCV_2023_paper.pdf)
+ A Soft Nearest-Neighbor Framework for Continual Semi-Supervised Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Kang_A_Soft_Nearest-Neighbor_Framework_for_Continual_Semi-Supervised_Learning_ICCV_2023_paper.pdf)
+ Online Continual Learning on Hierarchical Label Expansion(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Lee_Online_Continual_Learning_on_Hierarchical_Label_Expansion_ICCV_2023_paper.pdf)
+ Investigating the Catastrophic Forgetting in Multimodal Large Language Models (NeurIPS Workshop 23) [[paper]](https://arxiv.org/abs/2309.10313)
+ Generating Instance-level Prompts for Rehearsal-free Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Jung_Generating_Instance-level_Prompts_for_Rehearsal-free_Continual_Learning_ICCV_2023_paper.pdf)
+ Heterogeneous Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/abs/2306.08593)
+ Partial Hypernetworks for Continual Learning(CoLLAs 2023)[[paper]](https://arxiv.org/abs/2306.10724)
+ Learnability and Algorithm for Continual Learning(ICML 2023)[[paper]](https://arxiv.org/abs/2306.12646)
+ Parameter-Level Soft-Masking for Continual Learning(ICML 2023)[[paper]](https://arxiv.org/abs/2306.14775)
+ Improving Online Continual Learning Performance and Stability with Temporal Ensembles(CoLLAs 2023)[[paper]](https://arxiv.org/abs/2306.16817)
+ Exploring Continual Learning for Code Generation Models(ACL 2023)[[paper]](https://arxiv.org/abs/2307.02435)
+ **[Fed-CPrompt]** Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning(FL-ICML 2023)[[paper]](https://arxiv.org/abs/2307.04869)
+ Online Continual Learning for Robust Indoor Object Recognition(ICCV 2023)[[paper]](https://arxiv.org/abs/2307.09827)
+ Proxy Anchor-based Unsupervised Learning for Continuous Generalized Category Discovery(ICCV 2023)[[paper]](https://arxiv.org/abs/2307.10943)
+ **[XLDA]** XLDA: Linear Discriminant Analysis for Scaling Continual Learning to Extreme Classification at the Edge(ICML 2023)[[paper]](https://arxiv.org/abs/2307.11317)
+ **[CLR]** CLR: Channel-wise Lightweight Reprogramming for Continual Learning(ICCV 2023)[[paper]](https://arxiv.org/abs/2307.11386)
+ **[CS-VQLA]** Revisiting Distillation for Continual Learning on Visual Question Localized-Answering in Robotic Surgery(MICCAI 2023)[[paper]](https://arxiv.org/abs/2307.12045)[[code]](https://github.com/longbai1006/CS-VQLA)
+ Online Prototype Learning for Online Continual Learning(ICCV 2023)[[paper]](https://arxiv.org/abs/2308.00301)[[code]](https://github.com/weilllllls/OnPro)
+ Cost-effective On-device Continual Learning over Memory Hierarchy with Miro(ACM MobiCom 23)[[paper]](https://arxiv.org/abs/2308.06053)
+ **[CBA]** CBA: Improving Online Continual Learning via Continual Bias Adaptor(ICCV 2023)[[paper]](https://arxiv.org/abs/2308.06925)
+ **[A-Prompts]** Remind of the Past: Incremental Learning with Analogical Prompts(arXiv 2023)[[paper]](https://arxiv.org/abs/2303.13898)
+ **[ESN]** Isolation and Impartial Aggregation: A Paradigm of Incremental Learning without Interference(AAAI 2023)[[paper]](https://arxiv.org/abs/2211.15969)[[code]](https://github.com/iamwangyabin/ESN)
+ **[RevisitingCIL]** Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need(arXiv 2023)[[paper]](https://arxiv.org/abs/2303.07338)[[code]](https://github.com/zhoudw-zdw/RevisitingCIL)
+ **[LwP]** Learning without Prejudices: Continual Unbiased Learning via Benign and Malignant Forgetting(ICLR 2023)[[paper]](https://openreview.net/forum?id=gfPUokHsW-)
+ **[SDMLP]** Sparse Distributed Memory is a Continual Learner(ICLR 2023)[[paper]](https://openreview.net/forum?id=JknGeelZJpHP)
+ **[SaLinA]** Building a Subspace of Policies for Scalable Continual Learning(ICLR 2023)[[paper]](https://openreview.net/forum?id=ZloanUtG4a)[[code]](https://github.com/facebookresearch/salina/tree/main/salina_cl)
+ **[BEEF]** BEEF: Bi-Compatible Class-Incremental Learning via Energy-Based Expansion and Fusion(ICLR 2023)[[paper]](https://openreview.net/pdf?id=iP77_axu0h3)[[code]](https://github.com/G-U-N/ICLR23-BEEF)
+ **[WaRP]** Warping the Space: Weight Space Rotation for Class-Incremental Few-Shot Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=kPLzOfPfA2l)
+ **[OBC]** Online Bias Correction for Task-Free Continual Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=18XzeuYZh_)
+ **[NC-FSCIL]** Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=y5W8tpojhtJ)[[code]](https://github.com/NeuralCollapseApplications/FSCIL)
+ **[iVoro]** Progressive Voronoi Diagram Subdivision Enables Accurate Data-free Class-Incremental Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=zJXg_Wmob03)
+ **[DAS]** Continual Learning of Language Models(ICLR 2023)[[paper]](https://openreview.net/pdf?id=m_GDIItaI3o)
+ **[Progressive Prompts]** Progressive Prompts: Continual Learning for Language Models without Forgetting(ICLR 2023)[[paper]](https://openreview.net/pdf?id=UJTgQBc91_)
+ **[SDP]** Online Boundary-Free Continual Learning by Scheduled Data Prior(ICLR 2023)[[paper]](https://openreview.net/pdf?id=qco4ekz2Epm)[[code]](https://github.com/yonseivnl/sdp)
+ **[iLDR]** Incremental Learning of Structured Memory via Closed-Loop Transcription(ICLR 2023)[[paper]](https://arxiv.org/pdf/2202.05411.pdf)
+ **[SoftNet-FSCIL]** On the Soft-Subnetwork for Few-Shot Class Incremental Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=z57WK5lGeHd)[[code]](https://github.com/ihaeyong/SoftNet-FSCIL)
+ **[ESMER]** Error Sensitivity Modulation based Experience Replay: Mitigating Abrupt Representation Drift in Continual Learning(ICLR 2023)[[paper]](https://openreview.net/forum?id=zlbci7019Z3)[[code]](https://github.com/NeurAI-Lab/ESMER)
+ **[MEMO]** A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning(ICLR 2023)[[paper]](https://arxiv.org/abs/2205.13218)[[code]](https://github.com/wangkiw/ICLR23-MEMO)
+ **[CUDOS]** Continual Unsupervised Disentangling of Self-Organizing Representations(ICLR 2023)[[paper]](https://openreview.net/pdf?id=ih0uFRFhaZZ)
+ **[ACGAN]** Better Generative Replay for Continual Federated Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=cRxYWKiTan)[[code]](https://github.com/daiqing98/FedCIL)
+ **[TAMiL]** Task-Aware Information Routing from Common Representation Space in Lifelong Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=-M0TNnyWFT5)[[code]](https://github.com/NeurAI-Lab/TAMiL)
+ **[FeTrIL]** Feature Translation for Exemplar-Free Class-Incremental Learning(WACV 2023)[[paper]](https://arxiv.org/abs/2211.13131)[[code]](https://github.com/GregoirePetit/FeTrIL)
+ **[RSOI]** Regularizing Second-Order Influences for Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.10177.pdf)[[code]](https://github.com/feifeiobama/InfluenceCL)
+ **[TBBN]** Rebalancing Batch Normalization for Exemplar-based Class-Incremental Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2201.12559.pdf)
+ **[AMSS]** Continual Semantic Segmentation with Automatic Memory Sample Selection(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.05015.pdf)
+ **[DGCL]** Exploring Data Geometry for Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.03931.pdf)
+ **[PCR]** PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.04408.pdf)[[code]](https://github.com/FelixHuiweiLin/PCR)
+ **[FMWISS]** Foundation Model Drives Weakly Incremental Learning for Semantic Segmentation(CVPR 2023)[[paper]](https://arxiv.org/pdf/2302.14250.pdf)
+ **[CL-DETR]** Continual Detection Transformer for Incremental Object Detection(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.03110.pdf)[[code]](https://github.com/yaoyao-liu/CL-DETR)
+ **[PIVOT]** PIVOT: Prompting for Video Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2212.04842.pdf)
+ **[CIM-CIL]** Class-Incremental Exemplar Compression for Class-Incremental Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2303.14042.pdf)[[code]](https://github.com/xfflzl/CIM-CIL)
+ **[DNE]** Dense Network Expansion for Class Incremental Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2303.12696.pdf)
+ **[PAR]** Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.05288.pdf)
+ **[PETAL]** A Probabilistic Framework for Lifelong Test-Time Adaptation(CVPR 2023)[[paper]](https://arxiv.org/pdf/2212.09713.pdf)[[code]](https://github.com/dhanajitb/petal)
+ **[SAVC]** Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.00426.pdf)[[code]](https://github.com/zysong0113/SAVC)
+ **[CODA-Prompt]** CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2211.13218.pdf)[[code]](https://github.com/GT-RIPL/CODA-Prompt)

## 2022
+ **[RD-IOD]** RD-IOD: Two-Level Residual-Distillation-Based Triple-Network for Incremental Object Detection(ACM Trans 2022)[[paper]](https://dl.acm.org/doi/abs/10.1145/3472393)
+ **[NCM]** Exemplar-free Online Continual Learning(arXiv 2022)[[paper]](https://arxiv.org/abs/2202.05491)
+ **[IPP]** Incremental Prototype Prompt-tuning with Pre-trained Representation for Class Incremental Learning(arXiv 2022)[[paper]](https://arxiv.org/abs/2204.03410)
+ **[Incremental-DETR]** Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning(arXiv 2022)[[paper]](https://arxiv.org/abs/2205.04042)
+ **[ELI]** Energy-Based Latent Aligner for Incremental Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Joseph_Energy-Based_Latent_Aligner_for_Incremental_Learning_CVPR_2022_paper.html)
+ **[CASSLE]** Self-Supervised Models Are Continual Learners(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Fini_Self-Supervised_Models_Are_Continual_Learners_CVPR_2022_paper.html)[[code]](https://github.com/DonkeyShot21/cassle)
+ **[iFS-RCNN]** iFS-RCNN: An Incremental Few-Shot Instance Segmenter(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Nguyen_iFS-RCNN_An_Incremental_Few-Shot_Instance_Segmenter_CVPR_2022_paper.html)
+ **[WILSON]** Incremental Learning in Semantic Segmentation From Image Labels(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Cermelli_Incremental_Learning_in_Semantic_Segmentation_From_Image_Labels_CVPR_2022_paper.html)[[code]](https://github.com/fcdl94/WILSON)
+ **[Connector]** Towards Better Plasticity-Stability Trade-Off in Incremental Learning: A Simple Linear Connector(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Lin_Towards_Better_Plasticity-Stability_Trade-Off_in_Incremental_Learning_A_Simple_Linear_CVPR_2022_paper.html)[[code]](https://github.com/lingl1024/Connector)
+ **[PAD]** Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.13167)
+ **[ERD]** Overcoming Catastrophic Forgetting in Incremental Object Detection via Elastic Response Distillation(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.02136)[[code]](https://github.com/Hi-FT/ERD)
+ **[AFC]** Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.00895)[[code]](https://github.com/kminsoo/AFC)
+ **[FACT]** Forward Compatible Few-Shot Class-Incremental Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.06953)[[code]](https://github.com/zhoudw-zdw/CVPR22-Fact)
+ **[L2P]** Learning to Prompt for Continual Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2112.08654)[[code]](https://github.com/google-research/l2p)
+ **[MEAT]** Meta-attention for ViT-backed Continual Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.11684)[[code]](https://github.com/zju-vipa/MEAT-TIL)
+ **[RCIL]** Representation Compensation Networks for Continual Semantic Segmentation(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.05402)[[code]](https://github.com/zhangchbin/RCIL)
+ **[ZITS]** Incremental Transformer Structure Enhanced Image Inpainting with Masking Positional Encoding(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.00867)[[code]](https://github.com/DQiaole/ZITS_inpainting)
+ **[MTPSL]** Learning Multiple Dense Prediction Tasks from Partially Annotated Data(CVPR 2022)[[paper]](https://arxiv.org/abs/2111.14893)[[code]](https://github.com/VICO-UoE/MTPSL)
+ **[MMA]** Modeling Missing Annotations for Incremental Learning in Object Detection(CVPR-Workshop 2022)[[paper]](https://arxiv.org/abs/2204.08766)
+ **[CoSCL]** CoSCL: Cooperation of Small Continual Learners is Stronger than a Big One(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136860249.pdf)[[code]](https://github.com/lywang3081/CoSCL)
+ **[AdNS]** Balancing Stability and Plasticity through Advanced Null Space in Continual Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.12061)
+ **[ProCA]** Prototype-Guided Continual Adaptation for Class-Incremental Unsupervised Domain Adaptation(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.10856)[[code]](https://github.com/Hongbin98/ProCA)
+ **[R-DFCIL]** R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2203.13104)[[code]](https://github.com/jianzhangcs/R-DFCIL)
+ **[S3C]** S3C: Self-Supervised Stochastic Classifiers for Few-Shot Class-Incremental Learning(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850427.pdf)[[code]](https://github.com/JAYATEJAK/S3C)
+ **[H^2^]** Helpful or Harmful: Inter-Task Association in Continual Learning(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136710518.pdf)
+ **[DualPrompt]** DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2204.04799)
+ **[ALICE]** Few-Shot Class Incremental Learning From an Open-Set Perspective(ECCV 2022)[[paper]](https://arxiv.org/pdf/2208.00147.pdf)[[code]](https://github.com/CanPeng123/FSCIL_ALICE)
+ **[RU-TIL]** Incremental Task Learning with Incremental Rank Updates(ECCV 2022)[[paper]](https://arxiv.org/pdf/2207.09074.pdf)[[code]](https://github.com/CSIPlab/task-increment-rank-update)
+ **[FOSTER]** FOSTER: Feature Boosting and Compression for Class-Incremental Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2204.04662)
+ **[SSR]** Subspace Regularizers for Few-Shot Class Incremental Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=boJy41J-tnQ)[[code]](https://github.com/feyzaakyurek/subspace-reg)
+ **[RGO]** Continual Learning with Recursive Gradient Optimization(ICLR 2022)[[paper]](https://openreview.net/pdf?id=7YDLgf9_zgm)
+ **[TRGP]** TRGP: Trust Region Gradient Projection for Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=iEvAf8i6JjO)
+ **[AGCN]** AGCN: Augmented Graph Convolutional Network for Lifelong Multi-Label Image Recognition(ICME 2022)[[paper]](https://arxiv.org/abs/2203.05534)[[code]](https://github.com/Kaile-Du/AGCN)
+ **[WSN]** Forget-free Continual Learning with Winning Subnetworks(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/kang22b/kang22b.pdf)[[code]](https://github.com/ihaeyong/WSN)
+ **[NISPA]** NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/gurbuz22a/gurbuz22a.pdf)[[code]](https://github.com/BurakGurbuz97/NISPA)
+ **[S-FSVI]** Continual Learning via Sequential Function-Space Variational Inference(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/rudner22a/rudner22a.pdf)[[code]](https://github.com/timrudner/S-FSVI)
+ **[CUBER]** Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2211.00789)
+ **[ADA]** Memory Efficient Continual Learning with Transformers(NeurIPS 2022)[[paper]](https://www.amazon.science/publications/memory-efficient-continual-learning-with-transformers)
+ **[CLOM]** Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.04524)
+ **[S-Prompt]** S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2207.12819)
+ **[ALIFE]** ALIFE: Adaptive Logit Regularizer and Feature Replay for Incremental Semantic Segmentation(NIPS 2022)[[paper]](https://arxiv.org/abs/2210.06816)
+ **[PMT]** Continual Learning In Environments With Polynomial Mixing Times(NIPS 2022)[[paper]](https://arxiv.org/abs/2112.07066)
+ **[STCISS]** Self-training for class-incremental semantic segmentation(TNNLS 2022)[[paper]](https://arxiv.org/abs/2012.03362)
+ **[DSN]** Dynamic Support Network for Few-shot Class Incremental Learning(TPAMI 2022)[[paper]](https://ieeexplore.ieee.org/document/9779071)
+ **[MgSvF]** MgSvF: Multi-Grained Slow vs. Fast Framework for Few-Shot Class-Incremental Learning(TPAMI 2022)[[paper]](https://arxiv.org/abs/2006.15524)
+ **[TransIL]** Dataset Knowledge Transfer for Class-Incremental Learning without Memory(WACV 2022)[[paper]](https://arxiv.org/pdf/2110.08421.pdf)
+ **[NER-FSCIL]** Few-Shot Class-Incremental Learning for Named Entity Recognition(ACL 2022)[[paper]](https://aclanthology.org/2022.acl-long.43/)
+ **[LIMIT]** Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks(arXiv 2022)[[paper]](https://arxiv.org/abs/2203.17030)
+ **[EMP]** Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection(arXiv 2022)[[paper]](https://arxiv.org/abs/2204.07275)
+ **[SPTM]** Class-Incremental Learning With Strong Pre-Trained Models(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wu_Class-Incremental_Learning_With_Strong_Pre-Trained_Models_CVPR_2022_paper.html)
+ **[BER]** Bring Evanescent Representations to Life in Lifelong Class Incremental Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Toldo_Bring_Evanescent_Representations_to_Life_in_Lifelong_Class_Incremental_Learning_CVPR_2022_paper.html)
+ **[Sylph]** Sylph: A Hypernetwork Framework for Incremental Few-Shot Object Detection(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Yin_Sylph_A_Hypernetwork_Framework_for_Incremental_Few-Shot_Object_Detection_CVPR_2022_paper.html)
+ **[MetaFSCIL]** MetaFSCIL: A Meta-Learning Approach for Few-Shot Class Incremental Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chi_MetaFSCIL_A_Meta-Learning_Approach_for_Few-Shot_Class_Incremental_Learning_CVPR_2022_paper.html)
+ **[FCIL]** Federated Class-Incremental Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Dong_Federated_Class-Incremental_Learning_CVPR_2022_paper.html)[[code]](https://github.com/conditionWang/FCIL)
+ **[FILIT]** Few-Shot Incremental Learning for Label-to-Image Translation(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Few-Shot_Incremental_Learning_for_Label-to-Image_Translation_CVPR_2022_paper.html)
+ **[PuriDivER]** Online Continual Learning on a Contaminated Data Stream With Blurry Task Boundaries(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Bang_Online_Continual_Learning_on_a_Contaminated_Data_Stream_With_Blurry_CVPR_2022_paper.html)[[code]](https://github.com/clovaai/puridiver)
+ **[SNCL]** Learning Bayesian Sparse Networks With Full Experience Replay for Continual Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Yan_Learning_Bayesian_Sparse_Networks_With_Full_Experience_Replay_for_Continual_CVPR_2022_paper.html)
+ **[DVC]** Not Just Selection, but Exploration: Online Class-Incremental Continual Learning via Dual View Consistency(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Gu_Not_Just_Selection_but_Exploration_Online_Class-Incremental_Continual_Learning_via_CVPR_2022_paper.html)[[code]](https://github.com/YananGu/DVC)
+ **[CVS]** Continual Learning for Visual Search With Backward Consistent Feature Embedding(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wan_Continual_Learning_for_Visual_Search_With_Backward_Consistent_Feature_Embedding_CVPR_2022_paper.html)
+ **[CPL]** Continual Predictive Learning From Videos(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Continual_Predictive_Learning_From_Videos_CVPR_2022_paper.html)
+ **[GCR]** GCR: Gradient Coreset Based Replay Buffer Selection for Continual Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Tiwari_GCR_Gradient_Coreset_Based_Replay_Buffer_Selection_for_Continual_Learning_CVPR_2022_paper.html)
+ **[LVT]** Continual Learning With Lifelong Vision Transformer(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Continual_Learning_With_Lifelong_Vision_Transformer_CVPR_2022_paper.html)
+ **[vCLIMB]** vCLIMB: A Novel Video Class Incremental Learning Benchmark(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Villa_vCLIMB_A_Novel_Video_Class_Incremental_Learning_Benchmark_CVPR_2022_paper.html)[[code]](https://vclimb.netlify.app/)
+ **[Learn-to-Imagine]** Learning to Imagine: Diversify Memory for Incremental Learning using Unlabeled Data(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.08932)[[code]](https://github.com/TOM-tym/Learn-to-Imagine)
+ **[DCR]** General Incremental Learning with Domain-aware Categorical Representations(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.04078)
+ **[DIY-FSCIL]** Doodle It Yourself: Class Incremental Learning by Drawing a Few Sketches(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.14843)
+ **[C-FSCIL]** Constrained Few-shot Class-incremental Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.16588)[[code]](https://github.com/IBM/constrained-FSCIL)
+ **[SSRE]** Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.06359)
+ **[CwD]** Mimicking the Oracle: An Initial Phase Decorrelation Approach for Class Incremental Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2112.04731)[[code]](https://github.com/Yujun-Shi/CwD)
+ **[MSL]** On Generalizing Beyond Domains in Cross-Domain Continual Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.03970)
+ **[DyTox]** DyTox: Transformers for Continual Learning with DYnamic TOken Expansion(CVPR 2022)[[paper]](https://arxiv.org/abs/2111.11326)[[code]](https://github.com/arthurdouillard/dytox)
+ **[X-DER]** Class-Incremental Continual Learning into the eXtended DER-verse(ECCV 2022)[[paper]](https://arxiv.org/abs/2201.00766)
+ **[class-iNCD]** Class-incremental Novel Class Discovery(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.08605)[[code]](https://github.com/OatmealLiu/class-iNCD)
+ **[ARI]** Anti-Retroactive Interference for Lifelong Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2208.12967)[[code]](https://github.com/bhrqw/ARI)
+ **[Long-Tailed-CIL]** Long-Tailed Class Incremental Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2210.00266)[[code]](https://github.com/xialeiliu/Long-Tailed-CIL)
+ **[LIRF]** Learning with Recoverable Forgetting(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.08224)
+ **[DSDM]** Online Task-free Continual Learning with Dynamic Sparse Distributed Memory(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850721.pdf)[[code]](https://github.com/Julien-pour/Dynamic-Sparse-Distributed-Memory)
+ **[CVT]** Online Continual Learning with Contrastive Vision Transformer(ECCV 2022)[[paper]](https://arxiv.org/pdf/2207.13516.pdf)
+ **[TwF]** Transfer without Forgetting(ECCV 2022)[[paper]](https://arxiv.org/abs/2206.00388)[[code]](https://github.com/mbosc/twf)
+ **[CSCCT]** Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer(ECCV 2022)[[paper]](https://cscct.github.io)[[code]](https://github.com/ashok-arjun/CSCCT)
+ **[DLCFT]** DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2208.08112)
+ **[ERDR]** Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay(ECCV2022)[[paper]](https://arxiv.org/pdf/2207.11213.pdf)
+ **[NCDwF]** Novel Class Discovery without Forgetting(ECCV2022)[[paper]](https://arxiv.org/abs/2207.10659)
+ **[CoMPS]** CoMPS: Continual Meta Policy Search(ICLR 2022)[[paper]](https://openreview.net/pdf?id=PVJ6j87gOHz)
+ **[i-fuzzy]** Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference(ICLR 2022)[[paper]](https://openreview.net/pdf?id=nrGGfMbY_qK)[[code]](https://github.com/naver-ai/i-Blurry)
+ **[CLS-ER]** Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System(ICLR 2022)[[paper]](https://openreview.net/pdf?id=uxxFrDwrE7Y)[[code]](https://github.com/NeurAI-Lab/CLS-ER)
+ **[MRDC]** Memory Replay with Data Compression for Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=a7H7OucbWaU)[[code]](https://github.com/andrearosasco/DistilledReplay)
+ **[OCS]** Online Coreset Selection for Rehearsal-based Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=f9D-5WNG4Nv)
+ **[InfoRS]** Information-theoretic Online Memory Selection for Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=IpctgL7khPp)
+ **[ER-AML]** New Insights on Reducing Abrupt Representation Change in Online Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=N8MaByOzUfb)[[code]](https://github.com/pclucas14/aml)
+ **[FAS]** Continual Learning with Filter Atom Swapping(ICLR 2022)[[paper]](https://openreview.net/pdf?id=metRpM4Zrcb)
+ **[LUMP]** Rethinking the Representational Continuity: Towards Unsupervised Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=9Hrka5PA7LW)
+ **[CF-IL]** Looking Back on Learned Experiences For Class/task Incremental Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=RxplU3vmBx)[[code]](https://github.com/MozhganPourKeshavarz/Cost-Free-Incremental-Learning)
+ **[LFPT5]** LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5(ICLR 2022)[[paper]](https://openreview.net/pdf?id=HCRVf71PMF)[[code]](https://github.com/qcwthu/Lifelong-Fewshot-Language-Learning)
+ **[Model Zoo]** Model Zoo: A Growing Brain That Learns Continually(ICLR 2022)[[paper]](https://arxiv.org/abs/2106.03027)
+ **[OCM]** Online Continual Learning through Mutual Information Maximization(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/guo22g/guo22g.pdf)[[code]](https://github.com/gydpku/OCM)
+ **[DRO]** Improving Task-free Continual Learning by Distributionally Robust Memory Evolution(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/wang22v/wang22v.pdf)[[code]](https://github.com/joey-wang123/DRO-Task-free)
+ **[EAK]** Effects of Auxiliary Knowledge on Continual Learning(ICPR 2022)[[paper]](https://arxiv.org/abs/2206.02577)
+ **[RAR]** Retrospective Adversarial Replay for Continual Learning(NeurIPS 2022)[[paper]](https://openreview.net/forum?id=XEoih0EwCwL&referrer=%5Bthe%20profile%20of%20Tianyi%20Zhou%5D(%2Fprofile%3Fid%3D~Tianyi_Zhou2))
+ **[LiDER]** On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06443)
+ **[SparCL]** SparCL: Sparse Continual Learning on the Edge(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.09476)
+ **[ClonEx-SAC]** Disentangling Transfer in Continual Reinforcement Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.13900)
+ **[ODDL]** Task-Free Continual Learning via Online Discrepancy Distance Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06579)
+ **[CSSL]** Continual semi-supervised learning through contrastive interpolation consistency(PRL 2022)[[paper]](https://arxiv.org/abs/2108.06552)
+ **[MBP]** Model Behavior Preserving for Class-Incremental Learning(TNNLS 2022)[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9705128)
+ **[CandVot]** Online Continual Learning via Candidates Voting(WACV 2022)[[paper]](https://openaccess.thecvf.com/content/WACV2022/papers/He_Online_Continual_Learning_via_Candidates_Voting_WACV_2022_paper.pdf)
+ **[FlashCards]** Knowledge Capture and Replay for Continual Learning(WACV 2022)[[paper]](https://openaccess.thecvf.com/content/WACV2022/papers/Gopalakrishnan_Knowledge_Capture_and_Replay_for_Continual_Learning_WACV_2022_paper.pdf)

## 2021
+ **[Meta-DR]** Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/html/Volpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.html)
+ **[continual cross-modal retrieval]** Continual learning in cross-modal retrieval(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021W/CLVision/html/Wang_Continual_Learning_in_Cross-Modal_Retrieval_CVPRW_2021_paper.html)
+ **[DER]** DER: Dynamically Expandable Representation for Class Incremental Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Yan_DER_Dynamically_Expandable_Representation_for_Class_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/Rhyssiyan/DER-ClassIL.pytorch)
+ **[EFT]** Efficient Feature Transformations for Discriminative and Generative Continual Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Verma_Efficient_Feature_Transformations_for_Discriminative_and_Generative_Continual_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/vkverma01/EFT)
+ **[PASS]** Prototype Augmentation and Self-Supervision for Incremental Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhu_Prototype_Augmentation_and_Self-Supervision_for_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/Impression2805/CVPR21_PASS)
+ **[GeoDL]** On Learning the Geodesic Path for Incremental Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Simon_On_Learning_the_Geodesic_Path_for_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/chrysts/geodesic_continual_learning)
+ **[IL-ReduNet]** Incremental Learning via Rate Reduction(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wu_Incremental_Learning_via_Rate_Reduction_CVPR_2021_paper.pdf)
+ **[PIGWM]** Image De-raining via Continual Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhou_Image_De-Raining_via_Continual_Learning_CVPR_2021_paper.pdf)
+ **[BLIP]** Continual Learning via Bit-Level Information Preserving(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Shi_Continual_Learning_via_Bit-Level_Information_Preserving_CVPR_2021_paper.pdf)[[code]](https://github.com/Yujun-Shi/BLIP)
+ **[Adam-NSCL]** Training Networks in Null Space of Feature Covariance for Continual Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Training_Networks_in_Null_Space_of_Feature_Covariance_for_Continual_CVPR_2021_paper.pdf)[[code]](https://github.com/ShipengWang/Adam-NSCL)
+ **[PLOP]** PLOP: Learning without Forgetting for Continual Semantic Segmentation(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Douillard_PLOP_Learning_Without_Forgetting_for_Continual_Semantic_Segmentation_CVPR_2021_paper.pdf)[[code]](https://github.com/arthurdouillard/CVPR2021_PLOP)
+ **[SDR]** Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations(CVPR 2021)[[paper]](https://lttm.dei.unipd.it/paper_data/SDR/)[[code]](https://github.com/LTTM/SDR)
+ **[SKD]** Semantic-aware Knowledge Distillation for Few-Shot Class-Incremental Learning(CVPR 2021)[[paper]](https://arxiv.org/abs/2103.04059)
+ **[SPB]** Striking a balance between stability and plasticity for class-incremental learning(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Wu_Striking_a_Balance_Between_Stability_and_Plasticity_for_Class-Incremental_Learning_ICCV_2021_paper.pdf)
+ **[Else-Net]** Else-Net: Elastic Semantic Network for Continual Action Recognition from Skeleton Data(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Li_Else-Net_Elastic_Semantic_Network_for_Continual_Action_Recognition_From_Skeleton_ICCV_2021_paper.pdf)
+ **[LCwoF-Framework]** Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Kukleva_Generalized_and_Incremental_Few-Shot_Learning_by_Explicit_Learning_and_Calibration_ICCV_2021_paper.pdf)
+ **[AFEC]** AFEC: Active Forgetting of Negative Transfer in Continual Learning(NeurIPS 2021)[[paper]](https://openreview.net/pdf/72a18fad6fce88ef0286e9c7582229cf1c8d9f93.pdf)[[code]](https://github.com/lywang3081/AFEC)
+ **[F2M]** Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=ALvt7nXa2q)[[code]](https://github.com/moukamisama/F2M)
+ **[NCL]** Natural continual learning: success is a journey, not (just) a destination(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=W9250bXDgpK)[[code]](https://github.com/tachukao/ncl)
+ **[BCL]** Formalizing the Generalization-Forgetting Trade-off in Continual Learning(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=u1XV9BPAB9)[[code]](https://github.com/krm9c/Balanced-Continual-Learning)
+ **[Posterior Meta-Replay]** Posterior Meta-Replay for Continual Learning(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/hash/761b42cfff120aac30045f7a110d0256-Abstract.html)
+ **[MARK]** Optimizing Reusable Knowledge for Continual Learning via Metalearning(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=hHTctAv9Lvh)[[code]](https://github.com/JuliousHurtado/meta-training-setup)
+ **[Co-occur]** Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/hash/ffc58105bf6f8a91aba0fa2d99e6f106-Abstract.html)[[code]](https://github.com/dongnana777/bridging-non-co-occurrence)
+ **[LINC]** Lifelong and Continual Learning Dialogue Systems: Learning during Conversation(AAAI 2021)[[paper]](https://www.cs.uic.edu/~liub/publications/LINC_paper_AAAI_2021_camera_ready.pdf)
+ **[CLNER]** Continual learning for named entity recognition(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-7791.MonaikulN.pdf)
+ **[CLIS]** A Continual Learning Framework for Uncertainty-Aware Interactive Image Segmentation(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-2989.ZhengE.pdf)
+ **[PCL]** Continual Learning by Using Information of Each Class Holistically(AAAI 2021)[[paper]](https://www.cs.uic.edu/~liub/publications/AAAI2021_PCL.pdf)
+ **[MAS3]** Unsupervised Model Adaptation for Continual Semantic Segmentation(AAAI 2021)[[paper]](https://arxiv.org/abs/2009.12518)
+ **[FSLL]** Few-Shot Lifelong Learning(AAAI 2021)[[paper]](https://arxiv.org/pdf/2103.00991.pdf)
+ **[VAR-GPs]** Variational Auto-Regressive Gaussian Processes for Continual Learning(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/kapoor21b.html)
+ **[BSA]** Bayesian Structural Adaptation for Continual Learning(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/kumar21a.html)
+ **[GPM]** Gradient projection memory for continual learning(ICLR 2021)[[paper]](https://arxiv.org/abs/2103.09762)[[code]](https://github.com/sahagobinda/GPM)
+ **[TMN]** Triple-Memory Networks: A Brain-Inspired Method for Continual Learning(TNNLS 2021)[[paper]](https://ieeexplore.ieee.org/document/9540230)
+ **[RKD]** Few-Shot Class-Incremental Learning via Relation Knowledge Distillation(AAAI 2021)[[paper]](https://ojs.aaai.org/index.php/AAAI/article/view/16213)
+ **[AANets]** Adaptive aggregation networks for class-incremental learning(CVPR 2021)[[paper]](https://class-il.mpi-inf.mpg.de/)[[code]](https://github.com/yaoyao-liu/class-incremental-learning)
+ **[ORDisCo]** ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_ORDisCo_Effective_and_Efficient_Usage_of_Incremental_Unlabeled_Data_for_CVPR_2021_paper.pdf)
+ **[DDE]** Distilling Causal Effect of Data in Class-Incremental Learning(CVPR 2021)[[paper]](https://arxiv.org/abs/2103.01737)[[code]](https://github.com/JoyHuYY1412/DDE_CIL)
+ **[IIRC]** IIRC: Incremental Implicitly-Refined Classification(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Abdelsalam_IIRC_Incremental_Implicitly-Refined_Classification_CVPR_2021_paper.pdf)
+ **[Hyper-LifelongGAN]** Hyper-LifelongGAN: Scalable Lifelong Learning for Image Conditioned Generation(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhai_Hyper-LifelongGAN_Scalable_Lifelong_Learning_for_Image_Conditioned_Generation_CVPR_2021_paper.pdf)
+ **[CEC]** Few-Shot Incremental Learning with Continually Evolved Classifiers(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Few-Shot_Incremental_Learning_With_Continually_Evolved_Classifiers_CVPR_2021_paper.pdf)
+ **[iMTFA]** Incremental Few-Shot Instance Segmentation(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Ganea_Incremental_Few-Shot_Instance_Segmentation_CVPR_2021_paper.pdf)
+ **[RM]** Rainbow memory: Continual learning with a memory of diverse samples(CVPR 2021)[[paper]](https://ieeexplore.ieee.org/document/9577808)
+ **[LOGD]** Layerwise Optimization by Gradient Decomposition for Continual Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Tang_Layerwise_Optimization_by_Gradient_Decomposition_for_Continual_Learning_CVPR_2021_paper.pdf)
+ **[SPPR]** Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/html/Zhu_Self-Promoted_Prototype_Refinement_for_Few-Shot_Class-Incremental_Learning_CVPR_2021_paper.html)
+ **[LReID]** Lifelong Person Re-Identification via Adaptive Knowledge Accumulation(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Pu_Lifelong_Person_Re-Identification_via_Adaptive_Knowledge_Accumulation_CVPR_2021_paper.pdf)[[code]](https://github.com/TPCD/LifelongReID)
+ **[SS-IL]** SS-IL: Separated Softmax for Incremental Learning(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Ahn_SS-IL_Separated_Softmax_for_Incremental_Learning_ICCV_2021_paper.pdf)
+ **[TCD]** Class-Incremental Learning for Action Recognition in Videos(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Park_Class-Incremental_Learning_for_Action_Recognition_in_Videos_ICCV_2021_paper.pdf)
+ **[CLOC]** Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Cai_Online_Continual_Learning_With_Natural_Distribution_Shifts_An_Empirical_Study_ICCV_2021_paper.html)[[code]](https://github.com/IntelLabs/continuallearning)
+ **[CoPE]** Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/De_Lange_Continual_Prototype_Evolution_Learning_Online_From_Non-Stationary_Data_Streams_ICCV_2021_paper.pdf)[[code]](https://github.com/Mattdl/ContinualPrototypeEvolution)
+ **[Co2L]** Co2L: Contrastive Continual Learning(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Cha_Co2L_Contrastive_Continual_Learning_ICCV_2021_paper.pdf)[[code]](https://github.com/chaht01/co2l)
+ **[SPR]** Continual Learning on Noisy Data Streams via Self-Purified Replay(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Kim_Continual_Learning_on_Noisy_Data_Streams_via_Self-Purified_Replay_ICCV_2021_paper.pdf)
+ **[NACL]** Detection and Continual Learning of Novel Face Presentation Attacks(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Rostami_Detection_and_Continual_Learning_of_Novel_Face_Presentation_Attacks_ICCV_2021_paper.html)
+ **[Always Be Dreaming]** Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Smith_Always_Be_Dreaming_A_New_Approach_for_Data-Free_Class-Incremental_Learning_ICCV_2021_paper.html)[[code]](https://github.com/GT-RIPL/AlwaysBeDreaming-DFCIL)
+ **[CL-HSCNet]** Continual Learning for Image-Based Camera Localization(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Wang_Continual_Learning_for_Image-Based_Camera_Localization_ICCV_2021_paper.html)[[code]](https://github.com/AaltoVision/CL_HSCNet)
+ **[RECALL]** RECALL: Replay-based Continual Learning in Semantic Segmentation(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Maracani_RECALL_Replay-Based_Continual_Learning_in_Semantic_Segmentation_ICCV_2021_paper.html)[[code]](https://github.com/lttm/recall)
+ **[VAE]** Synthesized Feature based Few-Shot Class-Incremental Learning on a Mixture of Subspaces(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Cheraghian_Synthesized_Feature_Based_Few-Shot_Class-Incremental_Learning_on_a_Mixture_of_ICCV_2021_paper.pdf)
+ **[ERT]** Rethinking Experience Replay: a Bag of Tricks for Continual Learning(ICPR 2021)[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9412614)[[code]](https://github.com/hastings24/rethinking_er)
+ **[KCL]** Kernel Continual Learning(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/derakhshani21a.html)[[code]](https://github.com/mmderakhshani/KCL)
+ **[MLIOD]** Incremental Object Detection via Meta-Learning(TPAMI 2021)[[paper]](https://arxiv.org/abs/2003.08798)[[code]](https://github.com/JosephKJ/iOD)
+ **[BNS]** BNS: Building Network Structures Dynamically for Continual Learning(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/ac64504cc249b070772848642cffe6ff-Abstract.html)
+ **[FS-DGPM]** Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=q1eCa1kMfDd)
+ **[SSUL]** SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/file/5a9542c773018268fc6271f7afeea969-Paper.pdf)
+ **[DualNet]** DualNet: Continual Learning, Fast and Slow(NeurIPS 2021)[[paper]](https://openreview.net/pdf?id=eQ7Kh-QeWnO)
+ **[classAug]** Class-Incremental Learning via Dual Augmentation(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/file/77ee3bc58ce560b86c2b59363281e914-Paper.pdf)
+ **[GMED]** Gradient-based Editing of Memory Examples for Online Task-free Continual Learning(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/f45a1078feb35de77d26b3f7a52ef502-Abstract.html)
+ **[BooVAE]** BooVAE: Boosting Approach for Continual Learning of VAE(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/952285b9b7e7a1be5aa7849f32ffff05-Abstract.html)[[code]](https://github.com/AKuzina/BooVAE)
+ **[GeMCL]** Generative vs. Discriminative: Rethinking The Meta-Continual Learning(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/b4e267d84075f66ebd967d95331fcc03-Abstract.html)[[code]](https://github.com/aminbana/gemcl)
+ **[RMM]** RMM: Reinforced Memory Management for Class-Incremental Learning(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/file/1cbcaa5abbb6b70f378a3a03d0c26386-Paper.pdf)
+ **[LSF]** Learning with Selective Forgetting(IJCAI 2021)[[paper]](https://www.ijcai.org/proceedings/2021/0137.pdf)
+ **[ASER]** Online Class-Incremental Continual Learning with Adversarial Shapley Value(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-9988.ShimD.pdf)[[code]](https://github.com/RaptorMai/online-continual-learning)
+ **[CML]** Curriculum-Meta Learning for Order-Robust Continual Relation Extraction(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-4847.WuT.pdf)[[code]](https://github.com/wutong8023/AAAI-CML)
+ **[HAL]** Using Hindsight to Anchor Past Knowledge in Continual Learning(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-9700.ChaudhryA.pdf)
+ **[MDMT]** Multi-Domain Multi-Task Rehearsal for Lifelong Learning(AAAI 2021)[[paper]](https://arxiv.org/abs/2012.07236)
+ **[AU]** Do Not Forget to Attend to Uncertainty While Mitigating Catastrophic Forgetting(WACV 2021)[[paper]](https://openaccess.thecvf.com/content/WACV2021/html/Kurmi_Do_Not_Forget_to_Attend_to_Uncertainty_While_Mitigating_Catastrophic_WACV_2021_paper.html)
+ **[IDBR]** Continual Learning for Text Classification with Information Disentanglement Based Regularization(NAACL 2021)[[paper]](https://www.aclweb.org/anthology/2021.naacl-main.218.pdf)[[code]](https://github.com/GT-SALT/IDBR)
+ **[COIL]** Co-Transport for Class-Incremental Learning(ACM MM 2021)[[paper]](https://arxiv.org/pdf/2107.12654.pdf)

## 2020
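Several of the 2020 entries below re-examine how the replay memory itself is managed; **[GDumb]**, for instance, greedily keeps a class-balanced buffer and simply retrains from scratch on it at test time. As a rough illustration of that greedy balancing rule, here is a minimal sketch (plain Python, hypothetical `GreedyBalancedBuffer` class, not the authors' implementation):

```python
# Hypothetical sketch of a greedily class-balanced memory, loosely in the spirit of GDumb.
# Not the authors' code; plain Python, no framework dependencies.
import random
from collections import defaultdict

class GreedyBalancedBuffer:
    """Keeps at most `capacity` samples while keeping class counts as even as possible."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = defaultdict(list)  # class label -> stored samples

    def _size(self):
        return sum(len(v) for v in self.memory.values())

    def add(self, sample, label):
        if self._size() < self.capacity:
            self.memory[label].append(sample)
            return
        # Full: make room by evicting from the currently largest class,
        # but only if it holds more samples than the incoming sample's class.
        largest = max(self.memory, key=lambda c: len(self.memory[c]))
        if len(self.memory[largest]) > len(self.memory[label]):
            self.memory[largest].pop(random.randrange(len(self.memory[largest])))
            self.memory[label].append(sample)

    def dataset(self):
        """Flat (sample, label) list; GDumb-style training then starts from scratch on it."""
        return [(s, c) for c, items in self.memory.items() for s in items]
```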
+ **[CWR\*]** Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches(CVPR 2020)[[paper]](https://arxiv.org/abs/1907.03799v3)
+ **[MiB]** Modeling the Background for Incremental Learning in Semantic Segmentation(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Cermelli_Modeling_the_Background_for_Incremental_Learning_in_Semantic_Segmentation_CVPR_2020_paper.pdf)[[code]](https://github.com/fcdl94/MiB)
+ **[K-FAC]** Continual Learning with Extended Kronecker-factored Approximate Curvature(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Lee_Continual_Learning_With_Extended_Kronecker-Factored_Approximate_Curvature_CVPR_2020_paper.html)
+ **[SDC]** Semantic Drift Compensation for Class-Incremental Learning(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Semantic_Drift_Compensation_for_Class-Incremental_Learning_CVPR_2020_paper.html)[[code]](https://github.com/yulu0724/SDC-IL)
+ **[NLTF]** Incremental Multi-Domain Learning with Network Latent Tensor Factorization(AAAI 2020)[[paper]](https://ojs.aaai.org//index.php/AAAI/article/view/6617)
+ **[CLCL]** Compositional Continual Language Learning(ICLR 2020)[[paper]](https://openreview.net/forum?id=rklnDgHtDS)[[code]](https://github.com/yli1/CLCL)
+ **[APD]** Scalable and Order-robust Continual Learning with Additive Parameter Decomposition(ICLR 2020)[[paper]](https://arxiv.org/pdf/1902.09432.pdf)
+ **[HYPERCL]** Continual learning with hypernetworks(ICLR 2020)[[paper]](https://openreview.net/forum?id=SJgwNerKvB)[[code]](https://github.com/chrhenning/hypercl)
+ **[CN-DPM]** A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning(ICLR 2020)[[paper]](https://arxiv.org/pdf/2001.00689.pdf)
+ **[UCB]** Uncertainty-guided Continual Learning with Bayesian Neural Networks(ICLR 2020)[[paper]](https://openreview.net/forum?id=HklUCCVKDB)[[code]](https://github.com/SaynaEbrahimi/UCB)
+ **[CLAW]** Continual Learning with Adaptive Weights(ICLR 2020)[[paper]](https://openreview.net/forum?id=Hklso24Kwr)
+ **[CAT]** Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/d7488039246a405baf6a7cbc3613a56f-Paper.pdf)[[code]](https://github.com/ZixuanKe/CAT)
+ **[AGS-CL]** Continual Learning with Node-Importance based Adaptive Group Sparse Regularization(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/258be18e31c8188555c2ff05b4d542c3-Abstract.html)
+ **[MERLIN]** Meta-Consolidation for Continual Learning(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/a5585a4d4b12277fee5cad0880611bc6-Paper.pdf)
+ **[OSAKA]** Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/c0a271bc0ecb776a094786474322cb82-Paper.pdf)[[code]](https://github.com/ElementAI/osaka)
+ **[RATT]** RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/c2964caac096f26db222cb325aa267cb-Paper.pdf)
+ **[CCLL]** Calibrating CNNs for Lifelong Learning(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/b3b43aeeacb258365cc69cdaf42a68af-Abstract.html)
+ **[CIDA]** Class-Incremental Domain Adaptation(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58601-0_4)
+ **[GraphSAIL]** GraphSAIL: Graph Structure Aware Incremental Learning for Recommender Systems(CIKM 2020)[[paper]](https://dl.acm.org/doi/abs/10.1145/3340531.3412754)
+ **[ANML]** Learning to Continually Learn(ECAI 2020)[[paper]](https://arxiv.org/abs/2002.09571)[[code]](https://github.com/uvm-neurobotics-lab/ANML)
+ **[ICWR]** Initial Classifier Weights Replay for Memoryless Class Incremental Learning(BMVC 2020)[[paper]](https://arxiv.org/pdf/2008.13710.pdf)
+ **[DAM]** Incremental Learning Through Deep Adaptation(TPAMI 2020)[[paper]](https://openreview.net/pdf?id=7YDLgf9_zgm)
+ **[OGD]** Orthogonal Gradient Descent for Continual Learning(AISTATS 2020)[[paper]](http://proceedings.mlr.press/v108/farajtabar20a.html)
+ **[MC-OCL]** Online Continual Learning under Extreme Memory Constraints(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58604-1_43)[[code]](https://github.com/DonkeyShot21/batch-level-distillation)
+ **[RCM]** Reparameterizing convolutions for incremental multi-task learning without task interference(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58565-5_41)[[code]](https://github.com/menelaoskanakis/RCM)
+ **[OvA-INN]** OvA-INN: Continual Learning with Invertible Neural Networks(IJCNN 2020)[[paper]](https://ieeexplore.ieee.org/abstract/document/9206766)
+ **[XtarNet]** XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning(ICML 2020)[[paper]](http://proceedings.mlr.press/v119/yoon20b/yoon20b.pdf)[[code]](https://github.com/EdwinKim3069/XtarNet)
+ **[DMC]** Class-incremental learning via deep model consolidation(WACV 2020)[[paper]](https://openaccess.thecvf.com/content_WACV_2020/html/Zhang_Class-incremental_Learning_via_Deep_Model_Consolidation_WACV_2020_paper.html)
+ **[iTAML]** iTAML: An Incremental Task-Agnostic Meta-learning Approach(CVPR 2020)[[paper]](https://arxiv.org/pdf/2003.11652.pdf)[[code]](https://github.com/brjathu/iTAML)
+ **[FSCIL]** Few-Shot Class-Incremental Learning(CVPR 2020)[[paper]](https://arxiv.org/pdf/2004.10956.pdf)[[code]](https://github.com/xyutao/fscil)
+ **[GFR]** Generative feature replay for class-incremental learning(CVPR 2020)[[paper]](https://ieeexplore.ieee.org/document/9150851)[[code]](https://github.com/xialeiliu/GFR-IL)
+ **[OSIL]** Incremental Learning In Online Scenario(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/He_Incremental_Learning_in_Online_Scenario_CVPR_2020_paper.html)
+ **[ONCE]** Incremental Few-Shot Object Detection(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Perez-Rua_Incremental_Few-Shot_Object_Detection_CVPR_2020_paper.html)
+ **[WA]** Maintaining discrimination and fairness in class incremental learning(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhao_Maintaining_Discrimination_and_Fairness_in_Class_Incremental_Learning_CVPR_2020_paper.pdf)[[code]](https://github.com/hugoycj/Incremental-Learning-with-Weight-Aligning)
+ **[CGATE]** Conditional Channel Gated Networks for Task-Aware Continual Learning(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Abati_Conditional_Channel_Gated_Networks_for_Task-Aware_Continual_Learning_CVPR_2020_paper.html)[[code]](https://github.com/lit-leo/cgate)
+ **[Mnemonics Training]** Mnemonics Training: Multi-Class Incremental Learning without Forgetting(CVPR 2020)[[paper]](https://class-il.mpi-inf.mpg.de/mnemonics-training/)[[code]](https://github.com/yaoyao-liu/class-incremental-learning)
+ **[MEGA]** Improved schemes for episodic memory based lifelong learning algorithm(NeurIPS 2020)[[paper]](https://par.nsf.gov/servlets/purl/10233158)
+ **[GAN Memory]** GAN Memory with No Forgetting(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/bf201d5407a6509fa536afc4b380577e-Abstract.html)[[code]](https://github.com/MiaoyunZhao/GANmemory_LifelongLearning)
+ **[Coreset]** Coresets via Bilevel Optimization for Continual Learning and Streaming(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/aa2a77371374094fe9e0bc1de3f94ed9-Paper.pdf)
+ **[FROMP]** Continual Deep Learning by Functional Regularisation of Memorable Past(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/2f3bbb9730639e9ea48f309d9a79ff01-Paper.pdf)[[code]](https://github.com/team-approx-bayes/fromp)
+ **[DER]** Dark Experience for General Continual Learning: a Strong, Simple Baseline(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/b704ea2c39778f07c617f6b7ce480e9e-Paper.pdf)[[code]](https://github.com/aimagelab/mammoth)
+ **[InstAParam]** Mitigating Forgetting in Online Continual Learning via Instance-Aware Parameterization(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/ca4b5656b7e193e6bb9064c672ac8dce-Paper.pdf)
+ **[BOCL]** Bi-Objective Continual Learning: Learning "New" While Consolidating "Known"(AAAI 2020)[[paper]](https://ojs.aaai.org//index.php/AAAI/article/view/6060)
+ **[REMIND]** Remind your neural network to prevent catastrophic forgetting(ECCV 2020)[[paper]](https://arxiv.org/pdf/1910.02509v3)[[code]](https://github.com/tyler-hayes/REMIND)
+ **[ACL]** Adversarial Continual Learning(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58621-8_23)[[code]](https://github.com/facebookresearch/Adversarial-Continual-Learning)
+ **[TPCIL]** Topology-Preserving Class-Incremental Learning(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123640256.pdf)
+ **[GDumb]** GDumb: A simple approach that questions our progress in continual learning(ECCV 2020)[[paper]](https://www.robots.ox.ac.uk/~tvg/publications/2020/gdumb.pdf)[[code]](https://github.com/drimpossible/GDumb)
+ **[PRS]** Imbalanced Continual Learning with Partitioning Reservoir Sampling(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123580409.pdf)
+ **[PODNet]** Pooled Outputs Distillation for Small-Tasks Incremental Learning(ECCV 2020)[[paper]](https://arxiv.org/abs/2004.13513)[[code]](https://github.com/arthurdouillard/incremental_learning.pytorch)
+ **[FA]** Memory-Efficient Incremental Learning Through Feature Adaptation(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58517-4_41)
+ **[L-VAEGAN]** Learning latent representations across multiple data domains using Lifelong VAEGAN(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58565-5_46)
+ **[Piggyback GAN]** Piggyback GAN: Efficient Lifelong Learning for Image Conditioned Generation(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123660392.pdf)
+ **[IDA]** Incremental Meta-Learning via Indirect Discriminant Alignment(ECCV 2020)[[paper]](https://arxiv.org/abs/2002.04162)
+ **[LAMOL]** LAMOL: LAnguage MOdeling for Lifelong Language Learning(ICLR 2020)[[paper]](https://openreview.net/forum?id=Skgxcn4YDS)[[code]](https://github.com/chho33/LAMOL)
+ **[FRCL]** Functional Regularisation for Continual Learning with Gaussian Processes(ICLR 2020)[[paper]](https://arxiv.org/abs/1901.11356)[[code]](https://github.com/AndreevP/FRCL)
+ **[GRS]** Continual Learning with Bayesian Neural Networks for Non-Stationary Data(ICLR 2020)[[paper]](https://openreview.net/forum?id=SJlsFpVtDB)
+ **[Brain-inspired replay]** Brain-inspired replay for continual learning with artificial neural networks(Nature Communications 2020)[[paper]](https://www.nature.com/articles/s41467-020-17866-2)[[code]](https://github.com/GMvandeVen/brain-inspired-replay)
+ **[ScaIL]** ScaIL: Classifier Weights Scaling for Class Incremental Learning(WACV 2020)[[paper]](https://openaccess.thecvf.com/content_WACV_2020/html/Belouadah_ScaIL_Classifier_Weights_Scaling_for_Class_Incremental_Learning_WACV_2020_paper.html)[[code]](https://github.com/EdenBelouadah/class-incremental-learning)
+ **[CLIFER]** CLIFER: Continual Learning with Imagination for Facial Expression Recognition(FG 2020)[[paper]](https://ieeexplore.ieee.org/document/9320226)
+ **[ARPER]** Continual Learning for Natural Language Generation in Task-oriented Dialog Systems(EMNLP 2020)[[paper]](https://arxiv.org/abs/2010.00910)
+ **[DnR]** Distill and Replay for Continual Language Learning(COLING 2020)[[paper]](https://www.aclweb.org/anthology/2020.coling-main.318.pdf)
+ **[ADER]** ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation(RecSys 2020)[[paper]](https://arxiv.org/abs/2007.12000)[[code]](https://github.com/DoubleMuL/ADER)
+ **[MUC]** More Classifiers, Less Forgetting: A Generic Multi-classifier Paradigm for Incremental Learning(ECCV 2020)[[paper]](http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123710698.pdf)[[code]](https://github.com/liuyudut/MUC)

## 2019
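Many 2019 entries below (e.g. **[ER]**, **[MIR]**, **[GSS]**) are built around a small episodic memory replayed alongside the incoming stream. The following is a minimal PyTorch-style sketch of one such replay step, with a hypothetical `replay_step` helper and a plain list as the buffer; it illustrates the general recipe, not any paper's implementation:

```python
# Illustrative experience-replay update, assuming a PyTorch classifier and a list-based
# buffer of (x, y) tensors; a sketch of the general recipe, not any specific paper's code.
import random
import torch
import torch.nn.functional as F

def replay_step(model, optimizer, x_new, y_new, buffer,
                replay_batch=32, buffer_capacity=500):
    """One online update on the incoming batch plus a batch replayed from memory."""
    model.train()
    x, y = x_new, y_new
    if len(buffer) >= replay_batch:
        xs, ys = zip(*random.sample(buffer, replay_batch))
        x = torch.cat([x_new, torch.stack(xs)])
        y = torch.cat([y_new, torch.stack(ys)])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    # Simplest possible memory update: store new samples until capacity is reached
    # (methods such as ER use reservoir sampling; MIR/GSS choose what to replay or store).
    for xi, yi in zip(x_new, y_new):
        if len(buffer) < buffer_capacity:
            buffer.append((xi.detach(), yi.detach()))
    return loss.item()
```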
+ **[LwM]** Learning without memorizing(CVPR 2019)[[paper]](https://ieeexplore.ieee.org/document/8953962)
+ **[CPG]** Compacting, picking and growing for unforgetting continual learning(NeurIPS 2019)[[paper]](https://arxiv.org/pdf/1910.06562v1.pdf)[[code]](https://github.com/ivclab/CPG)
+ **[UCL]** Uncertainty-based continual learning with adaptive regularization(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/file/2c3ddf4bf13852db711dd1901fb517fa-Paper.pdf)
+ **[OML]** Meta-Learning Representations for Continual Learning(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/hash/f4dd765c12f2ef67f98f3558c282a9cd-Abstract.html)[[code]](https://github.com/Khurramjaved96/mrcl)
+ **[ALASSO]** Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation(ICCV 2019)[[paper]](https://openaccess.thecvf.com/content_ICCV_2019/papers/Park_Continual_Learning_by_Asymmetric_Loss_Approximation_With_Single-Side_Overestimation_ICCV_2019_paper.pdf)
+ **[Learn-to-Grow]** Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting(ICML 2019)[[paper]](http://proceedings.mlr.press/v97/li19m/li19m.pdf)
+ **[OWM]** Continual Learning of Context-dependent Processing in Neural Networks(Nature Machine Intelligence 2019)[[paper]](https://www.nature.com/articles/s42256-019-0080-x#Sec2)[[code]](https://github.com/beijixiong3510/OWM)
+ **[LUCIR]** Learning a Unified Classifier Incrementally via Rebalancing(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/html/Hou_Learning_a_Unified_Classifier_Incrementally_via_Rebalancing_CVPR_2019_paper.html)[[code]](https://github.com/hshustc/CVPR19_Incremental_Learning)
+ **[TFCL]** Task-Free Continual Learning(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Aljundi_Task-Free_Continual_Learning_CVPR_2019_paper.pdf)
+ **[GD-WILD]** Overcoming catastrophic forgetting with unlabeled data in the wild(CVPR 2019)[[paper]](https://ieeexplore.ieee.org/document/9010368)[[code]](https://github.com/kibok90/iccv2019-inc)
+ **[DGM]** Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Ostapenko_Learning_to_Remember_A_Synaptic_Plasticity_Driven_Framework_for_Continual_CVPR_2019_paper.pdf)
+ **[BiC]** Large Scale Incremental Learning(CVPR 2019)[[paper]](https://arxiv.org/abs/1905.13260)[[code]](https://github.com/wuyuebupt/LargeScaleIncrementalLearning)
+ **[MER]** Learning to learn without forgetting by maximizing transfer and minimizing interference(ICLR 2019)[[paper]](https://openreview.net/pdf?id=B1gTShAct7)[[code]](https://github.com/mattriemer/mer)
+ **[PGMA]** Overcoming catastrophic forgetting for continual learning via model adaptation(ICLR 2019)[[paper]](https://openreview.net/forum?id=ryGvcoA5YX)
+ **[A-GEM]** Efficient Lifelong Learning with A-GEM(ICLR 2019)[[paper]](https://arxiv.org/pdf/1812.00420.pdf)[[code]](https://github.com/facebookresearch/agem)
+ **[IL2M]** Class incremental learning with dual memory(ICCV 2019)[[paper]](https://ieeexplore.ieee.org/document/9009019)
+ **[ILCAN]** Incremental learning using conditional adversarial networks(ICCV 2019)[[paper]](https://ieeexplore.ieee.org/document/9009031)
+ **[Lifelong GAN]** Lifelong GAN: Continual Learning for Conditional Image Generation(ICCV 2019)[[paper]](https://openaccess.thecvf.com/content_ICCV_2019/html/Zhai_Lifelong_GAN_Continual_Learning_for_Conditional_Image_Generation_ICCV_2019_paper.html)
+ **[GSS]** Gradient based sample selection for online continual learning(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/file/e562cd9c0768d5464b64cf61da7fc6bb-Paper.pdf)
+ **[ER]** Experience Replay for Continual Learning(NeurIPS 2019)[[paper]](https://arxiv.org/abs/1811.11682)
+ **[MIR]** Online Continual Learning with Maximal Interfered Retrieval(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/hash/15825aee15eb335cc13f9b559f166ee8-Abstract.html)[[code]](https://github.com/optimass/Maximally_Interfered_Retrieval)
+ **[RPS-Net]** Random Path Selection for Incremental Learning(NeurIPS 2019)[[paper]](https://www.researchgate.net/profile/Salman-Khan-62/publication/333617650_Random_Path_Selection_for_Incremental_Learning/links/5d04905ea6fdcc39f11b7355/Random-Path-Selection-for-Incremental-Learning.pdf)
+ **[CLEER]** Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay(IJCAI 2019)[[paper]](https://arxiv.org/abs/1903.04566)
+ **[PAE]** Increasingly Packing Multiple Facial-Informatics Modules in A Unified Deep-Learning Model via Lifelong Learning(ICMR 2019)[[paper]](https://dl.acm.org/doi/10.1145/3323873.3325053)[[code]](https://github.com/ivclab/PAE)

## 2018
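A number of 2018 entries below (e.g. **[PackNet]**, **[Piggyback]**, **[HAT]**) avoid forgetting by isolating task-specific parameters with learned or pruned masks. The sketch below illustrates the per-task gating idea with a hypothetical `GatedLinear` layer; it is a simplified illustration, not the code of any of these papers:

```python
# Rough sketch of per-task gating as used by mask-based approaches; a hypothetical
# `GatedLinear` layer for illustration, not the implementation of HAT or PackNet.
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    """Linear layer whose output units are gated by a learned, task-specific embedding."""

    def __init__(self, in_features, out_features, n_tasks, s=50.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.task_embed = nn.Embedding(n_tasks, out_features)
        self.s = s  # larger s pushes the sigmoid gates toward hard 0/1 values

    def forward(self, x, task_id):
        idx = torch.tensor([task_id], device=x.device)
        gate = torch.sigmoid(self.s * self.task_embed(idx))  # shape (1, out_features)
        return self.linear(x) * gate  # units gated near 0 are effectively unused by this task
```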
+ **[PackNet]** PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning(CVPR 2018)[[paper]](https://openaccess.thecvf.com/content_cvpr_2018/html/Mallya_PackNet_Adding_Multiple_CVPR_2018_paper.html)[[code]](https://github.com/arunmallya/packnet)
+ **[OLA]** Online Structured Laplace Approximations for Overcoming Catastrophic Forgetting(NIPS 2018)[[paper]](https://proceedings.neurips.cc/paper/2018/hash/f31b20466ae89669f9741e047487eb37-Abstract.html)
+ **[RCL]** Reinforced Continual Learning(NIPS 2018)[[paper]](http://papers.nips.cc/paper/7369-reinforced-continual-learning.pdf)[[code]](https://github.com/xujinfan/Reinforced-Continual-Learning)
+ **[MARL]** Routing networks: Adaptive selection of non-linear functions for multi-task learning(ICLR 2018)[[paper]](https://openreview.net/forum?id=ry8dvM-R-)
+ **[P&C]** Progress & Compress: A scalable framework for continual learning(ICML 2018)[[paper]](https://arxiv.org/abs/1805.06370)
+ **[DEN]** Lifelong Learning with Dynamically Expandable Networks(ICLR 2018)[[paper]](https://openreview.net/forum?id=Sk7KsfW0-)[[code]](https://github.com/jaehong31/DEN)
+ **[Piggyback]** Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights(ECCV 2018)[[paper]](https://openaccess.thecvf.com/content_ECCV_2018/papers/Arun_Mallya_Piggyback_Adapting_a_ECCV_2018_paper.pdf)[[code]](https://github.com/arunmallya/piggyback)
+ **[RWalk]** Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence(ECCV 2018)[[paper]](https://openaccess.thecvf.com/content_ECCV_2018/html/Arslan_Chaudhry__Riemannian_Walk_ECCV_2018_paper.html)
+ **[MAS]** Memory Aware Synapses: Learning What not to Forget(ECCV 2018)[[paper]](https://arxiv.org/pdf/1711.09601.pdf)[[code]](https://github.com/rahafaljundi/MAS-Memory-Aware-Synapses)
+ **[R-EWC]** Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting(ICPR 2018)[[paper]](https://ieeexplore.ieee.org/abstract/document/8545895)[[code]](https://github.com/xialeiliu/RotateNetworks)
+ **[HAT]** Overcoming Catastrophic Forgetting with Hard Attention to the Task(ICML 2018)[[paper]](http://proceedings.mlr.press/v80/serra18a.html)[[code]](https://github.com/joansj/hat)
+ **[MeRGANs]** Memory Replay GANs:learning to generate images from new categories without forgetting(NIPS 2018)[[paper]](https://arxiv.org/abs/1809.02058)[[code]](https://github.com/WuChenshen/MeRGAN)
+ **[EEIL]** End-to-End Incremental Learning(ECCV 2018)[[paper]](https://arxiv.org/abs/1807.09536)[[code]](https://github.com/fmcp/EndToEndIncrementalLearning)
+ **[Adaptation by Distillation]** Lifelong Learning via Progressive Distillation and Retrospection(ECCV 2018)[[paper]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Saihui_Hou_Progressive_Lifelong_Learning_ECCV_2018_paper.pdf)
+ **[ESGR]** Exemplar-Supported Generative Reproduction for Class Incremental Learning(BMVC 2018)[[paper]](http://bmvc2018.org/contents/papers/0325.pdf)[[code]](https://github.com/TonyPod/ESGR)
+ **[VCL]** Variational Continual Learning(ICLR 2018)[[paper]](https://arxiv.org/pdf/1710.10628.pdf)
+ **[FearNet]** FearNet: Brain-Inspired Model for Incremental Learning(ICLR 2018)[[paper]](https://openreview.net/forum?id=SJ1Xmf-Rb)
+ **[DGDMN]** Deep Generative Dual Memory Network for Continual Learning(ICLR 2018)[[paper]](https://openreview.net/forum?id=BkVsWbbAW)

## 2017
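The 2017 entries below include the regularization classics **[EWC]** and **[SI]**, which penalize changes to parameters estimated to be important for earlier tasks. A minimal sketch of such a quadratic penalty is given here, assuming a precomputed diagonal importance estimate (the Fisher information in EWC); it is illustrative only, not the authors' implementation:

```python
# Minimal sketch of the quadratic importance-weighted penalty; `old_params` and `fisher`
# are dicts keyed by parameter name, assumed to be precomputed after the previous task.
import torch

def ewc_penalty(model, old_params, fisher, lam=1000.0):
    """(lambda / 2) * sum_i F_i * (theta_i - theta*_i)^2 over the named parameters."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Training on the new task then minimizes: task_loss + ewc_penalty(model, theta_star, F),
# where theta_star are the parameters learned on the old task and F is a diagonal
# Fisher-information estimate of how important each parameter was for that task.
```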
+ **[Expert Gate]** Expert Gate: Lifelong learning with a network of experts(CVPR 2017)[[paper]](https://openaccess.thecvf.com/content_cvpr_2017/papers/Aljundi_Expert_Gate_Lifelong_CVPR_2017_paper.pdf)[[code]](https://github.com/wannabeOG/ExpertNet-Pytorch)
+ **[ILOD]** Incremental Learning of Object Detectors without Catastrophic Forgetting(ICCV 2017)[[paper]](https://openaccess.thecvf.com/content_ICCV_2017/papers/Shmelkov_Incremental_Learning_of_ICCV_2017_paper.pdf)[[code]](https://github.com/kshmelkov/incremental_detectors)
+ **[EBLL]** Encoder Based Lifelong Learning(ICCV 2017)[[paper]](https://arxiv.org/abs/1704.01920)
+ **[IMM]** Overcoming Catastrophic Forgetting by Incremental Moment Matching(NIPS 2017)[[paper]](https://arxiv.org/abs/1703.08475)[[code]](https://github.com/btjhjeon/IMM_tensorflow)
+ **[SI]** Continual Learning through Synaptic Intelligence(ICML 2017)[[paper]](http://proceedings.mlr.press/v70/zenke17a/zenke17a.pdf)[[code]](https://github.com/ganguli-lab/pathint)
+ **[EWC]** Overcoming Catastrophic Forgetting in Neural Networks(PNAS 2017)[[paper]](https://arxiv.org/abs/1612.00796)[[code]](https://github.com/stokesj/EWC)
+ **[iCaRL]** iCaRL: Incremental Classifier and Representation Learning(CVPR 2017)[[paper]](https://arxiv.org/abs/1611.07725)[[code]](https://github.com/srebuffi/iCaRL)
+ **[GEM]** Gradient Episodic Memory for Continual Learning(NIPS 2017)[[paper]](https://proceedings.neurips.cc/paper/2017/hash/f87522788a2be2d171666752f97ddebb-Abstract.html)[[code]](https://github.com/facebookresearch/GradientEpisodicMemory)
+ **[DGR]** Continual Learning with Deep Generative Replay(NIPS 2017)[[paper]](https://proceedings.neurips.cc/paper/2017/file/0efbe98067c6c73dba1250d2beaa81f9-Paper.pdf)[[code]](https://github.com/kuc2477/pytorch-deep-generative-replay)

## 2016
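**[LwF]** below introduced the distillation term that many later class-incremental methods reuse: the new network's predictions on old classes are kept close to those of a frozen copy of the previous network. A minimal PyTorch sketch of that term follows, using a hypothetical `lwf_distillation` helper rather than the original implementation:

```python
# Minimal sketch of a temperature-scaled distillation term; assumes PyTorch logits from
# the current model and from a frozen copy of the pre-update model. Illustrative only.
import torch.nn.functional as F

def lwf_distillation(new_logits, old_logits, T=2.0):
    """Cross-entropy between softened old and new predictions on the old classes."""
    old_prob = F.softmax(old_logits / T, dim=1)
    new_log_prob = F.log_softmax(new_logits / T, dim=1)
    return -(old_prob * new_log_prob).sum(dim=1).mean() * (T * T)

# During training on new data, the total loss is roughly:
#   cross_entropy_on_new_classes
#   + lambda_old * lwf_distillation(new_logits_on_old_classes, old_model_logits.detach())
```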
+ **[LwF]** Learning without Forgetting(ECCV 2016)[[paper]](https://link.springer.com/chapter/10.1007/978-3-319-46493-0_37)[[code]](https://github.com/lizhitwo/LearningWithoutForgetting)
# :gift_heart: Contributors
[@pinna526](https://github.com/pinna526) · [@xiaopenghong](https://github.com/xiaopenghong) · [@iamwangyabin](https://github.com/iamwangyabin) · [@ZhihengCV](https://github.com/ZhihengCV) · [@benmagnifico](https://github.com/benmagnifico) · [@zxxxxh](https://github.com/zxxxxh)