Awesome Incremental Learning
https://github.com/xialeiliu/Awesome-Incremental-Learning
Indexed by Ecosyste.ms: Awesome, an open API service indexing awesome lists of open source software. Last synced: 3 days ago.
Survey
- (23 survey rows; paper titles and links did not survive extraction; surviving code-repository fragments include `continual-learning`, `incremental-learning/tree/master/cil`, `zdw/CIL_Survey`, `hailong/LAMDA-PILOT`, `ML-Lab/llm-continual-learning-survey`)
Papers
2024
- (154 rows for 2024; paper titles and links did not survive extraction; surviving code-repository fragments include `teacher-adaptation`, `SSIAT`, `ICST-MIPL/C2R`, `purdue/continual-compression`, `wang/CCL-DC`, `Gao/CPrompt`, `hailong/CVPR24-Ease`, `AGI/PriViLege`, `lee/SB-MCL`, `JWLEE/STELLA_code`, `lakshman/diminishing-returns-wide-continual-learning`, `huang/eTag`, `ml/pec`, `XJTU/EvoPrompt`, `ah/AwoForget`, `yoon/Pick-a-back`, `AIM-Group/CLIFF`, `purdue/delta`, `liang/DDDR`, `Laboratory/Revisting_FSCIL`, `AGI/VIL`, `xie/ECCV24_NeST`, `Laboratory/BPF`, `109/Web-WILSS`, `Tom/iNeMo`, `ai.github.io/promptccd`)
2023
- (226 rows for 2023, including several duplicated runs; paper titles and links did not survive extraction; surviving code-repository fragments include `ML-Lab/unified-continual-learning`, `gyuhak/CLOOD`, `model-composition`, `CIL_ICCV2023`, `wise-Lightweight-Reprogramming`, `liu/CL-DETR`, `RIPL/CODA-Prompt`, `models-in-CL`, `Lab/SCoMMER`, `for-incremental-learning-NeurIPS-2023`)
2019
2022
- (137 rows for 2022; paper titles and links did not survive extraction; surviving code-repository fragments include `NLP/Incremental_Prompting`, `gyuhak/WPTP`, `USC/CLiMB`, `research/l2p`, `pour/Dynamic-Sparse-Distributed-Memory`, `arjun/CSCCT`, `U-N/ECCV22-FOSTER`, `Liu-Lab/CPT`, `zdw/TPAMI-Limit`, `vipa/MEAT-TIL`, `Shi/CwD`, `zdw/CVPR22-Fact`, `gyuhak/MORE`)
2021
- (97 rows for 2021; paper titles and links did not survive extraction; surviving code-repository fragments include `Non-Co-occurrence`, `continual-learning`, `inha/Split-and-Bridge`)
2020
- (93 rows for 2020, including a long run of duplicated `inspired-replay` rows; paper titles and links did not survive extraction; surviving code-repository fragments include `Continual-Learning`, `hayes/REMIND`, `liu/mnemonics`, `inspired-replay`, `neurobotics-lab/ANML`)
2018
2017
2016
- (49 rows for 2016; no paper titles or links survived extraction)
ContinualAI wiki
Workshops
Challenges or Competitions