awesome-continual-leaning-with-ptms
This is a curated list of "Continual Learning with Pretrained Models" research.
https://github.com/danelpeng/awesome-continual-leaning-with-ptms
Methods 🌟
- **CODA-Prompt: COntinual Decomposed Attention-Based Prompting for Rehearsal-Free Continual Learning**
- **Online Class Incremental Learning on Stochastic Blurry Task Boundary via Mask and Visual Prompt Tuning**
- **Self-regulating Prompts: Foundational Model Adaptation without Forgetting**
- **Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach**
- **ModalPrompt: Dual-Modality Guided Prompt for Continual Learning of Large Multimodal Models**
- **Leveraging Hierarchical Taxonomies in Prompt-based Continual Learning**
- **LW2G: Learning Whether to Grow for Prompt-based Continual Learning**
- **Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models**
- **Evolving Parameterized Prompt Memory for Continual Learning**
- **Generating Prompts in Latent Space for Rehearsal-free Continual Learning**
- **Convolutional Prompting meets Language Models for Continual Learning**
- **Consistent Prompting for Rehearsal-Free Continual Learning**
- **Steering Prototypes with Prompt-tuning for Rehearsal-free Continual Learning**
- **Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality**
- **When Prompt-based Incremental Learning Does Not Meet Strong Pretraining**
- **Introducing Language Guidance in Prompt-based Continual Learning**
- **Expand and Merge: Continual Learning with the Guidance of Fixed Text Embedding Space**
- **Progressive Prompts: Continual Learning for Language Models**
- **ATLAS: Adapter-Based Multi-Modal Continual Learning with a Two-Stage Learning Strategy**
- **Learning to Route for Dynamic Adapter Composition in Continual Learning with Language Models**
- **MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental Learning**
- **Recent Advances of Multimodal Continual Learning: A Comprehensive Survey**
- **Continual Learning with Pre-Trained Models: A Survey**
- **Generating Instance-level Prompts for Rehearsal-free Continual Learning**
- **S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning**
- **DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning**
- **Adaptive Adapter Routing for Long-Tailed Class-Incremental Learning**
- **Mixture of Experts Meets Prompt-Based Continual Learning**
- **Learning More Generalized Experts by Merging Experts in Mixture-of-Experts**
- **MoRAL: MoE Augmented LoRA for LLMs’ Lifelong Learning**
- **Divide and not forget: Ensemble of selectively trained experts in Continual Learning**
- **An Efficient General-Purpose Modular Vision Model via Multi-Task Heterogeneous Training**
- **Lifelong Language Pretraining with Distribution-Specialized Experts**
- **Continual Learning Beyond a Single Model**
- **Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners**
- **Learning without Forgetting for Vision-Language Models**
- **A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning**
- **Continual learning with task specialist**
- **A Practitioner’s Guide to Continual Multimodal Pretraining**
- **CLIP with Generative Latent Replay: a Strong Baseline for Incremental Learning**
- **Anytime Continual Learning for Open Vocabulary Classification**
- **Select and Distill: Selective Dual-Teacher Knowledge Transfer for Continual Learning on Vision-Language Models**
- **CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning**
- **Learning to Prompt for Continual Learning**
- **InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning**
- **Online-LoRA: Task-free Online Continual Learning via Low Rank Adaptation**
- **Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters**
- **Learning Attentional Mixture of LoRAs for Language Model Continual Learning**
- **Theory on Mixture-of-Experts in Continual Learning**
- **Weighted Ensemble Models Are Strong Continual Learners**
- **LEMoE: Advanced Mixture of Experts Adaptor for Lifelong Model Editing of Large Language Models**
- **TiC-CLIP: Continual Training of CLIP Models**
- **Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models**
- **Continual Vision-Language Representation Learning with Off-Diagonal Information**
- **CLIP model is an Efficient Continual Learner**
- **Don’t Stop Learning: Towards Continual Learning for the CLIP Model**
- **CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks**
- **Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning**
- **Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer**
- **Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning**
- **CoSCL: Cooperation of Small Continual Learners is Stronger Than a Big One**
- **Ex-Model: Continual Learning from a Stream of Trained Models**
- **GUIDE: Guidance-based Incremental Learning with Diffusion Models**
- **SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection**
- **Robust Fine-Tuning of Zero-Shot Models**
- **Diffusion Model Meets Non-Exemplar Class-Incremental Learning and Beyond**
- **Class-Prototype Conditional Diffusion Model with Gradient Projection for Continual Learning**
- **Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning**
- **DiffClass: Diffusion-Based Class Incremental Learning**
- **Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning**
- **Incremental Learning for Robot Shared Autonomy**
- **Vision-Language Navigation with Continual Learning**
- **Continual Vision-and-Language Navigation**
- **Online Continual Learning For Interactive Instruction Following Agents**
- **LLaCA: Multimodal Large Language Continual Assistant**
- **Is Parameter Collision Hindering Continual Learning in LLMs?**
- **Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting**
- **Low-Rank Continual Personalization of Diffusion Models**
- **Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters**
- **Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA**
- **Continual Learning of Diffusion Models with Generative Distillation**
- **Conditioned Prompt-Optimization for Continual Deepfake Detection**
- **Class-Incremental Learning using Diffusion Model for Distillation and Replay**
- **DDGR: Continual Learning with Deep Diffusion-based Generative Replay**
- **Task-unaware Lifelong Robot Learning with Retrieval-based Weighted Local Adaptation**
- **A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials**
- **Mixture-of-Variational-Experts for Continual Learning**
- **Routing Networks with Co-training for Continual Learning**
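Many of the entries above are prompt-based methods in the L2P/DualPrompt family, which adapt a frozen pretrained backbone by prepending a few learnable prompt vectors selected from a pool via key/query matching, so that prompts learned for earlier tasks are never overwritten. The following is a minimal NumPy sketch of that shared idea only; the dimensions, names, and selection rule are illustrative assumptions, not code from any listed paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16            # embedding dimension (illustrative)
pool_size = 5     # number of (key, prompt) pairs in the pool
top_k = 2         # prompts prepended per example

# Learnable parameters: only these would be trained; the backbone stays frozen.
prompt_keys = rng.normal(size=(pool_size, d))
prompt_pool = rng.normal(size=(pool_size, d))

def backbone(tokens):
    """Stand-in for a frozen pretrained feature extractor (no trainable params)."""
    return tokens.mean(axis=0)

def select_prompts(query):
    """Pick the top-k prompts whose keys are most cosine-similar to the query."""
    sims = prompt_keys @ query / (
        np.linalg.norm(prompt_keys, axis=1) * np.linalg.norm(query) + 1e-8)
    idx = np.argsort(-sims)[:top_k]
    return prompt_pool[idx], idx

x = rng.normal(size=(8, d))              # 8 input "tokens" for one example
query = backbone(x)                      # query feature from the frozen backbone
prompts, chosen = select_prompts(query)  # instance-wise prompt selection
extended = np.concatenate([prompts, x])  # prepend selected prompts to the sequence
feature = backbone(extended)             # downstream head would consume this

print(extended.shape)  # (10, 16): top_k prompts + 8 original tokens
```

At training time, gradients flow only into `prompt_pool` and `prompt_keys`; rehearsal-free behavior comes from the backbone being frozen and distinct tasks tending to select distinct pool entries.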
Acknowledgements 👨‍🏫