# Awesome-Refreshing-LLMs
Although **large language models (LLMs)** are impressive at solving various tasks, they can quickly become outdated after deployment, and keeping them up to date is a pressing concern. How can we refresh LLMs so that they align with the ever-changing world knowledge ***without expensive retraining from scratch***?
*Figure: An LLM is static after training and can quickly become outdated. For example, ChatGPT has a knowledge cutoff date of September 2021; without web browsing, it does not know the latest information since then.*

## News
- **[2023-10] Our survey paper is now available on arXiv: *[How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances](https://arxiv.org/abs/2310.07343)*.**
- **[2023-10] Our survey paper *"How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances"* has been accepted to [EMNLP 2023](https://2023.emnlp.org/)! We will release the camera-ready version soon.**
- **[2023-10] We created this repository to maintain a paper list on *refreshing LLMs without retraining*.**

---
## Table of Contents
- [News](#news)
- [Table of Contents](#table-of-contents)
- [Papers](#papers)
  - [Methods Overview](#methods-overview)
  - [Knowledge Editing](#knowledge-editing)
    - [Meta-learning](#meta-learning)
    - [Hypernetwork Editor](#hypernetwork-editor)
    - [Locate and Edit](#locate-and-edit)
    - [Other](#other)
  - [Continual Learning](#continual-learning)
    - [Continual Pre-training](#continual-pre-training)
    - [Continual Knowledge Editing](#continual-knowledge-editing)
  - [Memory-enhanced](#memory-enhanced)
  - [Retrieval-enhanced](#retrieval-enhanced)
  - [Internet-enhanced](#internet-enhanced)
- [Resources](#resources)
  - [Related Survey](#related-survey)
  - [Tools](#tools)
- [Citation](#citation)
- [Acknowledgement \& Contribution](#acknowledgement--contribution)

## Papers
### Methods Overview
To refresh LLMs to align with the ever-changing world knowledge without retraining, we roughly categorize existing methods into ***Implicit*** and ***Explicit*** approaches.
***Implicit*** approaches seek to directly alter the knowledge stored in the LLM itself (i.e., its parameters or weights), while ***Explicit*** approaches more often incorporate external resources to override the internal knowledge (e.g., augmenting the model with a search engine). Please see our paper for more details.
*Figure: Taxonomy of methods to align LLMs with the ever-changing world knowledge.*

*Figure: A high-level comparison of different approaches.*

### Knowledge Editing
> **Knowledge editing (KE)** is an emerging and promising research area that aims to alter the parameters encoding specific pieces of knowledge in pre-trained models, so that the model makes new predictions on those revised instances while keeping other, irrelevant knowledge unchanged.
> We categorize existing methods into *meta-learning*-, *hypernetwork*-, and *locate-and-edit*-based methods.

#### Meta-learning
| Year | Venue | Paper | Link |
| :--- | :---- | :------------------------------------------------------ | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2023 | Arxiv | RECKONING: Reasoning through Dynamic Knowledge Encoding | [](https://arxiv.org/abs/2305.06349) |
| 2020 | ICLR | Editable Neural Networks | [](https://openreview.net/forum?id=HJedXaEtvS) [](https://github.com/editable-ICLR2020/editable) |

#### Hypernetwork Editor
| Year | Venue | Paper | Link |
| :--- | :---- | :---------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| 2023 | KBS | A divide and conquer framework for Knowledge Editing | [](https://www.sciencedirect.com/science/article/pii/S0950705123005762) |
| 2023 | Arxiv | Inspecting and Editing Knowledge Representations in Language Models | [](https://arxiv.org/abs/2304.00740) [](https://github.com/evandez/REMEDI) |
| 2023 | Arxiv | Propagating Knowledge Updates to LMs Through Distillation | [](https://arxiv.org/abs/2306.09306) [](https://github.com/shankarp8/knowledge_distillation) |
| 2023 | EACL | Methods for Measuring, Updating, and Visualizing Factual Beliefs in Language Models | [](https://aclanthology.org/2023.eacl-main.199/) [](https://github.com/peterbhase/SLAG-Belief-Updating) |
| 2022 | ICLR | Fast Model Editing at Scale | [](https://openreview.net/forum?id=0DcZxeWfOPt) [](https://github.com/eric-mitchell/mend) |
| 2021 | EMNLP | Editing Factual Knowledge in Language Models | [](https://aclanthology.org/2021.emnlp-main.522/) [](https://github.com/nicola-decao/KnowledgeEditor) |

#### Locate and Edit
| Year | Venue | Paper | Link |
| :--- | :------ | :------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2023 | Arxiv | KLoB: a Benchmark for Assessing Knowledge Locating Methods in Language Models | [](https://arxiv.org/abs/2309.16535) [](https://github.com/juyiming/KLoB) |
| 2023 | Arxiv | Editing Commonsense Knowledge in GPT | [](https://arxiv.org/abs/2305.14956) [](https://github.com/anshitag/memit_csk) |
| 2023 | Arxiv | PMET: Precise Model Editing in a Transformer | [](https://arxiv.org/abs/2308.08742) [](https://github.com/xpq-tech/PMET) |
| 2023 | Arxiv | Journey to the Center of the Knowledge Neurons: Discoveries of Language-Independent Knowledge Neurons and Degenerate Knowledge Neurons | [](https://arxiv.org/abs/2308.13198) |
| 2023 | Arxiv | Dissecting Recall of Factual Associations in Auto-Regressive Language Models | [](https://arxiv.org/abs/2304.14767) |
| 2023 | ICLR | Mass-Editing Memory in a Transformer | [](https://openreview.net/forum?id=MkbcAHIYgyS) [](https://github.com/kmeng01/memit) |
| 2022 | ACL | Knowledge Neurons in Pretrained Transformers | [](https://aclanthology.org/2022.acl-long.581/) [](https://github.com/hunter-ddm/knowledge-neurons) |
| 2022 | NeurIPS | Locating and Editing Factual Associations in GPT | [](https://proceedings.neurips.cc/paper_files/paper/2022/hash/6f1d43d5a82a37e89b0665b33bf3a182-Abstract-Conference.html) [](https://github.com/kmeng01/rome) |

#### Other
| Year | Venue | Paper | Link |
| :--- | :---- | :-------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2023 | Arxiv | Eva-KELLM: A New Benchmark for Evaluating Knowledge Editing of LLMs | [](https://arxiv.org/abs/2308.09954) |
| 2023 | Arxiv | Evaluating the Ripple Effects of Knowledge Editing in Language Models | [](https://arxiv.org/abs/2307.12976) [](https://github.com/edenbiran/RippleEdits/) |
| 2023 | Arxiv | Cross-Lingual Knowledge Editing in Large Language Models | [](https://arxiv.org/abs/2309.08952) [](https://github.com/krystalan/Bi-ZsRE) |
| 2023 | Arxiv | Language Anisotropic Cross-Lingual Model Editing | [](https://arxiv.org/abs/2205.12677) |

### Continual Learning
> **Continual learning (CL)** aims to enable a model to learn from a continuous data stream over time while reducing catastrophic forgetting of previously acquired knowledge. With CL, a deployed LLM can potentially adapt to the changing world without costly retraining from scratch. The papers below employ CL to align language models with the current world knowledge, covering *Continual Pre-training* and *Continual Knowledge Editing*.
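As a rough illustration of what continual pre-training boils down to in practice, the sketch below simply keeps training an already-deployed causal LM on newly collected text with a small learning rate. The model name, the toy corpus, and the hyperparameters are placeholders for illustration only, not settings from any paper listed here; the methods below add regularization, adapters, or modular experts on top of such a loop to reduce catastrophic forgetting.

```python
# Minimal continual pre-training sketch (illustrative only): keep updating a
# deployed causal LM on fresh text instead of retraining from scratch.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

optimizer = AdamW(model.parameters(), lr=1e-5)  # small LR helps limit forgetting

# Pretend this is text crawled after the model's original knowledge cutoff.
new_documents = ["Example document containing up-to-date world knowledge ..."]

for doc in new_documents:
    batch = tokenizer(doc, return_tensors="pt", truncation=True, max_length=512)
    outputs = model(**batch, labels=batch["input_ids"])  # standard LM objective
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```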
#### Continual Pre-training
| Year | Venue | Paper | Link |
| :--- | :------ | :---------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2023 | Arxiv | KILM: Knowledge Injection into Encoder-Decoder Language Models | [](https://arxiv.org/abs/2302.09170) [](https://github.com/alexa/kilm) |
| 2023 | Arxiv | Semiparametric Language Models Are Scalable Continual Learners | [](https://arxiv.org/abs/2303.01421) |
| 2023 | Arxiv | Meta-Learning Online Adaptation of Language Models | [](https://arxiv.org/abs/2305.15076) |
| 2023 | Arxiv | ModuleFormer: Modularity Emerges from Mixture-of-Experts | [](https://arxiv.org/abs/2306.04640) [](https://github.com/IBM/ModuleFormer) |
| 2023 | Arxiv | Self Information Update for Large Language Models through Mitigating Exposure Bias | [](https://arxiv.org/abs/2305.18582) |
| 2023 | Arxiv | Continual Pre-Training of Large Language Models: How to (re)warm your model? | [](https://arxiv.org/abs/2308.04014) |
| 2023 | ICLR | Continual Pre-training of Language Models | [](https://openreview.net/forum?id=m_GDIItaI3o) [](https://github.com/UIC-Liu-Lab/ContinualLM) |
| 2023 | ICML | Lifelong Language Pretraining with Distribution-Specialized Experts | [](https://arxiv.org/abs/2305.12281) |
| 2022 | ACL | ELLE: Efficient Lifelong Pre-training for Emerging Data | [](https://aclanthology.org/2022.findings-acl.220/) [](https://github.com/thunlp/elle) |
| 2022 | EMNLP | Fine-tuned Language Models are Continual Learners | [](https://aclanthology.org/2022.emnlp-main.410/) [](https://github.com/ThomasScialom/T0_continual_learning) |
| 2022 | EMNLP | Continual Training of Language Models for Few-Shot Learning | [](https://aclanthology.org/2022.emnlp-main.695/) [](https://github.com/UIC-Liu-Lab/CPT) |
| 2022 | EMNLP | TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models | [](https://aclanthology.org/2022.emnlp-main.418/) [](https://github.com/joeljang/temporalwiki) |
| 2022 | ICLR | LoRA: Low-Rank Adaptation of Large Language Models | [](https://openreview.net/forum?id=nZeVKeeFYf9) [](https://github.com/microsoft/LoRA) |
| 2022 | ICLR | Towards Continual Knowledge Learning of Language Models | [](https://openreview.net/forum?id=vfsRB5MImo9) [](https://github.com/joeljang/continual-knowledge-learning) |
| 2022 | NAACL | DEMix Layers: Disentangling Domains for Modular Language Modeling | [](https://aclanthology.org/2022.naacl-main.407/) [](https://github.com/kernelmachine/demix) |
| 2022 | NAACL | Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora | [](https://aclanthology.org/2022.naacl-main.351/) |
| 2022 | NeurIPS | Factuality Enhanced Language Models for Open-Ended Text Generation | [](https://arxiv.org/abs/2206.04624) [](https://github.com/nayeon7lee/FactualityPrompt) |
| 2022 | TACL | Time-Aware Language Models as Temporal Knowledge Bases | [](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00459/110012/Time-Aware-Language-Models-as-Temporal-Knowledge) [](https://github.com/google-research/language/tree/master/language/templama) |
| 2021 | ACL | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters | [](https://aclanthology.org/2021.findings-acl.121/) [](https://github.com/microsoft/K-Adapter) |
| 2021 | EACL | Analyzing the Forgetting Problem in Pretrain-Finetuning of Open-domain Dialogue Response Models | [](https://aclanthology.org/2021.eacl-main.95/) |
| 2020 | EMNLP | Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting | [](https://aclanthology.org/2020.emnlp-main.634/) [](https://github.com/Sanyuan-Chen/RecAdam) |

#### Continual Knowledge Editing
| Year | Venue | Paper | Link |
| :--- | :---- | :------------------------------------------------------------------------ | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2023 | Arxiv | Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adapters | [](https://arxiv.org/abs/2211.11031) [](https://github.com/thartvigsen/grace) |
| 2023 | ICLR | Transformer-Patcher: One Mistake Worth One Neuron | [](https://openreview.net/forum?id=4oYUGeGBPm) [](https://github.com/ZeroYuHuang/Transformer-Patcher) |
| 2022 | ACL | On Continual Model Refinement in Out-of-Distribution Data Streams | [](https://aclanthology.org/2022.acl-long.223/) [](https://github.com/facebookresearch/CMR) |
| 2022 | ACL | Plug-and-Play Adaptation for Continuously-updated QA | [](https://aclanthology.org/2022.findings-acl.37/) |

### Memory-enhanced
> Pairing a static LLM with a growing **non-parametric memory** enables it to capture information beyond its memorized knowledge during inference. The external memory can store a recent *corpus* or *feedback* that contains new information to guide the model generation.
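As a minimal sketch of the idea (in the spirit of kNN-LM, "Generalization through Memorization" below), the snippet interpolates a frozen LM's next-token distribution with a nearest-neighbor distribution over an external datastore of (hidden state, next token) pairs; refreshing the model then amounts to appending new pairs harvested from a fresh corpus, with no gradient updates. All tensor names and the interpolation weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def knn_augmented_probs(query_hidden, lm_probs, memory_keys, memory_values, k=16, lam=0.25):
    """Blend a frozen LM's next-token distribution with a kNN distribution
    computed over an external, updatable datastore (kNN-LM-style sketch).

    query_hidden:  (d,)   hidden state of the current context
    lm_probs:      (V,)   next-token probabilities from the frozen LM
    memory_keys:   (N, d) stored context representations (e.g., from new text)
    memory_values: (N,)   token id that followed each stored context
    """
    dists = torch.cdist(query_hidden[None, :], memory_keys).squeeze(0)  # (N,) distances
    nn_dists, nn_idx = dists.topk(k, largest=False)                     # k nearest contexts
    weights = F.softmax(-nn_dists, dim=0)                               # closer => larger weight
    knn_probs = torch.zeros_like(lm_probs)
    knn_probs.index_add_(0, memory_values[nn_idx], weights)             # mass per next token
    return lam * knn_probs + (1.0 - lam) * lm_probs                     # interpolated distribution
```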
| Year | Venue | Paper | Link |
| :--- | :---- | :------------------------------------------------------------------------------------------------------------ | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2023 | Arxiv | Adaptation Approaches for Nearest Neighbor Language Models | [](https://arxiv.org/abs/2211.07828) |
| 2023 | Arxiv | Semiparametric Language Models Are Scalable Continual Learners | [](https://arxiv.org/abs/2303.01421) |
| 2023 | Arxiv | MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions | [](https://arxiv.org/abs/2305.14795) [](https://github.com/princeton-nlp/MQuAKE) |
| 2022 | EMNLP | You can’t pick your neighbors, or can you? When and How to Rely on Retrieval in the kNN-LM | [](https://aclanthology.org/2022.findings-emnlp.218/) |
| 2022 | EMNLP | Nearest Neighbor Zero-Shot Inference | [](https://aclanthology.org/2022.emnlp-main.214/) [](https://github.com/swj0419/kNN_prompt) |
| 2022 | EMNLP | Memory-assisted prompt editing to improve GPT-3 after deployment | [](https://aclanthology.org/2022.emnlp-main.183/) [](https://github.com/madaan/memprompt) |
| 2022 | EMNLP | Towards Teachable Reasoning Systems: Using a Dynamic Memory of User Feedback for Continual System Improvement | [](https://aclanthology.org/2022.emnlp-main.644/) [](https://allenai.org/data/teachme) |
| 2022 | ICML | Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval | [](https://arxiv.org/abs/2201.12431) [](https://github.com/neulab/retomaton) |
| 2022 | ICML | Memory-Based Model Editing at Scale | [](https://arxiv.org/abs/2206.06520) [](https://github.com/eric-mitchell/serac) |
| 2022 | NAACL | Learning to repair: Repairing model output errors after deployment using a dynamic memory of feedback | [](https://aclanthology.org/2022.findings-naacl.26/) [](https://github.com/allenai/interscript) |
| 2021 | EMNLP | Efficient Nearest Neighbor Language Models | [](https://aclanthology.org/2021.emnlp-main.461/) [](https://github.com/jxhe/efficient-knnlm) |
| 2021 | EMNLP | BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief | [](https://aclanthology.org/2021.emnlp-main.697/) |
| 2020 | ICLR | Generalization through Memorization: Nearest Neighbor Language Models | [](https://openreview.net/forum?id=HklBjCEKvH) [](https://github.com/urvashik/knnlm) |

### Retrieval-enhanced
> Leveraging an off-the-shelf retriever and the in-context learning ability of LLMs, this line of work designs better retrieval strategies to incorporate world knowledge into a fixed LLM through prompting; these methods can be divided into *single-stage* and *multi-stage* approaches.
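A single-stage pipeline is essentially "retrieve once, then prompt". The sketch below shows that flow; `retriever.search` and `llm_generate` are hypothetical stand-ins for whatever retriever and LLM API you use, not interfaces from the papers listed here.

```python
def answer_with_retrieval(question, retriever, llm_generate, top_k=3):
    """Single-stage retrieval-augmented prompting (illustrative sketch only).

    `retriever.search(query, k)` and `llm_generate(prompt)` are assumed
    interfaces; the LLM itself stays frozen and is only conditioned on the
    retrieved passages through the prompt.
    """
    passages = retriever.search(question, k=top_k)  # retrieve once
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the passages below and cite the "
        "passage numbers you rely on.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_generate(prompt)
```

Multi-stage methods wrap a call like this in a loop that decomposes the question, retrieves again based on intermediate reasoning, or revises a draft answer.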
*Figure: Single-Stage (left) typically retrieves once, while Multi-Stage (right) involves multiple retrievals or revisions to solve complex questions.*

| Year | Venue | Paper | Link |
| :--- | :---- | :------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2023 | ACL | Augmentation-Adapted Retriever Improves Generalization of Language Models as Generic Plug-In | [](https://arxiv.org/abs/2305.17331) [](https://github.com/OpenMatch/Augmentation-Adapted-Retriever) |
| 2023 | ACL | When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories | [](https://arxiv.org/abs/2212.10511) [](https://github.com/AlexTMallen/adaptive-retrieval) |
| 2023 | ACL | Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions | [](https://arxiv.org/abs/2212.10509) [](https://github.com/stonybrooknlp/ircot) |
| 2023 | ACL | RARR: Researching and Revising What Language Models Say, Using Language Models | [](https://arxiv.org/abs/2210.08726) [](https://github.com/anthonywchen/RARR) |
| 2023 | ACL | MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting | [](https://arxiv.org/abs/2305.16896) [](https://github.com/InabaTatsuro/MultiTool-CoT) |
| 2023 | Arxiv | Can We Edit Factual Knowledge by In-Context Learning? | [](https://arxiv.org/abs/2305.12740) [](https://github.com/Zce1112zslx/IKE) |
| 2023 | Arxiv | REPLUG: Retrieval-Augmented Black-Box Language Models | [](https://arxiv.org/abs/2301.12652) |
| 2023 | Arxiv | Improving Language Models via Plug-and-Play Retrieval Feedback | [](https://arxiv.org/abs/2305.14002) |
| 2023 | Arxiv | Measuring and Narrowing the Compositionality Gap in Language Models | [](https://arxiv.org/abs/2210.03350) [](https://github.com/ofirpress/self-ask) |
| 2023 | Arxiv | ART: Automatic multi-step reasoning and tool-use for large language models | [](https://arxiv.org/abs/2303.09014) [](https://github.com/bhargaviparanjape/language-programmes/) |
| 2023 | Arxiv | ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models | [](https://arxiv.org/abs/2305.14323) [](https://github.com/RUCAIBOX/ChatCoT) |
| 2023 | Arxiv | Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback | [](https://arxiv.org/abs/2302.12813) [](https://github.com/pengbaolin/LLM-Augmenter) |
| 2023 | Arxiv | Question Answering as Programming for Solving Time-Sensitive Questions | [](https://arxiv.org/abs/2305.14221) [](https://github.com/microsoft/ContextualSP/tree/master/qaap) |
| 2023 | Arxiv | Active Retrieval Augmented Generation | [](https://arxiv.org/abs/2305.06983) [](https://github.com/jzbjyb/FLARE) |
| 2023 | Arxiv | Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP | [](https://arxiv.org/abs/2212.14024) [](https://github.com/stanfordnlp/dspy) |
| 2023 | Arxiv | Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy | [](https://arxiv.org/abs/2305.15294) |
| 2023 | Arxiv | Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework | [](https://arxiv.org/abs/2305.03268) |
| 2023 | Arxiv | CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing | [](https://arxiv.org/abs/2305.11738) [](https://github.com/microsoft/ProphetNet/tree/master/CRITIC) |
| 2023 | Arxiv | WikiChat: A Few-Shot LLM-Based Chatbot Grounded with Wikipedia | [](https://arxiv.org/abs/2305.14292) [](https://github.com/stanford-oval/WikiChat) |
| 2023 | Arxiv | Query Rewriting for Retrieval-Augmented Large Language Models | [](https://arxiv.org/abs/2305.14283) |
| 2023 | Arxiv | Knowledge Solver: Teaching LLMs to Search for Domain Knowledge from Knowledge Graphs | [](https://arxiv.org/abs/2309.03118) |
| 2023 | ICLR | Prompting GPT-3 To Be Reliable | [](https://openreview.net/forum?id=98p5x51L5af) [](https://github.com/NoviScl/GPT3-Reliability) |
| 2023 | ICLR | Decomposed Prompting: A Modular Approach for Solving Complex Tasks | [](https://openreview.net/forum?id=_nGgzQjzaRy) [](https://github.com/allenai/DecomP) |
| 2023 | ICLR | ReAct: Synergizing Reasoning and Acting in Language Models | [](https://openreview.net/forum?id=WE_vluYUL-X) [](https://github.com/ysymyth/ReAct) |
| 2023 | TACL | In-Context Retrieval-Augmented Language Models | [](https://arxiv.org/abs/2302.00083) [](https://github.com/AI21Labs/in-context-ralm) |
| 2022 | Arxiv | Rethinking with Retrieval: Faithful Large Language Model Inference | [](https://arxiv.org/abs/2301.00303) [](https://github.com/HornHehhf/RR) |

### Internet-enhanced
> A recent trend uses the whole web as the knowledge source and equips LLMs with the **Internet** to support real-time information seeking.
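In its simplest form, this replaces a fixed retrieval corpus with a live search API. The sketch below is a small self-ask-style loop; `llm` and `web_search` are hypothetical callables (not APIs from the listed papers), and real systems in the table below add tool-call parsing, citation tracking, and safety filtering on top.

```python
def answer_with_web_search(question, llm, web_search, max_rounds=3):
    """Iterative internet-augmented QA sketch (illustrative only).

    `llm(prompt)` returns a string; `web_search(query)` returns a text snippet
    from a live search engine. Both are assumed interfaces, not real APIs.
    """
    evidence = []
    for _ in range(max_rounds):
        prompt = (
            f"Question: {question}\n"
            "Evidence so far:\n" + "\n".join(evidence) +
            "\nReply 'ANSWER: <answer>' if you can answer, otherwise reply "
            "'SEARCH: <query>' with the web query you still need."
        )
        reply = llm(prompt).strip()
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        query = reply.split(":", 1)[-1].strip()  # assume a SEARCH: reply
        evidence.append(web_search(query))       # fresh snippets from the live web
    # Fall back to answering with whatever evidence was gathered.
    return llm(f"Question: {question}\nEvidence:\n" + "\n".join(evidence) + "\nAnswer:")
```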
| Year | Venue | Paper | Link |
| :--- | :---- | :----------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2023 | ACL | Large Language Models are Built-in Autoregressive Search Engines | [](https://arxiv.org/abs/2305.09612) [](https://github.com/Ziems/llm-url) |
| 2023 | ACL | RARR: Researching and Revising What Language Models Say, Using Language Models | [](https://arxiv.org/abs/2210.08726) [](https://github.com/anthonywchen/RARR) |
| 2023 | Arxiv | Measuring and Narrowing the Compositionality Gap in Language Models | [](https://arxiv.org/abs/2210.03350) [](https://github.com/ofirpress/self-ask) |
| 2023 | Arxiv | ART: Automatic multi-step reasoning and tool-use for large language models | [](https://arxiv.org/abs/2303.09014) [](https://github.com/bhargaviparanjape/language-programmes/) |
| 2023 | Arxiv | TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs | [](https://arxiv.org/abs/2303.16434) [](https://github.com/microsoft/TaskMatrix/tree/main/TaskMatrix.AI) |
| 2023 | Arxiv | MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action | [](https://arxiv.org/abs/2303.11381) [](https://github.com/microsoft/MM-REACT) |
| 2023 | Arxiv | Active Retrieval Augmented Generation | [](https://arxiv.org/abs/2305.06983) [](https://github.com/jzbjyb/FLARE) |
| 2023 | Arxiv | Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models | [](https://arxiv.org/abs/2304.09842) [](https://github.com/lupantech/chameleon-llm) |
| 2023 | Arxiv | CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing | [](https://arxiv.org/abs/2305.11738) [](https://github.com/microsoft/ProphetNet/tree/master/CRITIC) |
| 2023 | Arxiv | Query Rewriting for Retrieval-Augmented Large Language Models | [](https://arxiv.org/abs/2305.14283) |
| 2023 | ICLR | ReAct: Synergizing Reasoning and Acting in Language Models | [](https://openreview.net/forum?id=WE_vluYUL-X) [](https://github.com/ysymyth/ReAct) |
| 2022 | Arxiv | Internet-augmented language models through few-shot prompting for open-domain question answering | [](https://arxiv.org/abs/2203.05115) |

## Resources
### Related Survey
- [Augmented Language Models: a Survey](https://arxiv.org/abs/2302.07842), 2023
- [The Life Cycle of Knowledge in Big Language Models: A Survey](https://arxiv.org/abs/2303.07616), 2023
- [Interactive Natural Language Processing](https://arxiv.org/abs/2305.13246), 2023
- [Editing Large Language Models: Problems, Methods, and Opportunities](https://arxiv.org/abs/2305.13172), 2023
- [Tool Learning with Foundation Models](https://arxiv.org/abs/2304.08354), 2023
- [Unifying Large Language Models and Knowledge Graphs: A Roadmap](https://arxiv.org/abs/2306.08302), 2023
- [A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning](https://arxiv.org/abs/2307.09218), 2023
- [Large Language Models for Information Retrieval: A Survey](https://arxiv.org/abs/2308.07107), 2023
- [A Review on Language Models as Knowledge Bases](https://arxiv.org/abs/2204.06031), 2022
- [A Survey of Knowledge-enhanced Text Generation](https://dl.acm.org/doi/10.1145/3512467), 2022
- [A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models](https://arxiv.org/abs/2202.08772), 2022
- [A Survey on Knowledge-Enhanced Pre-trained Language Models](https://arxiv.org/abs/2212.13428), 2022
- [Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering](https://arxiv.org/abs/2101.00774), 2021
- [Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey](https://arxiv.org/abs/2110.08455), 2021

### Tools
- [LangChain](https://github.com/langchain-ai/langchain): a framework for developing applications powered by language models.
- [ChatGPT plugins](https://openai.com/blog/chatgpt-plugins): tools designed specifically for language models with safety as a core principle; they help ChatGPT access up-to-date information, run computations, or use third-party services.
- [EasyEdit](https://github.com/zjunlp/EasyEdit): an Easy-to-use Knowledge Editing Framework for LLMs.
- [FastEdit](https://github.com/hiyouga/FastEdit): injecting fresh and customized knowledge into large language models efficiently using one single command.
- [PyContinual](https://github.com/ZixuanKe/PyContinual): an Easy and Extendible Framework for Continual Learning.
- [Avalanche](https://github.com/ContinualAI/avalanche): an End-to-End Library for Continual Learning based on PyTorch.

## Citation
If our research helps you, please cite our paper:
```bibtex
@article{zhang2023large,
title={How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances},
author={Zhang, Zihan and Fang, Meng and Chen, Ling and Namazi-Rad, Mohammad-Reza and Wang, Jun},
journal={arXiv preprint arXiv:2310.07343},
year={2023}
}
```

## Acknowledgement & Contribution
This field is evolving quickly, and we may have missed important works; please don't hesitate to share your work with us.
Pull requests are always welcome if you spot anything wrong (e.g., broken links or typos) or would like to add new papers!
We thank all contributors for their valuable efforts.