How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training
- Host: GitHub
- URL: https://github.com/zjunlp/dynamicknowledgecircuits
- Owner: zjunlp
- License: MIT
- Created: 2024-12-10T06:37:02.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2025-04-18T06:21:19.000Z (10 months ago)
- Last Synced: 2025-04-18T19:38:07.856Z (10 months ago)
- Topics: artificial-intelligence, continual-learning, continue-pretraining, interpretability, knowledge-circuit, knowledge-editing, large-language-models, model-editing, natural-language-processing, new-knowledge, transformer
- Language: Jupyter Notebook
- Homepage:
- Size: 39.4 MB
- Stars: 30
- Watchers: 8
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE