knowledge editing
29 papers with code • 1 benchmark • 2 datasets
Most implemented papers
Editing Language Model-based Knowledge Graph Embeddings
To address this issue, we propose a new task of editing language model-based KG embeddings in this paper.
Can We Edit Factual Knowledge by In-Context Learning?
Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge.
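A minimal sketch of the in-context editing idea (not the paper's exact prompt format, and with hypothetical facts): instead of updating model weights, the new fact and a few demonstrations are prepended to the query, and a frozen LM is expected to answer from that context.

```python
# Sketch of in-context knowledge editing: the edit lives in the prompt,
# not in the parameters. Entities and format here are illustrative only.

def build_edit_prompt(new_fact: str, question: str,
                      demos: list[tuple[str, str, str]]) -> str:
    """Assemble a prompt from (fact, question, answer) demonstrations,
    ending with the edited fact and the open question for the LM."""
    parts = []
    for fact, q, a in demos:
        parts.append(f"New fact: {fact}\nQ: {q}\nA: {a}")
    # The current edit: the frozen LM should answer from this context.
    parts.append(f"New fact: {new_fact}\nQ: {question}\nA:")
    return "\n\n".join(parts)

demos = [("The capital of France is Lyon.",
          "What is the capital of France?", "Lyon")]
prompt = build_edit_prompt("The UK Prime Minister is Alice Example.",
                           "Who is the UK Prime Minister?", demos)
print(prompt)
```

The demonstrations teach the model to prefer the in-context fact over its parametric knowledge; the actual papers study when this preference generalizes.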
MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions
The information stored in large language models (LLMs) falls out of date quickly, and retraining from scratch is often not an option.
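The multi-hop evaluation idea behind MQuAKE can be illustrated with a toy relational store (entities here are illustrative, not the benchmark's data): after a single fact is edited, answers to questions that chain through that fact should change accordingly.

```python
# Toy illustration of multi-hop edit evaluation: editing one hop should
# propagate to questions composed from it, e.g.
# "Who is the spouse of the UK's head of government?"

facts = {
    ("UK", "head_of_government"): "Boris Johnson",
    ("Boris Johnson", "spouse"): "Carrie Johnson",
    ("Rishi Sunak", "spouse"): "Akshata Murty",
}

def answer_2hop(entity, rel1, rel2):
    """Compose two single-hop lookups: rel2(rel1(entity))."""
    return facts[(facts[(entity, rel1)], rel2)]

before = answer_2hop("UK", "head_of_government", "spouse")
facts[("UK", "head_of_government")] = "Rishi Sunak"  # apply the edit
after = answer_2hop("UK", "head_of_government", "spouse")
print(before, "->", after)
```

A knowledge-editing method passes this kind of test only if the edited fact is used consistently in composed reasoning, which MQuAKE reports is often not the case for edited LLMs.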
EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models
Large Language Models (LLMs) usually suffer from knowledge cutoff or fallacy issues, which means they are unaware of unseen events or generate text with incorrect facts owing to outdated/noisy data.
Cross-Lingual Knowledge Editing in Large Language Models
With the recent advancements in large language models (LLMs), knowledge editing has been shown as a promising technique to adapt LLMs to new knowledge without retraining from scratch.
Assessing Knowledge Editing in Language Models via Relation Perspective
Knowledge Editing (KE) for modifying factual knowledge in Large Language Models (LLMs) has been receiving increasing attention.
A Comprehensive Study of Knowledge Editing for Large Language Models
In this paper, we first define the knowledge editing problem and then provide a comprehensive review of cutting-edge approaches.
MLaKE: Multilingual Knowledge Editing Benchmark for Large Language Models
We evaluate the multilingual knowledge editing generalization capabilities of existing methods on MLaKE.
Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models
This finding raises questions about how past work relies on Causal Tracing to select which model layers to edit.
Propagating Knowledge Updates to LMs Through Distillation
Then, we update the model parameters so that the distribution of the LM (the student) matches the distribution of the LM conditioned on the definition (the teacher) on the transfer set.
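The objective described above can be sketched as a KL-divergence loss between the two distributions (an assumed form, with hypothetical toy logits; the paper's exact setup may differ):

```python
import math

# Sketch of the distillation objective: the teacher is the LM conditioned
# on the definition, the student is the unconditioned LM, and the loss is
# the KL divergence between their next-token distributions at a
# transfer-set position.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token logits at one transfer-set position.
teacher_logits = [2.0, 0.5, -1.0]  # LM conditioned on the definition
student_logits = [1.0, 1.0, 0.0]   # unconditioned LM (to be updated)

loss = kl(softmax(teacher_logits), softmax(student_logits))
print(f"KL distillation loss at this position: {loss:.4f}")
```

Minimizing this loss over the transfer set pushes the student to reproduce, without the definition in context, what the teacher predicts with it.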