Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning

Part of International Conference on Learning Representations 2025 (ICLR 2025) Conference


Authors

Tianci Liu, Ruirui Li, Yunzhe Qi, Hui Liu, Xianfeng Tang, Tianqi Zheng, Qingyu Yin, Monica Cheng, Jun Huan, Haoyu Wang, Jing Gao

Abstract

Large language models (LLMs) have achieved remarkable performance on various natural language tasks. However, they are trained on static corpora, and their knowledge can quickly become outdated in a fast-changing world. This motivates the development of knowledge editing methods designed to update certain knowledge in LLMs without changing unrelated knowledge. To make selective edits, previous efforts often sought to update a small number of parameters in some specific layer(s) of an LLM. Nonetheless, in challenging scenarios they still fall short of making successful edits while simultaneously preserving knowledge irrelevant to the updates, resulting in a notable editing-locality trade-off. In this work, we ask whether this trade-off is caused by the fact that parameter-based updates have a global effect, i.e., edited parameters affect all inputs indiscriminately. In light of this, we explore the feasibility of representation fine-tuning, which applies a linear update to a few representations in a learned subspace, for knowledge editing. While this approach is effective at enhancing an LLM's general ability, as demonstrated in previous work, we theoretically show that the linear update imposes a tension in the editing-locality trade-off. We therefore propose BaFT to break this linearity. BaFT computes a weight for each basis that spans a dimension of the subspace, based on the input representation. This input-dependent weighting mechanism allows BaFT to handle different types of knowledge adaptively, thereby achieving a better editing-locality trade-off. Experiments on three LLMs with five editing benchmarks in diverse scenarios show the superiority of our method.
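The sketch below is a minimal, illustrative PyTorch module (not the authors' released code) of the idea the abstract describes: a representation fine-tuning intervention that applies a linear update inside a learned low-rank subspace, plus an assumed per-basis gating network that weights each basis direction according to the input representation. The class name `BasisWeightedReFT`, the gating network, and the exact update rule are assumptions made for illustration; the paper's actual parameterization may differ.

```python
import torch
import torch.nn as nn


class BasisWeightedReFT(nn.Module):
    """Illustrative representation fine-tuning with input-dependent per-basis weights."""

    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        # Rows of R span the learned editing subspace (one basis vector per row).
        self.R = nn.Parameter(torch.randn(rank, hidden_dim) / hidden_dim ** 0.5)
        # Linear map producing the target coordinates inside the subspace.
        self.W = nn.Linear(hidden_dim, rank)
        # Hypothetical gating network: one weight in [0, 1] per basis vector,
        # computed from the input representation (the "basis-level" weighting).
        self.gate = nn.Sequential(nn.Linear(hidden_dim, rank), nn.Sigmoid())

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (..., hidden_dim) hidden states at the intervened layer/positions.
        coords = h @ self.R.T        # current coordinates in the subspace
        target = self.W(h)           # desired coordinates after the edit
        delta = target - coords      # linear ReFT-style update, per basis
        g = self.gate(h)             # input-dependent weight for each basis
        # Project the gated update back to hidden space and add it residually.
        return h + (g * delta) @ self.R


if __name__ == "__main__":
    layer = BasisWeightedReFT(hidden_dim=768, rank=8)
    h = torch.randn(2, 16, 768)      # (batch, seq_len, hidden_dim)
    print(layer(h).shape)            # torch.Size([2, 16, 768])
```

With the gate fixed to 1 for every basis, the module reduces to the purely linear subspace update discussed in the abstract; letting the gate depend on `h` is what makes the intervention act differently on edit-related versus unrelated inputs.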