Part of International Conference on Learning Representations 2025 (ICLR 2025)
Zhengbo Wang, Jian Liang, Ran He, Zilei Wang, Tieniu Tan
Low-rank adaptation, also known as LoRA, has emerged as a prominent method for parameter-efficient fine-tuning of foundation models. Despite its computational efficiency, LoRA still yields inferior performance compared to full fine-tuning. In this paper, we first uncover a fundamental connection between the optimization processes of LoRA and full fine-tuning: optimizing with LoRA is mathematically equivalent to full fine-tuning with a low-rank gradient for parameter updates, and this low-rank gradient can be expressed in terms of the gradients of the two low-rank matrices in LoRA. Leveraging this insight, we introduce LoRA-Pro, a method that enhances LoRA's performance by strategically adjusting the gradients of these low-rank matrices. This adjustment allows the low-rank gradient to more accurately approximate the full fine-tuning gradient, thereby narrowing the performance gap between LoRA and full fine-tuning. Furthermore, we theoretically derive the optimal solutions for adjusting the gradients of the low-rank matrices and apply them during fine-tuning in LoRA-Pro. We conduct extensive experiments across natural language understanding, dialogue generation, mathematical reasoning, code generation, and image classification tasks, demonstrating that LoRA-Pro substantially improves LoRA's performance and effectively narrows the gap with full fine-tuning. Our code is publicly available at https://github.com/mrflogs/LoRA-Pro.
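A rough sketch of the equivalence described above, in our own illustrative notation (the symbols below are an assumption for exposition, not reproduced from the paper): write the LoRA-adapted weight as $W = W_0 + sBA$ with $B \in \mathbb{R}^{m \times r}$, $A \in \mathbb{R}^{r \times n}$, rank $r \ll \min(m, n)$, and scaling $s$. If $g_W = \partial \mathcal{L} / \partial W$ denotes the full fine-tuning gradient, the chain rule gives $g_A = s\, B^{\top} g_W$ and $g_B = s\, g_W A^{\top}$, so one gradient step of size $\eta$ on $A$ and $B$ changes $W$ by, to first order,

$$\Delta W \;\approx\; s\,(\Delta B\, A + B\, \Delta A) \;=\; -\eta\, s\,(g_B A + B\, g_A),$$

i.e., a full fine-tuning step taken with an equivalent low-rank gradient $\tilde{g} = s\,(g_B A + B\, g_A)$ of rank at most $2r$. Under this view, adjusting the gradients as in LoRA-Pro amounts to replacing $g_A$ and $g_B$ with modified gradients $\tilde{g}_A$ and $\tilde{g}_B$ chosen so that $s\,(\tilde{g}_B A + B\, \tilde{g}_A)$ approximates $g_W$ as closely as possible (e.g., in Frobenius norm), which is the sense in which the low-rank update is brought closer to full fine-tuning.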