Research Article | Open Access

Efficient Low-Rank Adaptation for Sparse Large Language Model

School of Information, Renmin University of China, Beijing 100872, China
Digital Intelligence Department, China Mobile Communications Group Co., Ltd., Beijing 100007, China
Tencent AI Lab, Tencent Holdings Ltd., Beijing 100193, China

Yuxuan Hu and Tian Tian contributed equally to this paper.


Abstract

Existing Low-Rank Adaptation (LoRA) methods face challenges on sparse Large Language Models (LLMs) because they cannot maintain sparsity. Recent works maintain sparsity by augmenting LoRA with additional masking mechanisms, but such approaches incur increased memory and computation overhead, which undermines the efficiency that makes LoRA attractive. In response to this limitation, we introduce the Low-Rank adaptation method for Sparse LLMs (LoRS), designed to achieve both memory and computation efficiency when fine-tuning sparse LLMs. To mitigate the substantial memory and computation demands of preserving sparsity, our approach incorporates weight recomputation and computational graph rearrangement. We further improve the effectiveness of LoRS through better adapter initialization. These innovations notably reduce memory and computation consumption during fine-tuning while achieving performance that surpasses existing LoRA approaches.
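The general idea of sparsity-preserving LoRA fine-tuning that the abstract refers to can be illustrated with a minimal sketch: the low-rank update is merged with the frozen sparse weight and re-masked to the pretrained nonzero pattern, and the masked weight is recomputed in the forward pass rather than stored as a dense merged copy. The class and parameter names below are illustrative assumptions, not the authors' LoRS implementation.

```python
import torch
import torch.nn as nn

class MaskedLoRALinear(nn.Module):
    """Hypothetical sketch of masked LoRA on a sparse linear layer.

    The pretrained sparse weight is frozen; only the low-rank factors are
    trained. The adapted weight is re-masked to the original nonzero pattern
    so the layer stays sparse after merging.
    """

    def __init__(self, sparse_weight: torch.Tensor, rank: int = 8):
        super().__init__()
        out_features, in_features = sparse_weight.shape
        # Frozen sparse pretrained weight and its nonzero-pattern mask.
        self.register_buffer("weight", sparse_weight)
        self.register_buffer("mask", (sparse_weight != 0).to(sparse_weight.dtype))
        # Trainable low-rank factors (standard LoRA-style initialization:
        # B starts at zero so the initial adapted weight equals the original).
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Recompute the masked, adapted weight on the fly instead of caching
        # a dense merged copy; gradients flow only into lora_A and lora_B.
        adapted = self.mask * (self.weight + self.lora_B @ self.lora_A)
        return x @ adapted.t()
```

Recomputing the masked product every step is exactly the kind of memory/computation overhead the abstract says LoRS targets with weight recomputation and computational graph rearrangement; the sketch only shows the naive masked baseline under those assumptions.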


Cite this article:
Hu Y, Tian T, Chen X, et al. Efficient Low-Rank Adaptation for Sparse Large Language Model. Tsinghua Science and Technology, 2026, 31(4): 2292-2303. https://doi.org/10.26599/TST.2025.9010174


Received: 17 September 2025
Revised: 08 October 2025
Accepted: 06 November 2025
Published: 03 February 2026
© The author(s) 2026.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).