Original Paper | Open Access | Just Accepted

KDLLM: Copyright-Preserving LLM based on Knowledge Distillation

Shiva Shrestha¹, Honghui Xu¹, Zongxing Xie², Daehee Seo³, Yongjoon Joe⁴, Wonbin Kim³, Yingshu Li⁵ (corresponding author)

1 Department of Information Technology, Kennesaw State University, Marietta, GA 30060, USA

2 Department of Computer Science, Kennesaw State University, Marietta, GA 30060, USA

3 Department of Artificial Intelligence and Data Engineering, Sangmyung University, Seoul 03016, Republic of Korea

4 Director of LSware Inc., Seoul 08504, Republic of Korea

5 Department of Computer Science, Georgia State University, Atlanta, GA 30303, USA


Abstract

Large Language Models (LLMs) have become the cornerstone of a wide range of natural language processing tasks, powering everything from chatbots to text classification and summarization. However, deploying LLMs presents significant challenges, most notably the risk of intellectual property infringement when the entire model is exposed, and the excessive communication and storage overhead associated with their large size. To address these challenges, we propose KDLLM, a novel knowledge distillation-based framework for building efficient and compact LLMs. KDLLM transfers the capabilities of a large teacher LLM to a significantly smaller student model that closely matches the teacher's performance, while obscuring architectural and parameter-level details to protect the intellectual property of the original model. The resulting student model substantially reduces memory footprint and transmission overhead, making it amenable to deployment in bandwidth-constrained or security-sensitive environments. Comprehensive experiments demonstrate that KDLLM achieves robust performance preservation while strengthening copyright protection and communication efficiency.
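The mechanism the abstract describes is response-based knowledge distillation: the student is trained to match the teacher's output distribution rather than receiving the teacher's weights, so the teacher's architecture and parameters are never shipped. Below is a minimal sketch of such a distillation objective, assuming a standard PyTorch soft-target formulation; the function name `distillation_loss`, the hyperparameter values, and the tensor shapes are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Soft-target KL loss against the teacher plus hard-label cross-entropy.

    `temperature` and `alpha` are illustrative values, not taken from the
    paper. Logits are assumed to have shape (batch, seq_len, vocab).
    """
    vocab = student_logits.size(-1)
    # Temperature-soften both distributions; detach the teacher so no
    # gradients flow into the (frozen) teacher model.
    soft_teacher = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so gradient magnitudes stay comparable
    # across temperatures (standard practice from Hinton et al., 2015).
    kl = F.kl_div(log_soft_student.view(-1, vocab),
                  soft_teacher.view(-1, vocab),
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits.view(-1, vocab), labels.view(-1))
    return alpha * kl + (1.0 - alpha) * ce
```

In a training loop, `teacher_logits` would come from a frozen forward pass of the large model and `student_logits` from the compact model being trained; only the distilled student is ever distributed, which is what reduces both the memory footprint and the exposure of the original model's internals.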

Cite this article:
Shrestha S, Xu H, Xie Z, et al. KDLLM: Copyright-Preserving LLM based on Knowledge Distillation. Tsinghua Science and Technology, 2025. https://doi.org/10.26599/TST.2025.9010130


Received: 10 July 2025
Accepted: 04 August 2025
Available online: 18 August 2025

© The author(s) 2025

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).