Regular Paper

Fine-Tuning Channel-Pruned Deep Model via Knowledge Distillation

Faculty of Computing, Harbin Institute of Technology, Harbin 150001, China
School of Astronautics, Harbin Institute of Technology, Harbin 150001, China

Abstract

High-performance deep convolutional neural networks are hard to deploy in many real-world applications because the computing resources of edge devices, such as smartphones or embedded GPUs, are limited. To alleviate this hardware limitation, compressing deep neural networks on the model side becomes important. As one of the most popular compression methods, channel pruning can effectively remove redundant convolutional channels from a CNN (convolutional neural network) without noticeably degrading its performance. Existing methods focus on the pruning design itself, i.e., evaluating the importance of different convolutional filters in the CNN model; a fast and effective fine-tuning method that restores accuracy after pruning is still urgently needed. In this paper, we propose KDFT (Knowledge Distillation Based Fine-Tuning), a fine-tuning method that improves the accuracy of fine-tuned models with almost negligible training overhead by introducing knowledge distillation. Extensive experiments on benchmark datasets with representative CNN models show that KDFT obtains up to 4.86% higher accuracy and 79% time savings.
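To make the idea concrete, the sketch below shows one way a channel-pruned student network can be fine-tuned under the guidance of the original unpruned teacher. This is a minimal, standard Hinton-style distillation step in PyTorch, not the paper's exact KDFT loss; the function name, the temperature T, and the weight alpha are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kd_fine_tune_step(teacher, student, optimizer, images, labels,
                      T=4.0, alpha=0.5):
    """One hypothetical distillation fine-tuning step.

    `teacher` is the original (unpruned) network, `student` the
    channel-pruned one; T and alpha are illustrative hyperparameters.
    """
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(images)  # soft targets from the teacher
    s_logits = student(images)

    # KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    ce_loss = F.cross_entropy(s_logits, labels)  # hard-label supervision
    loss = alpha * kd_loss + (1 - alpha) * ce_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The intuition is that the teacher's softened output distribution carries richer supervision than the hard labels alone, which can help the pruned model recover accuracy in fewer fine-tuning epochs.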

Electronic Supplementary Material

Download File(s)
JCST-2204-12386-Highlights.pdf (174.8 KB)

Journal of Computer Science and Technology
Pages 1238-1247


Cite this article:
Zhang C, Wang H-Z, Liu H-W, et al. Fine-Tuning Channel-Pruned Deep Model via Knowledge Distillation. Journal of Computer Science and Technology, 2024, 39(6): 1238-1247. https://doi.org/10.1007/s11390-023-2386-8


Received: 04 April 2022
Accepted: 07 November 2023
Published: 16 January 2025
© Institute of Computing Technology, Chinese Academy of Sciences 2024