Regular Paper

FlexPDA: A Flexible Programming Framework for Deep Learning Accelerators

College of Computer Science and Technology, Jilin University, Changchun 130012, China
Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China

Abstract

A wide variety of intelligence accelerators with promising performance and energy efficiency have been deployed in a broad range of applications such as computer vision and speech recognition. However, low programming productivity hinders the wider deployment of deep learning accelerators. The low-level library invoked by high-level deep learning frameworks, which supports end-to-end execution of a given model, is designed to reduce the programming burden on intelligence accelerators. Unfortunately, it is inflexible, because developers have to build a complete network model for every deep learning application, which often leads to unnecessary repetitive implementation. In this paper, we propose FlexPDA, a flexible and efficient programming framework for deep learning accelerators, which exposes more optimization opportunities than the low-level library and enables quick porting of applications to intelligence accelerators for fast upgrades. We evaluate FlexPDA using 10 representative operators selected from deep learning algorithms and an end-to-end network. The experimental results validate the effectiveness of FlexPDA, which achieves an end-to-end performance improvement of 1.620x over the low-level library.
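
For intuition, the programming-model contrast drawn in the abstract can be sketched as follows. This is a minimal illustration only; the function names and interfaces (run_model_with_library, fused_bias_relu) are hypothetical and are not FlexPDA's actual API.

# Hypothetical sketch of the programming-model contrast (illustrative names only,
# not FlexPDA's real interface).
import numpy as np

# (a) Low-level library style: the accelerator executes a complete, pre-built
#     network model, so every new application needs its own model to be
#     constructed and exported first.
def run_model_with_library(model_blob: bytes, inputs: np.ndarray) -> np.ndarray:
    raise NotImplementedError("stand-in for a vendor library call")

# (b) Operator-level framework style: the developer writes the kernel directly
#     (here a fused bias-add + ReLU), with no surrounding network model required.
def fused_bias_relu(x: np.ndarray, bias: np.ndarray) -> np.ndarray:
    return np.maximum(x + bias, 0.0)

if __name__ == "__main__":
    x = np.random.randn(4, 8).astype(np.float32)
    b = np.zeros(8, dtype=np.float32)
    print(fused_bias_relu(x, b).shape)  # prints (4, 8)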

Electronic Supplementary Material

jcst-37-5-1200-Highlights.pdf (581.8 KB)


Cite this article:
Liu L, Ma X, Liu H-X, et al. FlexPDA: A Flexible Programming Framework for Deep Learning Accelerators. Journal of Computer Science and Technology, 2022, 37(5): 1200-1220. https://doi.org/10.1007/s11390-021-1406-9


Received: 01 March 2021
Accepted: 18 September 2021
Published: 30 September 2022
©Institute of Computing Technology, Chinese Academy of Sciences 2022