Novel artificial intelligence (AI) technology has accelerated scientific research in fields such as cosmology, physics, and bioinformatics, and AI workloads have inevitably become a significant category of workload on high-performance computing (HPC) systems. Existing AI benchmarks typically customize well-recognized AI applications to evaluate the AI performance of HPC systems under a predefined problem size, in terms of datasets and AI models. However, driven by novel AI technology, most AI applications evolve rapidly in both models and datasets to achieve higher accuracy and to cover more scenarios. Lacking scalability in problem size, static AI benchmarks are ill-suited to capturing the performance trend of evolving AI applications on HPC systems, particularly scientific AI applications on large-scale systems. In this paper, we propose SAIH, a scalable evaluation methodology for analyzing the AI performance trend of HPC systems by scaling the problem sizes of customized AI applications. To enable this scalability, SAIH builds a set of novel mechanisms for augmenting problem sizes. As the data and model scale continuously, we can investigate the trend and range of AI performance on HPC systems and further diagnose system bottlenecks. To verify the methodology, we augment a cosmological AI application and evaluate a real GPU-equipped HPC system as a case study of SAIH. With data and model augmentation, SAIH progressively evaluates the AI performance trend of HPC systems, e.g., the achieved performance rises from 5.2% to 59.6% of the theoretical hardware peak. The evaluation results are analyzed and summarized into insights on performance issues. For instance, we find that the AI application continuously consumes the I/O bandwidth of the shared parallel file system while iteratively training the model; under I/O contention, the shared parallel file system can become a bottleneck.
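The core idea of SAIH is to scale the problem size of a customized AI application along both the data and model dimensions, and at each scale record what fraction of the system's theoretical peak performance is actually achieved. The following is a minimal Python sketch of such a measurement loop, not the authors' implementation: PEAK_FLOPS, the layer sizes, and the doubling schedule are placeholder assumptions chosen only to illustrate the progressive-scaling idea on a single node.

    # Minimal sketch (assumptions, not SAIH's actual code): scale the batch
    # (data) and hidden width (model) together, time one dense forward pass,
    # and report the achieved fraction of a hypothetical peak FLOP rate.
    import time
    import numpy as np

    PEAK_FLOPS = 10e12              # hypothetical peak of the evaluated device (FLOP/s)
    INPUT_DIM, NUM_CLASSES = 1024, 10

    def timed_forward(batch, hidden):
        """One two-layer forward pass; returns (seconds, FLOPs of the two matmuls)."""
        x  = np.random.rand(batch, INPUT_DIM).astype(np.float32)
        w1 = np.random.rand(INPUT_DIM, hidden).astype(np.float32)
        w2 = np.random.rand(hidden, NUM_CLASSES).astype(np.float32)
        t0 = time.perf_counter()
        h = np.maximum(x @ w1, 0.0)     # ReLU hidden layer
        _ = h @ w2                      # output layer
        secs = time.perf_counter() - t0
        flops = 2 * batch * INPUT_DIM * hidden + 2 * batch * hidden * NUM_CLASSES
        return secs, flops

    # Progressively augment the problem size, doubling data and model each step.
    for scale in range(5):
        batch, hidden = 256 << scale, 512 << scale
        secs, flops = timed_forward(batch, hidden)
        achieved = flops / secs
        print(f"scale={scale}  batch={batch:6d}  hidden={hidden:5d}  "
              f"{achieved/1e9:8.1f} GFLOP/s  ({100*achieved/PEAK_FLOPS:.2f}% of peak)")

Plotting the reported percentage of peak against the scale index yields the kind of performance trend curve the paper describes (e.g., rising from 5.2% to 59.6% of peak), and flat or declining segments of that curve point to candidate bottlenecks such as the shared parallel file system.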