Artificial intelligence (AI) systems have many applications with tremendous current and future value to human society. As AI systems permeate everyday life, a pressing need arises to explain their decision-making processes to build trust and familiarity among end users. In high-stakes fields such as healthcare and self-driving cars, AI systems are required to meet a minimum standard of accuracy and to provide well-designed explanations for their output, especially when they impact human life. Although many techniques have been developed to make algorithms explainable in human terms, no design methodology has been established that allows software teams to systematically draw out and address explainability-related issues during AI design and conception. In response to this gap, we propose the explainability in design (EID) methodological framework for addressing explainability problems in AI systems. We explored the literature on AI explainability to distill the field into six major explainability principles that aid designers in brainstorming around the relevant metrics and guide the critical thinking process. EID is a step-by-step guide to AI design that has been refined over a series of user studies and interviews with experts in AI explainability. It is devised for software design teams to uncover and resolve potential issues in their AI products, or simply to refine and explore the explainability of their products and systems. The EID methodology is a novel framework that supports the design and conception stages of the AI pipeline and can be applied in the form of a step-by-step card game. Empirical studies involving AI system designers have shown that EID lowers the barrier to entry and reduces the time and experience required to make well-informed decisions about integrating explainability into AI solutions.
This work was supported in part by Alibaba Group through the Alibaba Innovative Research (AIR) Program and the Alibaba-NTU Singapore Joint Research Institute (JRI) (No. Alibaba-NTU-AIR2019B1), Nanyang Technological University, Singapore; the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No. AISG2-RP-2020-019); Nanyang Technological University, Nanyang Assistant Professorship (NAP); the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund (No. A20G8b0102), Singapore; the Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR); and the Future Communications Research & Development Programme (No. FCP-NTU-RG-2021-014). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).