
EID: Facilitating Explainable AI Design Discussions in Team-Based Settings

Jiehuang Zhang 1,2 and Han Yu 1
1. School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
2. Alibaba-NTU Singapore Joint Research Institute, Singapore 637335, Singapore

Abstract

Artificial intelligence (AI) systems have many applications with tremendous current and future value to human society. As AI systems permeate everyday life, a pressing need arises to explain their decision-making processes in order to build trust and familiarity among end users. In high-stakes fields such as healthcare and self-driving cars, AI systems are expected to meet a minimum standard of accuracy and to provide well-designed explanations for their output, especially when they impact human life. Although many techniques have been developed to make algorithms explainable in human terms, no established design methodology allows software teams to systematically draw out and address explainability-related issues during AI design and conception. In response to this gap, we propose the Explainability in Design (EID) methodological framework for addressing explainability problems in AI systems. We surveyed the literature on AI explainability and distilled the field into six major explainability principles that aid designers in brainstorming around explainability metrics and guide the critical thinking process. EID is a step-by-step guide to AI design that has been refined over a series of user studies and interviews with experts in AI explainability. It is devised for software design teams to uncover and resolve potential issues in their AI products and to refine and explore the explainability of their products and systems. The EID methodology is a novel framework that supports the design and conception stages of the AI pipeline and can be delivered in the form of a step-by-step card game. Empirical studies involving AI system designers have shown that EID lowers the barrier to entry and reduces the time and experience required to make well-informed decisions about integrating explainability into AI solutions.

Keywords: design method, ethics, design methodology, explainable artificial intelligence (AI), design tool, value-sensitive design


Publication history

Received: 16 August 2022
Revised: 28 September 2022
Accepted: 30 September 2022
Published: 22 June 2023
Issue date: June 2023

Copyright

© The author(s) 2023.

Acknowledgements

This work was supported in part by Alibaba Group through the Alibaba Innovative Research (AIR) Program and the Alibaba-NTU Singapore Joint Research Institute (JRI) (No. Alibaba-NTU-AIR2019B1), Nanyang Technological University, Singapore; the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No. AISG2-RP-2020-019); the Nanyang Technological University Nanyang Assistant Professorship (NAP); the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund (No. A20G8b0102), Singapore; the Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR); and the Future Communications Research & Development Programme (No. FCP-NTU-RG-2021-014). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
