Open Access

EID: Facilitating Explainable AI Design Discussions in Team-Based Settings

Jiehuang Zhang1,2 (✉) and Han Yu1
1 School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
2 Alibaba-NTU Singapore Joint Research Institute, Singapore 637335, Singapore

Abstract

Artificial intelligence (AI) systems have many applications of tremendous current and future value to human society. As AI systems penetrate everyday life, there is a pressing need to explain their decision-making processes in order to build trust and familiarity among end users. In high-stakes domains such as healthcare and autonomous driving, AI systems are required to meet a minimum standard of accuracy and to provide well-designed explanations for their output, especially when they impact human life. Although many techniques have been developed to make algorithms explainable in human terms, no established design methodology allows software teams to systematically draw out and address explainability-related issues during AI design and conception. In response to this gap, we propose the Explainability in Design (EID) methodological framework for addressing explainability problems in AI systems. We surveyed the literature on AI explainability and distilled the field into six major explainability principles that aid designers in brainstorming around relevant metrics and guide their critical thinking. EID is a step-by-step guide to AI design, refined over a series of user studies and interviews with experts in AI explainability. It is devised for software design teams to uncover and resolve potential issues in their AI products, or simply to refine and explore the explainability of their products and systems. The EID methodology is a novel framework that supports the design and conception stages of the AI pipeline and can be delivered in the form of a step-by-step card game. Empirical studies involving AI system designers have shown that EID lowers the barrier to entry and reduces the time and experience required to make well-informed decisions about integrating explainability into AI solutions.
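The paper itself does not include code, but the card-game workflow described above can be pictured, very loosely, as iterating over a deck of principle-tagged prompt cards and logging the design team's responses. The sketch below is a minimal illustration under that assumption only: the Card, SessionLog, and run_session names are invented for this example, and the principle labels and prompts are placeholders, not the six principles actually defined in the paper.

```python
# Minimal, purely illustrative sketch of an EID-style session.
# All names (Card, SessionLog, run_session) and the placeholder
# principles/prompts are hypothetical; the paper defines the real
# six principles and card content.
from dataclasses import dataclass, field

@dataclass
class Card:
    """One prompt card tagged with an explainability principle."""
    principle: str
    prompt: str

@dataclass
class SessionLog:
    """Collects the team's answers, grouped by principle."""
    notes: dict = field(default_factory=dict)

    def record(self, card: Card, answer: str) -> None:
        self.notes.setdefault(card.principle, []).append((card.prompt, answer))

def run_session(deck: list[Card]) -> SessionLog:
    """Walk a design team through the deck one card at a time."""
    log = SessionLog()
    for card in deck:
        print(f"[{card.principle}] {card.prompt}")
        log.record(card, input("Team response: "))
    return log

if __name__ == "__main__":
    deck = [
        Card("Placeholder principle A", "Who are the end users of the explanation?"),
        Card("Placeholder principle B", "What outputs must be explained, and in how much detail?"),
    ]
    session = run_session(deck)
    print(session.notes)
```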


International Journal of Crowd Science
Pages 47-54
Cite this article:
Zhang J, Yu H. EID: Facilitating Explainable AI Design Discussions in Team-Based Settings. International Journal of Crowd Science, 2023, 7(2): 47-54. https://doi.org/10.26599/IJCS.2022.9100034


Received: 16 August 2022
Revised: 28 September 2022
Accepted: 30 September 2022
Published: 22 June 2023
© The author(s) 2023.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
