As Artificial Intelligence (AI) and Digital Transformation (DT) technologies become increasingly ubiquitous in modern society, the flaws in their designs are starting to attract attention. AI models have been shown to be susceptible to biases in their training data, especially biases against underrepresented groups. Although there are increasing calls for AI solution designers to take fairness into account, the field lacks a design methodology that helps AI design teams whose members come from different backgrounds brainstorm and surface potential fairness issues during the design stage. To address this problem, we propose the Fairness in Design (FID) framework, which helps AI software designers surface and explore complex fairness-related issues that might otherwise be overlooked. We survey the literature on fairness in AI and distill the field into ten major fairness principles, which assist designers in brainstorming around metrics and guide their thinking about fairness. FID facilitates discussion among design team members through a game-like approach, based on a set of prompt cards, for identifying and discussing potential concerns from the perspectives of various stakeholders. Extensive user studies show that FID is effective at helping participants make better decisions about fairness, especially on complex issues involving algorithmic decisions. FID has also been found to lower the barrier to entry for software teams, in terms of the prerequisite knowledge about fairness, so that they can make more appropriate fairness-related design decisions. The FID methodological framework contributes a novel toolkit that aids the design and conception of AI systems, lowers barriers to entry, and supports critical thinking about the complex issues surrounding algorithmic systems. The framework is realized as a step-by-step card game that AI system designers can employ during the design and conception stage of the system life cycle. FID is a unique decision support framework for software teams interested in creating fairness-aware AI solutions.
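To make the "metrics" that FID's principles help teams brainstorm around more concrete, the following is a minimal, hypothetical Python sketch of two group-fairness measures that are standard in the fairness literature (demographic parity and equal opportunity). It is illustrative only: the function names and toy data are our own, and these two metrics are not claimed to be the framework's ten principles.

```python
# Illustrative sketch (not part of FID itself): two common group-fairness
# metrics a design team might discuss. `y_true`/`y_pred` are binary labels
# and predictions; `group` encodes a protected attribute (0 or 1).

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    rate_0 = sum(p for p, g in zip(y_pred, group) if g == 0) / group.count(0)
    rate_1 = sum(p for p, g in zip(y_pred, group) if g == 1) / group.count(1)
    return abs(rate_0 - rate_1)

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups."""
    def tpr(g):
        positives = [p for t, p, gg in zip(y_true, y_pred, group)
                     if gg == g and t == 1]
        return sum(positives) / len(positives)
    return abs(tpr(0) - tpr(1))

# Toy loan-approval example: group 0 is approved more often than group 1.
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))         # 0.5  (0.75 vs. 0.25)
print(equal_opportunity_gap(y_true, y_pred, group))  # ~0.17 (2/3 vs. 1/2)
```

A nonzero gap on either measure is the kind of signal an FID-style discussion would prompt a team to surface early, since the two metrics can disagree and the appropriate one depends on the stakeholders affected.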
This work was supported in part by Nanyang Technological University, Nanyang Assistant Professorship (NAP); Alibaba Group through Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (JRI) (No. Alibaba-NTU-AIR2019B1), Nanyang Technological University, Singapore; the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award) (No. AISG2-RP-2020-019); the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund (No. A20G8b0102), Singapore; the Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR); and Future Communications Research and Development Programme (No. FCP-NTU-RG-2021-014). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).