Open Access

Fairness in Design: A Framework for Facilitating Ethical Artificial Intelligence Designs

Jiehuang Zhang1,2, Ying Shu1, and Han Yu1
1 School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
2 Alibaba-NTU Singapore Joint Research Institute, Singapore 637335, Singapore

Abstract

As Artificial Intelligence (AI) and Digital Transformation (DT) technologies become increasingly ubiquitous in modern society, the flaws in their designs are starting to attract attention. AI models have been shown to be susceptible to biases in the training data, especially against underrepresented groups. Although there are increasing calls for AI solution designers to take fairness into account, the field lacks a design methodology that helps AI design teams, whose members come from different backgrounds, brainstorm and surface potential fairness issues during the design stage. To address this problem, we propose the Fairness in Design (FID) framework to help AI software designers surface and explore complex fairness-related issues that might otherwise be overlooked. We survey the literature on fairness in AI and distill it into ten major fairness principles, which assist designers in brainstorming around fairness metrics and guide their thinking about fairness. FID facilitates discussions among design team members through a game-like approach, based on a set of prompt cards, to identify and discuss potential concerns from the perspectives of various stakeholders. Extensive user studies show that FID is effective at assisting participants in making better decisions about fairness, especially on complex issues involving algorithmic decisions. It has also been found to lower the barrier to entry for software teams, in terms of the prerequisite knowledge about fairness required, so that they can make more appropriate fairness-related design decisions. The FID methodological framework contributes a novel toolkit that aids the design and conception process of AI systems, lowers barriers to entry, and supports critical thinking about the complex issues surrounding algorithmic systems. The framework is packaged as a step-by-step card game for AI system designers to employ during the design and conception stage of the system life-cycle. FID is a unique decision support framework for software teams interested in creating fairness-aware AI solutions.
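
The abstract mentions brainstorming around fairness metrics; as a concrete illustration, the sketch below computes two standard group-fairness measures from the fairness-in-machine-learning literature: the demographic parity gap (difference in positive-decision rates between groups) and the equal opportunity gap (difference in true-positive rates). This is a minimal, hypothetical sketch, not the paper's tooling; the function names, the group labels "a"/"b", and the toy loan-approval data are our own assumptions.

```python
# Hypothetical sketch: two standard group-fairness metrics a design team
# might discuss during an FID-style session. Not the paper's implementation.

def demographic_parity_difference(y_pred, group):
    """Difference in positive-decision rates between groups "a" and "b".

    y_pred: list of 0/1 model decisions.
    group:  list of group labels ("a" or "b"), one per decision.
    """
    def rate(g):
        members = [p for p, s in zip(y_pred, group) if s == g]
        return sum(members) / max(1, len(members))
    return rate("a") - rate("b")

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between groups "a" and "b"."""
    def tpr(g):
        # Decisions for group members whose true label is positive.
        pos = [p for t, p, s in zip(y_true, y_pred, group) if s == g and t == 1]
        return sum(pos) / max(1, len(pos))
    return tpr("a") - tpr("b")

if __name__ == "__main__":
    # Toy loan-approval data: true repayment labels, model decisions, groups.
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print("Demographic parity gap:", demographic_parity_difference(y_pred, group))
    print("Equal opportunity gap:", equal_opportunity_difference(y_true, y_pred, group))
```

In an FID-style discussion, a team might observe that a model can close one of these gaps while leaving the other open; surfacing that kind of trade-off from the perspectives of different stakeholders is what the prompt cards described in the abstract are intended to facilitate.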


International Journal of Crowd Science
Pages 32-39
Cite this article:
Zhang J, Shu Y, Yu H. Fairness in Design: A Framework for Facilitating Ethical Artificial Intelligence Designs. International Journal of Crowd Science, 2023, 7(1): 32-39. https://doi.org/10.26599/IJCS.2022.9100033


Received: 04 May 2022
Revised: 27 September 2022
Accepted: 28 September 2022
Published: 31 March 2023
© The author(s) 2023

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
