Automated decision-making systems are increasingly deployed and affect the public in a multitude of positive and negative ways. Governmental and private institutions use these systems to process information according to human-devised rules in order to address social problems or organizational challenges. Both research and real-world experience indicate that the public lacks trust in automated decision-making systems and in the institutions that deploy them. The recreancy theorem argues that the public is more likely to trust and support decisions made or influenced by automated decision-making systems if the institutions that administer them meet their fiduciary responsibility. Often, however, the public is never informed of how these systems operate or of how the resulting institutional decisions are made. This “black box” effect of automated decision-making systems reduces the public’s perception of institutional integrity and trustworthiness. The public is consequently less able to assess whether the decisions made by the institutions administering these systems are just, and it loses the capacity to identify, challenge, and rectify unfairness or the costs associated with the loss of public goods or benefits. This position paper defines and explains the role of fiduciary responsibility within an automated decision-making system. We formulate an automated decision-making system as a data science lifecycle (DSL) and examine the implications of fiduciary responsibility within that lifecycle. Fiduciary responsibility within DSLs provides a methodology for addressing the public’s lack of trust in automated decision-making systems and in the institutions that employ them to make decisions affecting the public. We posit that fiduciary responsibility manifests in several contexts of a DSL, each of which requires its own mitigation of sources of mistrust. To instantiate fiduciary responsibility, we examine the Los Angeles Police Department’s (LAPD) development and deployment of predictive policing technology as a case study and identify several ways in which the LAPD failed to meet its fiduciary responsibility.
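To make the DSL formulation concrete, the following is a minimal sketch, not drawn from the paper itself, of how an automated decision-making system might be modeled as a sequence of lifecycle stages, each carrying its own fiduciary-responsibility checks. The stage names, the `Stage` structure, and the `audit` helper are illustrative assumptions rather than the authors’ formalism.

```python
# Illustrative sketch only: the stage names and checks below are assumptions,
# not the paper's formalism. They model the claim that fiduciary
# responsibility manifests separately in each context (stage) of a data
# science lifecycle, and that each stage needs its own mitigations.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Stage:
    """One context of the data science lifecycle (DSL)."""
    name: str
    # Per-stage mitigations of sources of public mistrust; each check
    # returns True when the corresponding obligation is met.
    fiduciary_checks: List[Callable[[], bool]] = field(default_factory=list)

    def meets_fiduciary_responsibility(self) -> bool:
        # A stage meets its responsibility only if every check passes.
        return all(check() for check in self.fiduciary_checks)


def audit(lifecycle: List[Stage]) -> List[str]:
    """Return the names of stages where fiduciary responsibility is unmet."""
    return [s.name for s in lifecycle if not s.meets_fiduciary_responsibility()]


# Hypothetical lifecycle with per-stage checks (all names are illustrative).
lifecycle = [
    Stage("data collection", [lambda: True]),   # e.g., consent obtained
    Stage("model training", [lambda: True]),    # e.g., bias testing performed
    Stage("deployment", [lambda: False]),       # e.g., no public disclosure
]

print(audit(lifecycle))  # -> ['deployment']
```

Auditing stage by stage, rather than judging the system as a whole, mirrors the paper’s point that each DSL context requires its own mitigation of mistrust.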
This work was supported by the National Science Foundation and the National Geospatial-Intelligence Agency (Grant No. 1830254) and by the National Science Foundation (Grant No. 1934884).
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).