
Fiduciary Responsibility: Facilitating Public Trust in Automated Decision-Making

Shannon B. Harper¹ and Eric S. Weber²
¹ Department of Sociology and Criminal Justice, Iowa State University, Ames, IA 50011, USA
² Department of Mathematics, Iowa State University, Ames, IA 50011, USA

Abstract

Automated decision-making systems are increasingly deployed and affect the public in a multitude of positive and negative ways. Governmental and private institutions use these systems to process information according to certain human-devised rules in order to address social problems or organizational challenges. Both research and real-world experience indicate that the public lacks trust in automated decision-making systems and in the institutions that deploy them. The recreancy theorem argues that the public is more likely to trust and support decisions made or influenced by automated decision-making systems if the institutions that administer them meet their fiduciary responsibility. However, the public is often not informed of how these systems operate or how the resultant institutional decisions are made. The “black box” effect of automated decision-making systems reduces the public’s perceptions of their integrity and trustworthiness. Consequently, the public subject to the decisions these systems produce is less able to assess whether those decisions are just. The result is that the public loses the capacity to identify, challenge, and rectify unfairness, as well as the costs associated with the loss of public goods or benefits. This position paper defines and explains the role of fiduciary responsibility within an automated decision-making system. We formulate an automated decision-making system as a data science lifecycle (DSL) and examine the implications of fiduciary responsibility within the context of the DSL. Fiduciary responsibility within DSLs provides a methodology for addressing the public’s lack of trust in automated decision-making systems and in the institutions that employ them to make decisions affecting the public. We posit that fiduciary responsibility manifests in several contexts of a DSL, each of which requires its own mitigation of sources of mistrust. To instantiate fiduciary responsibility, we present a case study of predictive policing by the Los Angeles Police Department (LAPD). We examine the LAPD’s development and deployment of predictive policing technology and identify several ways in which the department failed to meet its fiduciary responsibility.

Keywords: artificial intelligence, trust, automated decision-making, recreancy theorem, fiduciary responsibility


Publication history

Received: 10 July 2022
Revised: 03 December 2022
Accepted: 28 December 2022
Published: 31 December 2022
Issue date: December 2022

Copyright

© The author(s) 2022

Acknowledgements

This work was supported by the National Science Foundation and the National Geospatial-Intelligence Agency (Grant No. 1830254) and by the National Science Foundation (Grant No. 1934884).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
