



The Contestation of Tech Ethics: A Sociotechnical Approach to Technology Ethics in Practice

Ben Green
Society of Fellows and the Gerald R. Ford School of Public Policy, University of Michigan, Ann Arbor, MI 48109, USA

Abstract

This article introduces the special issue “Technology Ethics in Action: Critical and Interdisciplinary Perspectives”. In response to recent controversies about the harms of digital technology, discourses and practices of “tech ethics” have proliferated across the tech industry, academia, civil society, and government. Yet despite the seeming promise of ethics, tech ethics in practice suffers from several significant limitations: tech ethics is vague and toothless, has a myopic focus on individual engineers and technology design, and is subsumed into corporate logics and incentives. These limitations suggest that tech ethics enables corporate “ethics-washing”: embracing the language of ethics to defuse criticism and resist government regulation, without committing to ethical behavior. Given these dynamics, I describe tech ethics as a terrain of contestation where the central debate is not whether ethics is desirable, but what “ethics” entails and who gets to define it. Current approaches to tech ethics are poised to enable technologists and technology companies to label themselves as “ethical” without substantively altering their practices. Thus, those striving for structural improvements in digital technologies must be mindful of the gap between ethics as a mode of normative inquiry and ethics as a practical endeavor. In order to better evaluate the opportunities and limits of tech ethics, I propose a sociotechnical approach that analyzes tech ethics in light of who defines it and what impacts it generates in practice.

Keywords:

technology ethics, AI ethics, ethics-washing, Science, Technology, and Society (STS), sociotechnical systems
Received: 20 May 2021 | Accepted: 20 October 2021 | Published: 13 January 2022 | Issue date: September 2021


Copyright

© The author(s) 2021

Acknowledgements

B. Green thanks Elettra Bietti, Anna Lauren Hoffmann, Jenny Korn, Kathy Pham, and Luke Stark for their comments on this article. B. Green also thanks the Harvard STS community, particularly Sam Weiss Evans, for feedback on an earlier iteration of this article.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
