Volume 2, Issue 3


From Ethics Washing to Ethics Bashing: A Moral Philosophy View on Tech Ethics

Elettra Bietti
Harvard Law School, Harvard University, Cambridge, MA 02138, USA

Abstract

Weaponized in support of deregulation and self-regulation, “ethics” is increasingly identified with technology companies’ self-regulatory efforts and with shallow appearances of ethical behavior. So-called “ethics washing” by tech companies is on the rise, prompting criticism and scrutiny from scholars and the tech community. The author defines “ethics bashing” as the parallel tendency to trivialize ethics and moral philosophy. Underlying these two attitudes are a few misunderstandings: (1) philosophy is understood in opposition to, and as an alternative to, law, political representation, and social organizing; (2) philosophy and “ethics” are perceived as formalistic, vulnerable to instrumentalization, and ontologically flawed; and (3) moral reasoning is portrayed as mere “ivory tower” intellectualization of complex problems that need to be dealt with through other methodologies. This article argues that the rhetoric of ethics and morality should not be reductively instrumentalized, either by the industry in the form of “ethics washing” or by scholars and policy-makers in the form of “ethics bashing”. Grappling with the role of philosophy and ethics requires moving beyond simplification and seeing ethics as a mode of inquiry that facilitates the evaluation of competing tech policy strategies. We must resist reducing moral philosophy’s role and instead celebrate its special worth as a mode of knowledge-seeking and inquiry. Far from mandating self-regulation, moral philosophy facilitates the scrutiny of various modes of regulation, situating them in legal, political, and economic contexts. Moral philosophy can indeed explain the relationship between technology and other worthy goals and can situate technology within the human, the social, and the political.

Keywords: artificial intelligence, ethics, technology, big tech, ethics washing, law, regulation, moral philosophy, political philosophy


Publication history

Received: 20 May 2021
Revised: 19 November 2021
Accepted: 25 November 2021
Published: 13 January 2022
Issue date: September 2021

Copyright

© The author(s) 2021

Acknowledgements


E. Bietti thanks Jeff Behrends, Yochai Benkler, Brian Berkey, Reuben Binns, Mark Budolfson, Urs Gasser, Ben Green, Lily Hu, Lucas Stanczyk, Luke Stark, Jonathan Zittrain, and some anonymous reviewers for their valuable input on this article.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
