Journal Home > Volume 4, Issue 4




AIdeal: Sentience and Ideology

Daniel Estrada
Department of Humanities and Social Sciences, New Jersey Institute of Technology, Newark, NJ 07102, USA

Abstract

This paper addresses a set of ideological tensions involving the classification of agential kinds, which I see as the methodological and conceptual core of the sentience discourse. Specifically, I consider ideals involved in the classification of biological and artifactual kinds, and ideals related to agency, identity, and value. These ideals frame the background against which sentience in Artificial Intelligence (AI) is theorized and debated, a framework I call the AIdeal. To make this framework explicit, I review the historical discourse on sentience as it appears in ancient, early modern, and 20th-century philosophy, paying special attention to how these ideals are projected onto artificial agents. I argue that tensions among these ideals create conditions where artificial sentience is both necessary and impossible, resulting in a crisis of ideology. Moving past this crisis does not require a satisfying resolution among competing ideals, but instead requires a shift in focus to the material conditions and actual practices in which these ideals operate. Following Charles Mills, I sketch a nonideal approach to AI and artificial sentience that seeks to loosen the grip of ideology on the discourse. Specifically, I propose a notion of participation that deflates the sentience discourse in AI and shifts focus to the material conditions in which sociotechnical networks operate.

Keywords: artificial intelligence, ideology, artifacts, sentience, agency, nonideal theory, natural kinds, participation

References (86)

[1]

X. Dong and X. Dong, Peripheral and central mechanisms of itch, Neuron, vol. 98, no. 3, pp. 482–494, 2018.

[2]

T. Akiyama and E. Carstens, Neural processing of itch, Neuroscience, vol. 250, pp. 697–714, 2013.

[3]

C. W. Mills, “Ideal theory” as ideology, Hypatia, vol. 20, no. 3, pp. 165–183, 2005.

[4]
R. D. Hicks, Aristotle De Anima. Cambridge, UK: Cambridge University Press, 2015.
[5]
P. Calvo and N. Lawrence, Planta Sapiens: Unmasking Plant Intelligence. London, UK: Hachette UK, 2022.
[6]

M. A. O’Malley, M. M. Leger, J. G. Wideman, and I. Ruiz-Trillo, Concepts of the last eukaryotic common ancestor, Nat. Ecol. Evol., vol. 3, no. 3, pp. 338–344, 2019.

[7]
J. J. Hall, The classification of birds in Aristotle and early modern naturalists (I), Hist. Sci., vol. 29, no. 2, pp. 111–151, 1991.
[8]

I. Hacking, Natural kinds: Rosy dawn, scholastic twilight, Roy. Inst. Philos. Suppl., vol. 61, pp. 203–239, 2007.

[9]

T. E. Wilkerson, Species, essences and the names of natural kinds, Philos. Q., vol. 43, no. 170, pp. 1–19, 1993.

[10]

M. P. Winsor, The creation of the essentialism story: An exercise in metahistory, History and Philosophy of the Life Sciences, vol. 28, no. 2, pp. 149–174, 2006.

[11]
A. Bird and E. Tobin, Natural kinds, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/natural-kinds/, 2023.
[12]
H. Richards, Edsger Wybe Dijkstra, https://amturing.acm.org/award_winners/dijkstra_1053701.cfm, 2019.
[13]
P. Godfrey-Smith, Metazoa: Animal Life and the Birth of the Mind. New York, NY, USA: Farrar, Straus and Giroux, 2020.
[14]
B. Russell, The Impact of Science on Society. London, UK: Routledge, 2016.
[15]
C. Witt, L. Shapiro, C. Van Dyke, L. L. Moland, and M. Robinson, Feminist history of philosophy, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/feminism-femhist/, 2023.
[16]
J. Clutton-Brock, Aristotle, the scale of nature, and modern attitudes to animals, Social Research, vol. 62, no. 3, pp. 421–440, 1995.
[17]
Theophrastus and A. F. Hort, Enquiry into Plants: And Minor Works on Odours and Weather Signs. London, UK: Heinemann, 1916.
[18]

K. Nielsen, The private parts of animals: Aristotle on the teleology of sexual difference, Phronesis, vol. 53, nos. 4 & 5, pp. 373–405, 2008.

[19]

C. A. Freeland, Feminism and ideology in ancient philosophy, Apeiron, vol. 33, no. 4, pp. 365–406, 2000.

[20]

M. Heath, Aristotle on natural slavery, Phronesis, vol. 53, no. 3, pp. 243–270, 2008.

[21]

L. Schiebinger, Why mammals are called mammals: Gender politics in eighteenth-century natural history, Am. Hist. Rev., vol. 98, no. 2, pp. 382–411, 1993.

[22]

A. O. Rorty, From passions to emotions and sentiments, Philosophy, vol. 57, no. 220, pp. 159–172, 1982.

[23]

T. H. Irwin, Aristotle on reason, desire, and virtue, J. Philos., vol. 72, no. 17, pp. 567–578, 1975.

[24]

A. S. Khalil and J. J. Collins, Synthetic biology: Applications come of age, Nat. Rev. Genet., vol. 11, no. 5, pp. 367–379, 2010.

[25]

L. R. Baker, The ontology of artifacts, Philosophical Explorations, vol. 7, no. 2, pp. 99–111, 2004.

[26]
B. Preston, Artifact, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/artifact/, 2022.
[27]

J. L. England, Statistical physics of self-replication, J. Chem. Phys., vol. 139, no. 12, p. 121923, 2013.

[28]
C. Shields, Aristotle’s psychology, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/aristotle-psychology/, 2020.
[29]
J. C. S. Meng, Artificial intelligence and Thomistic angelology: A rejoinder, https://philarchive.org/rec/MENAIA-4, 2001.
[30]
C. Craver, J. Tabery, and P. Illari, Mechanisms in science, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/science-mechanisms/, 2023.
[31]
S. A. Kauffman, A World beyond Physics: The Emergence and Evolution of Life. New York, NY, USA: Oxford University Press, 2019.
[32]
W. Bechtel and R. C. Richardson, Vitalism, in Routledge Encyclopedia of Philosophy, E. Craig, ed. London, UK: Routledge, 2018, pp. 639–643.
[33]

D. Garber, Leibniz on form and matter, Early Sci. Med., vol. 2, no. 3, pp. 326–351, 1997.

[34]
R. Descartes and M. Moriarty, Meditations on First Philosophy: With Selections from the Objections and Replies. Oxford, UK: Oxford University Press, 2008.
[35]
S. Ghelli, The sensitive cogito: Modern materialism and its legacy, in The Suffering Animal: Life Between Weakness and Power, S. Ghelli, ed. Cham, Switzerland: Palgrave Macmillan, 2023, pp. 21–55.
[36]

S. Greenberg, Descartes on the passions: Function, representation, and motivation, Noûs, vol. 41, no. 4, pp. 714–734, 2007.

[37]

E. F. Keller, Organisms, machines, and thunderstorms: A history of self-organization, part one, Hist. Stud. Nat. Sci., vol. 38, no. 1, pp. 45–75, 2008.

[38]

E. F. Keller, Organisms, machines, and thunderstorms: A history of self-organization, part two. Complexity, emergence, and stable attractors, Hist. Stud. Nat. Sci., vol. 39, no. 1, pp. 1–31, 2009.

[39]
E. Thompson, Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA, USA: Harvard University Press, 2010.
[40]
W. Wiese and T. K. Metzinger, Vanilla PP for philosophers: A primer on predictive processing, https://philpapers.org/rec/WIEVPF, 2017.
[41]

S. Tweyman, Hume and the Cogito ergo Sum, Eur. Leg., vol. 10, no. 4, pp. 315–328, 2005.

[42]
A. M. Schmitter, 17th and 18th century theories of emotions, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/emotions-17th18th/, 2021.
[43]
J. L. Tasset, Bentham on ‘Hume’s virtues’, in Happiness and Utility: Essays Presented to Frederick Rosen, G. Varouxakis and M. Philp, eds. London, UK: UCL Press, 2019, pp. 81–97.
[44]
J. S. Mill, On Liberty, Utilitarianism, and Other Essays. New York, NY, USA: Oxford University Press, 1998.
[45]

J. R. Searle, Minds, brains, and programs, Behav. Brain Sci., vol. 3, no. 3, pp. 417–424, 1980.

[46]
D. J. Gunkel, Robot Rights. Cambridge, MA, USA: MIT Press, 2018.
[47]
D. J. Gunkel, Person, Thing, Robot: A Moral and Legal Ontology for the 21st Century and Beyond. Cambridge, MA, USA: MIT Press, 2023.
[48]
S. Ahmed, What’s the Use?: On the Uses of Use. Durham, NC, USA: Duke University Press, 2019.
[49]
H. G. Frankfurt, On Bullshit. Princeton, NJ, USA: Princeton University Press, 2005.
[50]

W. E. G. Müller, H. C. Schröder, D. Pisignano, J. S. Markl, and X. Wang, Metazoan circadian rhythm: Toward an understanding of a light-based zeitgeber in sponges, Integr. Comp. Biol., vol. 53, no. 1, pp. 103–117, 2013.

[51]
M. P. d’Entreves, Hannah Arendt, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/arendt/, 2022.
[52]

A. M. Turing, Computing machinery and intelligence, Mind, vol. 59, no. 236, pp. 433–460, 1950.

[53]

C. Allen, Animal pain, Noûs, vol. 38, no. 4, pp. 617–643, 2004.

[54]

M. Gibbons, A. Crump, M. Barrett, S. Sarlak, J. Birch, and L. Chittka, Can insects feel pain? A review of the neural and behavioural evidence, Advances in Insect Physiology, vol. 63, pp. 155–229, 2022.

[55]
P. Godfrey-Smith, Limits of sentience and boundaries of consideration, https://petergodfreysmith.com/wp-content/uploads/2023/05/Whitehead-1-Limits-of-Sentience-PGS-2023-G4.pdf, 2023.
[56]

M. Mangan, D. Floreano, K. Yasui, B. A. Trimmer, N. Gravish, S. Hauert, B. Webb, P. Manoonpong, and N. Szczecinski, A virtuous cycle between invertebrate and robotics research: Perspective on a decade of living machines research, Bioinspir. Biomim., vol. 18, no. 3, p. 035005, 2023.

[57]
S. Fazelpour and Z. C. Lipton, Algorithmic fairness from a non-ideal perspective, in Proc. AAAI/ACM Conf. AI, Ethics, and Society, New York, NY, USA, 2020, pp. 57–63.
[58]
D. Estrada, Ideal theory in AI ethics, arXiv preprint arXiv:2011.02279, 2020.
[59]

I. Gabriel, Toward a theory of justice for artificial intelligence, Daedalus, vol. 151, no. 2, pp. 218–231, 2022.

[60]

L. Weidinger, K. R. McKee, R. Everett, S. Huang, T. O. Zhu, M. J. Chadwick, C. Summerfield, and I. Gabriel, Using the veil of ignorance to align AI systems with principles of justice, Proc. Natl. Acad. Sci. USA, vol. 120, no. 18, p. e2213709120, 2023.

[61]
S. J. Khader, Decolonizing Universalism: A Transnational Feminist Ethic. Oxford, UK: Oxford University Press, 2018.
[62]

E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J. F. Bonnefon, and I. Rahwan, The moral machine experiment, Nature, vol. 563, no. 7729, pp. 59–64, 2018.

[63]
A. E. Jaques, Why the moral machine is a monster, https://robots.law.miami.edu/2019/wp-content/uploads/2019/03/MoralMachineMonster.pdf, 2019.
[64]

C. L. Bennett and O. Keyes, What is the point of fairness? Disability, AI and the complexity of justice, SIGACCESS Access. Comput., no. 125, p. 1, 2020.

[65]
M. Abdalla and M. Abdalla, The grey hoodie project: Big tobacco, big tech, and the threat on academic integrity, in Proc. 2021 AAAI/ACM Conf. AI, Ethics, and Society, Virtual Event, 2021, pp. 287–297.
[66]

M. Whittaker, The steep cost of capture, Interactions, vol. 28, no. 6, pp. 50–55, 2021.

[67]
A. T. Baria and K. Cross, The brain is a computer is a brain: Neuroscience’s internal debate and the social significance of the computational metaphor, arXiv preprint arXiv:2107.14042, 2021.
[68]

R. Emsley, ChatGPT: These are not hallucinations—they’re fabrications and falsifications, Schizophrenia, vol. 9, no. 1, p. 52, 2023.

[69]
R. Braidotti, The Posthuman. Cambridge, UK: Polity Press, 2013.
[70]

D. Estrada, Human supremacy as posthuman risk, The Journal of Sociotechnical Critique, vol. 1, no. 1, p. 5, 2020.

[71]
B. Latour, Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford, UK: Oxford University Press, 2007.
[72]
[73]
B. Latour, Do you believe in reality? in Beyond the Body Proper: Reading the Anthropology of Material Life, J. Farquhar and M. M. Lock, eds. Durham, NC, USA: Duke University Press, 2007, pp.176–184.
[74]
J. Carpenter, Culture and Human-Robot Interaction in Militarized Spaces. London, UK: Routledge, 2016.
[75]
K. Darling, ‘Who’s Johnny?’ Anthropomorphic framing in human-robot interaction, integration, and policy, SSRN Electronic Journal.
[76]
L. Erscoi, A. Kleinherenbrink, and O. Guest, Pygmalion displacement: When humanizing AI dehumanises women, https://osf.io/preprints/socarxiv/jqxb6, 2023.
[77]
A. Turing, Lecture on the automatic computing engine (1947), in The Essential Turing, B. J. Copeland, ed. Oxford, UK: Oxford University Press, 2004, pp. 362–394.
[78]
D. Estrada, Value alignment, fair play, and the rights of service robots, in Proc. 2018 AAAI/ACM Conf. AI, Ethics, and Society, New Orleans, LA, USA, 2018, pp. 102–107.
[79]
D. J. Gunkel, Person, Thing, Robot: A Moral and Legal Ontology for the 21st Century and Beyond. Cambridge, MA, USA: MIT Press, 2023.
[80]

M. Coeckelbergh, How to describe and evaluate “deception” phenomena: Recasting the metaphysics, ethics, and politics of ICTs in terms of magic and performance and taking a relational and narrative turn, Ethics Inf. Technol., vol. 20, no. 2, pp. 71–85, 2018.

[81]

D. J. Gunkel, A. Gerdes, and M. Coeckelbergh, Editorial: Should robots have standing? The moral and legal status of social robots, Front. Robot. AI, vol. 9, p. 946529, 2022.

[82]
J. C. Gellers, Rights for Robots: Artificial Intelligence, Animal and Environmental Law. London, UK: Routledge, 2020.
[83]
J. C. Gellers and D. J. Gunkel, Artificial intelligence and international human rights law: Implications for humans and technology in the 21st century and beyond, in Handbook on the Politics and Governance of Big Data and Artificial Intelligence, A. Zwitter and O. Gstrein, eds. Cheltenham, UK: Edward Elgar Publishing, 2023, pp. 430–455.
[84]

A. Birhane, Algorithmic injustice: A relational ethics approach, Patterns, vol. 2, no. 2, p. 100205, 2021.

[85]
A. Birhane, W. Isaac, V. Prabhakaran, M. Diaz, M. C. Elish, I. Gabriel, and S. Mohamed, Power to the people? Opportunities and challenges for participatory AI, in Proc. Equity and Access in Algorithms, Mechanisms, and Optimization, Arlington, VA, USA, 2022, pp. 1–8.
[86]
I. Kavdir and D. E. Guyer, Apple grading using fuzzy logic, Turkish Journal of Agriculture and Forestry, vol. 27, no. 6, pp. 375–382, 2003.

Publication history

Received: 28 July 2023
Revised: 24 November 2023
Accepted: 21 December 2023
Published: 31 December 2023
Issue date: December 2023

Copyright

© The author(s) 2023.

Acknowledgements


This work was made possible by the patient support of my partner Anna Gollub.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
