AI Sentience and Socioculture

AJ Alvero1 and Courtney Peña2
1 Department of Sociology, University of Florida, Gainesville, FL 32601, USA
2 Stanford School of Medicine, Stanford University, Stanford, CA 94305, USA

Abstract

Artificial intelligence (AI) sentience has become an important topic of discourse and inquiry in light of the remarkable progress and capabilities of large language models (LLMs). While others have considered this issue from more philosophical and metaphysical perspectives, we present an alternative set of considerations grounded in sociocultural theory and analysis. Specifically, we focus on sociocultural perspectives on interpersonal relationships, sociolinguistics, and culture to consider whether LLMs are sentient. Using examples grounded in quotidian aspects of what it means to be sentient, along with examples of AI in science fiction, we describe why LLMs are not sentient and are unlikely to ever be sentient. We present this as a framework to reimagine future AI not as an impending form of sentience but rather as a potentially useful tool, depending on how it is used and built.

Keywords: culture, sentience, sociology, sociolinguistics, interpersonal relationships, quotidian



Publication history

Received: 05 July 2023
Revised: 04 November 2023
Accepted: 06 November 2023
Published: 30 September 2023
Issue date: September 2023

Copyright

© The author(s) 2023.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
