Human Morality Difference when Programming and Actually Operating Autonomous Machines

Department of Automation, Tsinghua University, Beijing 100084, China
College of Information Science and Engineering, China University of Petroleum, Beijing 102249, China

Abstract

Autonomous machines (AMs) are poised to possess human-like moral cognition, yet their morality is often pre-programmed for safety. This raises the question of whether the morality programmers intend aligns with how they act during actual operation, a crucial consideration for a future society in which humans and AMs coexist. To investigate this, we use a micro-robot swarm in a simulated fire scenario, in which 180 participants, including 102 robot programmers, complete moral questionnaires and take part in virtual escape trials that mirror common societal moral dilemmas. Our comparative analysis reveals a “morality gap” between programming presets and real-time operation, driven primarily by uncertainty about the future and heightened by external pressures, especially social punishment. This discrepancy suggests that operational morality can diverge from programmed intentions, underlining the need for careful AM design to foster a collaborative and efficient society.

Tsinghua Science and Technology
Pages 1648-1658
Cite this article:
Yi W, Wu W, Chen M, et al. Human Morality Difference when Programming and Actually Operating Autonomous Machines. Tsinghua Science and Technology, 2025, 30(4): 1648-1658. https://doi.org/10.26599/TST.2024.9010062

Received: 28 December 2023
Revised: 20 March 2024
Accepted: 25 March 2024
Published: 03 March 2025
© The Author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
