Open Access

Hypothesis and Thought Experiment: Can We Program AI Forms with the Foundations of Sentience to Protect Humanity?

Human Sentience Project, LLC, Tucson 85704, AZ, USA
Vatican Observatory, University of Arizona, Tucson 85719, AZ, USA

Abstract

The speed, capacity, and strength of artificial intelligence units (AIs) could pose a self-inflicted danger to humanity’s control of its own civilization. In this analysis, three biologically based components of sentience that emerged in the course of human evolution are examined: cultural capacity, moral capacity, and religious capacity. The question is posed as to whether some measure of these capacities can be digitized and installed in AIs, and so afford protection from their dominance. Theory on the emergence of moral capacity suggests it is the most likely to be amenable to digitization and therefore to installation in AIs. If so, transfer of that capacity, by creating commonalities between humans and AIs, may help protect humanity from being destroyed. We hypothesize that religious thinking and culturally elaborated theological creativity, in not being easily transferred, could afford even more protection by constructing impenetrable barriers between humans and AIs along real/counterfactual lines. Difficulties in digitizing and installing the three capacities at the foundation of sentience are examined within current discussions of the “superalignment” of superintelligent AIs. Human values articulate differently across the three capacities, presenting different problems and capacities for the supervision of superintelligent AIs.

Journal of Social Computing, pp. 195-205

Cite this article:
Rappaport MB, Corbally CJ. Hypothesis and Thought Experiment: Can We Program AI Forms with the Foundations of Sentience to Protect Humanity?. Journal of Social Computing, 2024, 5(3): 195-205. https://doi.org/10.23919/JSC.2024.0017


Received: 27 February 2024
Revised: 19 July 2024
Accepted: 10 September 2024
Published: 30 September 2024
© The author(s) 2024.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).