Selfhood and Consciousness in Artificial Intelligence
Daniel Montoya, Stephen Gill, Whitney Wall (Psychology, Fayetteville State University, Fayetteville, NC)
A common view in cognitive science holds that the sense of self arises from the representation of the body (Bermúdez, 1995; Churchland, 2002). Other theories hold that the self develops through linguistic interactions with others (Harré and Gillett, 1994) and through mirrored social interactions (Rizzolatti and Craighero, 2004). These accounts imply two elements required for the normal development of consciousness: on one side, proprioceptive information from the body and, on the other, the presence of other beings similar to ourselves, in whom we mirror our own consciousness. All of these developmental characteristics are considered essential for the normal development of a self-conscious individual, and they would be missing in an artificial entity. From this perspective, the proposal of an AI that could attain self-consciousness without being instantiated in a body subject to developmental stages raises questions about its capacity to be considered normal. In other words, can a disembodied AI that has attained consciousness be considered normal? It is not clear under which parameters we could assess whether a disembodied AI would not only reach reasonable levels of understanding and self-consciousness but also be considered safe to interact with humans and capable of making decisions that could affect many. In this presentation, we propose to discuss the concept of normalcy, to develop the central embodiment elements needed for the development of consciousness, and to consider how they can be deployed in AI research.