Artificial Intelligence and Metaphysical Limitations with Implications for the Nature of the Self and Phenomenal Consciousness
Mihretu Guta (Philosophy; Biola University, La Mirada, CA; Azusa Pacific University, Azusa, CA; and Addis Ababa University, Addis Ababa)
Contemporary discussions of artificial intelligence concern whether the artificial intelligence attributed to electronic machines such as digital computers is qualitatively different from the natural intelligence ascribed to human beings. In this regard, there are two main reactions. Some theorists claim that whatever difference is said to exist between artificial intelligence and natural intelligence is a matter of degree. By contrast, others claim that the two species of intelligence differ in kind. Resolving this controversy is often assumed to require some sort of empirical solution. Here, research on Strong-AI takes centre stage. The goal of computer scientists who work on the Strong-AI hypothesis is significantly different from that of those who work on Weak-AI. In the case of Weak-AI, computer scientists are interested in inventing machines with high information-processing capacity that display human-like abilities. In this case, we do not confront any serious metaphysical problems, no matter what the machines are said to be capable of doing, say, playing chess, proving mathematical theorems, writing poetry, driving vehicles on a crowded street, diagnosing diseases, and the like (Russell and Norvig 2014: 1; Searle 1980). So Weak-AI raises no direct challenge for the metaphysics of the self and phenomenal consciousness. By contrast, Strong-AI raises significant challenges for both. Two key underlying assumptions characterize the goal set by Strong-AI theorists. These assumptions can be stated as follows:
Assumption #1: if Strong-AI gets realized, then machines would not only enjoy equal status with human beings but could also be superior to them by being super-intelligent.
Assumption #2: if Strong-AI gets realized, then it would show the capability of human beings to bring about conscious beings that think and act as humans do, despite such beings having a non-biological, i.e., electronic substrate.
Some influential philosophers claim that no in-principle barriers prevent Strong-AI from being realized (Chalmers 1996: Ch. 9; see also Haugeland 1985; Charniak and McDermott 1985). These philosophers, like most computer scientists, believe that any challenges said to stand in the way of establishing the assumptions stated above would have to do with things such as the lack of sufficiently sophisticated technology. But I see the challenge that besets Strong-AI as primarily metaphysical in nature. So my goal is to investigate whether any empirical solution can succeed without first addressing central metaphysical issues rooted in the ontology of the substantial self. I will explore two problems: the maker-product gap problem and the wrong location problem. The former arises partly from assuming that artificial intelligence can be superior to natural intelligence, whereas the latter arises from misunderstanding the nature of the relation between natural intelligence and artificial intelligence. These problems are interrelated: any solution proposed to tackle one has implications for the other, and vice versa.