Is the Chinese Room Argument a problem for strong AI? Why or why not?
To answer this question, it is imperative to first explain what the Chinese Room argument actually is. The Chinese Room is a thought experiment devised by John Searle to demonstrate that a machine or computer can never be described as having understanding or a mind, no matter how intelligent it may appear. Suppose a highly intelligent computer is constructed that appears to understand Chinese and can convincingly produce Chinese output just like an average Chinese speaker; supporters of artificial intelligence would conclude that the computer understands Chinese. Searle's response is to imagine an English-speaking person sitting inside a room, following instructions written in English for manipulating Chinese characters. To an outsider, it appears as if the person in the room actually understands Chinese, even though he does not understand Chinese at all.
The argument shows that programmed computers may appear to be conversant in natural languages; however, they are incapable of understanding language. They can only manipulate symbol strings according to rules. Searle also identified a philosophical position he called Strong AI: the claim that an appropriately programmed computer with the right inputs and outputs would thereby have a mind in the same sense that a human being does. Strong AI means actually having a mind, whereas Weak AI means only simulating one.
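The point about manipulating symbol strings by rules can be made concrete with a minimal sketch. The rule table below is entirely hypothetical and stands in for Searle's English instruction book: the program matches input symbols against stored symbols and emits whatever the table dictates, and at no step does the meaning of the characters play any role.

```python
# A purely syntactic "rule book": hypothetical pairings of input
# symbols with output symbols, with no representation of meaning.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会, 当然。",    # "Do you speak Chinese?" -> "Yes, of course."
}

def chinese_room(symbols: str) -> str:
    """Return the output symbols the rule book dictates.

    The function only compares shapes (strings) and emits shapes,
    which is all the occupant of Searle's room does.
    """
    # Unrecognized input gets a stock reply: "Sorry, I don't understand."
    return RULE_BOOK.get(symbols, "对不起, 我不明白。")

print(chinese_room("你好吗?"))
```

To an outside observer the exchanges may look competent, yet the program has no access to what any of the characters mean; that asymmetry between convincing output and absent understanding is exactly what the thought experiment exploits.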
The Chinese Room argument is definitely a problem for strong artificial intelligence. Firstly, computer programs operate on syntax alone and have no access to semantics; meaning plays no role in their operation. Minds, not programs, have the ability to grasp meanings. Programs are therefore not minds, and artificial intelligence can never produce a machine that understands or thinks intelligently on its own so long as it runs on such programs. Strong AI also relies on the information-system view that the state of one physical system can be encoded into another, so that a pattern of information alone would be enough to create a mind. However, this belief does not withstand scrutiny.
According to Searle, dualism might lie behind strong AI. Dualism is the idea that the body and the mind are made of distinct substances. Strong AI makes sense only on the assumption that the brain does not matter where the mind is concerned. However, brains actually cause minds, and the mental activities of the human mind depend on the chemical properties of the brain. This position is known as biological naturalism.
To sum up, the Chinese Room argument does not in any way propose that highly intelligent machines can never be built. It simply argues that machines can never have the intentionality of a brain. Artificial intelligence researchers can try all they want, but they will only ever simulate intelligence in a computer; they will never make a computer that is truly as intelligent as the human mind.