

The misleading Chinese room argument

2019-02-12

AI, Programming

The Chinese room argument is a famous argument by the philosopher John Searle, which questions the validity of the Turing test as a test of intelligence. If you don't know the Turing test, you can look it up, but it is basically a test in which a machine converses with a human being without being caught as a machine. Although it is an extremely simple test, it is strong enough to be considered a definition of intelligence: a machine is intelligent if and only if it passes some form of the test. This is why the test is so widely used. In the argument, Searle criticizes this idea. He probably also wanted to conclude from it that no digital computer can be truly intelligent.

Here is the analogy proposed in the argument: suppose that somebody who speaks only English is in a closed room with an English book of instructions to follow, along with sufficient paper, pencils, erasers, and filing cabinets. He receives Chinese characters through a slot in the door, processes them according to the book's instructions, and produces Chinese characters as output. From the outside, it appears that a Chinese speaker is hiding in the room, communicating with the outside world.

Searle asks: does the person in the room really "understand" Chinese? Or is he merely simulating the ability to understand it? If a computer passed the Turing test this way, Searle argues, then he could pass it just as well, simply by running a book version of the program by hand, all without understanding a word of Chinese.


When I first read the argument, I found it interesting and even convincing. After thinking about it for some time, however, I concluded that it contains some crucial errors.

The first thing to note is that the assumption that the person in the room speaks only English is a completely irrelevant part of the setting. If the instruction book contains only simple instructions, like the ones executed by modern digital computers, you can easily replace the person with a simple machine. The machine only needs to carry out a few dozen predefined instructions; it doesn't even have to be packed with chips or powered by electricity, so some sort of mechanical device would do. In this experiment, it is the book that you should expect to carry the intelligence, not the medium executing it.
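To make this concrete, here is a minimal sketch in Python of the kind of mindless executor that could replace the person in the room (the rule table here is a hypothetical toy example, not Searle's book). The executor knows nothing about what it is computing; it just looks up the current state and symbol in a table and does what the table says. Everything interesting lives in the table, which plays the role of the book.

    # A Turing-machine-style executor: the "person in the room".
    # It blindly follows a rule table (the "book") and understands nothing.
    def run(rules, tape, state="start"):
        cells = dict(enumerate(tape))  # sparse tape: position -> symbol
        pos = 0
        while state != "halt":
            symbol = cells.get(pos, " ")
            # The book says: in this state, seeing this symbol,
            # write this, move left or right, and switch to a new state.
            write, move, state = rules[(state, symbol)]
            cells[pos] = write
            pos += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Hypothetical rule table: copy the input through unchanged.
    rules = {
        ("start", "a"): ("a", "R", "start"),
        ("start", "b"): ("b", "R", "start"),
        ("start", " "): (" ", "R", "halt"),
    }
    print(run(rules, "abba"))  # prints "abba " without knowing what it did

A book that could hold up a conversation would need a vastly larger rule table, but the executor itself would not need to change at all, which is exactly the point.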

I expect you can see the emptiness of the Chinese room argument at this point, but some of you may still find it unlikely for a book to have intelligence. In that case, you first need to appreciate how ridiculously difficult it is to make such a book. It is a widely known fact that the number of possible Go games exceeds the number of atoms in the universe. If you enumerated all possible moves in Go and wrote a guide book that tells you the winning move in every situation, the book would be bigger than a million universes. Even if you condensed that information onto magnetic disks and could have every disk produced by every factory in the world for a million years, there would not be enough space to store it. For a program that can beat a professional Go player to fit on a modern computer, it has to be vastly smaller and do extremely smart things within that limited space. Understanding Chinese is no easier than understanding Go. So if somebody wrote a magic book with which I could converse with any Chinese speaker, which will almost certainly never happen in my lifetime, I would say the book contains intelligence.
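To get a feel for the scale, here is a rough back-of-the-envelope calculation. The figures are approximate: the count of legal 19x19 Go positions is due to Tromp and Farnebäck, and the one-bit-per-atom storage density is a deliberately generous assumption.

    # Back-of-the-envelope: why a lookup-table "book" for Go cannot exist.
    LEGAL_GO_POSITIONS = 2.08e170  # legal 19x19 positions (Tromp & Farneback)
    ATOMS_IN_UNIVERSE = 1e80       # common estimate, observable universe

    # Generous assumption: store one bit per atom. How many universes of
    # atoms would we need just to give each position a single bit?
    universes_needed = LEGAL_GO_POSITIONS / ATOMS_IN_UNIVERSE
    print(f"{universes_needed:.1e} universes")  # ~2.1e90 universes

Even this wildly understates the book, since one bit per position records nothing about which move to play. The point is that a brute-force table is not merely impractical but physically impossible, so any book that actually works must compress that knowledge in an extremely clever way.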

