AI Quote Comments: Peace, Love, and Understanding in The Chinese Room / by Robert Smith

(Author’s note: I know I promised these comments once a day, but things got away from me. Not to worry: I intend to complete all 12 ASAP.)

Today’s quote about AI, taken from a recent Forbes article by Rob Toews, is the following:

“In the literal sense, the programmed computer understands what the car or the adding machine understand: namely, exactly nothing.”

It comes from the eminent philosopher of mind John Searle and first appeared in his seminal 1980 paper on AI, Minds, Brains, and Programs. This is the paper that first presented Searle’s brilliant Chinese Room thought experiment.

The Chinese Room builds on The Turing Test, which Alan Turing actually called The Imitation Game in his equally seminal 1950 paper Computing Machinery and Intelligence. There is, of course, a great biopic of Turing called The Imitation Game, and the “Test” has morphed its way all over popular culture, including a version that opens the classic sci-fi flick Blade Runner (in which it is called the Voight-Kampff Test).

However, the original version of the test is a bit vague in Turing’s paper. He starts off describing a game in which two people are separated by a wall with a slot in it, through which they can pass written messages. One of the people is trying to convince the person on the other side of the wall that they are female. After establishing that a man can sometimes convince someone that he is a woman, Turing poses the thought experiment of substituting a computer for the man. He thus had a baseline (a man imitating a woman) against which to compare the computer. As the paper evolves, Turing’s speculation implicitly expands to a computer trying to convince a person that it, too, is a person, which is the way most people think of The Turing Test today. Turing posited that if the computer fools someone into thinking it is a person, then it has achieved true AI.

It is interesting that Turing started by comparing the computer to a person on one particular task, but the popular conception has undoubtedly expanded to the broader sense that Turing implied in his paper. That more general sense is what we now call The Turing Test. Today, it’s easier for us to think not of a wall with a slot in it, but of a laptop onto which messages are typed, with answers appearing on the laptop’s screen.

Searle’s argument against the possibility of “strong AI” is a linguistic extension of the Turing Test. Let’s say that the laptop has a Chinese keyboard and answers with Chinese characters on its screen. Now recall that in an earlier comment, I noted that a human being could execute any computer program manually, just by stepping through the instructions of that program. It would be laborious, and the human being would be liable to error, but there is no program for which it is impossible. Let’s say that the laptop, rather than running the program that passed the Turing Test locally, sends its messages to another, remote laptop. A person reads the characters from that screen, matches them to the inputs of the computer program that passed the Turing Test, and then manually runs the steps of that program on those inputs. Once the outputs are derived, that person types them back into the remote laptop, and they appear on the screen of the original laptop.
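To make the “human as computer” point concrete, here is a minimal sketch (in Python, with a made-up instruction set) of the kind of program a person could execute entirely by hand. Every step is a mechanical rule; no step requires the executor to understand what the program is “about.”

```python
# A toy program expressed as numbered, mechanical steps. The instruction
# names (SET, ADD, INC, JLE, PRINT) are hypothetical; the point is that
# each step is a rule simple enough to follow with pencil and paper.
program = [
    ("SET", "x", 0),      # step 0: x = 0
    ("SET", "i", 1),      # step 1: i = 1
    ("ADD", "x", "i"),    # step 2: x = x + i
    ("INC", "i"),         # step 3: i = i + 1
    ("JLE", "i", 5, 2),   # step 4: if i <= 5, go back to step 2
    ("PRINT", "x"),       # step 5: write down x
]

def run(program):
    vars, pc = {}, 0  # a scratchpad of values and a finger on the current step
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":
            vars[args[0]] = args[1]
        elif op == "ADD":
            vars[args[0]] += vars[args[1]]
        elif op == "INC":
            vars[args[0]] += 1
        elif op == "JLE" and vars[args[0]] <= args[1]:
            pc = args[2]  # jump back instead of moving on
            continue
        elif op == "PRINT":
            print(vars[args[0]])
        pc += 1

run(program)  # prints 15 (i.e., 1 + 2 + 3 + 4 + 5)
```

A patient person could play the role of run() themselves: keep a scratchpad of values, keep a finger on the current step, and follow each rule in turn.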

But what if that person doesn’t understand Chinese at all? Since the person is merely viewing the Chinese characters as scribbles to match against lookup tables in the computer program, the setup still passes the test. But then Searle asks: does this mean that the person, supplemented by the (manually executed) computer program, understands Chinese? I think everyone will accept that it doesn’t; the person doesn’t understand anything about the messages passing back and forth. It’s just scribbles in and scribbles out.
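Here is a minimal sketch of that rulebook as a lookup table. The entries are hypothetical placeholders, and a real Turing-Test-passing program would obviously be vastly more elaborate, but the structure of the executor’s job is the same: match input symbols against stored patterns and copy out the paired response, attaching no meaning to either.

```python
# A hypothetical rulebook: input scribbles matched to output scribbles.
# To the person executing it, both columns are meaningless shapes.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "今天天气很好。": "是的，很好。",  # "Nice weather today." -> "Yes, very nice."
}

def chinese_room(message: str) -> str:
    # Pure pattern matching: compare shapes, copy out the paired shapes.
    # Nothing in this step requires comprehending what the shapes mean.
    return RULEBOOK.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # 我很好，谢谢。
```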

But if that’s the case, how can we say that a computer running the same program is “intelligent”? It doesn’t “understand” the very language in which it was examined any more than the person did!

You’ll recall that in another of these comments, I mentioned the interesting etymology and meaning of “understand.”

There’s a great song from the late 1970s (written by Nick Lowe and popularized by Elvis Costello and the Attractions) called (What’s So Funny ‘Bout) Peace, Love, and Understanding. Think about why “understanding” appears next to “peace” and “love.” It’s because the sense of “understanding” here is from the second dictionary definition:

a positive relationship between two people or groups in which they feel sympathy for each other

Clearly, that puts the word in the realm of deeply human concepts like “peace” (in the sense of inner peace) and “love.”

The first dictionary definition of “understanding” is:

knowledge about a subject, situation, etc. or about how something works

What I’m arguing is that real knowledge of complex things is as profoundly human as concepts like “peace,” “love,” and the second definition of “understanding.” That’s the reason Searle’s Chinese Room is so puzzling. We all implicitly know that understanding a language is more than a lookup table: it’s a web of complex idioms and contextualization that is intrinsically human. That intrinsic tie to humanity is why Searle is right: AI doesn’t “understand” anything.

That’s not to say AI isn’t useful. It’s just not us.

For more on this perspective, please have a look at my book, Rage.