Today’s comment on an AI quote (from a Forbes article by Rob Toews)
“How do you know that when I speak to you, what you call ‘thinking’ is going on inside me? The Turing test is a fantastic probe — something like a particle accelerator in physics. Just as in physics, when you want to understand what is going on at an atomic or subatomic level, since you can’t see it directly, you scatter accelerated particles off the target in question and observe their behavior. From this you infer the internal nature of the target. The Turing test extends this idea to the mind. It treats the mind as a ‘target’ that is not directly visible but whose structure can be deduced more abstractly. By ‘scattering’ questions off a target mind, you learn about its internal workings, just as in physics.”
The quote comes from Douglas Hofstadter’s Metamagical Themas: Questing for the Essence of Mind and Pattern, a 1985 collection of his Scientific American columns (the column in question first ran in 1981). It’s a great book, but not nearly as well known as Hofstadter’s Gödel, Escher, Bach: an Eternal Golden Braid, published in 1979, which won the National Book Award for Science and the Pulitzer Prize. GEB (as the book is known) is a real stunner that reads like a 777-page mind game. The book’s spine is an elaborate proof, understandable by anyone, of an obscure but important mathematical theorem. Gödel’s incompleteness theorem, published in 1931, proves that any consistent formal mathematical system rich enough to express arithmetic is incomplete. It does so in the most amazing way, by showing that in any such system you can construct self-referential “sentences” that can neither be proved nor disproved within that system. They are the mathematical equivalent of “This sentence is a lie.” Thus, every such formal system has an intrinsic flaw: an inability to completely determine the truth or falsehood of statements within itself.
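To make the self-reference concrete, here is a rough sketch, in my own notation rather than Hofstadter’s, of the kind of sentence Gödel builds. Prov_F is shorthand for “provable in the formal system F,” and the corner quotes denote the sentence’s own numeric code:

```latex
% Informal sketch of a Goedel sentence G for a formal system F (my notation).
% Prov_F(x) abbreviates "the statement encoded by the number x is provable in F";
% the corner quotes around G denote G's own Goedel number.
\[
  G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\bigl(\ulcorner G \urcorner\bigr)
\]
% G in effect says "I am not provable in F." If F is consistent, it can prove
% neither G nor its negation, which is exactly the incompleteness GEB builds toward.
```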
The theorem is closely related to one of Turing’s other great accomplishments: the proof that there is a hard limit to what any computer can do. Turing proved this in a manner similar to Gödel’s, by posing what is now called the Halting Problem. Stated simply, it is the problem of determining whether a given piece of computer code will eventually come to a stop or run on forever. By constructing a program that refers to its own halting behavior, Turing proved that no program can decide, for every possible program, whether it will halt. Thus, there are always things computers can’t do. Gödel proved that formal mathematical systems are limited, and Turing proved that machines are limited. Hofstadter wrote an incredibly cool book about these limitations of formal systems.
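For readers who want to see the trick spelled out, here is a minimal sketch of the diagonal argument in Python. The names halts and paradox are my own illustration, not anything from Turing or Hofstadter, and halts is precisely the program the argument shows cannot exist:

```python
# Sketch of Turing's diagonal argument (illustrative names, not a real library).
# Suppose, for contradiction, someone claims to have written a universal
# halts(program, input_data) that always answers correctly.

def halts(program, input_data):
    """Hypothetical oracle: True if program(input_data) eventually stops."""
    raise NotImplementedError("The argument below shows this cannot exist in general.")

def paradox(program):
    """Do the opposite of whatever halts() predicts for a program run on itself."""
    if halts(program, program):
        while True:      # predicted to halt, so loop forever instead
            pass
    else:
        return           # predicted to loop forever, so halt immediately

# Now ask: does paradox(paradox) halt?  If halts says yes, paradox loops forever;
# if halts says no, paradox halts at once.  Either answer contradicts the oracle,
# so no correct, fully general halts() can be written.
```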
GEB was one of the first books about AI that I ever read. I think it’s a masterpiece, and I believe that Hofstadter is a genius. So it surprises me that this quote of his is so far off base, falling into the trap of physics envy.
Since the development of basic mechanics and calculus in the late 17th century, physics (and the mathematics associated with it) has been the most fundamental driver of technological development. We owe every neat gadget to math and physics. Given that great success, it’s unsurprising that scientists in other fields have long been envious of physics’ precision in describing the world and its productivity in changing it. This is the reason that economics, the most political of the social sciences, drifted away from being a historical study of the real world and toward the pursuit of mathematical models of ideal worlds, worlds very like those of physics. Eric Beinhocker’s The Origin of Wealth does a fabulous job of recounting this turning point in history. And the phenomenon isn’t limited to economics. All social scientists would love to have models of people, from their societies to their minds, that are as predictive as the billiard-ball models of Newtonian physics. Even modern physics, with its quanta, its probability distributions over superposed states, and its paradoxes, has found its way into models of people.
Why? Is there a scientific reason to think that thoughts are like unthinking physical particles (or even their interpretations as waves)? To be sure, the brain is physical, just like everything else. As far as any of us know, it contains no supernatural magic. However, thoughts, their communication to others, and their interactions between people are complex epiphenomena, boundlessly innovative and continually changing in how they interact with and affect people. Sure, neurons are physics, but they are only the base level of an intrinsically intractable complex system within each of us.
Complexity science teaches us that while we can know how atoms interact, once we put them into the large groups that make up meaningfully sized objects in the world, unpredictability becomes ubiquitous. There is no reason to think that the relationship between neurons and thoughts is any different.
So, asking questions in a Turing test is nothing like bombarding samples with accelerated particles in a physics experiment. And the metaphor that the two are similar only entrenches a fundamental misunderstanding of the nature of human thought. We aren’t billiard balls or probability clouds, and the methods used to study those things aren’t the right methods for studying what we are.