I'm doing a series of comments on historical AI quotes that appeared in a recent article in Forbes by Rob Toews.
Today's quote is
"Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It is really an attempt to understand human intelligence and human cognition."
and it comes from the brilliant AI researcher (and now successful businessman) Sebastian Thrun.
I couldn't agree with this quote more, but perhaps not in the way that you might expect.
As AI has developed historically, it has been subject to what some have called "the moving yardstick." Many technologies that were once mainstays of AI are now "mere programming." It seems hard to believe that iterative noise filters, branch-and-bound search, and even object-oriented programming were once considered AI. Now they are all just engineering tools, because they worked, yet didn't open the magic door to anything like human capability. We kept using them in programs, but we removed them from the category "AI" forever. To paraphrase Omar Khayyam, the moving yardstick measured, moved on, and no piety nor wit shall lure it back to cancel half a line, nor tears wash out an inch of it.
All computer procedures are things that a person (or several people) could do “by hand.” You could perform all the steps of even the most sophisticated computer algorithm; you'd just be woefully slow, frustrated by the scale of the task, and likely to make mistakes at any given step. Yet, at some point, even simple computer procedures were thought to be a potential key to "real" AI, just because computers can carry them out at massive scale and speed, with unfailing accuracy. There has always been an assumption that, at some level, our brains must be running rote procedures at a pace and scale we just can't track, and that all we need to do is hit upon the right procedures to make computers think like people.
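To make that concrete, here is a minimal sketch (in Python, with made-up item values, weights and capacity) of branch-and-bound search, one of the once-AI techniques mentioned above, solving a tiny knapsack problem. Every line is a step a patient person could carry out with pencil and paper; the computer just does it faster and without slipping up.

```python
# A minimal branch-and-bound solver for a tiny 0/1 knapsack problem.
# Every step is something a person could do by hand: pick an item, try
# including it, try excluding it, and prune any branch whose optimistic
# estimate cannot beat the best solution found so far.
# Assumes items are listed in decreasing value-per-weight order, so the
# fractional estimate below is a valid upper bound.

def knapsack_branch_and_bound(values, weights, capacity):
    best = 0

    def upper_bound(i, value, remaining):
        # Optimistic estimate: pretend we may take fractions of the remaining items.
        for v, w in zip(values[i:], weights[i:]):
            if w <= remaining:
                value += v
                remaining -= w
            else:
                value += v * remaining / w
                break
        return value

    def explore(i, value, remaining):
        nonlocal best
        if value > best:
            best = value
        if i == len(values) or upper_bound(i, value, remaining) <= best:
            return  # prune: this branch cannot improve on the best so far
        if weights[i] <= remaining:
            explore(i + 1, value + values[i], remaining - weights[i])  # take item i
        explore(i + 1, value, remaining)  # skip item i

    explore(0, 0, capacity)
    return best

# Three items and a knapsack of capacity 5 -- small enough to check by hand.
print(knapsack_branch_and_bound([6, 10, 12], [1, 2, 3], 5))  # -> 22
```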
Even though "AI" today usually refers to some variation of a "neural network," in reality most of these algorithms have more in common with nested function approximation and intractable statistical inference than with brains. The realisation of their limitations is already underway, with deep learning cresting Gartner's hype cycle in 2017 and 2018, then vanishing from the chart in 2019. One can only assume those technologies have been subjected to the moving yardstick, and are now progressing somewhere far into Gartner's "plateau of productivity," where they become useful engineering tools rather than mysterious AI.
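To illustrate what I mean by nested function approximation, here is a small, purely illustrative sketch (the layer sizes, random weights and tanh nonlinearity are arbitrary choices, not anyone's published model): a two-layer "neural network" is just one affine-plus-squashing function nested inside another, a flexible curve that can be bent to fit data.

```python
import numpy as np

# A "neural network" written as what it literally is: nested function
# approximation. Two affine maps, each followed by (or composed with)
# a fixed squashing function.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=(8, 1))
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=(1, 1))

def layer(W, b, x):
    return np.tanh(W @ x + b)          # affine map, then a fixed squashing function

def network(x):
    return W2 @ layer(W1, b1, x) + b2  # one function nested inside another

# Nothing brain-like is happening here: with W1, b1, W2, b2 tuned to data,
# this is curve fitting, e.g. approximating sin(x) over an interval.
print(network(np.array([[0.5]])))
```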
If this is the case, why do I agree with Thrun that the pursuit of AI is like a branch of philosophy? The title of the final chapter in my book Rage is "The Hole and Not The Doughnut," because I believe that every time a computer procedure fails to open that magic door to AI, it teaches us something specific and technical about how we are not machines. The negative space left by this effort is what defines humanity.
Read more about what I think that negative space looks like in Rage Inside the Machine.