AI: Don't believe the hype (new whitepaper) by Robert Smith

Investment managers Natixis have just posted a new online whitepaper, entitled AI: Don’t Believe the Hype, based on an interview with me. It’s part of a broader initiative from Natixis called Thinking Inside the Box, which promises “Lifting the lid on the world of algorithms, machine learning, and AI.” I hope this material has real impact on AI investing, and that some of you enjoy reading it as well.

Can Network Science teach us about the media's agenda? by Robert Smith

I’m excited to announce a new paper by my colleagues Sam Stern and Giacomo Livan (they’ve been kind enough to include me as a co-author as well, for some ideas I contributed early in the paper’s development). It’s entitled A network perspective on intermedia agenda-setting and it’s in this month’s issue of Applied Network Science. You can read the whole paper online by clicking through.

The paper looks at using network science to study the influence of media sources on one another. Its findings are intriguing, and I think its methodology could be an important part of something that I’m trying to point towards in the latter chapters of Rage: that we can have a real science of how a healthy, diverse ecosystem of information distribution can be created.

I believe that by using science like that in this paper to shape the algorithms that influence the information we receive, so that they promote diversity and mixing in what we receive and share, we can help make the world a better place for all of us. I think this is how we start to fulfil Rage’s subtitle and stop the Internet making bigots of us all.

Regardless of that big agenda, I think the paper is a fine contribution to the scientific literature, and I congratulate my fellow authors.

AI Quote Comments: Hofstadter and Physics Envy by Robert Smith

Today’s comment on an AI quote (from a Forbes article by Rob Toews)

“How do you know that when I speak to you, what you call ‘thinking’ is going on inside me? The Turing test is a fantastic probe — something like a particle accelerator in physics. Just as in physics, when you want to understand what is going on at an atomic or subatomic level, since you can’t see it directly, you scatter accelerated particles off the target in question and observe their behavior. From this you infer the internal nature of the target. The Turing test extends this idea to the mind. It treats the mind as a ‘target’ that is not directly visible but whose structure can be deduced more abstractly. By ‘scattering’ questions off a target mind, you learn about its internal workings, just as in physics.”

The quote comes from Douglas Hofstadter’s Metamagical Themas: Questing for the Essence of Mind and Pattern, published in 1985. It’s a great book, but not nearly as well known as Hofstadter’s Gödel, Escher, Bach: an Eternal Golden Braid, published in 1979, which won the National Book Award for Science and the Pulitzer Prize. GEB (as the book is known) is a real stunner that reads like a 777-page mind game. The book’s central spine is an elaborate proof, understandable by anyone, of an obscure but important mathematical theorem. Gödel’s incompleteness theorem, published in 1931, proves that any consistent formal system powerful enough to express arithmetic is incomplete. It does so in the most amazing way, by showing that in any such system, you can construct self-referential “sentences” that can neither be proved nor disproved within that system. Something that is the mathematical equivalent of “This sentence is a lie.” Thus, every such formal system has an intrinsic flaw, an inability to completely determine the truth or falsehood of statements within itself.

The theorem is closely related to one of Turing’s other great accomplishments: the proof that there’s a limitation to what all computers can do. Turing proved this in a manner similar to Gödel, by constructing what is now called the Halting Problem. Stated simply, it is the problem of determining whether a particular piece of computer code will eventually come to a stop, or run on forever. By constructing programs that refer to their own halting behaviour, Turing proved that no computer can determine, for every possible program, whether it will halt or not. Thus, there are always things computers can’t do. Gödel proved that formal mathematical systems are limited, and Turing proved that machines are limited. Hofstadter wrote an incredibly cool book about these formal system limitations.
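For readers who like to see the trick laid out, here is a minimal sketch of the flavour of Turing’s argument (my illustration, not his formal construction). The function halts below is hypothetical; the whole point is that no such function can actually be written:

```python
# A sketch of the diagonal argument behind the Halting Problem.
# Assume, for contradiction, that a perfect halting checker exists.

def halts(program, argument) -> bool:
    """Hypothetical oracle: True if program(argument) eventually stops."""
    raise NotImplementedError("no such oracle can exist, as the argument below shows")

def contrary(program):
    # Ask the oracle about the program run on its own source,
    # then do the opposite of whatever it predicts.
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    else:
        return           # predicted to loop forever, so halt immediately

# Now ask: does contrary(contrary) halt?
# If halts(contrary, contrary) says True, contrary loops forever: the oracle was wrong.
# If it says False, contrary halts immediately: the oracle was wrong again.
# Either way the assumed oracle fails, so no universal halting checker can exist.
```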

GEB was one of the first books about AI that I ever read. I think it’s a masterpiece, and I believe that Hofstadter is a genius. So it’s surprising to me that I think this quote from him is so far off base and falls into the trap of physics envy.

Since the development of basic mechanics and calculus in the late 17th century, physics (and the mathematics associated with it) has been the most fundamental driver of technological development. We owe every neat gadget to math and physics. Given its great success, it’s unsurprising that ever since, scientists in other fields have been envious of physics’ precision in describing the world, and its productivity in changing it. This is the reason that economics, the most political of all social sciences, drifted away from being a historical study of the real world, towards the pursuit of mathematical models of ideal worlds, worlds very like those of physics. Eric Beinhocker’s The Origin of Wealth does a fabulous job of recounting this turning point in history. And the phenomenon isn’t limited to economics. All social scientists would love to have models of people, from their societies to their minds, that are as predictable as the billiard-ball models of Newtonian physics. Even modern physics, with its quanta, its probability distributions over superposed states, and its paradoxes, has found its way into models of people.

Why? Is there a scientific reason to think that thoughts are like unthinking physical particles (or even their interpretations as waves)? The brain is physical, just like anything else; as far as any of us know, it contains no supernatural magic. However, thoughts, their communication to others, and their interactions between people are complex epiphenomena, with boundless innovations and continual changes in their primary ways of interacting with and affecting people. Sure, neurons are physics, but they are only the base level of an intrinsically intractable complex system within each of us.

Complexity science teaches us that while we can know how atoms interact, once we put them into the large groups that make up meaningfully-sized objects in the world, complete unpredictability becomes ubiquitous. There is no reason to think that the relationship between neurons and thoughts is any different.

So, asking questions in a Turing Test is nothing like scattering accelerated particles off a target in a physics experiment. And the metaphor that these things are similar only entrenches a fundamental misunderstanding of the nature of human thought. We aren’t billiard balls or probability clouds, and the methods used to study those things aren’t the right methods for studying what we are.

AI Quote Comments: Turing (or, don't make Gods of Heroes) by Robert Smith

The AI quote (from an article in Forbes by Rob Toews) that I’m commenting on today is from one of the world’s great heroes:

“It is customary to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine. I cannot offer any such comfort, for I believe that no such bounds can be set.”

It is almost inconceivable how important Alan Turing’s life is to everyone’s lives today. It would have been enough for him to have been a great mathematician who contributed the theory of Universal Computation, in essence the proof that all computers have equal capabilities, paving the way for all manner of computational machines to be applied to all manner of technical challenges. It would have been enough for him to have built one of the first working electronic computers. It would have been enough that he had ideas about AI techniques that were so far ahead of their time (including ideas about evolutionary computation and artificial neural networks). It would have been enough that “The Turing Test,” discussed in a previous comment, has become an essential part of the zeitgeist.

But above all of this, Turing played a vital part in saving the world from Nazism. One can argue that his contribution to breaking the Enigma code is sometimes overstated, in ways that neglect not only the many other smart people who worked tirelessly at Bletchley Park, but also the field soldiers and members of the underground resistance who were sacrificed to obtain intelligence, and even Enigma machines, to make his efforts possible. But one cannot argue with the fact that the breaking of Enigma, an accomplishment in which Turing was vital, brought an earlier end to WWII, saving countless lives and smothering the Nazi threat nearly to death.

What a hero. But I write this comment to caution against lauding heroes to the point of treating all their words as some sort of Gospel.

Turing was a mathematician. There is no evidence to say he had any deep understanding of human psychology or even the biological details of the human brain. To the contrary, he was a highly spectrum-ish individual who had an extremely difficult time understanding other people and communicating with them in a socially competent manner.

So when we read his claim that no “peculiarly human characteristic” is beyond the imitation of machines, we really should take that with a grain of salt.

Turing was a great man, whose legend is made all the greater by the way society persecuted him to death for nothing more than his homosexuality, after he had saved the entire free world. But that doesn’t mean everything he said about thinking and humanity has any more validity than what anyone else might say. Statements like the one quoted here aren’t science; they are just casual opinion, from someone whose opinion on this subject carries no more merit than anyone else’s.

AI Quote Comments: Peace, Love, and Understanding in The Chinese Room by Robert Smith

(Author’s note: I know I promised these comments once a day, but things got away from me. Not to worry: I intend to complete all 12 ASAP).

Today’s quote about AI, taken from a recent Forbes article by Rob Toews, is the following:

“In the literal sense, the programmed computer understands what the car or the adding machine understand: namely, exactly nothing.”

It comes from the eminent philosopher of mind John Searle and first appeared in his seminal 1980 paper on AI, Minds, Brains, And Programs. This is the paper that first presents Searle’s brilliant Chinese Room thought experiment.

The Chinese Room builds on The Turing Test, which Alan Turing actually called The Imitation Game in his equally seminal 1950 paper Computing Machinery and Intelligence. There is, of course, a great biopic of Turing called The Imitation Game, and the “Test” has morphed its way all over popular culture, including the version that opens the classic sci-fi flick Blade Runner (in which it is called the Voight-Kampff Test).

However, the original version of the test is a bit vague in Turing’s paper. He starts off describing a game where two people are separated by a wall with a slot in it, through which they can pass written messages. One of the people is trying to convince the person on the other side of the wall that they are female. After establishing the ability of a man to convince someone that he is a woman, Turing poses the thought experiment of substituting a computer for the man. He thus had a baseline (a man imitating a woman) against which to compare a computer. As the paper evolves, Turing’s speculation implicitly expands to a computer trying to convince a person that it, too, is a person, which is the way most people think of The Turing Test today. Turing posited that if the computer fools someone into thinking it is a person, then it has achieved true AI.

It is interesting that Turing started with the idea of the computer compared to a person on a particular task, but the popular conception has undoubtedly expanded to the broader sense that Turing implied in his paper. That more general sense is what we now call The Turing Test. Today, it’s easier for us to think not of a wall with a slot in it, but a laptop onto which messages are typed, and answers that appear on the laptop’s screen.

Searle’s argument against the possibility of “strong AI” is a linguistic extension of the Turing Test. Let’s say that the laptop has a Chinese keyboard, and answers with Chinese characters on its screen. Now recall that in an earlier comment, I noted that a human being could do any computer program manually, just by stepping through the instructions of that program. It would be laborious, and the human being would be liable to error, but there is no program for which it is impossible. Let’s say that the laptop, rather than running the program that passed the Turing Test locally, sends its messages to another, remote laptop screen. A person reads the characters from the screen, matches them to the inputs of the computer program that passed the Turing Test, then runs the steps of that program, starting with those inputs, manually. Once the outputs are derived, that person types them back into the remote laptop, and they appear on the screen of the original laptop.

But what if that person doesn’t understand Chinese at all? Since the person is merely viewing the Chinese characters as scribbles to match against lookup tables in the computer program, the system still passes the test. But then Searle asks: does this mean that the person, supplemented by the (manually executed) computer program, understands Chinese? I think everyone will accept that it doesn’t; the person doesn’t understand anything about the messages passed through the slot. It’s just scribbles in and scribbles out.
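To make the “scribbles in, scribbles out” point concrete, here is a toy sketch (my own illustration, not Searle’s, and vastly simpler than any program that could actually pass the test): a rule-follower that answers Chinese questions from a made-up lookup table without knowing what any of the symbols mean.

```python
# A toy "Chinese Room": the operator (or the program) matches incoming
# scribbles against a rule book and emits the listed reply, with no
# understanding of either. The rule book here is a tiny, invented example.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气如何？": "今天天气很好。",    # "How's the weather?" -> "The weather is lovely."
}

def chinese_room(message: str) -> str:
    # Pure symbol matching: scribbles in, scribbles out.
    return RULE_BOOK.get(message, "对不起，我不明白。")  # default: "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```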

But if that’s the case, how can we say that a computer that ran the same program is “intelligent,” since it doesn’t “understand” the very language in which it was examined any more than the person did?

You’ll recall that in another, earlier comment I mentioned the interesting etymology and meaning of “understand.”

There’s a great song from the late 1970s (written by Nick Lowe, and popularized by Elvis Costello and the Attractions) called What’s So Funny ‘Bout Peace, Love, and Understanding. Think about why “understanding” appears next to “peace” and “love.” It’s because the sense of “understanding” here is from the second dictionary definition:

a positive relationship between two people or groups in which they feel sympathy for each other

Clearly, that puts the word in the realm of deeply human concepts like “peace” (in the sense of inner peace) and “love.”

The first dictionary definition of “understanding” is:

knowledge about a subject, situation, etc. or about how something works

What I’m arguing is that real knowledge of complex things is as profoundly human as concepts like “peace,” “love,” and the second definition of “understanding.” That’s the reason Searle’s Chinese Room is so puzzling. We all implicitly know that understanding a language is more than a lookup table. It’s a set of complex idioms and contextualization that are intrinsically human. That intrinsic tie to humanity is why Searle is right: AI doesn’t “understand” anything.

That’s not to say AI isn’t useful. It’s just not us.

For more on this perspective, please have a look at my book, Rage.

Podcast of my UNSW talk by Robert Smith

While on my Australian book tour last month, I was lucky enough to give a talk at the lovely campus of the University of New South Wales, sponsored by the UNSW Centre for Ideas. First I gave a little lecture on the themes of Rage, then I had an onstage conversation with Katharine Kemp, who did a great job.

I’m happy to say that this event is now available as a podcast, which you can listen to here.

Comments on AI Quotes: Of Sci-Fi Stories and Sociopaths... by Robert Smith

Today’s contribution to my comments on AI quotes from a recent Forbes article by Rob Toews is on the following gem:

“The human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control.”

Those words were penned in 1995 by a man who in his youth was a mathematics prodigy, and who earned a doctorate in that subject at The University of Michigan in 1967. In the same year, aged 25, he became the youngest assistant professor of mathematics ever hired at The University of California, Berkeley.

Two years later he resigned that post, eventually settling in Montana to live a simple, off-grid lifestyle while studying and writing about sociology. His writings eventually included Industrial Society and Its Future, from which this quote is drawn. In an effort to see that essay published in a major American newspaper, the author conducted a campaign of terrorist bombings, mainly by sending explosive devices to university professors across the USA. He also planted a bomb on an American Airlines flight and sent a bomb that injured the president of United Airlines. In total, he killed three people and injured 23 others, many seriously. The quote is of course from Ted Kaczynski, whom the FBI labelled the University and Airline Bomber, or the Unabomber for short.

Like science fiction stories, psychopaths are often communicating in metaphors, whether they realise it or not.

In the previous comment, I mentioned Day of the Triffids, a story about the planet coming to be dominated by plants, which was written in 1951 when Kaczynski was 9. I have no idea whether the Unabomber ever read this book, or saw the popular 1962 film. Still, Triffids fits into an entire genre of 50s and 60s stories about mindless creatures that take over the world, including Invasion of the Body Snatchers, The Blob, and many more. One can certainly be sure that Kaczynski grew up in an America that was steeped in these paranoid visions.

Recall that electronic computing barely existed outside of laboratories until the mid-50s, and it probably didn’t enter the national consciousness until the 60s, when International Business Machines transformed itself into a computer company. When that happened, America’s vision of the mindless entities it should fear shifted from outer-space plants to electronic AIs. This is reflected in 1968’s 2001: A Space Odyssey (developed simultaneously by author Arthur C. Clarke and filmmaker Stanley Kubrick), where the computer HAL turns murderous due to rigid adherence to programmed objectives. HAL was named by decrementing “IBM” by one letter in the alphabet.

HAL is only one in a series of mindless AIs that threatened humans and humankind in sci-fi stories at the time, from Clarke’s 1964 story Dial F for Frankenstein (about a telephone network which becomes sentient, a story which later inspired Tim Berners-Lee to invent the World-Wide Web), to 1966’s Colossus, about a defence computer that takes over the world, to Harlan Ellison’s 1967 I Have No Mouth, and I Must Scream, a genuinely terrifying tale where another war AI, the Allied Mastercomputer, destroys almost all of humanity, then gains its only ongoing pleasure from torturing those few people that remain.

These stories have been continuously re-invented for American films to this day. In fact, Ellison, who also penned two 1964 Outer Limits episodes about a soldier from the future, sued the producers of The Terminator successfully. You will now see an acknowledgement of his work at the end of that 1984 classic, which was added as a part of the settlement.

Some film critics have focused on American sci-fi paranoia flicks of this kind as metaphors for the communist threat. Indeed Hollywood started making them during the red scare, which gripped the USA beginning with the Bolshevik Revolution of 1917 but intensified into 1950s McCarthyism.

I’d argue that all these stories, which started in the 50s and continue to today, reflect not just red-baiting, but a more general fear that arose in the mid-20th century. The focus of our society on economic value, rather than human values, became more evident after WWII, when industrial progress was revived in a post-war boom and the rise of consumerist society. I’d posit that people everywhere, particularly in the USA, have ever since felt a gnawing fear that they are being swallowed up by a dehumanising and dehumanised system, as their lives have become more and more focused on work in a world dominated by fewer and fewer powerful corporate players.

Stories of inhumane invading overlords, whether they are botany from space, super-powerful computers, or the reds under the bed, all reflect this common fear. That’s why such movies are still so popular.

Kaczynski was paranoid, and I imagine that his focus on machines coming to dominate people was a reflection of how he felt dehumanised in his own life. Before going to pursue his doctorate at Michigan, he studied at Harvard. While there, he was part of a group of undergraduate volunteers who were subjected to psychological experiments conducted by Henry Murray (who later oversaw psychedelic drug experiments conducted by Timothy Leary). Murray’s experiments on Kaczynski and his peers involved exposure to extreme stress, in what “… Murray called ‘vehement, sweeping and personally abusive’ attacks. Specifically-tailored assaults to their egos, cherished ideals and beliefs were used to cause high levels of stress and distress. The subjects then viewed recorded footage of their reactions to this verbal abuse repeatedly.” Conspiracy theorists have connected these experiments to the CIA’s Project MKUltra, a supposed effort to develop methods of mind control.

Regardless, these experiments were undoubtedly dehumanising, and have been subsequently denounced as inhumane. Kaczynski’s lawyers attributed some of his paranoid delusions to the aftermath of this experience. The Unabomber transferred some of his fear of dehumanisation into a metaphor: the fear of machines making decisions for people.

I don’t think he was wrong in fearing that, but I do believe he was wrong in thinking that in a complex world of the future, machines would eventually make decisions better than people. I’d say that the radical uncertainties created by the world’s complexity are precisely the reason that mechanised thinking is inadequate in that complex world, which doesn’t exist in the future, but is already with us today.

For more on that perspective, have a look at my book, Rage Inside the Machine.

Comments on AI Quotes: Should we envy flowers (and AIs)? by Robert Smith

Yet another in a series of comments on quotes about AI that appeared in a recent Forbes article by Rob Toews.

Today I’m looking over:

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.”

While I think I’ve understood all the quotes in Mr Toews’ article thus far, this one, by computer pioneer Alan Kay (whose accomplishments from the 60s to today are too numerous to mention here), has me stumped. So I’m going to have to go through it rather carefully.

I believe that Dr Kay is saying that there is something in common between a flower and AI, and that whatever that something is, it should make anyone in his (or I assume her) right mind feel inferior to both the flower and the AI. I’m trying to think of what that intimidating commonality might be. All I can come up with is dogged determination towards pre-programmed goals. If that’s what Dr Kay is alluding to, I have all sorts of questions.

Flowers are certainly admirably single-minded (if they can be said to be minded at all, which I think they can’t). A flower (and I’m speaking not just of the bloom but of the whole plant) is a beautiful machine which pumps nutrient-bearing water from the ground, while using sun and photosynthesis to turn carbon dioxide into the mass of the plant’s pumping machinery. The blooms of many flowers are programmed to be colourful, shapely, and fragrant, sometimes to trick insects into attempting to mate with them. This co-evolved deception aids the plant’s reproduction, and thus its evolution. Plants can do all sorts of other amazing things to continue to thrive and evolve while coping with change, like abandoning their reproductive strategies to become hermaphrodites and fertilise themselves.

I love flowers, not just because of their wondrous machinery, but for their beauty. I imagine a part of that appreciation may be due to some evolutionary advantage for human beings feeling joy in some colours and scents. However, I think attributing human aesthetic appreciation merely to survival advantage undermines the importance of human intellectual and cultural evolution. And that evolutionary system is, in fact, far more complex than biological evolution.

I am not envious of flowers, because they do not think. As I noted, flowers have powerful adaptive strategies to deal with adversity, but those strategies are not thoughts. Flowers are indeed dogged in their pursuit of the single goal of survival and reproduction. But while they can inspire the ideas of lovers and poets, they do not participate in any evolution of thoughts, as all people do.

Human beings do far more than simply strive to survive. We cooperate and create. In that way, we have harnessed the power of emergence in complex systems to become something more than cogs in the machinery of some crude survival-of-the-fittest game. We evolve qualities, not just our quantity, not just quantitative evaluations of our “fitness.” As I said in the previous comment, qualities are real, first-class objects in the world. They are thoughts that we, as human beings, create and evolve. This is not a Hallmark card; it is science; it is a real aspect of the physical universe, as much as DNA, black holes, or entropy.

I believe Dr Kay is correct in his comparison of flowers to AI because some instances of the latter are becoming intractable, complex entities, as single “minded” in their pursuits as flowers are in surviving. However, AIs also do not participate as full-fledged, active partners in the ongoing evolution of thoughts. They can inspire human thought evolution (as they did in the fighter aircraft manoeuvre work I describe in my book Rage). But their mechanical single-mindedness is the very reason they cannot play a fully active role in human societies.

Overlooking this distinction, beginning to envy machines for their fast, accurate, and relentless pursuit of simple-minded objectives, and entrusting those machines with vital human decisions, is dangerous, and invites a dystopia worthy of a sci-fi film. In fact, this plant/machine comparison is probably the reason Skynet isn’t the only kind of overlord used in the dystopian metaphors of sci-fi. Remember Day of the Triffids?

Just to be clear, I am in no way dissing flowers. As I write this, I am looking out on my garden admiring new blossoms of clematis, but while I am thankful for them, I do not envy them. Instead, I’d say that flowers (and AIs) should envy us. Or they would, if they could only think.

Comments on AI Quotes: Can our brains understand our brains? by Robert Smith

One more in my series of comments on AI quotes from a recent Forbes article by Rob Toews.

Today's quote is:

"If the human brain were so simple that we could understand it, we would be so simple that we couldn't."

This one has fascinating origins. It first appeared as an epigraph of a chapter in The Biological Origin Of Human Values, a 1977 book by George Edgin Pugh, attributed to the author's father, Emerson Pugh, who worked for decades at IBM and developed important computer memory technology.

I have not read George Edgin's book, but he has an interesting background relative to its titular topic. The younger Mr Pugh worked in modelling real-world phenomena with computers, in what mostly seem to be US government projects, including distributions of radioactive fallout and the efficiency of anti-segregation bussing. Several critical essays indicate that the book reflects a computer modeller's perspective on the complexity of the brain, evolution, and human values.

However, I think the interesting word in his father's quote is understand. The etymology of this simple English word is genuinely fascinating. Of course, it means "to stand under." However, the sense of under here isn't in the usual sense of beneath. Instead, it comes from either the Sanskrit antar for "among" or "between," or the Latin inter, which has similar connotations, or from the Greek entera for "intestines." Understand uses under in the sense of "under such circumstances." So, can our thinking "stand amongst and between" a conception of the brain? Can we get into its guts, as it were?

I think what this indicates is the difference between understanding in the sense of an algorithmic computer model, etched and executed in computer memory, and the sort of holistic understanding that is a hallmark of what human thinking really does. I say human thinking, rather than the human brain, because it is an error to see the brain as the seat of all human thinking, as has been discussed previously in this series of comments.

Quantitative understanding, as in the sense of computers, generally involves creating a model that behaves in a manner that is sufficiently similar to the thing being modelled that one can draw useful conclusions. Qualitative understanding is to stand amongst the nature of a thing. For truly complex things like human thinking, this may be the best we can hope for. But I think that's not a bad thing.

One of the most important messages of modern complexity science is that systems can generate emergent properties that cannot be described through a reduction of those systems. When we talk about qualitative understanding, or in fact qualities themselves, I believe what we are talking about is thoughts. And, if thoughts can't be reduced to quantitative models (quantitative understanding), then qualities are first-class objects that exist in the world.

For these reasons, I believe that qualitative understanding is in no way less than or "beneath" quantitative understanding. Thus even if our thinking is simpler than the brain (or, in fact, the whole human thinking system), that does not mean we can't stand amongst it, and understand.

Comments on AI Quotes: Minsky says brains are just machines by Robert Smith

Yet another in my series of comments on quotes about AI that recently appeared in a Forbes article by Rob Toews. Today’s quote is another one from Marvin Minsky:

“The hardest problems we have to face do not come from philosophical questions about whether brains are machines or not. There is not the slightest reason to doubt that brains are anything other than machines with enormous numbers of parts that work in perfect accord with physical laws. As far as anyone can tell, our minds are merely complex processes. The serious problems come from our having had so little experience with machines of such complexity that we are not yet prepared to think effectively about them.”

I was once berated from the podium by Minsky, but it wasn’t about this nonsense. Wish it had been. Dr Minsky took issue with me about a comment I made after a meandering talk he gave at an AI conference, which included a lengthy exposition on how he felt Roseanne Barr (who was a popular star at the time) should be grateful for the abuse she suffered in her youth because it gave her great material for comedy. But I digress.

To say that a brain is a machine is to warp what the word “machine” means. Of course, brains obey the laws of physics. So do rocks, but we don’t call them machines. And we don’t call an avalanche of rocks a machine, either, even though it is a mechanical process with complex behaviours. Here’s what Webster’s says the word machine means:


ma·chine | \ mə-ˈshēn \

Definition of machine

 (Entry 1 of 2)

1a: a mechanically, electrically, or electronically operated device for performing a task: a machine for cleaning carpets

b: CONVEYANCE, VEHICLE; especially: AUTOMOBILE

c: a coin-operated device: a cigarette machine

d(1): an assemblage (see ASSEMBLAGE sense 1) of parts that transmit forces, motion, and energy one to another in a predetermined manner

(2): an instrument (such as a lever) designed to transmit or modify the application of power, force, or motion


To get to anything that would fit a brain, we’d have to go to definition d). The first part of that definition includes the word “predetermined,” and in the case of brains, one would have to ask, “predetermined by whom?!” The second part includes the word “designed,” and we’d have to ask the same question there.

So, let’s do away with the “brains are just machines” assumption: it only holds up as a reduction to the absurd.

Are minds complex processes? Absolutely. But it always amazes me that people have so overlooked what we already knew about complexity by 1986, when Minsky published The Society of Mind, from which this quote is drawn. One can argue that complexity science had started at least by the time of Poincaré at the turn of the 20th century, but the most groundbreaking observations of the field were well-established by the 1970s. The first organization dedicated to their study (The Santa Fe Institute, where I spent a couple of wonderful summers) was founded in 1984.

By then, people knew that there were very simple systems that yielded intractably complex behaviours. They knew that many complicated systems were very likely to have sophisticated emergent behaviours. So while Minsky was right that we hadn’t figured out complex systems like the brain in ‘86, we had most firmly established that we would never figure them out in the sense of the predetermined or designed behaviour of a machine.

Even today, most people haven’t realized that the defining characteristic of life is that it continuously generates behaviours that are complex, emergent, and self-sustaining in a profound way that defies both predetermination and entropy.

One doesn’t have to resort to religion or the supernatural to realize that our brains, and more importantly, our selves, are not just “machines.” One can do that with science, alone.

Read more on this perspective in my book, Rage Inside the Machine.



One-a-Day Comments on AI Quotes in Forbes: Thrun by Robert Smith

I'm doing a series of comments on historical AI quotes that appeared in a recent article in Forbes by Rob Toews.

Today's quote is

"Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It is really an attempt to understand human intelligence and human cognition."

and it comes from the brilliant AI researcher (and now successful businessman) Sebastian Thrun.

I couldn't agree with this quote more, but perhaps not in the way that you might expect.

As AI has developed historically, it's been subject to what some have called "the moving yardstick." Many technologies that were once a mainstay of AI are now "mere programming." It seems hard to believe that iterative noise filters, branch-and-bound search, and even object-oriented programming were once considered AI. Now, they are all just engineering tools, because they worked, yet didn't open the magic door to anything like human capability. We kept using them in programs, but we removed them from the category "AI" forever. To paraphrase Omar Khayyam, the moving yardstick measured, moved on, and no piety nor wit shall lure it back to cancel half a line, nor tears wash out an inch of it.

All computer procedures are things that a person (or several people) could do “by hand.” You could perform all the steps of even the most sophisticated computer algorithm. You'd just be woefully slow at it, frustrated by the scale of the task, and likely to be inaccurate at any given step. Yet, at some point, even simple computer procedures were thought to be a potential key to "real" AI, just because computers can do these tasks at massive scale, speed, and accuracy. There has always been the assumption that at some level, our brains must be running rote procedures at a pace and scale we just can't track, and all we need to do is hit upon the right procedures to make computers think like people.

Even though "AI" today usually refers to some variation of a "neural network," in reality most of these algorithms have more in common with nested function approximation and intractable statistical inference than with brains. The realisation of their limitations is already taking place, with deep learning coming over the top of Gartner's hype cycle in 2017 and 2018, and vanishing from the chart in 2019. One can only assume those technologies have been subjected to the moving yardstick, and are now progressing somewhere far into Gartner's "plateau of productivity," where they become useful engineering tools, rather than mysterious AI.
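To illustrate what I mean by "nested function approximation" (a sketch of my own, not any particular library's API; the names and shapes below are purely illustrative), here is a tiny two-layer "network" written as nothing more than one function composed inside another, with randomly chosen parameters standing in for the ones that training would fit:

```python
# A minimal sketch of a two-layer "neural network" as nested functions.
# Nothing brain-like is implied: it is just function composition with
# adjustable parameters.

import numpy as np

rng = np.random.default_rng(0)

# Parameters of two affine maps (the "layers"), chosen at random here;
# training would tune these to fit data.
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def layer1(x):
    return np.tanh(W1 @ x + b1)        # inner function: affine map plus squashing

def layer2(h):
    return W2 @ h + b2                 # outer function: another affine map

def network(x):
    return layer2(layer1(x))           # the "network" is just nested functions

print(network(np.array([0.5, -1.0])))  # one forward pass on a made-up input
```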

If this is the case, why do I agree with Thrun that the pursuit of AI is almost a humanities discipline? The title of the final chapter in my book Rage is "The Hole and Not The Doughnut," because I believe every time a computer procedure fails to open that magic door to AI, it teaches us something specific and technical about how we are not machines. The negative space left by this effort is what defines humanity.

Read more about what I think that negative space looks like in Rage Inside the Machine.

One-a-Day Comments on AI Quotes in Forbes: The AI "Engine" Metaphor by Robert Smith

This is one in a series of commentaries on quotes about AI offered in a recent article in Forbes by Rob Toews. Today’s quote is a recent one:

“For more than 250 years the fundamental drivers of economic growth have been technological innovations. The most important of these are what economists call general-purpose technologies — a category that includes the steam engine, electricity, and the internal combustion engine. The most important general-purpose technology of our era is artificial intelligence, particularly machine learning.”

which appears in a 2018 article in The Harvard Business Review entitled The Business of Artificial Intelligence: what it can - and cannot - do for your organization. The article is by Erik Brynjolfsson and Andrew McAfee, who are co-directors of the MIT Initiative on the Digital Economy, at the Sloan School of Management.

If one looks up the Wikipedia entry on “general purpose technologies,” one finds that economists use this term to describe “technologies that can affect an entire economy (usually at a national or global level).” As an engineer, I find the term a bit jarring. Affecting national or global economies is undoubtedly a measure of a technology’s success, rather than its generality. If one were to examine the history of technologies, they are almost always invented for a specific purpose. Then, if they are useful, people find a myriad of other uses for them.

Take, for instance, the first technology listed in the quote: the steam engine. While there are many ancient and historical uses of steam to turn this-and-that, the industrial origin of the steam engine is clearly the atmospheric engine, which arose from a very particular set of circumstances and a single purpose. In England, coal mines tended to fill up with water, necessitating pumping said water out. Having coal and water at hand, it made sense to burn coal to heat the water, use the resulting steam to raise a piston, then evacuate the steam into the atmosphere, let the piston fall, and use the rising-and-falling action to pump more water out of the mine. Channelling the excess water into canals led to the idea of floating the coal on the canals, with horses pulling it along to market. The final step was, of course, to build a closed-circuit system like the atmospheric engine that you could put on the boat to haul the coal, and presto, the steam engine was born. And indeed, it did find general purpose.

But what of electricity and the internal combustion engine? Once again, as an engineer, I find these a jarring combination with the steam engine. Surely the proper set is the steam engine, the internal combustion engine (which was an evolution of steam engine concepts), and the electric motor. Each of these devices is a means of converting the potential energy in a fluid power source into mechanical motion. And surely any device that does that can become a general-purpose technology, in both the common and economic senses of that phrase.

Are AI and machine learning similar sorts of things? Are we now constructing general-purpose technologies of this sort, that convert some sort of “fluid” data into “intelligence” or “learning(s)”? This brings to mind the metaphor “data as the new oil,” which you can find discussed in Wired magazine, then later refuted in Forbes.

Computing has undoubtedly been a general-purpose technology. And Babbage did call his design for the first general-purpose computer The Analytical Engine. But it is a fallacy to think of intelligence as mere computing, unless we simply decide to make those two words synonyms. If we take a more considered approach to the nature of human intelligence, it only reduces to computation via a reduction to the absurd, in which we say that since all physical processes can be simulated on a computer, they are a computer.

Think about it: this is like saying that because we can simulate all the physical processes in a rock, we can make computer programs that are rocks. This is even more absurd when one considers the realities of Complexity, as was discussed in the previous comment on a quote about the “singularity” by Von Neumann.

Computing is certainly like an engine, taking in data, and converting it into other data. But the conversion to real intelligence occurs when people examine data from computational engines.

Is AI a general-purpose technology? Is AI even a single thing? Is machine learning a general-purpose technology? Or is it also better described as an evolving bag of computational tricks?

I think the answer is that pseudointelligence (which I consider to be a much more useful term than AI) is a set of ever-evolving, beneficial technologies, but that we must realize they are not the engines of creation that human beings are, for very fundamental reasons.

Read more about those reasons in my book, Rage Inside the Machine.

One-a-Day Comments on AI Quotes in Forbes: Von Neumann by Robert Smith

This is the second in a series of comments on historical AI quotes mentioned in a recent article in Forbes by Rob Toews.

Today’s quote is the following:

“The ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

The article lists this quote as being from John von Neumann in 1958, but he died in 1957; Stanislaw Ulam wrote the quote as a paraphrase of von Neumann in a posthumous tribute. Regardless, it’s the first quote that uses the word singularity in the particular way that has come to be related to AI.

If you check the Wikipedia entry for “Technological Singularity,” you’ll find that the term has come to mean a moment in the future where computers become “superintelligent,” uncontrollable, and permanently alter human civilization.

A related use of the term is as a time when humans can “upload” their intelligence into computers, and essentially become immortal machines. For a great sci-fi take on that possibility, see Neal Stephenson’s Fall; or, Dodge in Hell (which a friend of mine described as a fictional sequel to my book Rage). I suspect Stephenson and I harbour similar doubts about transferring people’s minds to machines, but the book explores the upshot of this actually happening, which includes some fascinating speculation on the day-to-day impact on non-digitized humanity. Read the book, but suffice it to say that the social, legal, and resource implications of undead people in a box for living people outside that box are massive.

However, the idea of real “disembodied” humanity relates directly back to the question of AI, because one must ask whether intelligence can be disembodied. As discussed in the previous comment on the quote from Minsky, more than the brain is involved in real human intelligence. People literally think with their guts, as well as their immune, hormonal, and peripheral nervous systems, and perhaps even more complex, embodied systems that continuously interact with one another. What would it mean to “disembody” that intelligence?

One could say that we could simply simulate all of that gooey human body stuff in a computer as well. But this overlooks perhaps the most important lesson of the revolution in Complexity Science: any error in simulating a complex system means the simulation eventually diverges completely from reality. Moreover, one of the main characteristics of Complex Systems is that they cannot be broken into components whose superposition yields the same behaviour as the original, undecomposed system.
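To give a feel for that divergence (a toy of my own, not a model of anything biological), here is the logistic map, a one-line "complex system": a simulation that starts with an error of just one part in a million soon bears no relation to the trajectory it was meant to track.

```python
# A toy illustration of how a tiny simulation error is amplified in a
# chaotic system until the simulation and "reality" are uncorrelated.

def logistic(x, r=4.0):
    # The logistic map in its chaotic regime.
    return r * x * (1.0 - x)

reality, simulation = 0.600000, 0.600001   # initial states differ by one part in a million
for step in range(1, 61):
    reality, simulation = logistic(reality), logistic(simulation)
    if step % 10 == 0:
        print(f"step {step:2d}: reality={reality:.6f}  simulation={simulation:.6f}")
# After a few dozen iterations the two trajectories have nothing to do with each other.
```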

Thus, I don’t think we are going to disembody human intelligence into computers in any way that “preserves” the original person possessing that intelligence anytime soon, if ever.

“Intelligence,” “sentience,” “consciousness,” and even “soul,” share the idea that there is an aspect of humanity which can be separated from the human being. In the case of artificial intelligence, it’s also implicitly assumed that there is some objective, disembodied ideal or superior intelligence for dealing with real-world decision making. Certainly, there are well-framed problems that can be best and most quickly solved via computation. However, the framing of real-world problems that people care about is not such a task, in general. This is the reason that human decision making requires embodiment in a person, as well as a society. Superintelligence is a rationalist myth, and therefore, so is the singularity.

Read more about this in my book, Rage Inside The Machine.

One-a-Day Comments on AI Quotes in Forbes: Minsky by Robert Smith

In a recent article in Forbes, Rob Toews offered a series of historical quotes about "AI" (I prefer the term #PseudoIntelligence). As a conversation starter, I'd like to provide comments on each of those quotes, one-a-day, and hope that interesting threads emerge.

The first quote is by Marvin Minsky, who I met when he was alive and could tell some interesting stories about, but I'm saving those for my next book. In 1986, Minsky said:

"The question is not whether intelligent machines can have any emotions, but whether machines can be intelligent without any emotions."

Out of context, this is classic AI Hype framing, where the idea that machines can have emotions is assumed to be true without any intellectual, let alone scientific, examination of that question. In the original context, which is a section of Minsky's The Society of Mind entitled Emotion, I'm sorry to say that not a lot more intellectual or scientific consideration is given to this issue, either.

To find a more thorough consideration of this issue, a useful source is Antonio Damasio's wonderful and insightful book The Strange Order of Things: Life, Feeling, and the Making of Cultures, which I highly recommend. Amongst many other perspective-altering observations, Damasio draws a distinction between feelings and emotions. In this dichotomy, the former are activities of the "old brains" that exist in our bodies, which evolved as complex thinking entities long before the emergence of the brains in our heads. These systems are our guts, our immune and hormone systems, and our peripheral nervous systems. Most people don't consider the complexity of these systems, and the fact that they do a lot on their own, without reference to the central nervous system, while also interactively communicating with it.

Complex states of those "old brains" are feelings. Cognitive impressions of them in the head brain are emotions. Emotions, as Minsky rightly points out, are essential to human decision making ("intelligence") under real-world complexity.

However, one must ask, can a machine have emotions without feelings? And what are feelings if not a part of a biological gut, immune and hormone system, or peripheral nervous system, as well as its highly complex interactions with the brain?

Maybe a machine can have something like feelings? We'd have to check any assertion in that area carefully for wishful mnemonics. But for now, I think we can say that machines can't have feelings in any human sense. And therefore, they can't have anything like emotions, either.

For more discussion of this, please see my book, Rage Inside the Machine.

SciReports article on ideas in Rage by Robert Smith

I’m very pleased to announce that the work on social media dynamics mentioned in Rage is now in an article in the prestigious Nature journal Scientific Reports. The paper is entitled A Minimalistic Model of Bias, Polarization and Misinformation in Social Networks, and it shows how polarization is a natural dynamic of the social media through which we get much of our news today. And, of course, that dynamic can be manipulated by algorithms.

The paper’s authors are Orowa Sikder, myself, Pierpaolo Vivo, and Giacomo Livan. Giacomo penned a really good Twitter thread on the paper, the beginning of which I’m including below. It was a great team, and I’m really glad to have been a part of developing such interesting (and I hope impactful) work.

SHIFT+CTRL: The Implicit Prejudice of Algorithms and How to Debug Them: New Article by Robert Smith

Journalist Sarah Haque interviewed me and wrote a really clever and insightful piece that you can read here.

She’s a skilled interviewer, and she managed to capture a quote from me that I really like, one that I never really seemed to have gotten out before:

“When we talk about artificial intelligence, we act as if intelligence is some abstract quality that can be pulled out of an individual and be described separately. The reality is, I don’t think that can be done; it’s an integrated quality. That’s the grand reality of the century – that quality actually exists. Quality exists as a separate thing from quantity.”

Great work, Sarah.