This is the second in a series of comments on historical AI quotes mentioned in a recent article in Forbes by Rob Toews.
Today’s quote is the following:
“The ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
The article attributes this quote to John von Neumann in 1958, but von Neumann died in 1957; the line was actually written by Stanisław Ulam, paraphrasing von Neumann, in a posthumous tribute. Regardless, it's the first quote to use the word "singularity" in the sense that has since become associated with AI.
If you check the Wikipedia entry for “Technological Singularity,” you’ll find that the term has come to mean a moment in the future where computers become “superintelligent,” uncontrollable, and permanently alter human civilization.
A related use of the term is as a time when humans can "upload" their intelligence into computers and essentially become immortal machines. For a great sci-fi novel on that possibility, see Neal Stephenson's Fall; or, Dodge in Hell (which a friend of mine described as a fictional sequel to my book Rage). I suspect Stephenson and I harbour similar doubts about transferring people's minds to machines, but the book explores the upshot of this actually happening, including some fascinating speculation on the day-to-day impact on non-digitized humanity. Read the book, but suffice it to say that the social, legal, and resource implications of undead people in a box for living people outside that box are massive.
However, the idea of real "disembodied" humanity relates directly back to the question of AI, because one must ask whether intelligence can be disembodied at all. As discussed in the previous quote, from Minsky, more than the brain is involved in real human intelligence. People literally think with their guts, as well as with their immune, hormonal, and peripheral nervous systems, and perhaps with even more embodied systems that continuously interact with one another. What would it mean to "disembody" that intelligence?
One might reply that we could simply simulate all of that gooey human body stuff in a computer as well. But this overlooks perhaps the most important lesson of the revolution in Complexity Science: even a tiny error in simulating a complex system eventually makes the simulation diverge completely from reality. Moreover, one of the defining characteristics of complex systems is that they cannot be broken into components whose superposition reproduces the behaviour of the original, undecomposed system.
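That divergence claim is easy to demonstrate in miniature. The sketch below (my own illustration, not from the article) runs two copies of the logistic map, a textbook chaotic system, whose starting points differ by one part in a billion; after a few dozen steps the "simulation" bears no relation to the "reality" it was tracking.

```python
def logistic_step(x, r=4.0):
    """One step of the logistic map x -> r * x * (1 - x), chaotic at r = 4."""
    return r * x * (1 - x)

def simulate(x0, steps, r=4.0):
    """Return the trajectory [x0, x1, ..., x_steps] of the logistic map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1], r))
    return xs

reality = simulate(0.2, 60)
model = simulate(0.2 + 1e-9, 60)  # same system, with a tiny initial error

# Early on the two trajectories agree to many decimal places...
print(abs(reality[5] - model[5]))   # still minuscule
# ...but the error roughly doubles each step, so later they are unrelated.
print(abs(reality[50] - model[50]))
```

A human body has vastly more interacting variables than this one-line map, so if anything the toy example understates the problem for whole-person simulation.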
Thus, I don’t think we are going to disembody human intelligence into computers in any way that “preserves” the original person possessing that intelligence anytime soon, if ever.
"Intelligence," "sentience," "consciousness," and even "soul" share the idea that there is an aspect of humanity which can be separated from the human being. In the case of artificial intelligence, it's also implicitly assumed that there is some objective, disembodied ideal or superior intelligence for dealing with real-world decision making. Certainly, there are well-framed problems that can be best and most quickly solved via computation. However, the framing of real-world problems that people care about is not, in general, such a task. This is why human decision making requires embodiment in a person, as well as in a society. Superintelligence is a rationalist myth, and therefore, so is the singularity.
Read more about this in my book, Rage Inside The Machine.