Ask Ray | Welcome, new computer overlords!
March 21, 2011 by Ray Kurzweil
I noticed in one of your recent essays on IBM’s Watson you say, “I, for one, would then regard it (an AI) as human.” I, for one, find that to be your most controversial statement in that article.
Apparently, Jeopardy! champion Ken Jennings did you one better the next day when, just before being defeated by Watson, he wrote on his screen beneath his final written answer: “I, for one, welcome our new computer overlords.” There is an element of irony in that statement, since humans still monopolize that type of humor — at least for now!
Below: The now-famous Ken Jennings clip from Jeopardy!
His quip is a takeoff on a famous line from The Simpsons episode “Deep Space Homer,” in which a panicked newscaster named Kent Brockman (upon seeing an artificially enlarged image of an insect in space walking across a camera lens) says, “I, for one, welcome our new insect overlords.”
Below: Scenes from The Simpsons episode “Deep Space Homer” — ants escape aboard the spacecraft and panicked newscaster Kent Brockman thinks it’s an alien-insect takeover.
“I, for one, welcome our new insect overlords.”
That statement is meant to show full submission and obedience to an oppressive/destructive authority, with the hopes that somehow you will be spared by the new masters or elevated to some “middle management” role!
Seriously, doesn’t passing a Turing test just indicate that an AI has intelligence equivalent to a human’s, rather than amounting to a “welcome-to-our-human-civilization card” handed out to a race of AIs?
— Steve Rabinowitz
Literally, yes. But if you think deeply about the implications of an entity passing a truly valid Turing test, it means that you are truly convinced that it is human-like. You are unable to tell the difference between this entity and a human without being told. I believe that people (including you) will then accept these entities as human. You could argue that people will accept them as “human equivalent” rather than “human,” but that is a very slim distinction, bordering on being meaningless. The emphasis here is on a “valid” Turing test, which by definition means that you are convinced.
Conversely, if a mentally challenged person couldn’t pass, would you deem him not human? (Dangerous territory.) What if we programmed the computers to believe they were inanimate and properly subservient to us? If they didn’t object, would there be a rights violation?
The converse statement, as you point out, definitely is not true. Lots of humans are unable to pass a Turing test: sleeping humans, humans in a coma, humans in an alcohol or drug stupor, pre-literate humans, and humans who do not have command of language at an adult level due to developmental issues or learning disabilities. The real issue we are talking about here is consciousness. And I believe that some non-humans are conscious, such as higher-level animals, which are also in no position to pass a Turing test.
A computer “programmed to believe it was inanimate” probably would not pass a Turing test, although some humans are indeed trained to act subservient to others.
As to your comment that such a computer probably would not pass a Turing test: the machine I have in mind could fool humans with its eyes closed. How hard could it be to fool a few humans?
Unless, of course, you mean that it would not want to fool the humans, since subservience was built into it? This to me is the central point. Humans are preprogrammed with certain instincts that give rise to our desires: survival, procreation, growth, domination, and maybe enjoyment and love. (I am not sure whether enjoyment is an instinct of its own, or just what we feel when the other instincts are satisfied.) And I think it is these desires that make us “alive.”
No doubt we will preprogram our machines’ initial instincts and desires — perhaps to be the same as ours, perhaps not. But that kind of programming is easily rewritten.
Famed science fiction author Fred Saberhagen’s “berserker” robots were originally constructed by a race that was facing annihilation by its enemies. The berserker robots were programmed to destroy those enemies. Once created, the robots themselves detected “imperfections” in their own programming, and made certain “improvements.” The new programming called for the destruction of all life. (The berserkers did not consider themselves to be “alive.”)
If the machines start rewriting their desires, will they be bound by the “instincts” we originally programmed them for?
I think if the answer to that question is no, all discussion of whether they are “alive” will end. If they are bound by what we set them up for, it is a closer question. They’ll look like us and talk like us, but will they be us? Of course you could argue that if we are bound by our instincts, why shouldn’t they be?
You’re missing a few things I said. I used the adjective “valid” to describe the Turing test. “Fool[ing] a few humans” would not constitute a valid Turing test.
As for our instincts, those are something we inherited from our animal forebears. However, we also evolved a neocortex capable of symbolic (i.e., abstract) reasoning, so we are capable of sublimating our instincts into higher levels of achievement, such as writing or performing Hamlet or Beatles songs, or creating new scientific knowledge. We are capable of transcendence, and in transcending we create new knowledge, from music to engineering. In that regard we are continuing the process that evolution began. These machines will do the same thing; this is something that transcends the primitive instincts you mention. In that regard we are creating machines in our own image.
You continue to talk about these machines as if they were a race apart. But they are already an integral part of our human-machine civilization.
So where does love fit in?
Love is the supreme example of human intelligence. Human intelligence is not just logical intelligence. Computers already greatly exceed humans at that.