Ray Kurzweil Q&A with Darwin Magazine

December 3, 2001 by Ray Kurzweil

Machine consciousness is the subject of this dialog with Darwin Magazine.

Originally published December 3, 2001 at darwinmag.com. Published on KurzweilAI.net December 3, 2001.

Darwin: Will robots ever become conscious?

Kurzweil: We have a basic assumption that other people are conscious, regardless of whether their subjective experiences are the same as our own. People think that’s only something a philosopher would worry about, but it becomes a very practical question as soon as we step outside that shared assumption, because that’s where the consensus breaks down. Take animals, for example. There is no consensus on whether animals are conscious. Some people feel that at least higher-order animals are conscious; others say, “No, you can’t be conscious unless you have a mastery of human language; animals just operate by instinct, instinct is a machinelike automatic response, so these animals just operate like machines.”

If you go on to nonbiological entities, such as robots, the questions become more vexing. On the one hand, robots are more dissimilar to us than animals are, because they’re nonbiological. On the other hand, I’d predict that the robotic entities we meet 30 years from now will display more humanlike behavior than animals do, because they’ll be very explicitly patterned or modeled on human behavior. By that time, we will have completely reverse-engineered the human brain; we’ll understand how it works and be able to program robots accordingly.

I’m not saying all artificial intelligence (AI) will be humanlike. Some AIs will be created without human personalities because they’ll have specific jobs to do that won’t require human characteristics. But some will have human personalities, because we’ll want to interact with them in a humanlike way, through human language. And to understand human language, you need to understand human nature and common sense; you have to be pretty human to understand human language.

So some of these machines will be humanlike, more humanlike than animals. They will be copies of humans, they will make humanlike statements, and they’ll be very convincing. Ultimately, they’ll convince people that they are conscious.

Darwin: Won’t people have a hard time accepting the notion that a robot is conscious?

Kurzweil: I get e-mails all the time that say, “But Ray, the computer is just a machine, and even if it’s a complicated machine that surprises you sometimes, there’s really nobody home; it’s not aware of its existence.” And that seems like a pretty reasonable statement with regard to computers today, because today’s computers are still a million times simpler than the human brain, and they’re constructed very, very differently.

But that won’t be an accurate description of machines 30 years from now. They will have the complexity, organization and design of the human brain. They’ll claim to be conscious and to have feelings (to be happy, angry, sad, whatever), and they’ll display the subtle cues that we associate with those claims.

True, today you can make a virtual character in a game that claims to be happy. But the character is not very convincing. You might go along with the fantasy for a while, but you don’t really believe that this virtual character is having an emotional experience; the complexity and depth of its experience are not on a human level. Thirty years from now, that won’t be the case. Machines will have the subtlety, complexity, richness and depth of human behavior.

Darwin: How would we ever prove that a machine is–or isn’t–conscious?

Kurzweil: It’s not a scientifically resolvable question, in my view. You can’t build any sort of consciousness-detection machine that doesn’t have philosophical assumptions built into it. But my idea that machines will convince us they are conscious is not an ultimate philosophical or scientific statement; it’s more of a political prediction. These machines will be intelligent and persuasive enough that we’ll believe them. If we don’t, they’ll get mad at us, and we don’t want them to get mad at us. But that’s not philosophical proof that they’re conscious. Ultimately, consciousness is a first-person phenomenon. Beyond that, I’m just making an assumption. But that assumption is going to be tested as we create entities that are humanlike. It’s also going to be tested by another phenomenon, which is the redefinition of human intelligence itself.

Darwin: How is human intelligence going to change?

Kurzweil: Human and machine intelligence are going to become intertwined. There are quite a few human beings who already have computers in their brains, and it doesn’t upset us too much because those devices do very narrow things today, like the Parkinson’s implant that reduces tremors, or cochlear implants for deaf people. I envision a scenario where we’ll be able to send billions of nanobots, tiny robots the size of blood cells, into the human brain, where they can communicate wirelessly with our biological neurons. Rather than an implant located in one position, these nanobots could be highly distributed, communicating with the brain in millions of places, and thereby become part of the brain.

So when you deal with a human in 2035 or 2040, you’ll be dealing with an entity that has a very complicated biological brain, intimately integrated with nonbiological thinking processes that will be equally complex and ultimately more complex. It’s not going to be computers on the left side of the room and humans on the right; it’s going to be a very intimate integration. By 2040 or 2050, even biological people will be mostly nonbiological. That clearly raises the spiritual issue of what a person is.
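The architectural contrast he is drawing, one implant wired to a single site versus enormous numbers of distributed interfaces, can be sketched in a few lines of code. This is purely a toy illustration of the two topologies; the region names and node counts are hypothetical and scaled far down from the billions he describes:

```python
from dataclasses import dataclass

@dataclass
class NeuralInterface:
    region: str  # the brain region this node communicates with

# Today's model: a single implant, fixed at one location.
# (The region name here is illustrative.)
deep_brain_implant = [NeuralInterface("subthalamic nucleus")]

# The nanobot scenario: vast numbers of tiny nodes, each talking
# wirelessly to nearby neurons, spread across the whole brain.
regions = ["visual cortex", "auditory cortex", "hippocampus", "motor cortex"]
nanobot_swarm = [NeuralInterface(r) for r in regions for _ in range(1000)]

print(f"{len(deep_brain_implant)} point of contact vs. {len(nanobot_swarm):,}")
```

The design difference is the whole point: a fixed implant touches the brain in one place, while a distributed swarm touches it everywhere at once, which is why he describes the nanobots as becoming part of the brain rather than an attachment to it.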

Darwin: So you think nonbiological intelligence will dominate human intelligence?

Kurzweil: Yes; the crossover point is somewhere in the 2030s, certainly by 2040 or 2050. One reason nonbiological intelligence can ultimately be superior to human intelligence is that it will combine the powers of human intelligence with certain nonbiological advantages. Computers can already do some things better than humans can. A thousand-dollar PC can remember billions of things fairly accurately; we’re hard-pressed to remember a handful of phone numbers. Computers are inherently much faster: the electrochemical information processing in the brain is literally 100 times slower than electronic circuits today. Most important, machines can share their knowledge. If you want some capability on your computer, you can just load the evolved learning of one computer onto it. Not so with humans: if I read War and Peace or learn French, I can’t download that to you. Humans have an advantage today in that our pattern recognition is much more profound than what machines can do. But machines will be able to encompass all the skills of humans and combine them with the other advantages I mentioned: thinking faster and having very large, accurate memories. And they’ll keep growing in capability, doubling every year.
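The knowledge-sharing point already has a concrete software analogue: whatever one machine has learned can be serialized to a file and loaded by another, wholesale. A minimal sketch in Python, where a toy vocabulary table stands in for the evolved parameters of a trained model (the file name and the “learned” entries are illustrative, not from the interview):

```python
import json

# Machine A "learns French": a toy lookup table standing in for
# the evolved parameters of a trained model.
learned_skill = {"bonjour": "hello", "merci": "thank you", "livre": "book"}

# Sharing the skill is just serializing the learned state...
with open("french_skill.json", "w") as f:
    json.dump(learned_skill, f)

# ...and any other machine can load it, instantly and losslessly.
with open("french_skill.json") as f:
    downloaded_skill = json.load(f)

assert downloaded_skill == learned_skill  # the "download" humans can't do
```

The doubling claim compounds the same way any exponential does: thirty annual doublings is a factor of 2^30, roughly a billionfold.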

Darwin: Are these computer-induced changes you predict a threat to human civilization as we know it?

Kurzweil: In my mind, this is very much part of human civilization. This is a human endeavor; it is not an invasion of machines coming from outer space to take us over. These machines are emerging from within our civilization, and they’re already intimately integrated into it. If all the computers in the world stopped today, our civilization would grind to a halt.