February 21, 2001
By Mitchell Waldrop

The complexities of the mind mirror the challenges of Artificial Intelligence. This article discusses the nature of thought itself: can it be replicated in a machine? From Ray Kurzweil's revolutionary book The Age of Intelligent Machines, published in 1990.

At a time when computer technology is advancing at a breakneck pace and when software developers are glibly hawking their wares as having artificial intelligence, the inevitable question has begun to take on a certain urgency: Can a computer think? Really think? In one form or another this is actually a very old question, dating back to such philosophers as Plato, Aristotle, and Descartes. And after nearly 3,000 years the most honest answer is still “Who knows?” After all, what does it mean to think? On the other hand, that’s not a very satisfying answer. So let’s try some others.

Who cares? If a machine can do its job extremely well, what does it matter if it really thinks? No one runs around asking if taxicabs really walk.

How could you ever tell? This attitude is the basis of the famous Turing test, devised in 1950 by the British mathematician and logician Alan Turing: Imagine that you’re sitting alone in a room with a teletype machine that is connected at the other end to either a person or a computer. If no amount of questioning or conversation allows you to tell which it is, then you have to concede that a machine can think.
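Turing's setup can be made concrete with a small sketch. The names here (Respondent, run_session, the canned replies) are illustrative inventions, not any real implementation; the point is only that the judge sees transcripts, never the machinery behind them.

```python
# A minimal sketch of the Turing test setup: the judge exchanges text with
# an unidentified respondent and must guess which kind it is. Everything
# below is an invented illustration of the protocol, not a real system.

class Respondent:
    """Base interface: the judge sees only reply() output, never the type."""
    def reply(self, question: str) -> str:
        raise NotImplementedError

class Human(Respondent):
    def reply(self, question: str) -> str:
        return "Let me think about that..."  # stand-in for a real person

class Machine(Respondent):
    def reply(self, question: str) -> str:
        return "Let me think about that..."  # a program imitating the person

def run_session(respondent: Respondent, questions):
    # The transcript is all the judge ever gets to inspect.
    return [respondent.reply(q) for q in questions]

questions = ["What is your favorite poem?", "Why that one?"]
a = run_session(Human(), questions)
b = run_session(Machine(), questions)

# If the transcripts are indistinguishable, the judge can do no better than
# chance -- which is Turing's criterion for conceding that the machine thinks.
print(a == b)
```

The design choice matters: the base class hides everything except text, which is exactly the restriction the teletype imposes in Turing's original scenario.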

No, thinking is too complicated. Even if we someday come to understand all the laws and principles that govern the mind, that doesn’t mean that we can duplicate it. Does understanding astrophysics mean that we can build a galaxy?

Yes, machines can think in principle, but not necessarily in the same way we do. AI researcher Seymour Papert of the Massachusetts Institute of Technology maintains that artificial intelligence is analogous to artificial flight: "This leads us to imagine skeptics who would say, 'You mathematicians deal with idealized fluids; the real atmosphere is vastly more complicated,' or 'You have no reason to suppose that airplanes and birds work the same way: birds have no propellers, airplanes have no feathers.' But the premise of these criticisms is true only in the most superficial sense: the same principles (for example, Bernoulli's law) apply to real as well as ideal fluids, and they apply whether the fluid flows over a feather or an aluminum wing."

No! This is the most often heard answer, and the most heartfelt. “I am not a machine [goes the argument]. I’m me. I’m alive. And you’re never going to make a computer that can say that. Furthermore, the essence of humanity isn’t reason or logic or any of the other things that computers can do: it’s intuition, sensuality, and emotion. So how can a computer think if it does not feel, and how can it feel if it knows nothing of love, anguish, exhilaration, loneliness, and all the rest of what it means to be a living human being?”

“Sometimes when my children were still little,” writes former AI researcher Joseph Weizenbaum of MIT, “my wife and I would stand over them as they lay sleeping in their beds. We spoke to each other only in silence, rehearsing a scene as old as mankind itself. It is as Ionesco told his journal: ‘Not everything is unsayable in words, only the living truth.’”

Can a Machine Be Aware?

As this last answer suggests, the case against machine intelligence always comes down to the ultimate mystery, which goes by many names: consciousness, awareness, spirit, soul. We don’t even understand what it is in humans. Many people would say that it is beyond our understanding entirely, that it is a subject best left to God alone. Other people simply wonder if a brain can ever understand itself, even in principle. But either way, how can we ever hope to reproduce it, whatever it is, with a pile of silicon and software?

That question has been the source of endless debate since the rise of AI, a debate made all the hotter by the fact that people aren’t arguing science. They’re arguing philosophical ideology: their personal beliefs about what the true theory of the mind will be like when we find it.

Not surprisingly, the philosophical landscape is rugged and diverse. But it’s possible to get some feel for the overall topography by looking at two extremes. At one extreme, at the heart of classical AI, we find the doctrines first set down in the 1950s by AI pioneers Allen Newell and Herbert Simon at Carnegie-Mellon University: (1) thinking is information processing; (2) information processing is computation, which is the manipulation of symbols; and (3) symbols, because of their relationships and linkages, mean something about the external world. In other words, the brain per se doesn’t matter, and Turing was right: a perfect simulation of thinking is thinking.

Tufts University philosopher Daniel C. Dennett, a witty and insightful observer of AI, has dubbed this position High Church Computationalism. Its prelates include such establishment figures as Simon and MIT’s Marvin Minsky; its Vatican City is MIT, “the East Pole.”

Then from out of the West comes heresy, a creed that is not an alternative so much as a denial. As Dennett describes it, the assertion is that “thinking is something going on in the brain all right, but it is not computation at all: thinking is something holistic and emergent, and organic and fuzzy and warm and cuddly and mysterious.”

Dennett calls this creed Zen holism. And for some reason its proponents do seem to cluster in the San Francisco Bay area. Among them are the gurus of the movement: Berkeley philosophers John Searle and Hubert Dreyfus.

The computationalists and the holists have been going at it for years, ever since Dreyfus first denounced AI in the mid-1960s with his caustic book What Computers Can’t Do. But their definitive battle came in 1980, in the pages of the journal Behavioral and Brain Sciences. This journal is unique among scientific journals in that it doesn’t just publish an article; first it solicits commentary from the author’s peers and gives the author a chance to write a rebuttal. Then it publishes the whole thing as a package: a kind of formal debate in print. In this case the centerpiece was Searle’s article “Minds, Brains, and Programs,” a stinging attack on the idea that a machine could think. Following it were 27 responses, most of which were stinging attacks on Searle. The whole thing is worth reading for its entertainment value alone. But it also highlights the fundamental issues with a clarity that has never been surpassed.

The Chinese Room

Essentially, Searle’s point was that simulation is not duplication. A program that uses formal rules to manipulate abstract symbols can never think or be aware, because those symbols don’t mean anything to the computer.

To illustrate, he proposed the following thought experiment as a parody of the typical AI language-understanding program of his day: “Suppose that I’m locked in a room and given a large batch of Chinese writing,” he said. “Suppose furthermore (as is indeed the case) that I know no Chinese …. To me, the Chinese writing is just so many meaningless squiggles.” Next, said Searle, he is given a second batch of Chinese writing (a “story”), together with some rules in English that explain how to correlate the first batch with the second (a “program”). Then after this is all done, he is given yet a third set of Chinese symbols (“questions”), together with yet more English rules that tell him how to manipulate the slips of paper until all three batches are correlated, and how to produce a new set of Chinese characters (“answers”), which he then passes back out of the room. Finally, said Searle, “after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view . . . my answers to the questions are absolutely indistinguishable from those of native Chinese speakers.” In other words, Searle learns to pass the Turing test in Chinese.
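The mechanics Searle is parodying can be reduced to a toy sketch: a rule table that maps input symbol strings to output symbol strings purely by shape. The table below is invented for illustration (the characters carry no actual story), but it captures his point that the lookup never touches meaning.

```python
# A toy rendering of Searle's room: the "program" is a rule table mapping
# question symbols to answer symbols. The operator consults the rules
# purely by pattern; nothing in the process involves understanding.
# The entries here are invented placeholders, not real Chinese discourse.

RULES = {
    "问题一": "答案一",   # matched by shape alone
    "问题二": "答案二",
}

def room(symbols: str) -> str:
    """Return whatever output the rules dictate for this squiggle pattern."""
    # The lookup compares character shapes; it has no notion of meaning.
    return RULES.get(symbols, "□")  # unknown pattern: emit a blank squiggle

print(room("问题一"))  # emits the tabulated answer, understanding nothing
```

The sketch also makes Hofstadter's later objection easy to see: a table adequate for real conversation would be astronomically large, not three lines.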

Now, according to the zealots of strong AI, said Searle, a computer that can answer questions in this way isn’t just simulating human language abilities. It is literally understanding the story. Moreover, the operation of the program is in fact an explanation of human understanding.

And yet, said Searle, while he is locked in that imaginary room he is doing exactly what the computer does. He uses formal rules to manipulate abstract symbols. He takes in stories and gives out answers exactly as a native Chinese speaker would. But he still doesn’t understand a word of Chinese. So how is it possible to say that the computer understands? In fact, said Searle, it doesn’t. For comparison, imagine that the questions and the answers now switch to English. So far as the people outside the room are concerned, the system is just as fluent as before. And yet there’s all the difference in the world, because now he isn’t just manipulating formal symbols anymore. He understands what’s being said. The words have meaning for him, or, in the technical jargon of philosophy, he has intentionality. Why? “Because I am a certain sort of organism with a certain biological (i.e., chemical and physical) structure,” he said, “and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena.” In other words, Searle concluded that it is certainly possible for a machine to think (“in an important sense our bodies with our brains are precisely such machines”), but only if the machine is as complex and as powerful as the brain. A purely formal computer program cannot do it.


Searle’s Chinese room clearly struck a sensitive nerve, as evidenced by the number and spirit of the denunciations that followed. It was clear to everyone that when Searle used the word “intentionality,” he wasn’t just talking about an obscure technical matter. In this context intentionality is virtually synonymous with mind, soul, spirit, or awareness. Here is a sampler of some of the main objections:

The comparison is unfair. The programs that Searle ridiculed demonstrated a very crude kind of understanding at best, and no one in AI seriously claims anything more for them. Even if they were correct in principle, said the defenders, genuine humanlike understanding would require much more powerful machines and much more sophisticated programs.

Searle quite correctly pointed out, however, that this argument is irrelevant: of course computers are getting more powerful; what he objected to was the principle.

The Chinese Room story is entertaining and seductive, but it’s a fraud. Douglas R. Hofstadter of Indiana University, author of the best-selling Gödel, Escher, Bach, pointed out that the jump from the AI program to the Turing test is not the trivial step that Searle makes it out to be. It’s an enormous leap. The poor devil in the Chinese room would have to shuffle not just a few slips of paper but millions or billions of slips of paper. It would take him years to answer a question, if he could do it at all. In effect, said Hofstadter, Searle is postulating mental processes slowed down by a factor of millions, so no wonder it looks different.

Searle’s reply-that he could memorize the slips of paper and shuffle them in his head-sounds plausible enough. But as several respondents have pointed out, it dangerously undermines his whole argument: once he memorizes everything, doesn’t he now understand Chinese in the same way he understands English?

The entire system does understand Chinese. True, the man in the room doesn’t understand Chinese himself. But he is just part of a larger system that also includes the slips of paper, the rules, and the message-passing mechanism. Taken as a whole, this larger system does understand Chinese. This “systems” reply was advanced by a number of the respondents. Searle was incredulous (“It is not easy for me to imagine how someone who was not in the grip of an ideology could find the idea at all plausible”), yet the concept is subtler than it seems. Consider a thermostat: a bimetallic strip bends and unbends as the temperature changes. When the room becomes too cold, the strip closes an electrical connection, and the furnace kicks on. When the room warms back up again, the connection reopens, and the furnace shuts off. Now, does the bimetallic strip by itself control the temperature of the room? No. Does the furnace by itself control the temperature? No. Does the system as a whole control the temperature? Yes. Connections and the organization make the whole into more than the sum of its parts.
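The thermostat example can be sketched as a tiny feedback loop. All the numbers (setpoint, hysteresis band, heating and cooling rates) are invented for illustration; the point is that regulation emerges only from the closed loop, not from either rule in isolation.

```python
# A minimal sketch of the thermostat system from the text: neither the
# strip's switching rule nor the furnace "controls" temperature alone;
# regulation emerges from connecting them in a loop. Numbers are invented.

def furnace_on(temp: float, setpoint: float, heating: bool) -> bool:
    # The bimetallic-strip rule: close the contact below the setpoint,
    # open it once the room warms past it (with a little hysteresis).
    if temp < setpoint - 1.0:
        return True
    if temp > setpoint + 1.0:
        return False
    return heating  # inside the band, keep doing whatever we were doing

def simulate(start: float, setpoint: float, steps: int) -> list:
    temp, heating, history = start, False, []
    for _ in range(steps):
        heating = furnace_on(temp, setpoint, heating)
        temp += 1.5 if heating else -0.5  # furnace heats; the room leaks heat
        history.append(temp)
    return history

trace = simulate(start=15.0, setpoint=20.0, steps=40)
# After settling, the temperature oscillates around the setpoint, though no
# single component "knows" or regulates the room temperature by itself.
print(min(trace[20:]), max(trace[20:]))
```

The same structure is what the systems reply attributes to the room: the man, the rules, and the paper form a loop whose behavior belongs to no single part.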

Searle never makes clear what intentionality is, or why a machine can’t have it. As Dennett pointed out, “For Searle, intentionality is rather like a wonderful substance secreted by the brain the way the pancreas secretes insulin.” And make no mistake: Searle’s concept of intentionality does require a biological brain. He explicitly denied that a robot could have intentionality, even if it were equipped with eyes, ears, arms, legs, and all the other accoutrements it needed to move around and perceive the world like a human being. Inside, he said, the robot would still just be manipulating formal symbols.

That assertion led psychologist Zenon Pylyshyn of the University of Western Ontario to propose his own ironic thought experiment: “Thus, if more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that circuits caused you to make.” In short, you would become a zombie.

Dennett took up the same theme in his own article. So far as natural selection is concerned, he pointed out, Pylyshyn’s zombie or Searle’s robot is just as fit for survival as those of us with Searle-style intentional brains. Evolution would make no distinction. Indeed, from a biological point of view, intentionality is irrelevant, as useless as the appendix. So how did it ever arise? And having arisen, how did it survive and prosper when it offered no natural-selection value? Aren’t we lucky that some chance mutation didn’t rob our ancestors of intentionality? Dennett asked. If it had, he said, “we’d behave just as we do now, but of course we wouldn’t mean it!” Needless to say, both Pylyshyn and Dennett found this absurd.

In retrospect, the great debate has to be rated a standoff. Searle, not surprisingly, was unconvinced by any of his opponents’ arguments; to this day he and his fellow Zen holists have refused to yield an inch. Yet they have never given a truly compelling explanation of why a brain and only a brain can secrete intentionality. The computationalists, meanwhile, remain convinced that they are succeeding where philosophers have failed for 3,000 years-that they are producing a real scientific theory of intelligence and consciousness. But they can’t prove it. Not yet, anyway.

And in all fairness, the burden of proof is on AI. The symbol-processing paradigm is an intriguing approach. If nothing else, it’s an approach worth exploring to see how far it can go. But still, what is consciousness?

Science as a Message of Despair

One way to answer that last question is with another question: Do we really want to know? Many people instinctively side with Searle, horrified at what the computationalist position implies: If thought, feeling, intuition, and all the other workings of the mind can be understood even in principle, if we are machines, then God is not speaking to our hearts. And for that matter, neither is Mozart. The soul is nothing more than the activations of neuronal symbols. Spirit is nothing more than a surge of hormones and neurotransmitters. Meaning and purpose are illusions. And besides, when machines grow old and break down, they are discarded without a thought. Thus, for many people, AI is a message of despair. Of course, this is hardly a new concern. For those who choose to see it that way, science itself is a message of despair.

In 1543, with the publication of De Revolutionibus, the Polish astronomer Nicholas Copernicus moved the earth from the center of the universe and made it one planet among many, and thereby changed humankind’s relationship with God. In the earth-centered universe of Thomas Aquinas and other medieval theologians, man had been poised halfway between a heaven that lay just beyond the sphere of the stars and a hell that burned beneath his feet. He had dwelt always under the watchful eye of God, and his spiritual status had been reflected in the very structure of the cosmos. But after Copernicus the earth and man were reduced to being wanderers in an infinite universe. For many, the sense of loss and confusion was palpable.

In 1859, with the publication of The Origin of Species, Charles Darwin described how one group of living things arises from another through natural selection, and thereby changed our perception of who we are. Once man had been the special creation of God, the favored of all his children. Now man was just another animal, the descendant of monkeys.

In the latter part of the nineteenth century and the early decades of the twentieth, with the publication of such works as The Interpretation of Dreams (1900), Sigmund Freud illuminated the inner workings of the mind and again changed our perception of who we are. Once we had been only a little lower than the angels, masters of our own souls. Now we were at the mercy of demons like rage, terror, and lust, made all the more hideous by the fact that they lived unseen in our own unconscious minds.

So the message of science can be bleak indeed. It can be seen as a proclamation that human beings are nothing more than masses of particles collected by blind chance and governed by immutable physical law, that we have no meaning, that there is no purpose to existence, and that the universe just doesn’t care. I suspect that this is the real reason for the creationists’ desperate rejection of Darwin. It has nothing to do with Genesis; it has everything to do with being special in the eyes of a caring God. The fact that their creed is based on ignorance and a willful distortion of the evidence makes them both sad and dangerous. But their longing for order and purpose in the world is understandable and even noble. I also suspect that this perceived spiritual vacuum in science lies behind the fascination so many people feel for such pseudosciences as astrology. After all, if the stars and the planets guide my fate, then somehow I matter. The universe cares. Astrology makes no scientific sense whatsoever. But for those who need such reassurance, what can science offer to replace it?

Science as a Message of Hope

And yet the message doesn’t have to be bleak. Science has given us a universe of enormous extent filled with marvels far beyond anything Aquinas ever knew. Does it diminish the night sky to know that the planets are other worlds and that the stars are other suns? In the same way, a scientific theory of intelligence and awareness might very well provide us with an understanding of other possible minds. Perhaps it will show us more clearly how our Western ways of perceiving the world relate to the perceptions of other cultures. Perhaps it will tell us how human intelligence fits in with the range of other possible intelligences that might exist in the universe. Perhaps it will give us a new insight into who we are and what our place is in creation.

Indeed, far from being threatening, the prospect is oddly comforting. Consider a computer program. It is undeniably a natural phenomenon, the product of physical forces pushing electrons here and there through a web of silicon and metal. And yet a computer program is more than just a surge of electrons. Take the program and run it on another kind of computer. Now the structure of silicon and metal is completely different. The way the electrons move is completely different. But the program itself is the same, because it still does the same thing. It is part of the computer. It needs the computer to exist. And yet it transcends the computer. In effect, the program occupies a different level of reality from the computer. Hence the power of the symbol-processing model: By describing the mind as a program running on a flesh-and-blood computer, it shows us how feeling, purpose, thought, and awareness can be part of the physical brain and yet transcend the brain. It shows us how the mind can be composed of simple, comprehensible processes and still be something more.
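The point about a program transcending its hardware can be sketched directly. Here one invented toy "program" (a list of instructions) runs on two deliberately different "machines" (two interpreter implementations); the physical realizations differ, yet the program's behavior is identical.

```python
# A sketch of multiple realizability: one "program" runs on two physically
# different "machines" -- here, two interpreters with different internal
# structure -- yet produces identical behavior. The toy instruction set
# (push/add) is invented purely for illustration.

PROGRAM = [("push", 2), ("push", 3), ("add", None)]

def stack_machine(program):
    # Realization one: an explicit stack data structure.
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

def recursive_machine(program):
    # Realization two: no stack object at all; the recursive call
    # structure plays the stack's role. Different "silicon," same program.
    def step(i, vals):
        if i == len(program):
            return vals[-1]
        op, arg = program[i]
        if op == "push":
            return step(i + 1, vals + (arg,))
        return step(i + 1, vals[:-2] + (vals[-2] + vals[-1],))
    return step(0, ())

print(stack_machine(PROGRAM), recursive_machine(PROGRAM))  # same result twice
```

Nothing about the program's identity depends on which machine runs it; that is the sense in which the symbol-processing model lets the mind be "part of" the brain and yet occupy a different level of description.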

Consider a living cell. The individual enzymes, lipids, and DNA molecules that go to make up a cell are comparatively simple things. They obey well-understood laws of physics and chemistry.