Why I Think I Will Win
April 9, 2002 by Mitch Kapor
Will a computer pass the Turing Test (convincingly impersonate a human) by 2029? Mitchell Kapor has bet Ray Kurzweil that a computer can’t because it lacks understanding of subtle human experiences and emotions.
Published April 9, 2002 on KurzweilAI.net, accompanying an explanation of the bet and its background (with rules and definitions) and Ray Kurzweil’s essay on why he will win.
The essence of the Turing Test revolves around whether a computer can successfully impersonate a human. The test is to be put into practice under a set of detailed conditions which rely on human judges being connected with test subjects (a computer and a person) solely via an instant messaging system or its equivalent. That is, the only information which will pass between the parties is text.
To pass the test, a computer would have to be capable of communicating via this medium at least as competently as a person. There is no restriction on the subject matter; anything within the scope of human experience in reality or imagination is fair game. This is a very broad canvas encompassing all of the possibilities of discussion about art, science, personal history, and social relationships. Exploring linkages between the realms is also fair game, allowing for unusual but illustrative analogies and metaphors. It is such a broad canvas, in my view, that it is impossible to foresee when, or even if, a machine intelligence will be able to paint a picture which can fool a human judge.
While it is possible to imagine a machine obtaining a perfect score on the SAT or winning Jeopardy (since these rely on retained facts and the ability to recall them), it seems far less possible that a machine can weave things together in new ways, or have true imagination, in a way that matches everything people can do, especially if we have a full appreciation of the creativity people are capable of. This is often overlooked by those computer scientists who correctly point out that it is not impossible for computers to demonstrate creativity. Not impossible, yes. Likely enough to warrant belief that a computer can pass the Turing Test? In my opinion, no. Computers look relatively smarter in theory when those making the estimate judge people to be dumber and more limited than they are.
Consider what it is to be human:
- We are embodied creatures; our physicality grounds us and defines our existence in a myriad of ways.
- We are all intimately connected to and with the environment around us; perception of and interaction with the environment is the equal partner of cognition in shaping experience.
- Emotion is as basic as cognition, or more so; feelings, gross and subtle, bound and shape the envelope of what is thinkable.
- We are conscious beings, capable of reflection and self-awareness; the realm of the spiritual or transpersonal (to pick a less loaded word) is something we can be part of and which is part of us.
When I contemplate human beings in this way, it becomes extremely difficult even to imagine what it would mean for a computer to perform a successful impersonation, much less to believe that its achievement is within our lifespan. Computers don’t have anything resembling a human body, sense organs, feelings, or awareness, after all. Without these, a computer cannot have human experiences, especially the ones which reflect our fullest nature, as above. Each of us knows what it is like to be in a physical environment; we know what things look, sound, smell, taste, and feel like. Such experiences form the basis of agency, memory, and identity. We can and do speak of all this in a multitude of meaningful ways to each other. Without human experiences, a computer cannot fool a smart judge bent on exposing it by probing its ability to communicate about the quintessentially human.
Additionally, part of the burden of proof for supporters of intelligent machines is to develop an adequate account of how a computer would acquire the knowledge it would need to pass the test. Ray Kurzweil’s approach relies on an automated process of knowledge acquisition via input of scanned books and other printed matter. However, I assert that the fundamental mode of learning of human beings is experiential; book learning is a layer on top of that. Most knowledge, especially that having to do with physical, perceptual, and emotional experience, is not explicit and never written down. It is tacit. We cannot say in words all that we know, or how we know it. But if human knowledge, especially knowledge about human experience, is largely tacit, i.e., never directly and explicitly expressed, it will not be found in books, and the Kurzweil approach to knowledge acquisition will fail. It might be possible to produce a kind of machine idiot savant by scanning a library, but a judge would have no more trouble distinguishing one from an ordinary human than she would distinguishing a human idiot savant from a person not similarly afflicted. The problem resides not in what the computer knows, but in what it does not know and cannot know.
Given these considerations, a skeptic about machine intelligence could fairly ask how and why the Turing Test was transformed from its origins as a provocative thought experiment by Alan Turing into a seriously pursued challenge. The answer is to be found in the origins of the branch of computer science its practitioners have called Artificial Intelligence (AI).
In the 1950s, a series of computer programs first demonstrated that computers could carry out symbolic manipulations in software whose performance (though not the underlying process) began to approach human level on tasks such as playing checkers and proving theorems in geometry. These results fueled the dreams of computer scientists to create machines endowed with intelligence. Those dreams, however, repeatedly failed to be realized: early successes were followed not with more success, but with failure. A pattern of over-optimism emerged then that has persisted to this day. Let me be clear: I am not referring to most computer scientists in the field of AI, but to those who take an extreme position.
For instance, there were claims in the 1980s that expert systems, in which computers would perform as well as or better than human experts in a wide variety of disciplines, would come to be of great significance. This belief triggered a boom in investment in AI-based startups in the 1980s, followed by a bust when audacious predictions of success failed to be met and the companies premised on those claims also failed.
In practice, expert systems proved to be fragile creatures, capable at best of dealing with facts in narrow, rigid domains, in ways very much unlike the adaptable, protean intelligence demonstrated by human experts. Knowledge-based systems, as we call them today, do play useful roles in a variety of ways, but there is broad consensus that the knowledge they capture is a very small and non-generalizable part of overall human intelligence.
Ray Kurzweil’s arguments seek to go further: to get a computer to perform like a person with a brain, a computer should be built to work the way a brain works. This is an interesting, intellectually challenging idea.
He assumes this can be accomplished by using as-yet-undeveloped nano-scale technology (or not; he seems to want to have it both ways) to scan the brain in order to reverse engineer what he refers to as the massively parallel, digitally controlled analog algorithms that characterize information processing in each region. These, presumably, are what control the self-organizing hierarchy of networks he thinks constitutes the working mechanism of the brain itself. Perhaps.
But we don’t really know whether “carrying out algorithms operating on these networks” is sufficient to characterize what we do when we are conscious. That’s an assumption, not a result. The brain’s actual architecture, and the intimacy of its interaction with, for instance, the endocrine system (which controls the flow of hormones and so regulates emotion, which in turn plays an extremely important role in regulating cognition), are still virtually unknown. In other words, we really don’t know whether, in the end, it’s all about the bits and just the bits. Therefore Kurzweil doesn’t know, but can only assume, that the information processing he wants to rely on for his artificial intelligence is a sufficiently accurate and comprehensive building block to characterize human mental activity.
The metaphor of brain-as-computer is tempting and, to a limited degree, fruitful, but we should not rely on its distant extrapolation. In the past, scientists have sought to employ the metaphors of their age to characterize mysteries of human functioning, e.g., the heart as pump, the brain as telephone switchboard (you could look this up). Properly used, metaphors are a step on the way to the development of scientific theory. Stretched beyond their bounds, metaphors lose utility and must be abandoned by science if it is not to be led astray. My prediction is that the contemporary metaphors of brain-as-computer and mental-activity-as-information-processing will in time also be superseded, and will not prove to be a basis on which to build human-level intelligent machines (if indeed any such basis ever exists).
Ray Kurzweil is to be congratulated on his vision and passion, regardless of who wins or loses the bet. In the end, I think Ray is smarter and more capable than any machine is going to be, as his vision and passion reflect qualities of the human condition no machine is going to successfully emulate over the term of the bet. I look forward to comparing notes with him in 2029.