THE AGE OF INTELLIGENT MACHINES | Chapter 1: The Roots of Artificial Intelligence
September 24, 2001
- author | Ray Kurzweil
The postindustrial society will be fueled not by oil but by a new commodity called artificial intelligence (AI). We might regard it as a commodity because it has value and can be traded. Indeed, as will be made clear, the knowledge imbedded in AI software and hardware architectures will become even more salient as a foundation of wealth than the raw materials that fueled the first Industrial Revolution. It is an unusual commodity, because it has no material form. It can be a flow of information with no more physical reality than electrical vibrations in a wire.
If artificial intelligence is the fuel of the second Industrial Revolution, then we might ask what this fuel is. One of the difficulties in addressing this issue is the amount of confusion and disagreement regarding the definition of the field. Other fields do not seem to have this problem: books on biology do not generally begin with the question, What is biology, anyway? Predicting the future is always problematic, but it will be helpful if we attempt to define what it is we are predicting the future of.
One view is that AI is an attempt to answer a central question that has been debated by scientists, philosophers, and theologians for thousands of years: How does the human brain, three pounds of “ordinary” matter, give rise to thoughts, feelings, and consciousness? While certainly very complex, our brains are clearly governed by the same physical laws as our machines.
Viewed in this way, the human brain may be regarded as a very capable machine. Conversely, given sufficient capacity and the right techniques, our machines may ultimately be able to replicate human intelligence. Some philosophers and even a few AI scientists are offended by this characterization of the human mind as a machine, albeit an immensely complicated one. Others find the view inspiring: it means that we will ultimately be able to understand our minds and how they work.
One does not need to accept fully the notion that the human mind is “just” a machine to appreciate both the potential for machines to master many of our intellectual capabilities and the practical implications of doing so.
The Usual Definition
Artificial Stupidity (AS) may be defined as the attempt by computer scientists to create computer programs capable of causing problems of a type normally associated with human thought.
- Wallace Marshal, Journal of Irreproducible Results (1987)
Probably the most durable definition of artificial intelligence, and the one most often quoted, states: “Artificial Intelligence is the art of creating machines that perform functions that require intelligence when performed by people.”1 It is reasonable enough as definitions go, although it suffers from two problems. First, it does not say a great deal beyond the words “artificial intelligence.” The definition refers to machines, and that takes care of the word “artificial”; there is no problem here, since we have never had much difficulty defining “artificial.” For the more problematic word “intelligence,” the definition is simply circular: an intelligent machine does what an intelligent person does.
A more serious problem is that the definition does not appear to fit actual usage. Few AI researchers refer to the chess-playing machines that one can buy in the local drug store as examples of true artificial intelligence, yet chess is still considered an intellectual game. Some equation-manipulation packages perform transformations that would challenge most college students. We consider these to be quite useful packages, but again, they are rarely pointed to as examples of artificial intelligence.
The Moving Frontier Definition
Mr. Jabez Wilson laughed heavily. “Well, I never!” said he. “I thought at first that you had done something clever, but I see that there was nothing in it, after all.” “I began to think, Watson,” said Holmes, “that I made a mistake in explaining. ‘Omne ignotum pro magnifico,’ you know, and my poor little reputation, such as it is, will suffer shipwreck if I am so candid.”
- Sir Arthur Conan Doyle, The Complete Sherlock Holmes
“The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behavior or if there seems to be little underlying plan, we have little temptation to imagine intelligence. With the same object, therefore, it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behavior.”
- Alan Turing (1947)
“AI is the study of how to make computers do things at which, at the moment, people are better.”
- Elaine Rich
This leads us to another approach, which I like to call the “moving frontier” definition: artificial intelligence is the study of computer problems that have not yet been solved. This definition, which Marvin Minsky has been advocating since the 1960s, is unlike those found in other fields. A gene-splicing technique does not stop being part of bioengineering the moment it is perfected. Yet, if we examine the shifting judgments as to what has qualified as “true artificial intelligence” over the years, we find this definition has more validity than one might expect.
When the artificial intelligence field was first named at a now famous conference held in 1956 at Dartmouth College, programs that could play chess or checkers or manipulate equations, even at crude levels of performance, were very much in the mainstream of AI.2 As I noted above, we no longer consider such game-playing programs to be prime examples of AI, although perhaps we should.
One might say that this change in perception simply reflects a tightening of standards. I feel that there is something more profound going on. We are of two minds when it comes to thinking. On the one hand, there is the faith in the AI community that most definable problems (other than the so-called “unsolvable” problems, see “The busy beaver” in chapter 3) can be solved, often by successively breaking them down into hierarchies of simpler problems. While some problems will take longer to solve than others, we presently have no clear limit to what can be achieved.
On the other hand, coexisting with the faith that most cognitive problems can be solved is the feeling that thinking or true intelligence is not an automatic technique. In other words, there is something in the concept of thinking that goes beyond the automatic opening and closing of switches. Thus, when a method has been perfected in a computerized system, we see it as just another useful technique, not as an example of true artificial intelligence. We know exactly how the system works, so it does not seem fundamentally different from any other computer program.
A problem that has not yet been solved, on the other hand, retains its mystique. While we may have confidence that such a problem will eventually be solved, we do not yet know its solution. So we do not yet think of it as just an automatic technique and thus allow ourselves to view it as true cybernetic cognition.3
Consider as a current example the area of artificial intelligence known as expert systems. Such a system consists of a data base of facts about a particular discipline, a knowledge base of codified rules for drawing inferences from the data base, and a high-speed inference engine for systematically applying the rules to the facts to solve problems.4 Such systems have been successfully used to locate fuel deposits, design and assemble complex computer systems, analyze electronic circuits, and diagnose diseases. The judgments of expert systems are beginning to rival those of human experts, at least within certain well-defined areas of expertise.
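The three-part architecture just described — a database of facts, a knowledge base of rules, and an inference engine — can be sketched in a few lines of Python. The geological facts and rules below are invented purely for illustration; they are not taken from any actual prospecting system:

```python
# Minimal sketch of an expert system: a database of facts, a knowledge
# base of if-then rules, and an inference engine that fires rules until
# no new conclusions can be drawn (forward chaining).
# The facts and rules are hypothetical illustrations.

facts = {"porous rock", "hydrocarbon traces"}

# Each rule pairs a set of antecedent facts with a conclusion.
rules = [
    ({"porous rock", "hydrocarbon traces"}, "possible reservoir"),
    ({"possible reservoir", "seismic anomaly"}, "recommend drilling"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose antecedents are all known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if antecedents <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

Note that the second rule never fires here, because “seismic anomaly” is not among the known facts; the engine applies only the rules whose antecedents are fully satisfied.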
Today expert systems are widely regarded as a central part of artificial intelligence, and hundreds of projects exist to apply this set of techniques to dozens of fields. It seems likely that expert systems will become within the next ten years as widespread as computer spreadsheet programs and data-base management systems are today. I predict that when this happens, AI researchers will shift their attention to other issues, and we will no longer consider expert systems to be prime examples of AI technology. They will probably be regarded as just obvious extensions of data-base-management techniques.
Roger Schank uses the example of a pool sweep, a robot pool cleaner, to illustrate our tendency to view an automatic procedure as not intelligent.5 When we first see a pool sweep mysteriously weaving its way around the bottom of a pool, we are impressed with its apparent intelligence in systematically finding its way around. When we figure out the method or pattern behind its movements, which is a deceptively simple algorithm of making preprogrammed changes in direction every time it encounters a wall of the pool, we realize that it is not very intelligent after all.
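Schank’s point can be made concrete with a toy version of the pool sweep’s rule: drive straight, and on hitting a wall make a fixed, preprogrammed turn. The pool size and turn angle below are arbitrary illustrative choices, not any actual product’s parameters:

```python
# A toy pool sweep: move straight ahead; when a move would leave the
# pool, make a fixed preprogrammed turn instead. The seemingly
# systematic coverage emerges from this one simple rule.
import math

def sweep(steps, turn_deg=100.0, size=10.0):
    """Trace a pool sweep's path on a square pool of the given size."""
    x = y = size / 2
    heading = 0.0  # radians
    path = [(x, y)]
    for _ in range(steps):
        nx = x + math.cos(heading)
        ny = y + math.sin(heading)
        if not (0 <= nx <= size and 0 <= ny <= size):
            heading += math.radians(turn_deg)  # hit a wall: preprogrammed turn
        else:
            x, y = nx, ny
        path.append((x, y))
    return path

path = sweep(200)
```

Watching the traced path, the weaving looks purposeful; reading the dozen lines above, the “intelligence” evaporates — which is exactly the perceptual shift Schank describes.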
Another example is a computer program named ELIZA designed in 1966 by Joseph Weizenbaum to simulate a psychotherapist.6 When interacting with ELIZA, users type statements about themselves and ELIZA responds with questions and comments. Many persons have been impressed with the apparent appropriateness and insight of ELIZA’s ability to engage in psychoanalytic dialog. Those users who have been given the opportunity to examine ELIZA’s algorithms have been even more impressed at how simple some of its methods are.
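A few ELIZA-style rules, sketched in Python, show how simple the underlying methods can be. These patterns are illustrative stand-ins for the general technique — keyword matching plus pronoun reflection — not Weizenbaum’s original script:

```python
# ELIZA-style dialog: match the user's sentence against keyword
# patterns, reflect pronouns ("my" -> "your"), and fill a canned
# response template. The rules here are illustrative inventions.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # catch-all when nothing matches
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    s = sentence.lower().rstrip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, s)
        if m:
            return template.format(*[reflect(g) for g in m.groups()])

print(respond("I feel anxious about my work"))
```

The response — “Why do you feel anxious about your work?” — can seem insightful, yet the program has no model of anxiety or work at all, only string substitution.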
We often respond to people the same way. When we figure out how an expert operates and understand his or her methods and rules of thumb, what once seemed very intelligent somehow seems less so.
It will be interesting to see what our reaction will be when a computer takes the world chess championship. Playing a master game of chess is often considered an example of high intellectual (even creative) achievement. When a computer does become the chess champion, which I believe will happen before the end of the century, we will either think more of computers, less of ourselves, or less of chess.7
Because we are ambivalent about whether a machine can truly emulate human thought, we tend to regard a working system as possibly useful but not truly intelligent. Computer-science problems are only AI problems until they are solved. This can be a frustrating state of affairs: as with the carrot on a stick, the AI practitioner can never quite reach the goal.
What Is Intelligence, Anyway?
“It could be simply an accident of fate that our brains are too weak to understand themselves. Think of the lowly giraffe, for instance, whose brain is obviously far below the level required for self-understanding, yet it is remarkably similar to our own brain. In fact, the brains of giraffes, elephants, baboons, even the brains of tortoises or unknown beings who are far smarter than we are, probably all operate on basically the same set of principles. Giraffes may lie far below the threshold of intelligence necessary to understand how those principles fit together to produce the qualities of mind; humans may lie closer to that threshold, perhaps just barely below it, perhaps even above it. The point is that there may be no fundamental (i.e., Gödelian) reason why those qualities are incomprehensible; they may be completely clear to more intelligent beings.”
- Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid
A beaver and another forest animal are contemplating an immense man-made dam. The beaver is saying something like, “No, I didn’t actually build it. But it’s based on an idea of mine.”
- Edward Fredkin
If we replace the word “artificial” with “machine,” the problem of defining artificial intelligence becomes a matter of defining intelligence. As might be expected, though, defining intelligence is at least as controversial as defining artificial intelligence. One approach is to define intelligence in terms of its constituent processes: a process that combines learning, reasoning, and the ability to manipulate symbols.
Learning is not simply the acquisition of facts, which a data-base-management system can do; it is also the acquisition of knowledge. Knowledge consists of facts, an understanding of the relationships between the facts, and their implications. One difference between humans and computers lies in the relative strengths in their respective abilities to understand symbolic relationships and to learn facts. A computer can remember billions of facts with extreme precision, whereas we are hard pressed to remember more than a handful of phone numbers. On the other hand, we can read a novel and understand and manipulate the subtle relationships between the characters, something that computers have yet to demonstrate an ability to do. We often use our ability to understand and recall relationships as an aid in remembering simple things, as when we remember names by means of our past associations with each name and when we remember phone numbers in terms of the geometric or numeric patterns they make. We thus use a very complex process to accomplish a very simple task, but it is the only process we have for the job. Computers have been weak in their ability to understand and process information that contains abstractions and complex webs of relationships, but they are improving, and a great deal of AI research today is directed toward this goal.
Reason is the ability to draw deductions and inferences from knowledge with the purpose of achieving a goal or solving a problem. One of the strengths of human intelligence is its ability to draw inferences from knowledge that is imprecise and incomplete. The very job of a decision maker, whether a national leader or a corporate manager, is to draw conclusions and make decisions based on information that is often contradictory and fragmentary. To date, most computer-based expert systems have used hard rules, which have firm antecedents and certain conclusions. For some problems, such as the job of DEC’s XCON, which configures complex computer systems, hard rules make sense. A certain-sized computer board will either fit or not fit in a certain chassis. Other types of decision making, such as the structuring of a marketing program for a product launch or the development of national monetary policy, must take into account incomplete present knowledge and the probabilities of unknown future events. The latest generation of expert systems is beginning to allow rules based on what is called fuzzy logic, which provides a mathematical basis for making optimal use of uncertain information.8 This methodology has been used for years in such pattern-recognition tasks as recognizing printed characters or human speech.
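The contrast between hard rules and fuzzy rules can be illustrated with a minimal sketch: instead of a condition being simply true or false, each condition has a degree of truth between 0 and 1, and a rule combines those degrees. The membership functions, numbers, and business scenario below are invented for illustration:

```python
# A fuzzy rule: "IF demand is high AND inventory is low THEN expand
# production." Each condition returns a degree of truth in [0, 1];
# the fuzzy AND takes the minimum. All thresholds are invented.

def high_demand(units):
    """Degree to which demand counts as 'high' (0 below 1000, 1 at 5000)."""
    return max(0.0, min(1.0, (units - 1000) / 4000))

def low_inventory(units):
    """Degree to which inventory counts as 'low' (1 at 0, 0 at 500)."""
    return max(0.0, min(1.0, (500 - units) / 500))

def should_expand(demand, inventory):
    """Fuzzy AND (minimum) of the two graded conditions."""
    return min(high_demand(demand), low_inventory(inventory))

confidence = should_expand(demand=3000, inventory=200)
print(f"confidence in expanding production: {confidence:.2f}")
```

A hard rule would either fire or not; the fuzzy version instead yields a graded confidence that can be weighed against other evidence, which is precisely what uncertain, fragmentary information calls for.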
The ability to learn and acquire knowledge and to manipulate it inferentially and deductively is often referred to as symbolic reasoning, the ability to manipulate symbols. A symbol is a name or sign that stands for something else, generally a structure or network of facts and other symbols. Symbols are typically organized in complicated patterns rather than simple lists. Another strength of human intelligence is our ability to recognize the patterns represented by the symbols we know even when they occur in contexts different than the ones in which we originally learned the symbol. One of the reasons that the LISP programming language has been popular in developing AI applications is its strength in manipulating symbols that represent complex patterns and their relationships rather than orderly lists of facts (despite its name, which derives from “list processing”).
Rather than defining intelligence in terms of its constituent processes, we might define it in terms of its goal: the ability to use symbolic reasoning in the pursuit of a goal. Symbolic reasoning is used to develop and carry out strategies to further the goals of its possessor. A question that then arises is, What are the goals? With machine intelligence, the goals have been set by the human designer of each system. The machine may go on to set its own subgoals, but its mission is imbedded in its algorithms. Science-fiction writers, however, have long speculated on a generation of intelligent machines that set their own agendas. With living creatures or species, the goals are often expressed in terms of survival either of the individual or the species. This is consistent with the view of intelligence as the ultimate (most recent) product of evolution.
The evidence does not yet make clear whether intelligence does in fact support the goal of survival. Intelligence has allowed our species to dominate the planet. We have also been sufficiently “intelligent” to unlock the destructive powers that result from manipulating physical laws. Whether intelligence, or at least our version of it, is successful in terms of survival is not yet clear, particularly when viewed from the long time scale of evolution.
Thus far Homo sapiens is less than 100,000 years old. Dinosaurs were a successful, surviving class of creatures for 160 million years. They have always been regarded as unintelligent creatures, although recent research has cast doubt on this view. There are, however, many examples of unintelligent creatures that have survived as species (e.g., palm trees, cockroaches, and horseshoe crabs) for long periods of time.
Humans do use their intelligence to further their goals. Even if we allow for possible cultural bias in intelligence testing, the evidence is convincing that there is a strong correlation between intelligence, as measured by standardized tests, and economic, social, and perhaps even romantic success. A larger question is whether we use our intelligence in setting our goals. Many of our goals appear to stem from desires, fears, and drives from our primitive past.9
In summary, there appears to be no simple definition of intelligence that is satisfactory to most observers, and most would-be definers of intelligence end up with long checklists of its attributes. Minsky’s Society of Mind can be viewed as a book-length attempt at such a definition. Allen Newell offers the following list for an intelligent system: it operates in real-time; exploits vast amounts of knowledge; tolerates erroneous, unexpected, and possibly unknown inputs; uses symbols and abstractions; communicates using some form of natural language; learns from the environment; and exhibits adaptive goal-oriented behavior.10
The controversy over what intelligence is, is reminiscent of a similar controversy over what life is. Both touch on our vision of who we are. Yet great progress has been made, much of it in recent years, in understanding the structures and methods of life. We have begun to map out DNA, decode some of the hereditary code, and understand the detailed chemistry of reproduction. The concern many have had that understanding these mechanisms would lessen our respect for life has thus far been unjustified. Our increasing knowledge of the mechanisms of life has, if anything, deepened our sense of wonder at the order and diversity of creation.
We are only now beginning to develop a similar understanding of the mechanisms of intelligence. The development of machine intelligence helps us to understand natural intelligence by showing us methods that may account for the many skills that comprise intelligence. The concern that understanding the laws of intelligence will trivialize it and lessen our respect for it may also be unjustified. As we begin to comprehend the depth of design inherent in such “deep” capabilities as intuition and common sense, the awe inherent in our appreciation of intelligence should only be enhanced.
Evolution as an Intelligent Process
God reveals himself in the harmony of what exists.
A central tenet of AI is that we, an intelligent species, can create intelligent machines. At present the machines we have created, while having better memories and greater speed, are clearly less capable than we are at most intellectual tasks. The gap is shrinking, however. Machine intelligence is rapidly improving. The same cannot be said for human intelligence. A controversial question surrounding AI is whether the gap can ultimately be eliminated. Can machine intelligence ultimately equal that of human intelligence? Can it surpass human intelligence? A broader statement of the question is, Can an intelligent entity be more intelligent than the intelligence that created it?
One way to gain insight into these questions might be to examine the relationship of human intelligence to the intelligent process that created it: evolution. Evolution created human and many other forms of intelligence and thus may be regarded as an intelligent process itself.11
One attribute of intelligence is its ability to create and design. The results of an intelligent design process (to wit, intelligent designs) have the characteristics of being aesthetically pleasing and functionally effective. It is hard to imagine designs that are more aesthetically pleasing or functionally effective than the myriad of life forms that have been produced by the process we call evolution. Indeed, some theories of aesthetics define aesthetic quality or beauty as the degree of success in emulating the natural beauty that evolution has created.12
Evolution can be considered the ultimate in intelligence: it has created designs of indescribable beauty, complexity, and elegance. Yet it is considered to lack consciousness and free will; it is just an “automatic” process. It is what happens to swirling matter given enough time and the right circumstances.
Evolution is often pitted against religious theories of creation. The religious theories do share one thing with the theory of evolution: both attribute creation to an ultimate intelligent force. The most basic difference is that in the religious theories this intelligent force is conscious and does have free will, although some theologies, such as Buddhism, conceive of God as an ultimate force of creativity and intelligence and not as a personal willful consciousness.13
The theory of evolution can be simply expressed as follows. Changes in the genetic code occur through random mutation; beneficial changes are retained, whereas harmful ones are discarded through the “survival of the fittest.”14 In some ways it makes sense that the survival of the fittest would retain good changes and discard bad ones, since we define “good” to mean more survivable.
Yet let us consider the theory from another perspective. The genetic code is similar to an extraordinarily large computer program, about six billion bits to describe a human, in contrast to a few tens of millions of bits in the most complex computer programs. It is indeed a binary code, and we are slowly learning its digital language.15 The theory says that changes are introduced essentially randomly, and the changes are evaluated for retention by survival of the entire organism and its ability to reproduce. Yet a computer program controls not just the one characteristic that is being changed but literally millions of other characteristics. Survival of the fittest appears to be a rather crude technique capable of concentrating on at most a few fundamental characteristics at a time. While a few characteristics were being optimized, thousands of others could degrade through the increasing entropy of random change.16 If we attempted to improve our computer programs in this way, they would surely disintegrate.
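The entropy argument can be illustrated with a toy simulation: a “genome” of a thousand bits whose fitness depends on only the first ten, subjected to random mutation and survival of the fittest. Every parameter below (genome length, mutation rate, population size) is an arbitrary choice for illustration; the point is only to watch the few selected bits hold up while the unselected majority drifts toward randomness:

```python
# Toy model of selection on a few characteristics: fitness depends on
# the first SELECTED bits only, yet mutation strikes the whole genome.
# Starting from all-ones genomes, selection preserves the selected
# bits while the unselected majority decays toward 50% (entropy).
import random

random.seed(1)
GENOME, SELECTED, POP, GENS, RATE = 1000, 10, 50, 200, 0.001

def fitness(g):
    return sum(g[:SELECTED])  # only the first few bits matter

def mutate(g):
    # Flip each bit independently with probability RATE.
    return [b ^ (random.random() < RATE) for b in g]

pop = [[1] * GENOME for _ in range(POP)]  # begin fully "optimized"
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]                       # survival of the fittest
    pop = [mutate(g) for g in survivors for _ in (0, 1)]  # two offspring each

best = max(pop, key=fitness)
print("selected bits still set:", fitness(best), "of", SELECTED)
print("unselected bits still set:", sum(best[SELECTED:]), "of", GENOME - SELECTED)
```

Running this, the ten selected bits stay near their optimum while hundreds of the unselected bits have flipped, exactly the degradation the paragraph above describes. (Real genomes evidently avoid this fate, which is the puzzle the text goes on to raise.)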
The method we use to improve the programs we create is not the introduction of random code changes but carefully planned modifications, together with experiments designed to focus on the changes just introduced. It has been proposed that evolution itself has evolved to the point where changes are not entirely random but in some way planned, and that changes are “tested” in some way other than overall survival, in which a change just introduced would be competing with thousands or even millions of other factors.17
Yet no one can describe a mechanism in which such planning and isolated evaluation could take place in the process of evolution. There appears, therefore, to be a gap in the theory of evolution. Clearly, the fossil and biochemical evidence is overwhelming that species have indeed undergone a slow but dramatic evolution in complexity and sophistication, yet we do not fully understand the mechanism. The proposed mechanism seems unlikely to work; its designs should disintegrate through increasing entropy.
One possible perspective would hold that the creator of an intelligence is inherently superior to the intelligence it creates. At first glance this perspective seems well supported: the intelligence of evolution appears vast. Yet is it?
While it is true that evolution has created some extraordinary designs, it is also true that it took an extremely long period of time to do so. Is the length of time required to solve a problem or create a design relevant to an evaluation of the level of an intelligence? Clearly it is. We recognize this by timing our intelligence tests. If someone can solve a problem in a few minutes, we consider that better than solving the same problem in a few hours or a few years. With regard to intelligence as an aid to survival, it is clearly better to solve problems quickly than slowly. In a competitive world we see the benefits of solving problems quickly.
Evolution has achieved intelligent work on an extraordinarily high level yet has taken an extraordinarily long period of time to do so. It is very slow. If we factor its achievements by its ponderous pace, I believe we shall find that its intelligence quotient is only infinitesimally greater than zero. An IQ of only slightly greater than zero is enough for evolution to beat entropy and create extraordinary designs, given enough time, in the same way that an ever so slight asymmetry in the physical laws may have been enough to allow matter to almost completely overtake antimatter.
The human race, then, may very well be smarter than its creator, evolution. If we look at the speed of human progress in comparison to that of evolution, a strong case can be made that we are far more intelligent than the ponderously slow process that created us. Consider the sophistication of our creations over a period of only a few thousand years. In another few thousand years our machines are likely to be at least comparable to human intelligence, and may well surpass it; humans will thus have clearly beaten evolution, achieving in a matter of thousands of years as much as or more than evolution achieved in several billion years. From this perspective, human intelligence may be greater than that of its creator.18