Machine Intelligence: The First 80 Years

August 6, 2001 by Ray Kurzweil

A brief history of machine intelligence written for “The Futurecast,” a monthly column in the Library Journal.

Originally published August 1991. Published on KurzweilAI.net August 6, 2001.

A new form of intelligence has recently emerged on Earth. To assess the impact of this most recent branch of evolution, let’s take a quick journey through its first 80 years.

Drawing upon a diversity of intellectual traditions and fueled by the exigencies of war, the first computers were developed independently and virtually at the same time in three different countries, one of which was at war with the other two. The first operational computer, developed by Alan Turing and his English colleagues in 1940, was named Robinson, after a popular cartoonist who drew Rube Goldberg machines. Turing’s computer was able to decipher the German “Enigma” code and is credited with enabling the Royal Air Force to win the Battle of Britain and withstand the Nazi war machine.

Turing’s agenda

The similarity of computer logic to at least some aspects of our thinking process was not lost on Turing, and he is credited with establishing much of the theoretical foundation of computation, as well as the vision of applying this new technology to the emulation of intelligence.

In his classic 1950 paper, “Computing Machinery and Intelligence,” Turing lays out an agenda that would in fact occupy the next century of advanced computer research: game playing, decision making, natural language understanding, translation, theorem proving, and, of course, the cracking of codes.

Turing went on to predict that by early in the next century society would simply take for granted the pervasive intervention of intelligent machines in all phases of life, and that people would speak routinely of machines making critical intelligent decisions without anyone thinking it strange.

The Gulf War of 1991 offered perhaps the first dramatic example of the increasingly dominant role of machine intelligence. The cornerstones of military power from the beginning of recorded history through most of the 20th century–geography, manpower, firepower, and battle-station defenses–were largely replaced by the intelligence of software and electronics. Intelligent scanning by unstaffed airborne vehicles; weapons finding their way to their destinations through machine vision and pattern recognition; intelligent communications and coding protocols; and other manifestations of the information age began to rapidly transform the nature of war.

Infiltrated by machine intelligence

By the end of the 1980s, we also saw the pervasive infiltration of our financial institutions by machine intelligence. Not only were the stock, bond, currency, commodity, and other markets managed and maintained by computerized networks, but the majority of buy-and-sell decisions were initiated by software programs that contained increasingly sophisticated models of their markets. The 1987 stock market crash was blamed in large measure on the rapid interaction of trading programs. Trends that otherwise would have taken weeks to manifest themselves developed in minutes. Suitable modifications to these algorithms have managed to avoid a repeat performance.

Since 1990, your electrocardiogram (ECG) has come complete with the computer’s own diagnosis of your cardiac health. Intelligent image-processing programs have enabled doctors to peer deep into your body and brain, and computerized bioengineering technology has enabled drugs to be designed on biochemical simulators.

The world of music had been transformed through intelligent software and electronics. Not only were most sounds heard in recordings and sound tracks generated by intelligent signal processing algorithms, but the lines of music themselves were increasingly an assemblage of both human and computer-assisted improvisation.

The handicapped have been particularly fortunate beneficiaries of the age of intelligent machines. Reading machines have been reading to blind and dyslexic persons since the 1970s, and speech recognition and robotic devices have been assisting the hand-impaired since the 1980s.

Taking it all for granted

With the increasingly important role of intelligent machines in all phases of our lives–military, medical, economic, political–it was odd to keep reading articles with titles such as “Whatever Happened to Artificial Intelligence?” This was a phenomenon that Turing predicted: that machine intelligence would become so pervasive, so comfortable, and so well integrated into our information-based economy that people would fail even to notice it.

It reminds me of people who walk in the rain forest and ask, “Where are all these species that are supposed to live here?” when there are a hundred species of ant alone within 50 feet of them. Our many species of machine intelligence have woven themselves so seamlessly into our modern rain forest that they are all but invisible.

Turing also offers an explanation of why we would fail to acknowledge intelligence in our machines. In 1947, he writes:

The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behavior we have little temptation to imagine intelligence. With the same object, therefore, it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behavior.

I am also reminded of Elaine Rich’s definition of artificial intelligence (AI) as the “study of how to make computers do things at which, at the moment, people are better.”

The 90s: paperless books

Now it was in the 1990s that things started to get interesting. The nature of books and other written documents underwent several transformations during this decade. In the early 1990s, written text began to be created on voice-activated word processors. By the mid-1990s, we saw the advent of paperless books with the introduction of portable and wireless displays that had the resolution and contrast qualities of paper.

LJ ran a series of articles on the impact on libraries of books that no longer required a physical form. One of these articles pointed out that despite paperless publishing and the so-called paperless office, the use of paper continued to increase: American use of paper for books and other documents grew from 850 billion pages in 1981 to 2.5 trillion pages in 1986 to 6 trillion pages in 1995.

The nature of a document also underwent substantial change. Documents now routinely included voice, music, and other sound annotations. The graphic part of documents became more flexible: fixed illustrations turned into animated pictures. Documents included the underlying knowledge and flexibility to respond intelligently to the inputs and reactions of the reader. The “pages” of a document were no longer necessarily ordered sequentially; they became capable of forming intuitive patterns that reflected the complex web of relationships among ideas.

Communications were also transformed. Late in the decade, we saw the first effective translating telephones demonstrated, although the service was not yet routinely offered. Both the recognition and the translation were far from perfect, but they appeared to be usable.

We also saw the introduction of listening machines for the deaf, which converted human speech into a visual display of text–essentially the opposite of reading machines for the blind. Another population that benefited from these early AI technologies was paraplegic individuals, who could now walk using exoskeletal robotic devices controlled through a specialized cane.

As we entered the first decade of the 21st century, the translating telephones demonstrated late in the last century began to be offered by the telephone companies competing for international customers. The quality varied considerably from one pair of languages to another. Even though English and Japanese are very different in structure, this pair of languages appeared to offer the best performance, although translation among different European languages was close. Outside of English, Japanese, and a few European languages, performance fell off dramatically.

The Disabled Act of 2004

The output displays for the listening machines for the deaf were now built into the user’s eyeglasses, essentially providing subtitles on the world. Specific funding was included in the Omnibus Disabled Act of 2004 to provide these sensory aids for deaf persons who could not afford them, although complex regulations on verifying income levels slowed the implementation of this program.

The standard personal computers of 2005 were now palmtop devices that combined unrestricted speech recognition with handprint and gesture recognition as primary input modalities. They also included knowledge navigators with two-way voice communication and customizable personalities.

TIM technology arrives

Everyone recalls the flap when TIM was first introduced. TIM, which stands for Turing’s IMage, was created at the University of Texas and was presented as the first computer to pass the Turing Test. The Turing Test, first described by Turing in the same 1950 paper mentioned above, involves a computer that attempts to fool a human judge into thinking that it, rather than a human “foil,” is the real human.
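
Turing’s imitation game is easy to state procedurally. As a minimal sketch (and only that), the Python below illustrates the protocol described above: a judge poses questions to two hidden respondents, one human and one machine, and then guesses which is which. Every name in it (Respondent, run_turing_test, and so on) is invented for this illustration and does not refer to TIM or to any real system.

```python
import random

class Respondent:
    """A conversation partner whose replies the judge sees only as text."""
    def __init__(self, name, reply_fn):
        self.name = name          # hidden from the judge
        self.reply_fn = reply_fn  # maps a question string to an answer string

    def answer(self, question):
        return self.reply_fn(question)

def run_turing_test(questions, judge_verdict, human, machine):
    """Ask both respondents the same questions in a hidden, random order;
    return True if the judge mistakes the machine for the human."""
    a, b = random.sample([human, machine], 2)      # conceal which is which
    transcript = [(q, a.answer(q), b.answer(q)) for q in questions]
    guess = judge_verdict(transcript)              # judge returns "A" or "B"
    chosen = a if guess == "A" else b
    return chosen is machine                       # the machine "passes" if chosen
```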

The researchers claimed that they had even exceeded Turing’s original challenge because you could converse with TIM by voice rather than through terminal lines as Turing had originally envisioned. In an issue of LJ devoted to the TIM controversy, Hubert Dreyfus, the persistent critic of the AI field, dismissed the original announcement of TIM as the usual hype we have come to expect from the AI community.

Eventually, even AI leaders rejected the claim, citing the selection of a human “judge” unfamiliar with the state of the art in AI and the fact that not enough time had been allowed for the judge to interview the computer and the human. TIM became, however, a big hit at Disney World, where 2,000 TIMs were installed in the Microsoft Pavilion.

The TIM technology was subsequently integrated into artificial reality systems (computerized systems with visual goggles and headphones that enable the wearer to enter and interact with an artificial world) that had already revolutionized the educational field, not to mention the game industry. Artificial reality with integrated conversational capabilities became quite controversial. One radical school of thought questioned the need for books on history when you could now go back and actually participate in historical events yourself.

Rather than read about the Constitutional Convention, a student could now debate a simulated Ben Franklin on executive war powers, the role of the courts, or any other issue. An LJ editorial pointed out that books provided needed perspective rather than just experiences. It is hard now to recall that the medium they used to call television was itself controversial in its day, despite the fact that it was of low resolution, two-dimensional, and noninteractive. Artificial reality was still a bit different from the real thing, though: the artificial people still seemed a bit stilted in their ability to really understand what you were saying.

The vision from 2020

So here we are in the year 2020. Translating telephones are now used routinely, and, while the languages available are still limited, there is now more choice with reasonable performance for Chinese and Korean.

The knowledge navigators available on today’s personal computers, unlike those of ten years ago, can now interview humans in their search for knowledge instead of just other computers. People use them as personal research assistants.

Communications are quite a bit different from the days when phone calls went through wires and the old television medium went through the air. Now everyone is online all the time, with low-bandwidth communication (like voice) carried through cellular radio. This has strained the conventions of phone courtesy, as it is now difficult to be “away” from your phone. High-resolution communication, such as moving three-dimensional holographic images, now goes through wires–fiber-optic wires, of course. Japan did beat us by six years in laying down a fiber-optic information highway, but the American system is now second to none.

The listening systems in your eyeglasses, originally developed for the deaf, are now routinely used by almost everyone, hearing impaired or not, as they also include language translation capabilities and optional commentaries on what you see through them. Artificial reality is now much more lifelike, and there has been a recent phenomenon of people who spend virtually all their time in artificial reality and do not want to come out. What to do about this is a topic of considerable recent debate.

The University of Texas has announced a new version of TIM, which has received a more enthusiastic reception from AI experts. Marvin Minsky, one of the fathers of AI, who was contacted at his retirement home in Florida, hailed the development as the realization of Turing’s original vision. Dreyfus, however, remains unconvinced and recently challenged the Texas researchers to use him as the human judge in their experiments.

And in the New York Times this morning, there was a front-page article entitled, “Whatever Happened to Artificial Intelligence?”

Reprinted with permission from Library Journal, August 1991. Copyright © 1991, Reed Elsevier, USA
