Artificial Intelligence in the World Wide Web

March 7, 2001 by David G. Stork

The Internet is a new metaphor for the human brain. It makes it possible for hundreds of millions of Web users to teach computers common-sense knowledge, similar to SETI@home’s search for E.T., says Dr. David G. Stork, a leading AI researcher. This can even be accomplished just by playing games on the Net.

Originally published March 7, 2001 on KurzweilAI.net.

The notion that artificial intelligence (AI) will arise in the World Wide Web is neither particularly new nor particularly insightful (once you accept the premise that intelligence can appear in non-biological hardware). Nevertheless, the suggestion deserves scrutiny, so that we can judge the assumptions underlying it and its overall plausibility, be alert to indications that it is or is not occurring, and, for some of us, suggest avenues to facilitate such a development.

First I should clarify, if only informally, what I mean by artificial intelligence in this context. I consider an artificial system “intelligent” if it can recognize patterns, discern their parts and relations, learn and remember, communicate through an abstract language, have common sense and other “informal” knowledge, reason and make subtle inferences, plan, infer the goals and desires of others, appreciate double meanings as in irony, puns or deception, and sense and respond to a changeable environment, all at the level of an average human.

You’ll note that I don’t demand such a system “be conscious” or “have a soul” or “experience qualia” (such as the “redness” or “sweetness” of a cherry or pain upon touching fire). Philosophers have debated these latter properties for quite some time. For the purposes here, we should be satisfied with an unconscious “zombie” that behaves intelligently. Let me acknowledge immediately that building artificially intelligent systems is surely one of the hardest problems in all of science.

Consider some of the models or metaphors for the brain of the last several hundred years. One of the earliest was due to René Descartes, who asked whether a complicated system of pipes, gears, pulleys and other mechanical contraptions could in principle think. Two centuries later the metaphor to gain some currency was the telephone switchboard; this properly acknowledged the role of the rich interconnections between neurons in the brain.

The next dominant metaphor was the digital computer, and indeed this metaphor has been so compelling that some computers have been called “thinking machines,” as was so well illustrated by the HAL 9000 computer in Stanley Kubrick’s 2001: A Space Odyssey. In the midst of the ascendancy of the computer-as-brain metaphor, there was a short, ill-fated digression in which a few scholars and an uncritical public thought that the hologram was an acceptable metaphor for the brain.

The Internet: New Metaphor for the Brain

We’re now entering a period of a new metaphor for the brain, indeed a new platform for development of intelligent systems: the Internet. There are many attributes of the Internet that make this metaphor compelling. The first is that the total computing power and data potentially accessible over the Internet is enormous, and growing every day–by some estimates already greater than that of a human brain.

The second is that the architecture of the Internet matches that of a brain more faithfully than does that of a traditional supercomputer. For instance, just as there is no “master” region of the brain, so too there is no centralized “master” or central processor guiding communication and computation on the Internet. Just as the neurons making up a brain are imprecise, faulty, and die, so too the personal computers and databases accessible over the Internet contain imprecise or contradictory data, have hardware and software faults, and are occasionally turned off or crash.

Just as brain neurons are richly interconnected and communicate with a simple code of neural spikes and electrical potentials, so too the computers on the Internet are richly interconnected and communicate using fairly simple protocols and languages such as TCP/IP and HTML. Just as a human brain gets information from sensors such as eyes and ears, so too, increasingly, is the Web becoming connected to sensors such as Webcams, Webmicrophones, telescopes, microscopes, barometers, as well as to personal digital assistants, cell phones and even kitchen appliances.

Just as the human brain controls muscles for grasping and locomotion, so too are manipulators being connected to the Internet for watering gardens, opening doors, pointing telescopes, and much more. No metaphor is perfect, of course, and there are several areas where the Internet is unlike a brain; nevertheless, the Internet seems to be the best current metaphor for the brain, increasingly supplanting the large mainframe computer in this regard.

While the structural similarities between the Internet and the brain may help enable the development of artificially intelligent systems in the Web, the most important impetus underlying such a development comes from economics and the value proposition of AI on the Web. Searching, sorting, and interpreting information on the Web is the “killer application” of AI, and hundreds of millions of people want it and would be willing to pay for it. A broad range of people would like to search for images or video clips on the Web based on visual content, to ask natural language questions and get summaries of large and diverse databases on the Web, and so on.

Users also want these systems to know their personal interests and preferences, the better to filter unwanted information such as spam and to alert them to new information of personal interest. Likewise, as more and more commerce appears on the Web, corporations will seek intelligent bots to find the best price on goods and services, alert lawyers to key provisions in online legal documents, and much, much more. NASA is not going to build an intelligent HAL-like computer to run spaceships, but Web content providers, search engine companies, Web portal companies, and a broad range of corporations making transactions on the Web all strongly desire to add artificial intelligence to their systems.

As mentioned, the computational resources potentially available over the Internet are immense, and ever more frequently these resources are being used for large collaborative projects. One of the earliest and most noteworthy is SETI@home, where (at present) three million individual computers have contributed the equivalent of 60,000 years of a single personal computer’s time to digitally filtering radio telescope signals in search of indications of extraterrestrial intelligence. A similar project is AIDS@home, which assists in the discovery of AIDS therapies.

Several startup companies are trying to commercialize such distributed computing as well. For example, Entropia Corporation distributes large computing tasks of its (paying) client corporations to the networked personal computers of participating individuals. Such individuals are motivated to donate time on their computers because a portion of this collective computing resource is directed to philanthropic projects of their choice.

Such raw computational power is but one requirement for intelligent systems–one that frankly I feel has been overrated. Moore’s law–that on virtually all criteria such as speed, performance, and cost, computer hardware improves by a factor of two roughly every 18 months–is indeed the rising tide that lifts all boats in the information age.
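To make the scale of that improvement concrete, here is a minimal Python sketch of the arithmetic. The 18-month doubling period is taken from the text; the time horizons are illustrative assumptions.

```python
# A minimal sketch of the arithmetic behind Moore's law as stated above:
# a doubling roughly every 18 months. The horizons chosen are illustrative.

DOUBLING_PERIOD_YEARS = 1.5  # 18 months

def improvement_factor(years: float) -> float:
    """Cumulative hardware improvement after `years` of steady doubling."""
    return 2.0 ** (years / DOUBLING_PERIOD_YEARS)

if __name__ == "__main__":
    for horizon in (3, 10, 20):
        print(f"{horizon:>2} years -> roughly {improvement_factor(horizon):,.0f}x better hardware")
```

Over two decades, steady doubling every 18 months compounds to a factor of roughly ten thousand, which is what makes the contrast with software so striking.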

Software, however, obeys no such equivalent law of improvement. It is hard to argue that software such as the UNIX operating system or even proprietary applications such as spreadsheets, word processing programs or AI systems such as speech recognizers have improved significantly over the last two decades–surely they haven’t improved nearly as much as hardware.

Current supercomputers have the computational power of a fly’s nervous system; nevertheless, despite the existence of reconstructions of the fly’s neural system and much algorithmic effort, we still lack the software to duplicate a fly’s ability to recognize objects, perform rapid flight control, and identify substances by smell or taste. In short, software, more than hardware, is the bottleneck associated with the construction of most AI systems.

A key ingredient needed for the development of AI software is data. Indeed, there is theoretical and experimental evidence that it is the lack of data that is retarding development of many systems such as speech recognizers, handwritten character recognizers, common sense reasoners and many others.

For instance, state-of-the-art speech recognizers are trained with hundreds or thousands of hours of representative spoken words along with their corresponding written transcriptions. Traditionally, in a commercial setting, knowledge engineers enter such data by hand–the more the better–and the resulting database is then a vital, guarded corporate asset.
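As a concrete picture of what such a paired corpus looks like, here is a minimal Python sketch; the field names and helper function are illustrative assumptions, not any particular vendor’s format.

```python
# A minimal sketch of a paired speech corpus: each entry couples recorded
# audio with its human-typed transcription. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class SpeechExample:
    audio_path: str         # e.g. one recorded utterance (a WAV file)
    transcription: str      # the corresponding text, typed by a person
    duration_seconds: float

def total_hours(corpus: list[SpeechExample]) -> float:
    """Hours of transcribed speech -- the quantity the text says recognizers
    need hundreds or thousands of."""
    return sum(example.duration_seconds for example in corpus) / 3600.0
```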

In the public arena, the Defense Advanced Research Projects Agency has funded the work of the Linguistic Data Consortium, which has collected, transcribed and processed a wealth of linguistic data–everything from spoken Mandarin to parts of speech in sentences from several languages. Such data is distributed widely and has become vital to the development of many artificial speech recognizers and language systems, and the more high-quality data is available, the higher the performance of these systems.

In a few domains–particularly finance, commerce and sports–some of the relevant data can be extracted from the Web by data mining. In many other important cases, however, the data simply doesn’t exist on the Web–for instance, handwritten characters with their transcriptions, or common sense knowledge. To see the problem, read the following two sentences, selected nearly randomly from the Web:

“Finding good places to eat with little ones can be difficult when traveling. You might even want to consider choosing a drive-in if you’re traveling with a very fussy baby.”

Now consider the ambiguities in that passage (a short sketch after the list makes the attachment problem concrete):

  • “Finding good places to eat”? I sometimes have difficulty finding good apples to eat. I don’t want to eat a place; I’d rather be at a place and eat some food. How would a computer know the author means places at which to eat, rather than places to be eaten?
  • “Finding good places to eat with little ones”? I don’t care if the place has “little ones” at it; in fact I prefer to go to a place that doesn’t have any little ones. How would a computer know that in this sentence “with little ones” modifies “Finding” and not instead “good places”?
  • “eat with little ones”? I eat my sushi with chopsticks; I eat my sandwich with soup. How would I eat a place by means of “little ones” or while also eating “little ones”? Furthermore, how would a computer know the author meant here that “little ones” means “little people” and not little places?
  • “traveling”? Does this apply to traveling on a cruise ship? an airplane? a train? a submarine? or while committing the foul in basketball of walking with the ball without dribbling it?
  • “very fussy”? Very fussy about what? food? place?
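
To make the attachment problem concrete, here is a minimal Python sketch–not a parser–that simply enumerates the competing readings a program must choose among; the phrase decomposition and glosses are illustrative assumptions.

```python
# A minimal sketch (not a full parser) enumerating the attachment readings
# of "finding good places to eat with little ones". Glosses are illustrative.

READINGS = [
    ("'with little ones' modifies 'finding'",
     "finding, while accompanied by small children, good places at which to eat"),
    ("'with little ones' modifies 'places'",
     "finding good places that have small children at them"),
    ("'with little ones' modifies 'eat' (instrument or co-diner)",
     "finding good places at which to eat by means of, or along with, little ones"),
    ("'little ones' read as 'little places'",
     "finding good places to eat, together with other, smaller places"),
]

def enumerate_readings():
    """Print each syntactically legal attachment and its (often absurd) gloss.

    Ruling out all but the first reading requires common sense knowledge --
    e.g. that people eat food rather than places, and that "little ones"
    usually means children -- which is exactly the data the text argues
    is missing from the Web.
    """
    for attachment, gloss in READINGS:
        print(f"{attachment}:\n    {gloss}\n")

if __name__ == "__main__":
    enumerate_readings()
```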

These kinds of ambiguities pervade the information on the Web, and indeed all writing and discourse. We use tacit, common sense knowledge in order to understand such sentences, and so must any artificial language understanding system. The needed data is called “tacit” or “informal” because we are rarely aware of it and we learn it indirectly through experience.

Everyone knows that “When you’re dead you stay dead,” “Animals run faster forward than sideways,” and “A mother is always older than her biological son,” even though these common sense facts were not explicitly taught to us in formal settings such as schools. Most importantly, up to now, such common sense information has never been collected in, nor made accessible through, any database on the Web.

Open Mind Initiative

Where might a computer system get such “informal” information needed to understand those two apparently simple sentences? One company, Cycorp, has invested nearly 350 person-years of effort over 17 years to enter such common sense information by hand.

Another way, however, is to use the Web itself for collecting data contributed by non-expert Web users or “netizens”; this is the approach of the Open Mind Initiative, a novel world-wide collaborative effort to help develop intelligent software. The Open Mind Initiative collects information from netizens in order to teach computers the myriad things which we all know and which underlie our general intelligence.

The Initiative extends two important trends in the development of software: the increasing number of collaborators per software project, and the decreasing average expertise of each contributor. In principle, hundreds of millions of Web users could contribute informal information through the Open Mind Initiative, and they need no more “expertise” than knowing how to point and click in a Web browser.

In the Open Mind commonsense project, for instance, netizens answer simple questions, fill in the blanks, describe the relation between two words, describe pictures, and so forth, all in order to build a database of common sense facts. These could be used in future bots, search engines, smart games or other “intelligent” software.

Likewise, in the Open Mind Initiative’s handwriting recognition project, netizens view handwritten characters or handwritten words in a browser and type in a transcription. In all the Initiative’s projects, data acquisition is accelerated by a technique called “interactive learning,” where the tasks that are most informative to the classifier or AI system are automatically presented to netizens.
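
One standard way to realize such interactive learning is uncertainty sampling: present to netizens the items the current classifier is least confident about, since a human label on those teaches the system the most per click. The Python sketch below is an illustrative assumption about how that selection could be written, not the Initiative’s actual code.

```python
# A minimal sketch of "interactive learning" via uncertainty sampling.
# The classifier interface and sample type are illustrative assumptions.

from typing import Callable, Sequence, TypeVar

Sample = TypeVar("Sample")

def select_most_informative(
    samples: Sequence[Sample],
    confidence: Callable[[Sample], float],
    batch_size: int = 10,
) -> list[Sample]:
    """Return the samples the current classifier is least sure about.

    These are the items worth showing to netizens for labeling.
    """
    ranked = sorted(samples, key=confidence)  # least confident first
    return list(ranked[:batch_size])

# Usage, assuming `classifier.confidence(x)` returns a score in [0, 1]
# for an unlabeled handwritten character image `x`:
#
#   batch = select_most_informative(unlabeled_images, classifier.confidence)
#   present_to_netizens(batch)   # collect transcriptions, then retrain
```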

There are other collaborative projects in which netizens freely contribute information, but none quite like the Open Mind Initiative. For instance, NASA has begun a project in which netizens (whom they refer to as “clickworkers”) classify satellite images online. However, the NASA project does not exploit the power of interactive learning, and the contributed information is not used to train intelligent software as in the Open Mind Initiative.

Similarly, there are several collaborative projects in which netizens contribute short articles to an online encyclopedia, open to all and available for revision or amendment. Here too, there is no training of classifiers or AI software as in the Open Mind Initiative.

Given the networked hardware resources, emerging data for training intelligent software, and most importantly a strong value proposition and identified customer base, how might intelligent systems develop on the Internet? A crucial step will be a growing number of frameworks for sharing learned information–not merely the raw databases, but the common sense and informal knowledge not currently on the Web. This knowledge may reside in repositories (as might arise in the Open Mind Initiative) or come as data from specialists. Bots and search engines will dip into this growing self-monitoring system.

Fun and Games

One development that was anticipated fully by few if any futurists or science fiction writers is this: the emergence of fun and games as a driver of technology. Neither Jules Verne nor H. G. Wells nor Arthur C. Clarke nor Philip K. Dick nor any but a handful of science fiction authors appreciated that fun and games act as an important impetus for improving and extending technology.

The fact that the gross income of the videogame industry exceeds that of Hollywood and the feature film industry, that an astonishing proportion of young children (and many adults) own a Game Boy, that some children, barely able to speak, spend hours “feeding” and otherwise tending a 256-pixel image of a Pokémon on a tiny wristwatch-borne LCD screen, all would have surprised pre-1970 visionaries and science fiction writers alike. Of course, once electronic games exploded onto the scene, writers and futurists were quick to incorporate them into their visions, as for instance did Orson Scott Card in Ender’s Game (1985).

We should not again underestimate the power of fun and games. Who knows what games will be played on the Internet on PCs and on smarter portable peripherals, personal digital assistants and cell phones? Just as children once played scavenger hunt or picked up a stick, tennis ball, piece of rope or tin can in the backyard to invent an informal game, perhaps someday there will be software tools and sophisticated trainable bots that allow players to romp in cyberspace using resources and data that are not nominally “part of a game.”

Imagine a chase game, with virtual opponents careening through databases (e.g., finding the current temperature in Oslo), collecting data from sensors (e.g., a live Webcam photo of the Eiffel Tower), controlling manipulators (e.g., a Web-connected robot watering a garden) in games we can now only vaguely imagine. Building such a game would be a challenge since the game would compete with commercial games for players’ attention. On the other hand, simple games such as Solitaire and Tetris manage to capture hundreds of trillions of mouse clicks worldwide so perhaps this challenge isn’t so daunting after all. The more adaptive and intelligent such games, the more we can expect them to attract players.

Conversely, as described for the Open Mind Initiative, the more Web users play a game, the more information will be learned “in the background”–information that could be used in intelligent systems. For instance, imagine an online version of Dungeons and Dragons in the Open Mind Initiative in which players answer questions, fill in the blanks, and so on, all the time unconsciously providing common sense and linguistic information to a database. A compelling game might attract players for hundreds of millions of hours in total, and given proper algorithmic safeguards on data quality, the resulting database could be used to improve a range of AI systems.
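
One plausible form those “algorithmic safeguards on data quality” could take is simple agreement filtering: accept a common-sense assertion only after enough independent players converge on the same answer to the same in-game prompt. The Python sketch below is an illustrative assumption, not any existing game’s design; the thresholds are arbitrary.

```python
# A minimal sketch of one plausible safeguard on data quality: accept a
# fact only when enough independent players agree on it. The thresholds
# and data shapes are illustrative assumptions.

from collections import Counter, defaultdict

class BackgroundFactCollector:
    def __init__(self, min_answers: int = 5, min_agreement: float = 0.8):
        self.min_answers = min_answers
        self.min_agreement = min_agreement
        self.answers: dict[str, Counter] = defaultdict(Counter)

    def record(self, prompt: str, player_answer: str) -> None:
        """Log one player's fill-in-the-blank answer for a prompt."""
        self.answers[prompt][player_answer.strip().lower()] += 1

    def accepted_facts(self) -> dict[str, str]:
        """Return prompts whose most common answer has broad agreement."""
        facts = {}
        for prompt, counts in self.answers.items():
            total = sum(counts.values())
            answer, votes = counts.most_common(1)[0]
            if total >= self.min_answers and votes / total >= self.min_agreement:
                facts[prompt] = answer
        return facts

# Usage: collector.record("A mother is always ___ than her biological son", "older")
# repeated across many players; collector.accepted_facts() yields the vetted entries.
```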

What might be other indications that artificial intelligence is developing on the Internet? It is a truism that bots–software that can move throughout the Web, read and process data on Websites–will become increasingly powerful and intelligent. Currently bots find the best deal on an automobile or home loan or alert you to the publication of a book by a favorite author or news stories about your home town. Bots will learn more about your interests and preferences as you let them monitor your behavior online.

Likewise, search engine companies will improve and add more natural interfaces and use far more knowledge–and intelligence–based reasoning to searches and allow you to ask questions of databases. An interesting development will be when a compelling game is written in which data can be collected, as described above.

How does the evolution of artificial intelligence on the Internet compare with other approaches? There have been interesting efforts at building humanoid robots that interact with the world and people. The general philosophy underlying the Cog project at MIT, for instance, is that intelligence arises in a collection of modules more or less specialized for different tasks–seeking novelty, seeing faces, grasping and moving, and so forth–and that a close link between perception and action allows these systems to learn from experience.

There are many attractive aspects of this research program, and I think this general “mixture-of-agents” architecture is promising indeed. My concern is that once something is learned by such an “embodied” robot, it is difficult to transfer that knowledge to other systems. For instance, if a humanoid robot learns how to reach and grasp based on visual and tactile feedback, that specific knowledge would be hard to transfer to a robot with a different number and type of manipulators.

Moreover, far less collaborative effort and computing power can be brought to such a project than to Web-based AI. Most importantly, the financial incentives for building such humanoid robots pale in comparison to the immense incentives associated with Web-based intelligence.

I must stress that I don’t feel that large datasets alone are sufficient, nor that the development of AI on the Web is imminent. Our field has suffered mightily both in funding and public understanding due to overly zealous evangelists spouting hype, particularly during AI’s early years.

We cannot allow that to happen again. Building cognitive systems is astoundingly difficult, and most people–including many scientists–do not realize the magnitude of the problem. Some do not even realize the existence of problems such as automatic scene analysis, that is, making a computer system that could describe pictures such as those that appear in a magazine.

Anyone who thinks deeply about this problem for an hour or so will realize this is one of the most profoundly hard problems in all of science, surely harder than putting a man on the moon or Mars. While we can all applaud the exponential growth in computing power, the problems lie more, I believe, in getting proper data, computer representation and software, none of which have improved at the rate hardware has. We do have, though, a sufficiently strong value proposition and compelling “killer application,” or at least “killer arena”–the Internet–and this gives me guarded optimism.

In summary, we are passing beyond the era in which the Internet merely supplanted the large mainframe computer as the best metaphor and model for mind; the Internet is now becoming a platform and commercial arena for the development of AI. Collaborative projects–harvesting networked computer power and, increasingly, networked human brain power–are building blocks in the grand effort to build AI on the Web.