Foreword to Virtual Humans

October 20, 2003 by Ray Kurzweil

By the end of this decade, we will have full-immersion visual-auditory environments, populated by realistic-looking virtual humans. These technologies are evolving today at an accelerating pace, as reflected in the book Virtual Humans. By the 2030s, virtual reality will be totally realistic and compelling and we will spend most of our time in virtual environments. By the 2040s, even people of biological origin are likely to have the vast majority of their thinking processes taking place in nonbiological substrates. We will all become virtual humans.

To be published in Virtual Humans, AMACOM, November 2003. Published on KurzweilAI.net October 20, 2003.

If you ask what is unique about the human species, you’re likely to get a variety of responses, including use of language, creation of technology, even the wearing of clothes. In my mind, the most salient distinguishing feature of the leadership niche we occupy in evolution is our ability to create mental models. We create models of everything we encounter, from our experiences to our own thinking. The ancient art of storytelling modeled our experiences; it evolved into theater and the more modern art of cinema.

Science represents our attempts to create precise mathematical models of the world around us. Our inclination to create models is culminating in our rapidly growing efforts to create virtual environments and to populate these artificial worlds with virtual humans.

We’ve had at least one form of virtual reality for over a century: it’s called the telephone. To people in the late nineteenth century, it was remarkable that you could “be with” someone else without actually being in the same room, at least as far as talking was concerned. That had never happened before in human history. Today, we routinely engage in this form of auditory virtual reality at the same time that we inhabit “real” reality.

Virtual humans have also started to inhabit this virtual auditory world. If you call British Airways, you can have a reasonably satisfactory conversation with their virtual reservation agent. Through a combination of state-of-the-art, large-vocabulary, over-the-phone speech recognition and natural language processing, you can talk to their pleasant-mannered virtual human about anything you want, as long as it has to do with making reservations on British Airways flights.
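To make the mechanics concrete, here is a minimal sketch of the intent-and-slot pattern that task-oriented dialog agents of this kind follow. Everything in it is hypothetical (it is not British Airways’ system), and typed text stands in for the output of the speech recognizer:

```python
# Toy sketch of a slot-filling reservation dialog; all names are invented.
# In a real system a speech recognizer would supply the text we read here.

SLOTS = ["origin", "destination", "date"]  # the facts a booking needs

PROMPTS = {
    "origin": "Which city are you flying from?",
    "destination": "Where would you like to go?",
    "date": "What day would you like to travel?",
}

def extract(utterance):
    """Stand-in for natural language processing: naively take the last word."""
    tokens = utterance.strip().split()
    return tokens[-1] if tokens else None

def dialog():
    booking = {}
    for slot in SLOTS:
        while slot not in booking:          # keep asking until the slot is filled
            value = extract(input(PROMPTS[slot] + " "))
            if value:
                booking[slot] = value
    return booking

if __name__ == "__main__":
    print("Booked:", dialog())
```

A production agent differs mainly in scale: statistical recognition and parsing replace the naive extract step, but the fill-the-slots loop is much the same.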

On the Web, we’ve added at least a crude version of the visual sense to our virtual environments, albeit low-resolution and encompassing only a small portion of our visual field. We can enter visual-auditory virtual environments (e.g., Internet-based videoconferencing) with other real people. We can also engage in interactions with an emerging genre of Web-based virtual personalities with a visual presence incorporating real-time animation. There are also a number of virtual worlds with animated avatars representing participants.

My own “female alter ego,” named Ramona, has been gathering a following on our Web site, KurzweilAI.net, for over two years. Like a number of other emerging “avatars” on the Web, Ramona is a virtual human who works for a living. Aside from demonstrating real-time animation and language-processing technologies, she is programmed with knowledge of our Web site’s content and acts as an effective Web hostess.
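As a rough illustration of what being “programmed with knowledge of our Web site’s content” can mean, consider retrieval by word overlap: score each snippet of site text against the visitor’s question and answer from the best match. The sketch below makes that assumption; the content strings and the scoring are invented, not Ramona’s actual engine:

```python
# Sketch of a retrieval-style Web hostess; hypothetical, not Ramona's engine.
import string

SITE_CONTENT = {
    "about":  "KurzweilAI.net explores accelerating intelligence and emerging technology.",
    "ramona": "Ramona is an animated virtual hostess who answers questions about this site.",
    "books":  "Articles and book excerpts on artificial intelligence and virtual humans.",
}

def words(text):
    """Lowercase and strip punctuation so 'Ramona?' matches 'Ramona'."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def answer(question):
    """Return the snippet sharing the most words with the question."""
    q = words(question)
    best = max(SITE_CONTENT.values(), key=lambda text: len(q & words(text)))
    return best if q & words(best) else "Could you rephrase that?"

print(answer("Who is Ramona?"))  # -> the "ramona" snippet
```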

By the end of this decade, we will have full-immersion visual-auditory environments, with images written directly onto our retinas by our eyeglasses and contact lenses. All of the electronics for the computation, image reconstruction, and very-high-bandwidth wireless connection to the Internet will be embedded in our glasses and woven into our clothing, so computers as distinct objects will disappear. We will be able to enter virtual environments that are strikingly realistic recreations of earthly environments (or strikingly fantastic imaginary ones) either by ourselves or with other “real” people.

Also populating these virtual environments will be realistic-looking virtual humans. Although these circa-2010 virtual humans won’t yet pass the Turing test (i.e., we won’t mistake them for biological humans), they will have reasonable facility with language. We’ll interact with them as information assistants, virtual sales clerks, virtual teachers, entertainers, even lovers (although this last application won’t really be satisfactory until we achieve convincing emulation of the tactile sense).

Virtual reality and virtual humans will become a profoundly transforming technology by 2030. By then, nanobots (robots the size of human blood cells or smaller, built with key features at the multi-nanometer scale, a nanometer being a billionth of a meter) will provide fully immersive, totally convincing virtual reality in the following way. The nanobots take up positions in close physical proximity to every interneuronal connection coming from all of our senses (e.g., eyes, ears, skin). We already have electronic devices that can communicate with neurons in both directions without requiring any direct physical contact with them.

For example, scientists at the Max Planck Institute have developed “neuron transistors” that can detect the firing of a nearby neuron or, alternatively, cause a nearby neuron to fire or suppress it from firing. This amounts to two-way communication between neurons and the electronic neuron transistors. The Institute’s scientists demonstrated their invention by controlling the movement of a living leech from their computer.

Nanobot-based virtual reality is not yet feasible at the required size and cost, but we have made a good start in understanding the encoding of sensory signals. For example, Lloyd Watts and his colleagues have developed a detailed model of the sensory coding and transformations that take place in the auditory processing regions of the human brain. We are at an even earlier stage in understanding the complex feedback loops and neural pathways of the visual system.
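To give “sensory coding” some concreteness, here is a minimal sketch of the stage such auditory models typically begin with: a bank of bandpass filters that, like the cochlea, decomposes sound into frequency channels. It assumes nothing about Watts’s actual implementation; the filter shapes and bands are illustrative only:

```python
# Cochlea-like filterbank sketch; illustrative only, not Lloyd Watts's model.
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000  # sample rate in Hz

def filterbank(signal, center_freqs):
    """Split a sound into frequency channels, as the cochlea does."""
    channels = []
    for fc in center_freqs:
        low, high = fc / 2 ** 0.25, fc * 2 ** 0.25   # half-octave band around fc
        b, a = butter(2, [low / (FS / 2), high / (FS / 2)], btype="band")
        channels.append(lfilter(b, a, signal))
    return np.array(channels)

t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 440 * t)                   # one second of a 440 Hz tone
bands = filterbank(tone, [250, 500, 1000, 2000])
print([round(float(np.abs(ch).mean()), 4) for ch in bands])  # 500 Hz channel dominates
```

The 500 Hz channel responds most strongly to the 440 Hz tone, a toy version of the place coding the auditory nerve carries to the brain.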

When we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from the real senses, and replace them with the signals that would be appropriate for the virtual environment. You (i.e., your brain) could decide to cause your muscles and limbs to move as you normally would, but the nanobots again intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move and provide the appropriate movement and reorientation in the virtual environment.
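Stripped of the biology, the routing described above is a switch, and its logic can be sketched in a few lines of toy code. Everything below is invented for illustration (no such neural interface exists today): in virtual mode, real sensory signals are suppressed in favor of a virtual feed, and motor commands are redirected from the real limbs to virtual ones:

```python
# Toy model of the intercept-and-replace routing described above.
# Entirely speculative illustration; no real neural interface is implied.

class NanobotRelay:
    def __init__(self):
        self.virtual_mode = False
        self.virtual_feed = {}   # sense -> signal supplied by the virtual environment

    def sensory_input(self, sense, real_signal):
        """Pass real signals through, or substitute the virtual feed."""
        if self.virtual_mode:
            return self.virtual_feed.get(sense, "(silence)")
        return real_signal

    def motor_output(self, command):
        """Route movement commands to the real or the virtual body."""
        target = "virtual limbs" if self.virtual_mode else "real limbs"
        return f"{command} -> {target}"

relay = NanobotRelay()
print(relay.sensory_input("vision", "your living room"))  # real reality passes through
relay.virtual_mode = True
relay.virtual_feed["vision"] = "a beach on a virtual Mars"
print(relay.sensory_input("vision", "your living room"))  # real input suppressed
print(relay.motor_output("take a step"))                  # virtual limbs move instead
```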

The Web will provide a panoply of virtual environments to explore. Some will be recreations of real places; others will be fanciful environments that have no “real” counterpart. Some would indeed be impossible in the physical world (perhaps because they violate the laws of physics). We will be able to “go” to these virtual environments by ourselves, or we will meet other people there, both real people and virtual people.

By 2030, going to a Web site will mean entering a full-immersion virtual-reality environment. In addition to encompassing all of the senses, these shared environments could include emotional overlays, since the nanobots will be capable of triggering the neurological correlates of emotions, sexual pleasure, and other derivatives of our sensory experience and mental reactions.

In the same way that people today beam their lives from Web cams in their bedrooms, “experience beamers” circa 2030 will beam their entire flow of sensory experiences, and if so desired, their emotions and other secondary reactions. We’ll be able to plug in (by going to the appropriate Web site) and experience other people’s lives as in the plot concept of “Being John Malkovich.” Particularly interesting experiences could be archived and relived at any time.

By 2030, there won’t be a clear distinction between real and virtual people. “Real people,” i.e., people of biological origin, will have the potential to enhance their own thinking using the same nanobot technology. For example, the nanobots could create new virtual connections, so we will no longer be restricted to a mere hundred trillion interneuronal connections.

We will also develop intimate connections to new forms of nonbiological thinking, and we will thereby evolve into a hybrid of biological and nonbiological thinking. Conversely, fully nonbiological AIs (artificially intelligent entities) will be based at least in part on the reverse engineering of the human brain and thus will have many human-like qualities.

These technologies are evolving today at an accelerating pace. Like any other technology, virtual reality and virtual humans will not emerge in perfect form in a single generation of technology. By the 2030s, however, virtual reality will be totally realistic and compelling and we will spend most of our time in virtual environments. In these virtual environments, we won’t be able to tell the difference between biological people who have projected themselves into the virtual environment and fully virtual (i.e., nonbiological) people.

Nonbiological intelligence has already secured a foothold in our brains. There are many people walking around whose brains are now a hybrid of biological thinking and computer implants (e.g., a neural implant for Parkinson’s disease that replaces the function of the biological cells destroyed by that disease).

It is the nature of machine intelligence that its powers grow exponentially. Currently, machines are doubling their information-processing capabilities every year, and even that exponential rate is accelerating. As we get to the 2040s, even people of biological origin are likely to have the vast majority of their thinking processes taking place in nonbiological substrates.
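The arithmetic behind that claim is worth making explicit: a capability that doubles every year grows by a factor of 2^n over n years. The dates below are illustrative, not a precise forecast:

```python
# One doubling per year compounds to a factor of 2**n after n years.
start_year, target_year = 2003, 2045      # illustrative dates, not a forecast
doublings = target_year - start_year
print(f"{doublings} doublings -> {2 ** doublings:,}x")
# 42 doublings -> 4,398,046,511,104x, roughly a 4.4-trillion-fold increase
```

Forty-two annual doublings multiply capability by roughly 4.4 trillion, which is why the shift of thinking toward nonbiological substrates follows from the growth rate itself rather than from any single breakthrough.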

We will all become virtual humans.
