Cyborg Babies and Cy-Dough-Plasm

May 23, 2001 by Sherry Turkle

The way in which children interact with virtual worlds offers insight into how we think of ourselves in those worlds. Sherry Turkle uses her observations of children to explore issues of consciousness and self in the context of virtual reality.

Originally published 1998 in the book Cyborg Babies: From Techno-Sex to Techno-Tots. Published on KurzweilAI.net May 23, 2001.

The genius of Jean Piaget (1960) showed us the degree to which it is the business of childhood to take the objects in the world and use how they “work” to construct theories–of space, time, number, causality, life, and mind. Fifty years ago, when Piaget was formulating his theories, a child’s world was full of things that could be understood in simple, mechanical ways. A bicycle could be understood in terms of its pedals and gears, a wind-up car in terms of its clockwork springs. Children were able to take electronic devices such as basic radios and (with some difficulty) bring them into this “mechanical” system of understanding. Since the end of the 1970s, however, with the introduction of electronic toys and games, the nature of objects and how children understand them have changed. When children today remove the back of their computer toys to “see” how they work, they find a chip, a battery, and some wires. Sensing that trying to understand these objects “physically” will lead to a dead end, children try to use a “psychological” kind of understanding (Turkle 1984:29-63). Children ask themselves if the games are conscious, if the games know, if they have feelings, and even if they “cheat.” Earlier objects encouraged children to think in terms of a distinction between the world of psychology and the world of machines, but the computer does not. Its “opacity” encourages children to see computational objects as psychological machines.

Over the last twenty years I have observed and interviewed hundreds of children as they have interacted with a wide range of computational objects, from computer programs on the screen to robots off the screen (Turkle 1984, 1995). My methods are ethnographic and clinical. In the late 1970s and early 1980s, I began by observing children playing with the first generation of electronic toys and games. In the 1990s, I have worked with children using a new generation of computer games and software and experimenting with on-line life on the Internet.

Among the first generation of computational objects was Merlin, which challenged children to games of tic-tac-toe. For children who had only played games with human opponents, reaction to this object was intense. For example, while Merlin followed an optimal strategy for winning tic-tac-toe most of the time, it was programmed to make a slip every once in a while. So when children discovered a strategy that would sometimes allow them to win, and then tried it again, it usually didn’t work. The machine gave the impression of not being “dumb enough” to let down its defenses twice. Robert, seven, playing with his friends on the beach, watched his friend Craig perform the “winning trick,” but when he tried it, Merlin did not make its slip and the game ended in a draw. Robert, confused and frustrated, accused Merlin of being a “cheating machine.” Children were used to machines being predictable. But this machine surprised them.

Robert threw Merlin into the sand in anger and frustration. “Cheater. I hope your brains break.” He was overheard by Craig and Greg, aged six and eight, who salvaged the by-now-very-sandy toy and took it upon themselves to set Robert straight. Craig offered the opinion that “Merlin doesn’t know if it cheats. It won’t know if it breaks. It doesn’t know if you break it, Robert. It’s not alive.” Greg added: “It’s smart enough to make the right kinds of noises. But it doesn’t really know if it loses. That’s how you can cheat it. It doesn’t know you are cheating. And when it cheats, it doesn’t even know it’s cheating.” Jenny, six, interrupted with disdain: “Greg, to cheat you have to know you are cheating. Knowing is part of cheating.”

In the early 1980s, such scenes were not unusual. Confronted with objects that spoke, strategized, and “won,” children were led to argue the moral and metaphysical status of machines on the basis of their psychologies: Did the machines know what they were doing? Did they have intentions, consciousness, and feelings? These first computers that entered children’s lives were evocative objects: they became the occasion for new formulations about the human and the mechanical. For despite Jenny’s objections that “knowing is part of cheating,” children did come to see computational objects as exhibiting a kind of knowing. She was part of a first generation of children who were willing to invest machines with qualities of consciousness as they rethought the question of what is alive in the context of “machines that think.”

In the past twenty years, the objects of children’s lives have come to include machines of even greater intelligence, toys and games and programs that make these first cybertoys seem primitive in their ambitions. The answers to the classical Piagetian question of how children think about life are being renegotiated as they are posed in the context of computational objects that explicitly present themselves as exemplars of “artificial life.”

1. FROM PHYSICS TO PSYCHOLOGY

Piaget, studying children in the world of “traditional”–that is, non-computational–objects, found that as children matured, they homed in on a definition of life which centered around “moving of one’s own accord.” First, everything that moved was taken to be alive, then only things that moved without an outside push or pull. Gradually, children refined the notion of “moving of one’s own accord” to mean the “life motions” of breathing and metabolism. This meant that only those things that breathed and grew were taken to be alive. But from the first generation of children who met computers and electronic toys and games (the children of the late 1970s and early 1980s), there was a disruption in this classical story. Whether or not children thought their computers were alive, they were sure that how the toys moved was not at the heart of the matter. Children’s discussions about the computer’s aliveness came to center on what the children perceived as the computer’s psychological rather than physical properties. To put it too simply, motion gave way to emotion and physics gave way to psychology as criteria for aliveness.

Today, only a decade later, children have learned to say that their computers are “just machines,” but they continue to attribute psychological properties to them. The computational objects are said to have qualities (such as having intentions and ideas) that were previously reserved for people. Thus today’s children seem comfortable with a reconstruction of the notion of a “machine” which includes having a psychology. And children often use the phrase “sort of alive” to describe the computer’s nature.

An eleven-year-old named Holly watches a group of robots navigate a maze. The robots use different strategies to reach their goal, and Holly is moved to comment on their “personalities” and their “cuteness.” She finally comes to speculate on the robots’ “aliveness” and blurts out an unexpected formulation: “It’s like Pinocchio.”

First Pinocchio was just a puppet. He was not alive at all. Then he was an alive puppet. Then he was an alive boy. A real boy. But he was alive even before he was a real boy. So I think the robots are like that. They are alive like Pinocchio [the puppet], but not “real boys.”

She sums up her thought: “They [the robots] are sort of alive.”

In September 1987, more than one hundred scientists and technical researchers gathered in Los Alamos, New Mexico, to found a discipline devoted to working on machines that might cross the boundary between “sort of” and “really” alive. They called their new enterprise “artificial life.”

From the outset, many of artificial life’s pioneers developed their ideas by writing programs on their personal computers. These programs, easily shipped off on floppy disks or shared via the Internet, have revolutionized the social diffusion of ideas. Christopher Langton (1989:13), one of the founders of the discipline of artificial life, argued that biological evolution relies on unanticipated bottom-up effects: simple rules interacting to give rise to complex behavior. He further argued that artificial life would only be successful if it shared this aesthetic of “emergent effects” with nature.

The cornerstone idea of decentralized, bottom-up emergence is well illustrated by a program written in the mid-1980s known as “boids.” Its author, the computer animator Craig Reynolds, wanted to explore whether flocking behavior, whether in fish, birds, or insects, might happen without a flock leader or the intention to flock. Reynolds wrote a computer program that caused virtual birds to flock, in which each “bird” acted “solely on the basis of its local perception of the world” (1987:27). Reynolds called the digital birds “boids,” an extension of high-tech jargon that refers to generalized objects by adding the suffix “oid.” A boid could be any flocking creature. Each “boid” was given three simple rules: (1) if you are too close to a neighboring boid, move away from it; (2) if you are slower than your neighboring boids, speed up; if you are faster, slow down; (3) if you are moving toward the greater density of boids, maintain direction; if not, turn toward the greater density. The rules working together created flocks of boids that could fly around obstacles and change direction. The boids program posed the evocative question: How could it be established that the behavior produced by the boids (behavior to which it was easy to attribute intentionality and leadership) was different from behavior in the natural world? Were animals following simple rules that led to their complex “lifelike” behavior? Were people following simple rules that led to their complex “lifelike” behavior?
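
The flavor of these three rules is easy to convey in code. What follows is a minimal Python sketch of the rules as paraphrased above, not Reynolds’s original program; the world size, neighborhood radius, and steering weights are all illustrative assumptions.

```python
import random

N_BOIDS, WORLD, NEAR = 30, 100.0, 10.0

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, WORLD), random.uniform(0, WORLD)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def neighbors(b, flock):
    # "Local perception": a boid sees only flockmates within NEAR units.
    return [o for o in flock if o is not b
            and abs(o.x - b.x) + abs(o.y - b.y) < NEAR]

def step(flock):
    for b in flock:
        near = neighbors(b, flock)
        if not near:
            continue
        # Rule 1: move away from any neighbor that is too close.
        for o in near:
            if abs(o.x - b.x) + abs(o.y - b.y) < NEAR / 3:
                b.vx += (b.x - o.x) * 0.05
                b.vy += (b.y - o.y) * 0.05
        # Rule 2: match the average velocity of the neighbors.
        b.vx += (sum(o.vx for o in near) / len(near) - b.vx) * 0.05
        b.vy += (sum(o.vy for o in near) / len(near) - b.vy) * 0.05
        # Rule 3: steer toward the local center of density.
        b.vx += (sum(o.x for o in near) / len(near) - b.x) * 0.01
        b.vy += (sum(o.y for o in near) / len(near) - b.y) * 0.01
    for b in flock:
        # Update positions on a wrap-around world; no boid ever
        # consults a leader or a global plan.
        b.x = (b.x + b.vx) % WORLD
        b.y = (b.y + b.vy) % WORLD

flock = [Boid() for _ in range(N_BOIDS)]
for _ in range(200):
    step(flock)
```

Nothing in these thirty-odd lines mentions a flock, yet flocks form: that is the emergent effect the paragraph above describes.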

In writing about the dissemination of ideas about microbes and the bacterial theory of disease in late-nineteenth-century France, the sociologist of science Bruno Latour (1988) argued that the message of Louis Pasteur’s writings was less significant than the social deployment of an army of “hygienists,” state employees who visited every French farm to spread the word. The hygienists were the “foot soldiers” of Pasteur’s revolution. In the case of artificial life the foot soldiers are “shippable” products in the form of computer programs, commercial computer games, and small robots, some of which are sold as toys. I do not argue that these products are artificial life, but that they are significant actors for provoking a new discourse about aliveness. Electronic toys and games introduced psychology into children’s categories for talking about the “quality of life”; a new generation of computational objects is introducing ideas about decentralization and emergence.

2. “ALIVE” ON THE SCREEN

In the mid-1980s, the biologist Thomas Ray set out to create a computer world in which self-replicating digital creatures could evolve by themselves. Ray imagined that the motor for the evolution of the artificial organisms would be their competition for CPU (central processing unit) time. The less CPU time that a digital organism needed to replicate, the more “fit” it would be in its “natural” computer environment. Ray called his system Tierra, the Spanish word for “Earth.”

In January 1990, Ray wrote the computer program for his first digital creature. It consisted of eighty instructions. It evolved progeny which could replicate with even fewer instructions. This meant that these progeny were “fitter” than their ancestor because they could compete better in an environment where computer memory was scarce. Further evolution produced ever smaller self-replicating creatures, digital “parasites” that passed on their genetic material by latching onto larger digital organisms. When some host organisms developed immunity to the first generation of parasites, new kinds of parasites were born. For Ray, a system that self-replicates and is capable of open-ended evolution is alive. From this point of view, Ray believed that Tierra, running on his Toshiba laptop computer, was indeed alive.
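
The selection pressure Ray describes can be caricatured in a few lines of code. The sketch below is a toy model in the spirit of Tierra, not Ray’s actual system: each “organism” is reduced to nothing but a genome length, replication cost is proportional to that length, and a fixed memory budget stands in for the scarce “soup.” All of the numbers are illustrative assumptions.

```python
import random

MEMORY = 2000           # total instruction slots in the "soup"
population = [80] * 10  # ten ancestors of eighty instructions each

for generation in range(200):
    offspring = []
    for length in population:
        # Shorter genomes replicate more often per slice of CPU time.
        for _ in range(max(1, 160 // length)):
            child = length + random.choice([-1, 0, 0, 1])  # rare size mutation
            offspring.append(max(10, child))
    # Scarce memory: only the genomes that fit in the soup survive.
    random.shuffle(offspring)
    population, used = [], 0
    for length in offspring:
        if used + length <= MEMORY:
            population.append(length)
            used += length

# The mean genome length drifts steadily downward: nobody "designed"
# smaller replicators, yet they take over the soup.
print("mean genome length:", sum(population) / len(population))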

Ray made Tierra available on the Internet, ready for “downloading” via modem. And it was downloaded all over the world, often to school science clubs and biology classes. A fifteen-year-old high school student said that working with Tierra made him feel as though he were “looking at cells through an electron microscope. I know that it is all happening in the computer, but I have to keep reminding myself.” Tierra was an object-to-think-with for considering self-replication and evolution as essential to life. “You set it up and you let it go. And a whole world starts,” said the student. “I have to keep reminding myself that it isn’t going to jump out of the machine. . . . I dreamt that I would find little animals in there. Two times I ran it at night, but it’s not such a great idea because I couldn’t really sleep.”

Ray could not predict the developments that would take place in his system. He too stayed up all night, watching new processes evolve. The lifelike behavior of his digital Tierrans emerged from the “bottom up.”

At MIT’s Media Laboratory, the computer scientist and educational researcher Mitchel Resnick worked to bring the artificial life aesthetic into the world of children. He began by giving them a robot construction kit that included sensors and motors as well as standard Lego blocks. Children quickly attributed lifelike properties to their creations. Children experienced one little robot as “confused” because it moved back and forth between two points (because of rules that both told it to seek out objects and to move away quickly if it sensed an object). Children classified other robots as nervous, frightened, and sad. The first Lego-Logo robots were tethered by cables to a “mother” computer, but eventually researchers were able to give the robots an “onboard” computer. The resulting autonomy made the Lego-Logo creations seem far less like machines and far more like creatures. For the children who worked with them, this autonomy further suggested that machines might be creatures and creatures might be machines.

Resnick also developed programming languages, among these a language he called StarLogo that would enable children to control the parallel actions of many hundreds of “creatures” on a computer screen. Traditional computer programs follow one instruction at a time; with Resnick’s StarLogo program, multiple instructions were followed at the same time, simulating the way things occur in nature. And as in nature, simple rules led to complex behaviors. For example, a population of screen “termites” in an environment of digital “wood chips” was given a set of two rules: if you’re not carrying anything and you bump into a wood chip, pick it up; if you’re carrying a wood chip and you bump into another wood chip, put down the chip you’re carrying. Imposing these two rules at the same time will cause the screen termites to make wood chip piles (Resnick 1992:76). So children were able to get the termites to stockpile wood chips without ever giving the termites a command to do so. Similarly, children could use StarLogo to model birds in a flock, ants in a colony, cars in a traffic jam–all situations in which complex behavior emerges from the interaction of simple rules. Children who worked with these materials struggled to describe the quality of emergence that their objects carried. “In this version of Logo,” said one, “you can get more [out of the program] than what you tell it to do” (Resnick 1992:131-132).
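
The two termite rules translate almost word for word into code. The following is a minimal Python rendering, not StarLogo itself; the grid size, the chip and termite counts, and the number of steps are illustrative assumptions.

```python
import random

SIZE, CHIPS, TERMITES, STEPS = 20, 80, 10, 100000

chips = set()
while len(chips) < CHIPS:
    chips.add((random.randrange(SIZE), random.randrange(SIZE)))
termites = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)),
             "carrying": False} for _ in range(TERMITES)]

def wander(pos):
    # One random step on a wrap-around grid.
    dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    return ((pos[0] + dx) % SIZE, (pos[1] + dy) % SIZE)

for _ in range(STEPS):
    for t in termites:
        t["pos"] = wander(t["pos"])
        bumped = t["pos"] in chips
        if not t["carrying"] and bumped:
            chips.remove(t["pos"])        # rule 1: pick the chip up
            t["carrying"] = True
        elif t["carrying"] and bumped:
            spot = wander(t["pos"])       # rule 2: put the carried chip
            if spot not in chips:         # down beside the one you hit
                chips.add(spot)
                t["carrying"] = False

# Over time the chips collect into fewer, larger piles, although
# neither rule ever mentions a pile.
```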

An object such as StarLogo opens up more than new technical possibilities: it gives children concrete material for thinking in what Resnick has termed a “decentralized mindset.” The key principle here is self-organization–complexity results although there is no top-down intervention or control. In a centralized model of evolution, God designs the process, sets it in motion, and keeps close tabs to make sure it conforms to design. In a decentralized model, God can be there, but in the details: simple rules whose interactions result in the complexity of nature.

StarLogo teaches how decentralized rules can be the foundation for behavior that may appear to be “intentional” or a result of “following the leader.” It also provides a window onto resistance to decentralized thinking. When confronted with the wood chip piles, Resnick reports that most adults prefer to assume that a leader stepped in to direct the process or that there was an asymmetry in the world that gave rise to a pattern, for example, a food source near the final location of a stockpile (Resnick 1992:137ff). In reflecting on this resistance to ideas about decentralization, Resnick (1992:122) cites the historian of science Evelyn Fox Keller who, in reviewing the history of resistance to decentralized ideas in biology, was led to comment: “We risk imposing on nature the very stories we like to hear.”

Why do we like to hear centralized stories? There is our Western monotheistic tradition; there is our experience over millennia of large-scale societies governed by centralized authority and controlled through centralized bureaucracy; there is our experience of ourselves as unitary and intentional actors (the ego as “I”); and there is also the fact that we have traditionally lacked concrete objects in the world with which to think about decentralization. Objects such as StarLogo present themselves as objects to think with for thinking about emergent phenomena and decentralized control. As a wider range of such objects enter our culture, the balance between our tendency to assume decentralized emergence and our tendency to assume centralized command may change. The children who tinkered with parallel computation became comfortable with the idea of multiple processes and decentralized methods. Indeed, they came to enjoy this “quality of emergence” and began to associate it with the quality of aliveness.

The idea that the whole is greater than the sum of its parts has always been resonant with religious and spiritual meaning. Decentered, emergent phenomena combine a feeling that one knows the rules with the knowledge that one cannot predict the outcome. To children, emergent phenomena seem almost magical because one can know what every individual object will do but still have no idea of what the system as a whole will look like. This is the feeling that the children were expressing when they described getting “more out” of the StarLogo program than they told it to do. The children know that there are rules behind the magic and that there is magic in the rules. In a cyborg consciousness, objects are re-enchanted.

When children programming in StarLogo got a group of objects on the screen to clump together in predictable groups by commanding them to do so, they did not consider this “interesting” in the sense that it did not seem “lifelike.” But if they gave the objects simple rules that had no obvious relation to clumping behavior but clumping “happened” all the same, that behavior did seem lifelike. So, for children working in the StarLogo learning culture, “teaching” computer birds to flock by explicitly telling them where to go in every circumstance seemed to be a kind of “cheating.” They were developing an “ethic of simulation” in which decentralization and emergence became requirements for things to seem “alive enough” to be interesting.

In this example, computational media show their potential to generate new ways of thinking. Just as children exposed to electronic toys and games begin to think differently about the definition of aliveness (thinking in terms of psychology rather than physical motion), so children exposed to parallel processing begin to think about life in terms of emergent phenomena.

3. THE “SIMS”

The authors of the “Sim” series of computer games (among these SimAnt, SimCity, SimHealth, SimLife) write explicitly of their effort to use the games to communicate ideas about artificial life (Bremer 1991:163). For example, in the most basic game of SimAnt (played on one “patch” of a simulated backyard), a player learns about local bottom-up determination of behavior: each ant’s behavior is determined by its own state, its assay of its direct neighbors, and a set of rules. Like Reynolds’s “boids” and the objects in StarLogo, the ants change their state in reference to who they are and with whom they are in contact. SimAnt players learn about pheromones, the virtual tracer chemicals by which ants, as well as Resnick’s StarLogo objects, “communicate” with one another. Beyond this, SimAnt players learn how in certain circumstances, local actions that seem benign (mating a few ants) can lead to disastrous global results (population overcrowding and death). Children playing Sim games make a connection between the individual games and some larger set of ideas. Tim, a thirteen-year-old player, says of SimLife: “You get to mutate plants and animals into different species . . . . You are part of something important. You are part of artificial life.” As for the Sim creatures themselves, Tim thinks that the “animals that grow in the computer could be alive,” although he adds, “This is kind of spooky.”
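
The kind of local, state-based behavior described here can be suggested in a short sketch. The code below is in the spirit of this description of SimAnt rather than the game’s actual program; the grid size, evaporation rate, and deposit amounts are illustrative assumptions.

```python
import random

SIZE, ANTS, STEPS = 20, 15, 500
pheromone = [[0.0] * SIZE for _ in range(SIZE)]
ants = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(ANTS)]

for _ in range(STEPS):
    moved = []
    for (x, y) in ants:
        pheromone[x][y] += 1.0                  # leave a trace behind
        # Each ant assays only its four immediate neighbors...
        options = [((x + dx) % SIZE, (y + dy) % SIZE)
                   for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
        # ...and tends to follow the strongest local trail.
        weights = [pheromone[i][j] + 0.1 for (i, j) in options]
        moved.append(random.choices(options, weights)[0])
    ants = moved
    for row in pheromone:                       # trails slowly evaporate
        for j in range(SIZE):
            row[j] *= 0.95

# Shared trails emerge from purely local decisions; no ant ever
# sees the colony as a whole.
```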

Laurence, a more blasé fifteen-year-old, doesn’t think the idea of life on the screen is spooky at all. “The whole point of this game,” he tells me,

is to show that you could get things that are alive in the computer. We get energy from the sun. The organisms in a computer get energy from the plug in the wall. I know that more people will agree with me when they make a SimLife where the creatures are smart enough to communicate. You are not going to feel comfortable if a creature that can talk to you goes extinct.

Robbie, a ten-year-old who has been given a modem for her birthday, uses her experience of the game to develop some insight into those computer processes that led adults to use the term “virus” for programs that “traveled.” She puts the emphasis on mobility instead of communication when she considers whether the creatures she has evolved on SimLife are alive.

I think they are a little alive in the game, but you can turn it off and you cannot “save” your game, so that all the creatures you have evolved go away. But if they could figure out how to get rid of that part of the program so that you would have to save the game and if your modem were on, then they could get out of your computer and go to America Online.

Sean, thirteen, who has never used a modem, comes up with a variant on Robbie’s ideas about SimLife creatures and their Internet travel: “The [Sim] creatures could be more alive if they could get into DOS.”

In Piaget’s classical studies of the 1920s on how children thought about what was alive, the central variable was motion. Simply put, children took up the question of an object’s “life status” by asking themselves if the object could move of its own accord. When in the late 1970s and early 1980s I studied children’s reactions to a first generation of computer objects which were physically “stationary” but which nonetheless accomplished impressive feats of cognition (talking, spelling, doing math, and playing tic-tac-toe), I found that the focus had shifted to an object’s psychological properties when children considered the question of its “aliveness.” Now, in children’s comments about the creatures that exist on simulation games, the emphasis is on evolution. But this emphasis also includes a recapitulation of criteria that draw from physics and psychology. Children talk about digital “travel” via circulating disks or over modems. They talk of viruses and networks. In this language, biology and motion are resurfacing in a new guise, now bound up in the ideas of communication and evolution. Significantly, the resurfacing of motion (Piaget’s classical criterion) is bound up with notions of presumed psychology: children were most likely to assume that the creatures on Sim games have a desire to “get out” of the system and evolve in a wider computational world.

4. “CYCLING THROUGH”

Although the presence of computational objects disrupted the classical Piagetian story for talking about aliveness, the story children were telling about computational objects in the early 1980s had its own coherency. Faced with intelligent toys, children took a new world of objects and imposed a new world order, based not on physics but on psychology. In the 1990s, that order has been strained to the breaking point. Children will now talk about computers as “just machines” but describe them as sentient and intentional. Faced with ever-more-complex computational objects, children are now in the position of theoretical bricoleurs, or tinkerers, “making do” with whatever materials are at hand, “making do” with whatever theory can fit a prevailing circumstance. They cycle through evolution and psychology and resurface ideas about motion in terms of the communication of bits.

My current collection of comments about life by children who have played with small mobile robots, the games of the “Sim” series, and Tierra includes the following notions: the robots are in control but not alive, would be alive if they had bodies, are alive because they have bodies, would be alive if they had feelings, are alive the way insects are alive but not the way people are alive; the Tierrans are not alive because they are just in the computer, could be alive if they got out of the computer, are alive until you turn off the computer and then they’re dead, are not alive because nothing in the computer is real; the Sim creatures are not alive but almost-alive, they would be alive if they spoke, they would be alive if they traveled, they’re alive but not real, they’re not alive because they don’t have bodies, they are alive because they can have babies, and, finally, for an eleven-year-old who is relatively new to SimLife, they’re not alive because these babies don’t have parents. She says: “They show the creatures and the game tells you that they have mothers and fathers but I don’t believe it. It’s just numbers, it’s not really a mother and a father.” There is a striking heterogeneity of theory here. Different children hold different theories, and individual children are able to hold different theories at the same time.

The heterogeneity of children’s views is apparent when they talk about something as “big” as the life of a computational creature and about something as “small” as why a robot programmed with “emergent” methods might move in a certain way. One fifth-grader named Sara jumped back and forth from a psychological to a mechanical language when she talked about the Lego-Logo creature she had built. When Sara considered whether her machine would sound a signal when its “touch sensor” was pushed, she said: “It depends on whether the machine wants to tell . . . if we want the machine to tell us . . . if we tell the machine to tell us” (Resnick 1989:402). In other words, within a few seconds, Sara “cycled through” three perspectives on her creature (as psychological being, as intentional self, as instrument of its programmer’s intentions). The speed of her alternations suggests that these perspectives are equally present for her at all times. For some purposes, she finds one or another of them more useful.

In the short history of how the computer has changed the way we think, it has often been children who have led the way. For example, in the early 1980s, children–prompted by computer toys that spoke, did math, and played tic-tac-toe–disassociated ideas about consciousness from ideas about life, something that historically had not been the case. These children were able to contemplate sentient computers that were not alive, a position that grownups are only now beginning to find comfortable. Today’s cyborg children are taking things even further; they are pointing the way toward a radical heterogeneity of theory in the presence of computational artifacts that evoke “life.” In his history of artificial life, Steven Levy (1992:6-7) suggested that one way to look at where artificial life can “fit in” to our way of thinking about life is to envisage a continuum in which Tierra, for example, would be more alive than a car but less alive than a bacterium. My observations suggest that children are not constructing hierarchies but are heading toward parallel, alternating definitions.

The development of heterogeneity in children’s theories is of course taking place in a larger context. We are all living in the presence of computational objects that carry emergent, decentralized theories and encourage a view of the self as fluid and multiple. Writers from many different disciplinary perspectives are arguing for a multiple and fluid notion of the self. Daniel C. Dennett (1991) argues for a “multiple drafts” theory of consciousness. The presence of the drafts encourages a respect for the many different versions, and it imposes a certain distance from being identified with any one of them. No one aspect of self can be claimed as the absolute, true self. Robert Jay Lifton (1993) views the contemporary self as “protean,” multiple yet integrated, allowing for a “sense of self” without being one self. Donna Haraway equates a “split and contradictory self” with a “knowing self”: “The knowing self is partial in all its guises, never finished, whole, simply there and original; it is always constructed and stitched together imperfectly; and therefore able to join with another, to see together without claiming to be another” (1991a:22). In computational environments, such ideas about identity and multiplicity are “brought down to earth” and enter children’s lives from their earliest days. Even the operating system on the computers they use to play games, to draw, and to write carries the message. A computer’s “windows” have become a potent metaphor for thinking about the self as a multiple and distributed system (Turkle 1995). Hypertext links have become a metaphor for a multiplicity of perspectives. On the Internet, people who participate in virtual communities may be “logged on” to several of them (open as several on-screen windows) as they pursue other activities. In this way, they may come to experience their lives as a “cycling through” screen worlds in which they may be expressing different aspects of self. But such media-borne messages about multiple selves and theories are controversial.

Today’s adults grew up in a psychological culture that equated the idea of a unitary self with psychological health and in a scientific culture that taught that when a discipline achieves maturity, it has a unifying theory. When adults find themselves cycling through varying perspectives on themselves (“I am my chemicals” to “I am my history” to “I am my genes”), they usually become uncomfortable (Kramer 1993: xii-xiii). But such alternations may strike the generation of cyborg children who are growing up today as “just the way things are.”

Children speak easily about factors which encourage them to see the “stuff” of computers as the same “stuff” of which life is made. Among these are the ideas of “shape shifting” and “morphing.” Shape shifting is the technique used by the evil android in Terminator II to turn into the form of anything he touched–including people. A nine-year-old showed an alchemist’s sensibility when he explained how this occurs: “It is very simple. In the universe, anything can turn to anything else when you have the right formula. So you can be a person one minute and a machine the next minute.” Morphing is a general term that covers form changes which may include changes across the animate/inanimate barrier. A ten-year-old boy had a lot to say about morphing, all of it associated with the lifestyle of “The Mighty Morphin’ Power Rangers,” a group of action heroes who turn from teenagers to androidal-mechanical “dinozords” and “megazords” and back. “Well,” he patiently explains, “the dinozords are alive; the Power Rangers are alive, but not all the parts of the dinozords are alive, but all the parts of the Power Rangers are alive. The Power Rangers become the dinozords.” Then, of course, there are seemingly omnipresent “transformer toys” which shift from being machines to being robots to being animals (and sometimes people). Children play with these plastic and metal objects, and in the process they learn about the fluid boundaries between mechanism and flesh.

I observe a group of seven-year-olds playing with a set of plastic transformer toys that can take the shape of armored tanks, robots, or people. The transformers can also be put into intermediate states so that a robot arm can protrude from a human form or a human leg from a mechanical tank. Two of the children are playing with the toys in these intermediate states, somewhere between being people, machines, and robots. A third child insists that this is not right. The toys, he says, should not be placed in hybrid states. “You should play them as all tank or all people.” He is getting upset because the other two children are making a point of ignoring him. An eight-year-old girl comforts the upset child. “It’s okay to play them when they are in between. It’s all the same stuff,” she says, “just yucky computer cy-dough-plasm.” This comment is the expression of the cyborg consciousness that characterizes today’s children: a tendency to see computer systems as “sort of” alive, to fluidly “cycle through” various explanatory concepts, and to willingly transgress boundaries.

Walt Whitman wrote: “A child went forth every day. And the first object he look’d upon, that object he became.” When Piaget elaborated how the objects in children’s lives constructed their psyches, he imagined a timeless, universal process. With the radical change in the nature of objects, the internalized lessons of the object world have changed. When today’s adults “cycle through” different theories, they are uncomfortable. Such movement does not correspond to the unitary visions they were brought up to expect. But children have learned a different lesson from their cyborg objects. Donna Haraway characterizes irony as being “about contradictions that do not resolve into larger wholes . . . about the tension of holding incompatible things together because both or all are necessary and true” (1991b:148). In this sense, today’s cyborg children, growing up into irony, are becoming adept at holding incompatible things together. They are cycling through the cy-dough-plasm into fluid and emergent conceptions of self and life.

References

Bremer, Michael. 1991. SimAnt User Manual. Orinda, Calif.: Maxis.

Dennett, Daniel C. 1991. Consciousness Explained. Boston: Little, Brown.

Haraway, Donna. 1991a. “The Actors Are Cyborg, Nature Is Coyote, and the Geography Is Elsewhere: Postscript to ‘Cyborgs at Large.’” In Technoculture, edited by Constance Penley and Andrew Ross. Minneapolis: University of Minnesota Press.

—. 1991b. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” In Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge.

Kramer, Peter. 1993. Listening to Prozac: A Psychiatrist Explores Antidepressant Drugs and the Remaking of the Self. New York: Viking.

Langton, Christopher G. 1989. “Artificial Life.” In Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems, edited by Christopher G. Langton. Santa Fe Institute Studies in the Sciences of Complexity, vol. 6. Redwood City, Calif.: Addison-Wesley.

Latour, Bruno. 1988. The Pasteurization of France. Translated by Alan Sheridan and John Law. Cambridge: Harvard University Press.

Levy, Steven. 1992. Artificial Life: The Quest for a New Frontier. New York: Pantheon.

Lifton, Robert J. 1993. The Protean Self: Human Resilience in an Age of Fragmentation. New York: Basic Books.

Piaget, Jean. 1960. The Child’s Conception of the World. Translated by Joan and Andrew Tomlinson. Totowa, N.J.: Littlefield, Adams.

Resnick, Mitchel. 1989. “LEGO, Logo, and Life.” In Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems, edited by Christopher G. Langton. Santa Fe Institute Studies in the Sciences of Complexity, vol. 6. Redwood City, Calif.: Addison-Wesley.

—. 1992. Turtles, Termites, and Traffic Jams. Cambridge, Mass.: MIT Press.

Reynolds, Craig. 1987. “Flocks, Herds, and Schools: A Distributed Behavioral Model.” Computer Graphics 21 (July).

Turkle, Sherry. 1984. The Second Self: Computers and the Human Spirit. New York: Simon and Schuster.

—. 1995. Life on the Screen: Identity in the Age of the Internet. New York: Simon and Schuster.