Whither Psychoanalysis in a Computer Culture?
October 24, 2002 by Sherry Turkle
In the early 1980s, MIT professor Sherry Turkle first called the computer a “second self.” With this essay, she presents a major new theory of “evocative objects”: Wearable computers, PDAs, online multiple identities, “companion species” (such as quasi-alive virtual pets, digital dolls, and robot nurses for the elderly), “affective computing” devices (such as the human-like Kismet robot), and the imminent age of machines designed as relational artifacts are causing us to see ourselves and our world differently. They call for a new generation of psychoanalytic self-psychology to explore the human response and the human vulnerability to these objects.
Originally presented as the 2002 Freud Lecture at The Sigmund Freud Society in Vienna on May 6, 2002. Published on KurzweilAI.net Oct. 23, 2002.
I. Psychoanalytic Culture and Computer Culture
Over twenty years ago, as a new faculty member at MIT, I taught an introductory class on psychoanalytic theory. For one meeting, early in the semester, I had assigned Freud’s chapters on slips of the tongue from The Psychopathology of Everyday Life. I began class by reviewing Freud’s first example: the chairman of a parliamentary session begins the meeting by declaring it closed.[1]
Freud’s analysis centered on the possible reasons behind the chairman’s slip: he might be anxious about what the parliamentarians had on their agenda. Freud’s analysis turned on trying to uncover the hidden meaning behind the chairman’s remark. The theoretical effort was to understand his mixed emotions, his unconscious ambivalence.
As I was talking to my class about the Freudian notions of the unconscious and of ambivalence, one of the students, an undergraduate majoring in computer science, raised her hand to object. She was studying at the MIT Artificial Intelligence Laboratory, which was (and is) a place whose goal, in the words of one of its founders, Marvin Minsky, is to create "machines that do things that would be considered intelligent if done by people." Work in the AI Lab began with the assumption that the mind, in Minsky’s terms, "was a meat machine," best understood by analogizing its workings to those of a computer program. It was from this perspective that my student objected to what she considered a tortured explanation for slips of the tongue.
"In a Freudian dictionary," she began, "closed and open are far apart. In a Webster’s dictionary," she continued, "they are as far apart as the listings for C and the listings for O. But in a computational dictionary — such as we have in the human mind — closed and open are designated by the same symbol, separated by a sign of opposition. Closed equals ‘minus’ open. To substitute closed for open does not require the notion of ambivalence or conflict. When the substitution is made, a bit has been dropped. A minus sign has been lost. There has been a power surge. No problem."
With this brief comment, a Freudian slip had been transformed into an information processing error. An explanation in terms of meaning had been replaced by a narrative of mechanistic causation. At the time, that transition from meaning to mechanism struck me as emblematic of a larger movement that might be taking place in psychological culture. Were we moving from a psychoanalytic to a computer culture, one that would not need such notions as ambivalence when it modeled the mind as a digital machine?[2]
For me, that 1981 class was a turning point. The story of the relationship between the psychoanalytic and computer cultures moved to the center of my intellectual concerns. But the story of their relationship has been far more complex than the narrative of simple transition that suggested itself to me during the early 1980s. Here I shall argue the renewed relevance of a psychoanalytic discourse in digital culture. Indeed, I shall argue that this relevance is so profound as to suggest an occasion for a revitalization and renewal of psychoanalytic thinking.
In my view, this contemporary relevance does not follow, as some might expect, from efforts to link psychoanalysis and computationally-inspired neuroscience. Nor does it follow, as I once believed it would, from artificial intelligence and psychoanalysis finding structural or behavioral analogies in their respective objects of study.
In my 1988 essay, "Psychoanalysis and Artificial Intelligence: A New Alliance,"[3] I suggested an opening for dialogue between these two traditions that had previously eyed each other with suspicion if not contempt. In my view, the opening occurred because of the ascendance of "connectionist" models of artificial intelligence. Connectionist descriptions of how mind was "emergent" from the interactions of agents had significant resonance with the way psychoanalytic object-relations theory talked about objects in a dynamic inner landscape. Both seemed to be describing what Minsky would have called a "society of mind."
Today, however, the elements within the computer culture that speak most directly to psychoanalysis are concrete rather than theoretical. Novel and evocative computational objects demand a depth psychology of our relationships with them. The computer culture needs psychoanalytic understandings to adequately confront our evolving relationships with a new world of objects. Psychoanalysis needs to understand the influence of computational objects on the terrain it knows best: the experience and specificity of the human subject.
II. Evocative Objects and Psychoanalytic Theory
The designers of computational objects have traditionally focused on how these objects might extend and/or perfect human cognitive powers: on the instrumental computer, the computer that does things for us. As an ethnographer/psychologist of computer culture, I hear another narrative as well: that of the users.
Computer users are frequently more in touch with the subjective computer, the computer that does things to us, to our ways of seeing the world, to the way we think, to the nature of our relationships with each other. Technologies are never "just tools." They are evocative objects. They cause us to see ourselves and our world differently.
While designers have focused on how computational devices such as personal digital assistants will help people better manage their complex lives, users have seen devices such as a Palm Pilot as extensions of self. The designer says: "People haven’t evolved to keep up with complexity. Computers will help." The user says: "When my Palm crashed it was like a death. More than I could handle. I had lost my mind."
Wearable computers are devices that enable the user to have computer and online access all the time, connected to the Web by a small radio transmitter and using specially designed eyeglasses as a computer monitor. Designers of wearable computing talk about new and, indeed, superhuman access to information. For example, with a wearable computer, you can be in a conversation with a faculty colleague while accessing his or her most recent papers at the same time.
But when people actually wear computers all the time (and in this case, this sometimes happens when the designers begin to use and live with the technology), they testify to impacts on a very different register: wearable computers change one’s sense of self. One user says, "I become my computer. It’s not just that I remember people or know more about them. I feel invincible, sociable, better prepared. I am naked without it. With it, I’m a better person." A wearable computer is lived as a glass through which we see, however darkly, our cyborg future.[4] Indeed, the group of students at MIT who have pioneered the use of wearable computing call themselves cyborgs.
Computer research proceeds through a discourse of rationality, yet computer culture grows familiar with experiences of passion, dependency, and profound connection with artifacts. Contemporary computational objects are increasingly intimate machines; they demand that we focus our attention on the significance of our increasingly intimate relationships with them. This is where psychoanalytic studies are called for. We need a developmental and psychodynamic approach to technology that focuses on our new object relations.
There is a certain irony in this suggestion, for of course psychoanalysis has its own "object-relations" tradition.[5] Freud’s "Mourning and Melancholia" opened psychoanalysis to thinking about how people take lost objects and internalize them, creating new psychic structure along with new facets of personality and capacity.[6] But for psychoanalysis, the "objects" in question were people. A small number of psychoanalytic thinkers explored the power of the inanimate (for example, D. W. Winnicott and Erik Erikson, child analysts who wrote about the experience of objects in children’s play), but, in general, the story of "object relations" in psychoanalysis has cast people in the role of "objects."
Today, the new objects of our lives call upon psychoanalytic theory to create an object relations theory that really is about objects in the everyday sense of the word.
What are these new objects? When in the early 1980s I first called the computer a "second self" or a Rorschach, an object for the projection of personhood, relationships with the computer were usually one-to-one, a person alone with a machine. This is no longer the case. A rapidly expanding system of networks, collectively known as the Internet, links millions of people together in new spaces that are changing the way we think, the nature of our sexuality, the form of our communities, our very identities. A network of relationships on the Internet challenges what we have traditionally called "identity."
Most recently, a new kind of computational object has appeared on the scene. "Relational artifacts," such as robotic pets and digital creatures, are explicitly designed to have emotive, affect-laden connections with people. Today’s computational objects do not wait for children to "animate" them in the spirit of a Raggedy Ann doll or the Velveteen Rabbit, the toy who finally became alive because so many children had loved him. They present themselves as already animated and ready for relationship. People are not imagined as their "users" but as their companions.
At MIT, a research group on "affective computing" works on the assumption that machines will not be able to develop human-like intelligence without sociability and affect. The mission of the affective computing group is to develop computers that are programmed to assess their users’ emotional states and respond with emotional states of their own. In the case of the robotic doll and the affective computers, we are confronted with relational artifacts that demand that their human users attend to the psychology of a machine.
Today’s relational artifacts include robot dogs and cats, some specially designed and marketed to lonely elders. There is also a robot infant doll that makes baby sounds and even baby facial expressions, shaped by mechanical musculature under artificial skin. This computationally complex doll has baby "states of mind." Bounce the doll when it is happy, and it gets happier. Bounce it when it is grumpy and it gets grumpier.
These relational artifacts provide good examples of how psychoanalysis might productively revisit old "object" theories in light of new "object" relations. Consider whether relational artifacts could ever be "transitional objects" in the spirit of a baby blanket or rag doll. For Winnicott, such objects (to which children remain attached even as they embark on the exploration of the world beyond the nursery) are mediators between the child’s earliest bonds with the mother, whom the infant experiences as inseparable from the self, and the child’s growing capacity to develop relationships with other people who will be experienced as separate beings.
The infant knows transitional objects as both almost-inseparable parts of the self and, at the same time, as the first not-me possessions. As the child grows, the actual objects are left behind. The abiding effects of early encounters with them, however, are manifest in the experience of a highly-charged intermediate space between the self and certain objects in later life. This experience has traditionally been associated with religion, spirituality, the perception of beauty, sexual intimacy, and the sense of connection with nature. In recent years, the power of the transitional object is commonly seen in experiences with computers.
Just as musical instruments can be extensions of the mind’s construction of sound, computers can be extensions of the mind’s construction of thought. A novelist refers to "my ESP with the machine. The words float out. I share the screen with my words." An architect who uses the computer to design goes even further: "I don’t see the building in my mind until I start to play with shapes and forms on the machine. It comes to life in the space between my eyes and the screen." Musicians often hear the music in their minds before they play it, experiencing the music from within before they experience it from without. The computer similarly can be experienced as an object on the border between self and not-self.
In the past, the power of objects to play this transitional role has been tied to the ways in which they enabled the child to project meanings onto them. The doll or the teddy bear presented an unchanging and passive presence. In the past, computers were also targets of projection; the machine functioned as a Rorschach or "second self."
But today’s relational artifacts take a decidedly more active stance. With them, children’s expectations that their dolls want to be hugged, dressed, or lulled to sleep don’t only come from the child’s projection of fantasy or desire onto inert playthings, but from such things as the digital dolls’ crying inconsolably or even saying: "Hug me!" or "It’s time for me to get dressed for school!"
In a similar vein, consider how these objects look from the perspective of self psychology. Heinz Kohut describes how some people may shore up their fragile sense of self by turning another person into a "self object." In the role of self object, the other is experienced as part of the self, thus in perfect tune with the fragile individual’s inner state. Disappointments inevitably follow.
Relational artifacts (not as they exist now but as their designers promise they will soon be) clearly present themselves as candidates for such a role. If they can give the appearance of aliveness and yet not disappoint, they may even have a comparative advantage over people, and open new possibilities for narcissistic experience with machines. One might even say that when people turn other people into self objects, they are making an effort to turn a person into a kind of "spare part." From this point of view, relational artifacts make a certain amount of sense as successors to the always-resistant human material.
Just as television today is a background actor in family relationships and a "stabilizer" of mood and affect for individuals in their homes, in the near future a range of robotic companions and a web of pervasive computational objects will mediate a new generation’s psychological and social lives. We will be living in a relational soup of computation that offers itself as a self-ether if not as a self-object. Your home network and the computational "agents" programmed into it, indeed the computing embedded in your furniture and your clothing, will know your actions, your preferences, your habits, and your physiological responses to emotional stimuli.
A new generation of psychoanalytic self-psychology is called upon to explore the human response and the human vulnerability to these objects.
III. Personal Computing: One-on-One With the Machine
Each modality of being with a computer (one-on-one with the machine, using the computer as a gateway to other people, and encountering it as a relational artifact) implies a distinct mode of object relations. Each challenges psychoanalytic thinking in a somewhat different way. And all of these challenges face us at the same time. The development of relational artifacts does not mean that we don’t also continue to spend a great deal of time alone, one-on-one with our personal computers.
Being alone with a computer can be compelling for many different reasons. For some, computation offers the promise of perfection, the fantasy that "If you do it right, it will do it right, and right away." Writers can become obsessed with fonts, layout, spelling and grammar checks. What was once a typographical error can be, like Hester Prynne’s scarlet letter, a sign of shame. As one writer put it: "A typographical error is the sign not of carelessness but of sloth and disregard for others, the sign that you couldn’t take the one extra second, the one keystroke, to make it right." Like the anorectic who projects self-worth onto the body and its calorie consumption, endeavoring to eat ten calories less each day, game players or programmers may try to reach one more screen or play ten minutes more each day with the perfectible computational material.
Thus, the promise of perfection is at the heart of the computer’s holding power for some. Others are drawn by different sirens. As we have seen, there is much seduction in the sense that on the computer, mind is building mind or even merging with the mind of another being. The machine can seem to be a second self, a metaphor first suggested to me by a thirteen-year-old girl who said, "When you program a computer there is a little piece of your mind, and now it’s a little piece of the computer’s mind. And now you can see it." An investment counselor in her mid-forties echoes the child’s sentiment when she says of her laptop computer: "I love the way it has my whole life on it." If one is afraid of intimacy yet afraid of being alone, a computer offers an apparent solution: the illusion of companionship without the demands of friendship. In the mirror of the machine, one can be a loner yet never be alone.
IV. Lives on the Screen: Relating Person-to-Person via Computer
From the mid-1980s, the cultural image of computer use expanded from an individual alone with a computer to an individual engaged in a network of relationships via the computer. The Internet became a powerful evocative object for rethinking identity, one that encourages people to recast their sense of self in terms of multiple windows and parallel lives.
Virtual personae. In cyberspace, as is well known, the body is represented by one’s own textual description, so the obese can be slender, the beautiful plain. The fact that self-presentation is written in text means that there is time to reflect upon and edit one’s "composition," which makes it easier for the shy to be outgoing, the "nerdy" sophisticated. The relative anonymity of life on the screen — one has the choice of being known only by one’s chosen "handle" or online name — gives people the chance to express often unexplored aspects of the self. Additionally, multiple aspects of self can be explored in parallel. Online services offer their users the opportunity to be known by several different names. For example, it is not unusual for someone to be BroncoBill in one online context, ArmaniBoy in another, and MrSensitive in a third.
The online exercise of playing with identity and trying out new ones is perhaps most explicit in "role playing" virtual communities and online gaming where participation literally begins with the creation of a persona (or several), but it is by no means confined to these somewhat exotic locales. In bulletin boards, newsgroups, and chatrooms, the creation of personae may be less explicit than in virtual worlds or games, but it is no less psychologically real. One IRC (Internet Relay Chat) participant describes her experience of online talk: "I go from channel to channel depending on my mood …. I actually feel a part of several of the channels, several conversations…. I’m different in the different chats. They bring out different things in me." Identity play can happen by changing names and by changing places.
Even the computer interface encourages rethinking complex identity issues. The development of the windows metaphor for computer interfaces was a technical innovation motivated by the desire to get people working more efficiently by "cycling through" different applications much as time-sharing computers cycled through the computing needs of different people. But in practice, windows have become a potent metaphor for thinking about the self as a multiple, distributed, "time-sharing" system. The self is no longer simply playing different roles in different settings, something that people experience when, for example, one wakes up as a lover, makes breakfast as a mother, and drives to work as a lawyer. The windows metaphor merely suggests a distributed self that exists in many worlds and plays many roles at the same time. Cyberspace, however, translates that metaphor into a lived experience of "cycling through."
Identity, Moratoria and Play. For some people, cyberspace is a place to "act out" unresolved conflicts, to play and replay characterological difficulties on a new and exotic stage. For others, it provides an opportunity to "work through" significant personal issues, to use the new materials of cybersociality to reach for new resolutions. These more positive identity-effects follow from the fact that for some, cyberspace provides what Erik Erikson would have called a "psychosocial moratorium," a central element in how Erikson thought about identity development in adolescence.
Although the term "moratorium" implies a "time out," what Erikson had in mind was not withdrawal. On the contrary, the adolescent moratorium is a time of intense interaction with people and ideas. It is a time of passionate friendships and experimentation. The adolescent falls in and out of love with people and ideas. Erikson’s notion of the moratorium was not a "hold" on significant experiences but on their consequences. It is a time during which one’s actions are, in a certain sense, not counted as they will be later in life. They are not given as much weight, not given the force of full judgment. In this context, experimentation can become the norm rather than a brave departure. Relatively consequence-free experimentation facilitates the development of a "core self," a personal sense of what gives life meaning that Erikson called "identity."
Erikson developed these ideas about the importance of a moratorium during the late 1950s and early 1960s. At that time, the notion corresponded to a common understanding of what "the college years" were about. Today, some forty years later, the idea of the college years as a consequence-free "time out" seems of another era. College is pre-professional, and AIDS has made consequence-free sexual experimentation an impossibility. The years associated with adolescence no longer seem a "time out." But if our culture no longer offers an adolescent moratorium, virtual communities often do. It is part of what makes them seem so attractive.
Erikson’s ideas about stages did not suggest rigid sequences. His stages describe what people need to achieve before they can easily move ahead to another developmental task. For example, Erikson pointed out that successful intimacy in young adulthood is difficult if one does not come to it with a sense of who one is, the challenge of adolescent identity building. In real life, however, people frequently move on with serious deficits. With incompletely resolved "stages," they simply do the best they can. They use whatever materials they have at hand to get as much as they can of what they have missed. Now virtual social life can play a role in these dramas of self-reparation. Time in cyberspace reworks the notion of the moratorium because it may now exist on an always-available "window." Analysts need to note, respect and interpret their patients’ "life on the screen."
Having literally written our online personae into existence, we can use them as a kind of Rorschach: a way to become more aware of what we project into everyday life. We can use the virtual to reflect constructively on the real. Cyberspace opens the possibility for identity play, but it is very serious play. People who cultivate an awareness of what stands behind their screen personae are the ones most likely to succeed in using virtual experience for personal and social transformation. And the people who make the most of their lives on the screen are those who are capable of approaching it in a spirit of self-reflection. What does my behavior in cyberspace tell me about what I want, who I am, what I may not be getting in the rest of my life?
"Case" is a 34-year-old industrial designer happily married to a female co-worker. Case describes his RL (real-life) persona as a "nice guy," a "Jimmy Stewart type like my father." He describes his outgoing, assertive mother as a "Katharine Hepburn type." For Case, who views assertiveness through the prism of this Jimmy Stewart/Katharine Hepburn dichotomy, an assertive man is quickly perceived as "being a bastard." An assertive woman, in contrast, is perceived as being "modern and together." Case says that although he is comfortable with his temperament and loves and respects his father, he feels he pays a high price for his low-key ways. In particular, he feels at a loss when it comes to confrontation, both at home and at work. Online, in a wide range of virtual communities, Case presents himself as female characters whom he calls his "Katharine Hepburn types." These are strong, dynamic, "out there" women. They remind Case of his mother, who "says exactly what’s on her mind." He tells me that presenting himself as a woman online has brought him to a point where he is more comfortable with confrontation in his RL as a man.
Additionally, Case has used cyberspace to develop a new model for thinking about his mind. He thinks of his Katharine Hepburn personae as various "aspects of the self." His online life reminds him of how Hindu gods could have different aspects or sub-personalities, or avatars, all the while being a whole self.
Case’s inner landscape is very different from that of a person with multiple personality disorder. Case’s inner actors are not split off from each other or from his sense of "himself." He experiences himself very much as a collective whole, not feeling that he must goad or repress this or that aspect of himself into conformity. He is at ease, cycling through from Katharine Hepburn to Jimmy Stewart. To use the psychoanalyst Philip Bromberg’s language, online life has helped Case learn how to "stand in the spaces between selves and still feel one, to see the multiplicity and still feel a unity." To use the computer scientist Marvin Minsky’s language, Case feels at ease cycling through his "society of mind," a notion of identity as distributed and heterogeneous. Identity, from the Latin idem, has typically been used to refer to the sameness between two qualities. On the Internet, however, one can be many and usually is.
Most recently, Ray Kurzweil, inventor of the Kurzweil reading machine and AI researcher, has created a virtual alter ego: a female rock star named Ramona. Kurzweil is physically linked to Ramona. She moves when he moves; she speaks when he speaks (his voice is electronically transformed into that of a woman); she sings when he sings. What Case experienced in the relative privacy of an online virtual community, Kurzweil suggests will be standard identity play for all of us. Ramona can be expressed "live" on a computer screen as Kurzweil performs "her" and as an artificial intelligence on Kurzweil’s web site.
Theory and objects-to-think-with. The notions of identity and multiplicity to which I was exposed in the late 1960s and early 1970s originated within the continental psychoanalytic tradition. These notions, most notably that there is no such thing as "the ego" — that each of us is a multiplicity of parts, fragments, and desiring connections — grew in the intellectual hothouse of Paris; they presented the world according to such authors as Jacques Lacan, Gilles Deleuze, and Félix Guattari. I met these ideas and their authors as a student in Paris, but despite such ideal conditions for absorbing theory, my "French lessons" remained abstract exercises. These theorists of poststructuralism spoke words that addressed the relationship between mind and body but, from my point of view, had little to do with my own.
In my lack of personal connection with these ideas, I was not alone. To take one example, for many people it is hard to accept any challenge to the idea of an autonomous ego. While in recent years, many psychologists, social theorists, psychoanalysts, and philosophers have argued that the self should be thought of as essentially decentered, the normal requirements of everyday life exert strong pressure on people to take responsibility for their actions and to see themselves as unitary actors. This disjuncture between theory (the unitary self is an illusion) and lived experience (the unitary self is the most basic reality) is one of the main reasons why multiple and decentered theories have been slow to catch on — or when they do, why we tend to settle back quickly into older, centralized ways of looking at things.
When, twenty years later, I first used my personal computer and modem to join online communities, I had an experience of this theoretical perspective that brought it shockingly down to earth. I used language to create several characters. My actions were textual — my words made things happen. I created selves that were made of and transformed by language. And in each of these different personae, I was exploring different aspects of my self. The notion of a decentered identity was concretized by experiences on a computer screen. In this way, cyberspace became an object-to-think-with for thinking about identity. In cyberspace, identity is fluid and multiple, a signifier no longer clearly points to a thing that is signified, and understanding is less likely to proceed through analysis than by navigation through virtual space.
Appropriable theories, ideas that capture the imagination of the culture at large, tend to be those with which people can become actively involved. They tend to be theories that can be "played" with. So one way to think about the social appropriability of a given theory is to ask whether it is accompanied by its own objects-to-think-with that can help it move out beyond intellectual circles.
For example, the popular appropriation of Freudian theory had little to do with scientific demonstrations of its validity. Freudian theory passed into the popular culture because it offered robust and down-to-earth objects-to-think-with. The objects were not physical but almost-tangible ideas such as dreams and slips of the tongue. People were able to play with such Freudian "objects." They became used to looking for them and manipulating them, both seriously and not so seriously. And as they did so, the idea that slips and dreams betray an unconscious started to feel natural.
In Freud’s work, dreams and slips of the tongue carried the theory. Today, life on the computer screen carries theory. People decide that they want to interact with others on a computer network. They get an account on a commercial service. They think that this will provide them with new access to people and information and of course it does.
But it does more. When they log on, they may find themselves playing multiple roles; they may find themselves playing characters of the opposite sex. In this way they are swept up by experiences that enable them to explore previously unexamined aspects of their sexuality or that challenge their ideas about a unitary self. The instrumental computer, the computer that does things for us, has another side. It is also a subjective computer that does things to us — to our view of our relationships, to our ways of looking at our minds and ourselves.
Within the psychoanalytic tradition, many "schools" have departed from a unitary view of identity, among these the Jungian, object-relations, and Lacanian. In different ways, each of these groups of analysts was banished from the ranks of orthodox Freudians for such suggestions, or somehow relegated to the margins. As America became the center of psychoanalytic politics in the mid-twentieth century, ideas about a robust executive ego moved into the psychoanalytic mainstream.
These days, the pendulum has swung away from any complacent view of a unitary self. Through the fragmented selves presented by patients and through theories that stress the decentered subject, contemporary social and psychological thinkers are confronting what has been left out of theories of the unitary self. Online experiences with "parallel lives" are part of the significant cultural context that supports new ways of theorizing about non-pathological, indeed healthy, multiple selves.
V. Relational Artifacts: A Companion Species?
In Steven Spielberg’s movie, A.I.: Artificial Intelligence, scientists build a humanoid robot boy, David, who is programmed to love. David expresses this love to a woman who has adopted him as her child. In the discussion that followed the release of the film, emphasis usually fell on the question of whether such a robot could really be developed. People thereby passed over a deeper question, one that historically has contributed to our fascination with the computer’s burgeoning capabilities. That question concerns not what computers can do or what computers will be like in the future, but rather, what we will be like. What kinds of people are we becoming as we develop more and more intimate relationships with machines?
In this context, the pressing issue in A.I. is not the potential "reality" of a non-biological son, but rather that faced by his adoptive mother — a biological woman whose response to a machine that asks for her nurturance is the desire to nurture it; whose response to a non-biological creature who reaches out to her is to feel attachment, horror, love, and confusion.
The questions faced by the mother in A.I. include "What kind of relationship is it appropriate, desirable, imaginable to have with a machine?" and "What is a relationship?" Although artificial intelligence research has not come close to creating a robot such as Spielberg’s David, these questions have become current, even urgent.
Today, we are faced with relational artifacts to which people respond in ways that have much in common with the mother in A.I. These artifacts are not perfect human replicas as was David, but they are able to push certain emotional buttons (think of them perhaps as evolutionary buttons). When a robotic creature makes eye contact, follows your gaze, and gestures towards you, you are provoked to respond to that creature as a sentient and even caring other. Psychoanalytic thought offers materials that can deepen our understanding of what we feel when we confront a robot child who asks us for love. It can help us explore what moral stance we might take if we choose to pursue such relationships.
There is every indication that the future of computational technology will include relational artifacts that have feelings, life cycles, moods, that reminisce, and have a sense of humor — that say they love us, and expect us to love them back. What will it mean to a person when their primary daily companion is a robotic dog? Or their health care "attendant" is built in the form of a robot cat? Or their software program attends to their emotional states and, in turn, has affective states of its own? In order to study these questions I have embarked on a research project that includes fieldwork in robotics laboratories, among children playing with virtual pets and digital dolls, and among the elderly to whom robotic companions are starting to be aggressively marketed.
I have noted that over the more than two decades in which I have explored people’s relationships with computers, I have used the metaphor of the Rorschach: the computer as a screen on which people projected their thoughts and feelings, their very different cognitive styles. With relational artifacts, the Rorschach model of a computer/human relationship breaks down. People are learning to interact with computers through conversation and gesture; people are learning that to relate successfully to a computer you have to assess its emotional "state."
In my previous research on children and computer toys, children described the lifelike status of machines in terms of their cognitive capacities (the toys could "know" things, "solve" puzzles). In my studies on children and Furbies, I found that children describe these new toys as "sort of alive" because of the quality of their emotional attachments to the objects and because of the idea that the Furby might be emotionally attached to them. So, for example, when I ask the question, "Do you think the Furby is alive?" children answer not in terms of what the Furby can do, but how they feel about the Furby and how the Furby might feel about them.
Ron (6): Well, the Furby is alive for a Furby. And you know, something this smart should have arms. It might want to pick up something or to hug me.
Katherine (5): Is it alive? Well, I love it. It’s more alive than a Tamagotchi because it sleeps with me. It likes to sleep with me.
Jen (9): I really like to take care of it. So, I guess it is alive, but it doesn’t need to really eat, so it is as alive as you can be if you don’t eat. A Furby is like an owl. But it is more alive than an owl because it knows more and you can talk to it. But it needs batteries so it is not an animal. It’s not like an animal kind of alive.
Although we are just at the early stages of studying children and relational artifacts, several things seem clear. Today’s children are learning to distinguish between an "animal kind of alive" and a "Furby kind of alive." The category of "sort of alive" is used with increasing frequency. And quite often, the boundaries between an animal kind of alive and a Furby kind of alive blur as the children attribute more and more lifelike properties to the emotive toy robot. So, for example, eight-year-old Laurie thinks that Furbies are alive, but die when their batteries are removed. People are alive because they have hearts, bodies, lungs, "and a big battery inside. If somebody kills you — maybe it’s sort of like taking the batteries out of the Furby."
Furthermore, today’s children are learning to have expectations of emotional attachments to computers, not in the way we have expectations of emotional attachment to our cars and stereos, but in the way we have expectations about our emotional attachments to people. In the process, the very meaning of the word "emotional" may change. Children talk about an "animal kind of alive and a Furby kind of alive." Will they also talk about a "people kind of love" and a "computer kind of love?"
We are in a different world from the old "AI debates" of the 1960s to 1980s in which researchers argued about whether machines could be "really" intelligent. The old debate was essentialist; the new objects sidestep such arguments about what is inherent in them and play instead on what they evoke in us: When we are asked to care for an object, when the cared-for object thrives and offers us its attention and concern, we experience that object as intelligent, but more important, we feel a connection to it. So the question here is not to enter a debate about whether objects "really" have emotions, but to reflect on what relational artifacts evoke in the user.
How will interacting with relational artifacts affect people’s way of thinking about themselves, their sense of human identity, of what makes people special? Children have traditionally defined what makes people special in terms of a theory of "nearest neighbors." So, when the nearest neighbors (in children’s eyes) were their pet dogs and cats, people were special because they had reason. The Aristotelian definition of man as a rational animal made sense even for the youngest children.
But when, in the 1980s, it seemed to be the computers who were the nearest neighbors, children’s approach to the problem changed. Now, people were special not because they were rational animals but because they were emotional machines. So, in 1983, a ten-year-old told me: "When there are the robots that are as smart as the people, the people will still run the restaurants, cook the food, have the families, I guess they’ll still be the only ones who’ll go to Church."
Now in a world in which machines present themselves as emotional, what is left for us?
One woman’s comment on AIBO, Sony’s household entertainment robot, startles in what it might augur for the future of person-machine relationships: "[AIBO] is better than a real dog . . . It won’t do dangerous things, and it won’t betray you . . . Also, it won’t die suddenly and make you feel very sad."
In Ray Bradbury’s story, "I Sing the Body Electric," a robotic, electronic grandmother is unable to win the trust of the girl in the family, Agatha, until the girl learns that the grandmother, unlike her recently deceased mother, cannot die. In many ways throughout the story we learn that the grandmother is actually better than a human caretaker — more able to attend to each family member’s needs, less needy, with perfect memory and inscrutable skills — and most importantly — not mortal.
Mortality has traditionally defined the human condition; a shared sense of mortality has been the basis for feeling a commonality with other human beings, a sense of going through the same life cycle, a sense of the preciousness of time and life, of its fragility. Loss (of parents, of friends, of family) is part of the way we understand how human beings grow and develop and bring the qualities of other people within themselves.
The possibilities of engaging emotionally with creatures that will not die, whose loss we will never need to face, present dramatic questions that are based on current technology — not issues of whether the technology depicted in A.I. could really be developed.
The question, "What kinds of relationships is it appropriate to have with machines?" has been explored in science fiction and in technophilosophy. But the sight of children and the elderly exchanging tenderness with robotic pets brings science fiction into everyday life and technophilosophy down to earth. In the end, the question is not just whether our children will come to love their toy robots more than their parents, but what will loving itself come to mean?
Conclusion: Toward the Future of the Computer Culture
Relational artifacts are being presented to us as companionate species at the same time that other technologies are carrying the message that mind is mechanism, most notably psychopharmacology. In my studies of attitudes toward artificial intelligence and robotics, people increasingly respond to a question about computers with an answer about psychopharmacology. Once Prozac has made someone see his or her mind as a biochemical machine, it seems a far smaller step to see the mind as reducible to a computational one.
Twenty years ago, when my student turned a Freudian slip into an information-processing error, it was computational models that seemed most likely to spread mechanistic thinking about mind. Today, psychopharmacology is the more significant backdrop to the rather casual introduction of relational artifacts as companions, particularly for the elderly and for children.
The introduction of these objects is presented as good for business and (in the case of children) good for "learning" and "socialization." It is also presented as realistic social policy. This is the "robot or nothing" argument. (If the old people don’t get the robots, they certainly aren’t going to get a pet.) Many people do find the idea of robot companions unproblematic. Their only question about them is, "Does it work?" By this, they usually mean, "Does it keep the elderly people/children quiet?" There are, of course, many other questions. To begin with, even considering putting artificial creatures in the role of companions to our children and parents raises the question of their moral status.
Already, there are strong voices that argue for the moral equivalence of robots as a companion species. Kurzweil talks of an imminent age of "spiritual machines," by which he means machines with enough self-consciousness that they will deserve moral and spiritual recognition (if not parity) with their human inventors. Computer "humor," which so recently played on anxieties about whether or not people could "pull the plug" on machines, now portrays the machines confronting their human users with specific challenges. One New Yorker cartoon has the screen of a desktop computer asking: "I can be upgraded. Can you?" Another cartoon makes an ironic reference to Kurzweil’s own vision of "downloading" his mind onto a computer chip. In this cartoon, a doctor, speaking to his surgical patient hooked up to an I.V. drip, says: "You caught a virus from your computer and we had to erase your brain. I hope you kept a back-up copy."
Kurzweil’s argument for the moral (indeed spiritual) status of machines is intellectual, theoretical. Cynthia Breazeal’s comes from her experience of connection with a robot. Breazeal was the leader of the design team for Kismet, the robotic head that was designed to learn from human tutoring, much as a young child would. She was also its chief programmer, tutor, and companion. Kismet needed her in order to become as "intelligent" as it did. Breazeal experienced what might be called a maternal connection to Kismet; she certainly describes a sense of connection with it as more than a "mere" machine. When she graduated from MIT and left the AI Laboratory where she had done her doctoral research, the tradition of academic property rights demanded that Kismet be left behind in the laboratory that had paid for its development. What she left behind was the robot "head" and its attendant software. Breazeal describes a sharp sense of loss. Building a new Kismet would not be the same.
It would be facile to analogize Breazeal’s situation to that of the mother in Spielberg’s A.I., but she is, in fact, one of the first people in the world to have had one of the signal experiences in that story. The issue is not Kismet’s achieved level of intelligence, but Breazeal’s human experience as a caretaker. Breazeal "brought up" Kismet, taught it through example, inflection, and gesture. What we need today is a new object relations psychology that will help us understand such relationships and, indeed, to navigate them responsibly. Breazeal’s concern has been with being responsible to the robots, acknowledging their moral status.
My concern is centered on the humans in the equation. In concrete terms: first we need to understand Cynthia Breazeal’s relationship to Kismet; second, we need to find a language for achieving some critical distance on it. Caring deeply for a machine that presents itself as a relational partner changes who we are as people. Presenting a machine to an aging parent as a companion changes who we are as well. Walt Whitman said, "A child goes forth every day/And the first object he look’d upon/That object he became." We make our technologies, and our technologies make and shape us. We are not going to be the same people we are today on the day we are faced with machines with which we feel in a relationship of mutual affection.
Even when the concrete achievements in the field of artificial intelligence were very primitive, the mandate of AI was controversial, in large part because it challenged ideas about human "specialness" and specificity. In the earliest days of AI, what seemed threatened was the idea that people were special because of their intelligence. There was much debate about whether machines could ever play chess; the advent of a program that could beat its creator in a game of checkers was considered a moment of high intellectual and religious drama.
By the mid-1980s, anxiety about what AI challenged about human specialness had gone beyond whether machines would be "smart" and had moved to emotional and religious terrain. At MIT, Marvin Minsky’s students used to say that he wanted to build a computer "complex enough that a soul would want to live in it." Most recently, AI scientists have grown emboldened in their claims.
They suggest the moral equivalence of people and machines. Ray Kurzweil argues that machines will be spiritual; Rodney Brooks argues that the "us and them" problem of distinguishing ourselves from the robots will disappear because we are becoming more robotic (with chips and implants) and the robots are becoming more like us (biological parts instead of silicon-based ones).
The question of human specificity and the related question of the moral equivalence of people and machines have moved from the periphery to the center of discussions about artificial intelligence. One element of "populist" resistance to the idea of moral equivalence finds expression in a number of narratives. Among these is the idea that humans are special because of their imperfections.
A ten-year-old who has just played with Breazeal’s Kismet says, "I would love to have a robot at home. It would be such a good friend. But it couldn’t be a best friend. It might know everything but I don’t. So it wouldn’t be a best friend." There is resistance from the experience of the life cycle. An adult confronting an "affective" computer program designed to function as a psychotherapist says, "Why would I want to talk about sibling rivalry to something that was never born and never had a mother?"
In the early days of the Internet, a New Yorker cartoon captured the essential psychological question: paw on keyboard, one dog says to another, "On the Internet, nobody knows you’re a dog." This year, a very different cartoon summed up more recent anxieties. Two grownups face a child in a wall of solidarity, explaining: "We’re neither software nor hardware. We’re your parents." The issue is the irreducibility of human beings and human meaning. We are back to the family, to the life cycle, to human fragility and experience. We are back to the elements of psychoanalytic culture.
With the turn of the millennium, we came to the end of the Freudian century. It is fashionable to argue that we have moved from a psychoanalytic to a computer culture, that there is no need to talk about Freudian slips now that we can talk about information processing errors. In my view, however, the very opposite is true.
We must cultivate the richest possible language and methodologies for talking about our increasingly emotional relationships with artifacts. We need far closer examination of how artifacts enter the development of self and mediate between self and other. Psychoanalysis provides a rich language for distinguishing between need (something that artifacts may have) and desire (which resides in the conjunction of language and flesh). It provides a rich language for exploring the possibility of the irreducibility of human meanings.
Finally, to come full circle, with the reinterpretation of Freudian slips in computational terms — with the general shift from meaning to mechanism — there is a loss of the notion of ambivalence. Immersion in programmed worlds and relationships with digital creatures and robotic pets puts us in reassuring microworlds where the rules are clear. But never have we so needed the ability to think, so to speak, "ambivalently," to consider life in shades of gray, to consider moral dilemmas that aren’t battles for "infinite justice" between Good and Evil. Never have we so needed to be able to hold many different and contradictory thoughts and feelings at the same time.
People may be comforted by the notion that we are moving from a psychoanalytic to a computer culture, but what the times demand is a passionate quest for joint citizenship.
THE INITIATIVE ON TECHNOLOGY AND SELF AT MIT
Sherry Turkle, Director
An unstated question lies behind much of our current preoccupation with the future of technology. The question is not what will technology be like in the future, but rather, what will we be like, what are we becoming as we forge increasingly intimate relationships with our machines? — Sherry Turkle
Beyond catalyzing changes in what we do, technology profoundly affects how we think. Technology alters people’s awareness of self and redefines their relationships with the world. Though this has always been true, recently the pace and depth of technology’s effects on identity have increased. The Internet has become a space for new forms of self-exploration and social encounter.
Psychopharmacology, genetic engineering, biotechnology, artificial intelligence, nanotechnology, and robotics are among the technologies now raising fundamental questions about selfhood, subjectivity, relationships, development, and what it means to be human.
Origin and Purpose
The MIT Initiative on Technology and Self was founded in 2001 by Sherry Turkle, Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, with the generous support of the Mitchell Kapor Foundation. It focuses on how contemporary technologies become enmeshed in the formation of human identity. Its goal is to create a center for reflection and research on the subjective side of technology and to raise the level of public discourse on the social and psychological dimensions of technological change. The Initiative welcomes participants from the academic community (both faculty and students), as well as from journalism and industry.
About the Working Groups
The Initiative sponsors working groups in specific thematic areas. The current working groups are Adolescence, Technology, and Identity; Psychopharmacology and Identity ("Rx/ID"); Design, Space, and Software ("Architecture"), and Robots, Creatures, and Human Identity. Groups in formation include Information Technologies and Professional Identity ("Virtuality and Its Discontents"); Nanotechnology and Identity; The Experience of the Archive: Physical and Digital; Gender, Technology, and Identity; and Psychodynamic Perspectives on Technology and Self. The structure of the working group is flexible. For example, some groups run speaker series or hold conferences; some are centered on a research project or planning one; others focus on sharing ongoing research and/or the analysis of texts.
About the "(Evocative) Objects" Lunch Series
One of the hallmark activities of the Initiative has been an informal lunch speaker series known as the "(Evocative) Objects" lunches. This series is open to the public and, within the Initiative community, it provides an opportunity for members of the various working groups to come together. These lunches are informal gatherings focused on an (evocative) object, that is, an object that causes us to think differently about such categories as self, other, intention, desire, emotion, the body . . .
Past speakers at the "(Evocative) Objects" Lunch Series have included:
Sherry Turkle, Professor of The Social Studies of Science and Technology, MIT
Ramona, Raymond Kurzweil’s virtual alter ego
Roberta Baskin, Nieman Fellow at Harvard University, Senior Producer of ABC News "20/20"
The Personal Digital Assistant
Oron Catts and Ionat Zurr, University of Western Australia
The Tissue Culture and Art Project
Mitchel Resnick, Associate Professor of Learning Research, Media Lab, MIT
Emotional Objects for Children
Edith Ackermann, Professor of Developmental Psychology, University of Aix-Marseille I, Visiting Professor of Architecture, MIT
Animated Toys, Artificial Creatures, and Avatars
Evelynn Hammonds, Associate Professor of the History of Science, MIT
Morphing Software: Photography, Race and "Miscegenation" in Cyberspace
Batya Friedman and Peter Kahn, University of Washington
Sony’s Robotic Dog AIBO, Significant Others, and the Imaginative Leap
How do robotic pets challenge traditional boundaries between who or what can have intentions and desires, extend our conceptions of self and significant others, and impede children’s social and moral development?
Susan Yee, Research Scientist in the Department of Architecture, MIT
The Archive as Object: Physical and Digital
Marina Umaschi Bers, Assistant Professor at the Eliot-Pearson Department of Child Development, Tufts University
Zora is a three-dimensional multi-user environment that engages children in the design of a graphical virtual city and its social organization.
Peter Kramer, Clinical Professor of Psychiatry and Human Behavior at Brown University and author of the best-selling book Listening to Prozac
Prozac and Artistic Creativity, an examination of the debate over the rights of a troubled individual to be freed from pain and suffering through medication, and the rights of society to enjoy literary, artistic, musical or other creative expressions, which can be suppressed if the artist is medicated to relieve his or her pain.
Mitchell Kapor, Chairman, Mitchell Kapor Foundation, founder of the Lotus Development Corporation and designer of Lotus 1-2-3
Lindenworld, a highly realistic, multi-user, 3D environment built by its participants to serve as a new laboratory for social interaction and the exploration of identity.
Eric Caplan, The Pfizer Corporation
The Initiative also sponsors conferences on topics that address changes in technology and the corresponding effects on identity and self. For example, an inaugural conference on "Adolescence, Technology, and Identity" was funded by the Spencer Foundation. This conference became the foundation of a working group of the same name. Conferences in the planning stages include "Whither Psychoanalysis in Digital Culture?" "Nanotechnology and Human Identity," and "Is PowerPoint Changing the Way That We Think?"
During the fall and spring semesters, Professor Turkle offers a seminar, "Technology and Self," that complements and supports the Initiative’s work. It provides an opportunity for students (graduate and advanced undergraduate) to receive academic credit as they participate in Initiative activities. For more information about the seminar, please contact Robin Wilson, Project Director, at (617) 452-2844.
In the academic year 2002-2003, the Initiative plans to add a postdoctoral component to its academic activities and to begin a publishing program to give greater international visibility to the issues of technology and self.
In addition to planning conferences, the Initiative has received major funding from the NSF for the research project "Information Technologies and Professional Identity: A Comparative Study of the Effects of Virtuality." The Initiative is planning additional research projects in the following areas:
- "Children, Identity, and Digital Culture,"
- "Nanotechnology and Human Identity,"
- "Psychodynamic Perspectives on Technology and Self,"
- "Archiving the Object: Physical vs. Digital,"
- "Adolescence, Technology, and Identity," and
- "Technologies of Self-Reflection: A New Artistic Genre."