AI (the movie) and AI Panel Discussion at MIT

May 7, 2001 by Amara D. Angelica

Steven Spielberg’s film “A.I.” was previewed at MIT on April 30, 2001 at an event that seemed geared more toward humans than futuristic robots.

Originally published May 7, 2001 on KurzweilAI.net.

In the beginning of Steven Spielberg’s “A.I.,” William Hurt’s scientist/creator-of-A.I. character pokes a needle through the hand of one of his creations (a human, by visual accounts), who emits a shriek. He then tells her to undress, to which she complies without question.

This was intended to illustrate that we haven’t come very far in creating robots: in a superficial physical sense they’re analogs of humans, but they lack emotional depth, with no “ego” from which to question.

But then the scientist proposes the next step: to create an artificial intelligence with emotional depth–specifically, the ability to love. Questioned on the moral implications, he replies: “Didn’t God create Adam…to love Him?”

In a videotaped interview, supporting actor Jude Law elaborated: “If they are going to outlive us, they better love.”

Indeed, love, or human emotion, was a major theme of the A.I. event at MIT on April 30, 2001, which featured a ten-minute preview of the film, videotaped interviews with Steven Spielberg and others, and a Q&A session with Kathleen Kennedy, the executive producer of the film, and its star, Haley Joel Osment. There was also a panel discussion with four leading experts in the field–Raymond Kurzweil, Rodney Brooks, Sherry Turkle and Cynthia Breazeal–who also made presentations about their work and insights on the film and the field of AI.

The event focused on A.I. in the contemporary world and how humans interact with technologies that perform intelligent tasks. As the film promotion says of Haley Joel Osment’s character David, “His love is real, but he is not.” The four experts illustrated how artificially intelligent systems, many of which we take for granted, are already real to us, interwoven into the fabric of our lives–in medicine, travel, and in any situation where we have come to rely on the digital storing of information that humans previously stored in their heads.

It’s important to distinguish between two forms of AI here:

  • “Weak AI” is artificial intelligence in the contemporary sense: automated airline ticketing systems, medical diagnostic systems, and even the computer that diagnoses problems in your car are forms of machine intelligence, albeit focused on specific tasks.
  • “Strong AI” refers to artificially intelligent systems that act autonomously and reflect understanding, contextualization, and human-enough qualities to pass the classic Turing (conversational) test.

The film depicts the latter. Sherry Turkle, Professor of the Sociology of Science at MIT, pointed out that we already assign emotional qualities to these weak AI machines, even though they’re not conscious. That is, we anthropomorphize these machines and their software–we believe in an emotional motivation.

Turkle, who has been researching children’s responses to non-biological “relational objects,” explored how we will likely create a new existence for artificially intelligent beings: the idea that they are alive in a different sense. In our relationships, whether with simple toys or complex chatterbots (such as Ramona), we adapt so we can have more complex and fulfilling relationships with these objects.

As she puts it, the question of intelligence is bypassed once the machine fulfills its task. We then develop an emotional relationship with the machine and assign emotional qualities to it. We know it isn’t purposeful, but we want to believe it is–to understand how the machine feels.

That’s the goal of research by MIT AI Lab postdoctoral fellow Dr. Cynthia Breazeal: the creation of an emotional robot, or at least a robot with the emotional range of an infant. Her KISMET does not attempt to pass Turing tests with semantic sophistication, nor does it play chess; instead, it reacts to a wide variety of stimuli in ways modeled after basic developmental psychology to create a different kind of interface with humans.

Despite her reminder that a great deal remains to be done to improve KISMET’s sophistication, the video presentations she showed made it striking how “real” KISMET’s interactions with humans appear. When KISMET lowered its oversized eyes and frowned, not only the videotaped human but the audience as well cooed in response to this display of seeming emotion; it was as if KISMET were pushing the genetically coded buttons we all share in response to body language.

She also pointed out that it is not hubris that pushes her and others forward in their pursuit of AI. The goal is a greater understanding of the basic human capacities of learning and emotion by attempting to reverse engineer them. The machines we make become our mirrors, she says, much as the children in Sherry Turkle’s research project emotions and qualities onto their machines.

Rodney Brooks, Director of the Artificial Intelligence Lab at MIT, explored how we are becoming machines ourselves, or at least merging with them. Prosthetics, neural implants, and laser eye surgery illustrate how we are already moving toward this merger, he said. The future of biotechnology may reveal ways in which we can gain full control over biological processes and engineer our own bodies as we wish. In the near future, then, the boundary between human and machine becomes indistinct. It is not a question, as he put it, of “us and them,” but more a matter of “us as them.”

Case in point: Kathleen Kennedy said a 35-foot mechanical Tyrannosaurus rex accidentally came to life and wreaked havoc during a lunch break on the set of “Jurassic Park.” Crew members reacted as though it were real, fleeing in terror. This serves as a dual metaphor: does a machine have to be truly artificially intelligent to evoke real emotions from its creators, and does the machine pose a threat to us?

We most likely will not, as Rodney Brooks pointed out, create 747s by accident in our backyards (an example he used for the feared appearance of dangerous technologies that will emerge as if by accident), nor will we create 35-foot man-eating monsters. But we have created plenty of nuclear weapons. What is our responsibility, then, not only in creating machines that love us, but machines that could destroy us?

No stranger to this debate, inventor/futurist/author Raymond Kurzweil spoke to these concerns, pointing out the need for more, not less, research into areas of emerging technologies in order to lay the ethical and legal foundations, and to develop safeguards, before these technologies can be put to destructive use. He also illustrated that the more complex forms of artificial intelligence on the horizon will demand recognition, socially and politically. And their arrival is imminent, based on past and current exponential growth trends across a variety of technologies and scientific disciplines.

This was not a showcase of futuristic gadgetry, nor does the film seem to be concerned with technological extravagance in its portrayal of the future. The concern was psychological, social, and emotional–and most importantly, human responsibility in a world where the nature of interaction between humans and their created counterparts is already very complex, and will swiftly become more so.

Links to check out:

“A.I.”–Great Flash site about the movie, with many features, including a chatbot.

KISMET–Dr. Cynthia Breazeal’s sociable humanoid robot.

MIT Artificial Intelligence Lab–Where AI is being pursued. Rodney Brooks is the Director.

Sherry Turkle–Links to her research and publications.


Rodney Brooks, Sherry Turkle, Raymond Kurzweil and Cynthia Breazeal