Beyond Computation: A Talk with Rodney Brooks

June 7, 2002 by John Brockman

Rodney Brooks is trying to build robots with properties of living systems. These include self-reproducing and self-assembling robots and one, inspired by Bill Joy, that wanders around the corridors, finds electrical outlets, and plugs itself in. His students’ edgy projects include real-time MRI imagery, virtual colonoscopies, programs that create DNA for E. coli cells that act as computers, and eventually, self-organizing smart biomaterials that grow into objects, such as a table.

Originally published on Edge.org June 5, 2002. Published on KurzweilAI.net, June 7, 2002.

Introduction

Rodney Brooks, a computer scientist and Director of MIT’s Artificial Intelligence Laboratory, is looking for something beyond computation, in the sense that we don’t understand, and can’t describe, what’s going on inside living systems using computation alone. When we build computational models of living systems, such as a self-evolving system or an artificial immunology system, they’re not as robust or rich as real living systems.

"Maybe we’re missing something," Brooks asks, "but what could that something be?" He is puzzled that we’ve got all these biological metaphors that we’re playing around with — artificial immunology systems, building robots that appear lifelike — but none of them come close to real biological systems in robustness and in performance. "What I’m worrying about," he says, "is that perhaps in looking at biological systems we’re missing something that’s always in there. You might be tempted to call it an essence of life, but I’m not talking about anything outside of biology or chemistry."
JB

RODNEY A. BROOKS is Director of the MIT Artificial Intelligence Laboratory, and Fujitsu Professor of Computer Science. He is also Chairman and Chief Technical Officer of iRobot, a 120-person robotics company. Dr. Brooks also appeared as one of the four principals in the 1997 Errol Morris movie Fast, Cheap & Out of Control (named after one of his papers in the Journal of the British Interplanetary Society), which was one of Roger Ebert’s 10 best films of the year. He is the author of Flesh and Machines.


ROD BROOKS: Every nine years or so I change what I’m doing scientifically. Last year, 2001, I moved away from building humanoid robots to worry about what the difference is between living matter and non-living matter. You have an organization of molecules over here and it’s a living cell; you have an organization of molecules over here and it’s just matter. What is it that makes something alive? Humberto Maturana was interested in this question, as was the late Francisco Varela in his work on autopoiesis. More recently, Stuart Kauffman has talked about what it is that makes something living, how it is a self-perpetuating structure of interrelationships.

We have all become computation-centric over the last few years. We’ve tended to think that computation explains everything. When I was a kid, I had a book which described the brain as a telephone-switching network. Earlier books described it as a hydrodynamic system or a steam engine. Then in the ’60s it became a digital computer. In the ’80s it became a massively parallel digital computer. I bet there’s now a kid’s book out there somewhere which says that the brain is just like the World Wide Web because of all of its associations. We’re always taking the best technology that we have and using that as the metaphor for the most complex things — the brain and living systems. And we’ve done that with computation.

But maybe there’s more to us than computation. Maybe there’s something beyond computation in the sense that we don’t understand and we can’t describe what’s going on inside living systems using computation only. When we build computational models of living systems — such as a self-evolving system or an artificial immunology system — they’re not as robust or rich as real living systems. Maybe we’re missing something, but what could that something be?

You could hypothesize that what’s missing might be some aspect of physics that we don’t yet understand. David Chalmers has certainly used that notion when he tries to explain consciousness. Roger Penrose uses that notion to a certain extent when he says that it’s got to be the quantum effects in the microtubules. He’s looking for some physics that we already understand but are just not describing well enough.

If we look back at how people tried to understand the solar system in the time of Kepler and Copernicus, we notice that they had their observations, geometry, and algebra. They could describe what was happening in those terms, but it wasn’t until they had calculus that they were really able to make predictions and build a good model of what was happening. My working hypothesis is that in our understanding of complexity and of how lots of pieces interact we’re stuck at that algebra-geometry stage. There’s some other tool, some organizational principle, that we need to understand in order to really describe what’s going on.

And maybe that tool doesn’t have to be disruptive. If we look at what happened in the late 19th century through the middle of the 20th, there were a couple of very disruptive things that happened in physics: quantum mechanics and relativity. The whole world changed. But computation also came along in that time period — around the 1930s — and that wasn’t disruptive. If you were to take a 19th century mathematician and sit him down in front of a chalk board, you could explain the ideas of computation to him in a few days. He wouldn’t be saying, "My God, that can’t be true!" But if we took a 19th century physicist (or for that matter, an ordinary person in the 21st century) and tried to explain quantum mechanics to him, he would say, "That can’t be true. It’s too disruptive." It’s a completely different way of thinking. Using computation to look at physical systems is not disruptive to the extent that it needs its own special physics or chemistry; it’s just a way of looking at organization.

So, my mid-life research crisis has been to scale down looking at humanoid robots and to start looking at the very simple question of what makes something alive, and what the organizing principles are that go on inside living systems. We’re coming at it with two and a half or three prongs. At one level we’re trying to build robots that have properties of living systems that robots haven’t had before. We’re trying to build robots that can repair themselves, that can reproduce (although we’re a long way from self-reproduction), that have metabolism, and that have to go out and seek energy to maintain themselves. We’re trying to design robots that are not built out of silicon and steel, but out of materials that are not as rigid or as regular as traditional ones, materials more like what we’re built out of. Our theme phrase is that we’re going to build a robot out of Jello. We don’t really mean we’re actually going to use Jello, but that’s the image we have in our mind. We are trying to figure out how we could build a robot out of "mushy" stuff and still have it be a robot that interacts in the world.

The second direction we’re going is building large-scale computational experiments. People might call them simulations, but since we’re not necessarily simulating anything real I prefer to call them experiments. We’re looking at a range of questions on living systems. One student, for example, is looking at how multi-cellular reproduction can arise from single-cell reproduction. When you step back a little bit you can understand how single-cell reproduction works, but how did that turn into multi-cellular reproduction, which at one level of organization looks very different? In single-cell reproduction one thing gets bigger and then just breaks into two; in multi-cellular reproduction you’re actually building different sorts of cells. This is important in speculating about the pre-biotic emergence of self-organization in the soup of chemicals that used to be Earth. We’re trying to figure out how that self-organization occurred, and how Darwinian evolution, DNA, etc. were bootstrapped out of it. The current dogma is that DNA is central. But maybe DNA came along a lot later as a regulatory mechanism.
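
To give a flavor of what a "computational experiment" of this kind might look like, here is a deliberately toy sketch in Python. It is not the student’s model; every name and rule in it is invented for illustration. Single cells just grow and break into two, while the "multi-cellular" variant adds a crude differentiation rule so that daughters can become different sorts of cells.

```python
import random

# Toy sketch (hypothetical, for illustration only): single-cell
# reproduction = grow and break into two; the "multi-cellular" variant
# adds a rule that makes daughter cells take on different types.

SPLIT_SIZE = 2.0

def step_single_cell(cells):
    """One time step: every cell grows; any cell over the threshold splits."""
    next_gen = []
    for size in cells:
        size += random.uniform(0.1, 0.5)           # growth
        if size >= SPLIT_SIZE:
            next_gen.extend([size / 2, size / 2])  # breaks into two
        else:
            next_gen.append(size)
    return next_gen

def step_multicellular(cells):
    """Same growth rule, but daughters differentiate into types 'A' and 'B'."""
    next_gen = []
    for size, cell_type in cells:
        size += random.uniform(0.1, 0.5)
        if size >= SPLIT_SIZE:
            other = "B" if cell_type == "A" else "A"   # crude differentiation
            next_gen.extend([(size / 2, cell_type), (size / 2, other)])
        else:
            next_gen.append((size, cell_type))
    return next_gen

if __name__ == "__main__":
    single = [1.0]
    multi = [(1.0, "A")]
    for _ in range(10):
        single = step_single_cell(single)
        multi = step_multicellular(multi)
    print(len(single), "single cells")
    print(len(multi), "cells of types", {t for _, t in multi})
```

Even a caricature like this makes the question concrete: what extra organizational rule has to be added before "one thing breaking into two" starts to look like a body being built out of different kinds of cells?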

In other computational experiments we’re looking at very simple animals and modeling their neural development. We’re looking at polyclad flatworms, which have a very primitive but very adaptable brain with a couple of thousand neurons. If you take a polyclad flatworm and cut out its brain, it doesn’t carry out all of its usual behaviors but it can still survive. If you then take a brain from another one and put it into this brainless flatworm, after a few days it can carry out all of its behaviors pretty well. If you take a brain from another one, turn it about 180 degrees, and put it in backwards, the flatworm will walk backwards a little bit for the first few days, but after a few days it will be back to normal with this brain helping it out. Or you can take a brain and flip it over 180 degrees, and it adapts and regrows. How is that regrowth and self-organization happening in this fairly simple system? All of these different projects are looking at how this self-organization happens, using computational experiments in a very artificial-life-like way.
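
The flatworm story can likewise be caricatured in code. The sketch below is entirely made up for this page, not the lab’s model: a single mapping weight stands in for the transplanted brain, it starts out sign-flipped because the brain was "installed backwards," and a simple error-driven update stands in for the adaptation that brings behavior back to normal.

```python
# Hypothetical caricature of the "backwards brain" experiment: one mapping
# weight turns a desired heading into a motor command. Installed backwards
# it is -1.0; feedback-driven updates pull it back toward +1.0.

LEARNING_RATE = 0.2

def adapt(weight, trials=50):
    """Correct a mis-wired sensorimotor mapping from behavioral feedback."""
    for t in range(trials):
        desired = 1.0                  # "walk forward"
        actual = weight * desired      # what the mis-wired brain commands
        error = desired - actual       # mismatch the animal experiences
        weight += LEARNING_RATE * error * desired   # delta-rule update
        if t % 10 == 0:
            print(f"trial {t:2d}: motor command = {actual:+.2f}")
    return weight

if __name__ == "__main__":
    reversed_brain = -1.0              # brain rotated 180 degrees
    final = adapt(reversed_brain)
    print(f"final mapping weight: {final:+.2f} (target +1.00)")
```

The interesting question, of course, is the one the toy version dodges: how does the real tissue achieve the same recovery through local regrowth and self-organization, with no explicit error signal or single weight to tune?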

Continued at: http://www.edge.org/3rd_culture/brooks_beyond/beyond_index.html Copyright © 2002 by Edge Foundation, Inc.