BEYOND COMPUTATION: A TALK WITH RODNEY BROOKS [6.5.02]

Introduction

Rodney Brooks, a computer scientist and Director of MIT's Artificial Intelligence Laboratory, is looking for something beyond computation, in the sense that we don't understand and can't describe what's going on inside living systems using computation only. When we build computational models of living systems, such as a self-evolving system or an artificial immunology system, they're not as robust or rich as real living systems. "Maybe we're missing something," Brooks asks, "but what could that something be?" He is puzzled that we've got all these biological metaphors that we're playing around with - artificial immunology systems, robots that appear lifelike - but none of them come close to real biological systems in robustness and in performance. "What I'm worrying about," he says, "is that perhaps in looking at biological systems we're missing something that's always in there. You might be tempted to call it an essence of life, but I'm not talking about anything outside of biology or chemistry."

RODNEY A. BROOKS is Director of the MIT Artificial Intelligence Laboratory and Fujitsu Professor of Computer Science. He is also Chairman and Chief Technical Officer of iRobot, a 120-person robotics company. Dr. Brooks also appeared as one of the four principals in the 1997 Errol Morris movie Fast, Cheap, and Out of Control (named after one of his papers in the Journal of the British Interplanetary Society), one of Roger Ebert's ten best films of the year. He is the author of Flesh and Machines.

BEYOND COMPUTATION: A TALK WITH RODNEY BROOKS

ROD BROOKS: Every nine years or so I change what I'm doing scientifically. Last year, 2001, I moved away from building humanoid robots to worry about what the difference is between living matter and non-living matter. You have an organization of molecules over here and it's a living cell; you have an organization of molecules over here and it's just matter. What is it that makes something alive? Humberto Maturana was interested in this question, as was the late Francisco Varela in his work on autopoiesis. More recently, Stuart Kauffman has talked about what it is that makes something living, how it is a self-perpetuating structure of interrelationships.

We have all become computation-centric over the last few years. We've tended to think that computation explains everything. When I was a kid, I had a book which described the brain as a telephone-switching network. Earlier books described it as a hydrodynamic system or a steam engine. Then in the '60s it became a digital computer. In the '80s it became a massively parallel digital computer. I bet there's now a kid's book out there somewhere which says that the brain is just like the World Wide Web because of all of its associations. We're always taking the best technology that we have and using that as the metaphor for the most complex things - the brain and living systems. And we've done that with computation.

But maybe there's more to us than computation. Maybe there's something beyond computation in the sense that we don't understand and we can't describe what's going on inside living systems using computation only. When we build computational models of living systems - such as a self-evolving system or an artificial immunology system - they're not as robust or rich as real living systems. Maybe we're missing something, but what could that something be?
You could hypothesize that what's missing might be some aspect of physics that we don't yet understand. David Chalmers has certainly used that notion when he tries to explain consciousness. Roger Penrose uses that notion to a certain extent when he says that it's got to be the quantum effects in the microtubules. He's looking for some physics that we already understand but are just not describing well enough. If we look back at how people tried to understand the solar system in the time of Kepler and Copernicus, we notice that they had their observations, geometry, and algebra. They could describe what was happening in those terms, but it wasn't until they had calculus that they were really able to make predictions and have a really good model of what was happening. My working hypothesis is that in our understanding of complexity and of how lots of pieces interact we're stuck at that algebra-geometry stage. There's some other tool - some organizational principle - that we need to understand in order to really describe what's going on.
[Figure: Changing metaphors]
And maybe that tool doesn't have to be disruptive. If we look at what happened in the late 19th century through the middle of the 20th, there were a couple of very disruptive things that happened in physics: quantum mechanics and relativity. The whole world changed. But computation also came along in that time period - around the 1930s - and that wasn't disruptive. If you were to take a 19th-century mathematician and sit him down in front of a chalkboard, you could explain the ideas of computation to him in a few days. He wouldn't be saying, "My God, that can't be true!" But if we took a 19th-century physicist (or for that matter, an ordinary person in the 21st century) and tried to explain quantum mechanics to him, he would say, "That can't be true. It's too disruptive." It's a completely different way of thinking. Using computation to look at physical systems is not disruptive in the sense of needing its own special physics or chemistry; it's just a way of looking at organization.
So, my mid-life research crisis has been to scale down looking at humanoid robots and to start looking at the very simple question of what makes something alive, and what organizing principles are at work inside living systems. We're coming at it with two and a half or three prongs.
On the computational side, I'm trying to build an interesting chemistry - one grounded in physics, with a structure where you get rich combinatorics out of simple components in a physical simulation - so that properties of living systems can arise through spontaneous self-organization.
The question here is: What sorts of influences do you need on the outside?
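As a purely illustrative toy - my own construction, with invented rules and rates, not the simulation Brooks is describing - here is the flavor of that kind of experiment: a well-mixed pot of simple components that can bond into chains or break apart, where a single "outside influence" (how strongly bonding is driven) determines whether any interesting combinatorial structure emerges.

```python
# Toy "artificial chemistry": a well-mixed pot of simple components that can
# bond into chains or break apart. Every rule and rate here is an invented
# illustration, not the actual model discussed in the talk.
import random

def run_pot(bond_p, split_p=0.05, steps=20000, seed=1):
    random.seed(seed)
    pot = list("ab" * 200)                     # 400 simple components, two kinds
    for _ in range(steps):
        if random.random() < bond_p and len(pot) >= 2:
            i, j = random.sample(range(len(pot)), 2)   # join two random molecules
            a, b = pot[i], pot[j]
            for k in sorted((i, j), reverse=True):
                pot.pop(k)
            pot.append(a + b)
        elif random.random() < split_p:
            i = random.randrange(len(pot))             # a chain breaks at random
            m = pot[i]
            if len(m) > 1:
                cut = random.randint(1, len(m) - 1)
                pot[i:i + 1] = [m[:cut], m[cut:]]
    return max(len(m) for m in pot)

# The "outside influence" is just how strongly bonding is driven:
for bond_p in (0.02, 0.20):
    print(f"bond_p={bond_p}: longest chain after 20000 steps =", run_pot(bond_p))
```

With weak driving the pot stays a soup of short fragments; with strong driving long chains take over - a crude stand-in for the question of what external influences a self-organizing chemistry needs.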
My company, iRobot, has been pushing in a bunch of different areas. There's been a heightened interest in military robots, especially since September 11. By September 12 we had some of our robots down at Ground Zero in New York trying to help look for survivors under the rubble. There's been an increase in interest in robots that can do search and rescue, in robots that can find mines, and in portable robots that can do reconnaissance. These would be effective when small groups, like the special forces we've seen in Afghanistan, go in somewhere and don't necessarily want to stick their heads up to look inside a place. They can send the robot in to do that.
Another robot that we're just starting to get into production now, after three years of testing, is a robot to go down oil wells. This particular one is 5 centimeters in diameter and 14 meters long. It has to be autonomous, because you can't communicate by radio. Right now, if you want to go and manipulate oil wells while they are in production, you need a big infrastructure on the surface to shove a big thick cable down. This can mean miles and miles of cable, which means tons of cable on the surface, or a ship sitting above the oil well to push this stuff down through 30-foot segments of pipe that go one after the other after the other for days and days and days. We've built these robots that can go down oil wells - where the pressure is 10,000 psi at 150 degrees Centigrade - carry along instruments, do various measurements, and find out where there might be too much water coming into the well. Modern wells have sleeves that can be moved back and forth to block off flow in segments where changes in pressure in the shale layer from oil flow would suggest that it would be more effective to let the oil in somewhere else. When you have a managed oil well you're going to increase the production by about a factor of two over the life of the well. The trouble is, it's been far too expensive to manage the oil wells because you need this incredible infrastructure. These robots cost something on the order of a hundred thousand dollars.
Another thing happening in robots is toys. Just like the first microprocessors, the first robots are getting into people's homes in toys. There's been a bit of a downturn in high-tech toys since September 11, and we're more back to basics, but it will spring back next year. There are a lot of high-tech, simple robot toys coming on the market; we're certainly playing in that space. Another interesting thing just now starting to happen is robots in the home. For a couple of years now you've been able to buy lawn-mowing robots from the Israeli company Friendly Machines. In the past month Electrolux has just started selling their floor-cleaning robot. A couple of other players have also made announcements, but no one's delivering besides Electrolux. If these products turn out to be successful, we're at the start of the curve of getting robots into our homes to do useful work.

My basic research is conducted at the Artificial Intelligence Lab at MIT, which is an interdisciplinary lab. We get students from across the Institute, although the vast majority are computer science majors. We also have electrical engineering majors, brain and cognitive science students, some mechanical engineering students, even some aeronautics and astronautics students these days, because there is a big push for autonomous systems in space. We work on a mixture of applied and wacky theoretical stuff.
The most successful applied stuff over the last three or four years has been in assisting surgery. The newest thing, which is just in clinical trials right now, is virtual colonoscopy. Instead of actually having to shove the thing up to look, we can take MRI scans, and then the clinician sits there and does a fly-through of the body. Algorithms go in, look for polyps, and highlight the potential polyps. It's an external scan to replace what has previously been an internal intrusion. The clinical trials have just started. I view this registration of data sets as a step forward. It's like the Star Trek tricorder, which scans up and down the body and tells you what's wrong. We're building the technologies that are going to allow that sort of thing to happen. If these clinical trials work out, within five years these virtual colonoscopies could become common. Scanning a patient with something like the tricorder is a lot further off, but that's the direction we're going; we're putting those pieces of technology together.

That's the applied end of what we're doing at the lab. At the wackier, far-out end, Tom Knight now has a compiler in which you give a simple program to the system, and it compiles the program into a DNA strip.
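Stepping back to the imaging work for a moment: the "registration of data sets" mentioned above is, at bottom, the problem of aligning one scan of the body with another. As a generic stand-in - the textbook SVD-based rigid alignment of two 3-D point sets, not the lab's actual surgical or colonoscopy software - a minimal sketch looks like this:

```python
# Generic rigid registration of two 3-D point sets (the textbook Kabsch /
# SVD method): find the rotation R and translation t that best map scan B
# onto scan A. Shown only to illustrate the idea of registering data sets.
import numpy as np

def rigid_register(A, B):
    """Return R, t minimizing sum ||R @ B[i] + t - A[i]||^2 (least squares)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (B - cB).T @ (A - cA)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cA - R @ cB
    return R, t

# Self-check: recover a known rotation and translation of random points.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))                  # "scan A": 100 reference points
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
B = (A - t_true) @ R_true                      # "scan B": same points, moved
R, t = rigid_register(A, B)
print("max alignment error:", np.abs(B @ R.T + t - A).max())
```

Practical imaging systems layer segmentation, deformable alignment, and detection on top of primitives like this one, but the alignment step gives the flavor of what "registration" means.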
To explain amorphous computing, let me suggest the following thought experiment. Say that in a bucket of paint you have a whole bunch of computers which are little display elements. Instead of having a big LCD screen, you just get your paintbrush and paint this paint on the wall, and these little computational elements can communicate locally with the other elements near them in the paint. They're not regularly spaced, but you can predict the density ahead of time, and have them organize themselves into a big geometric display. Next you couple this with some of these cells that can do digital computation.
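To make the paint thought experiment concrete, here is a minimal sketch - my own toy, with made-up element counts and radii, not the lab's actual system - of how randomly scattered elements that can only talk to nearby neighbors might get a rough sense of position, by flooding hop-count gradients from a few anchor elements and using the hop counts as crude coordinates:

```python
# Toy amorphous-computing sketch: elements scattered at random (like paint on
# a wall) talk only to neighbors within a short radius, and estimate rough
# coordinates by flooding hop-count gradients from three anchor elements.
# All counts and radii are made up for illustration.
import random
from collections import deque

random.seed(0)
N, RADIUS = 500, 0.08                      # number of elements, comm range

pts = [(random.random(), random.random()) for _ in range(N)]

# The only thing an element knows: which other elements it can hear locally.
nbrs = [[j for j, (xj, yj) in enumerate(pts)
         if j != i and (pts[i][0] - xj) ** 2 + (pts[i][1] - yj) ** 2 <= RADIUS ** 2]
        for i in range(N)]

def hop_gradient(seed):
    """Breadth-first flood of a hop count outward from one seed element."""
    hops = [None] * N
    hops[seed] = 0
    queue = deque([seed])
    while queue:
        i = queue.popleft()
        for j in nbrs[i]:
            if hops[j] is None:
                hops[j] = hops[i] + 1
                queue.append(j)
    return hops

# Each element's triple of hop counts to the anchors is a crude, locally
# computed coordinate it can use to pick its part of a shared display pattern.
anchors = [0, 1, 2]
grads = [hop_gradient(a) for a in anchors]
print("element 42 hop-coordinates:", tuple(g[42] for g in grads))
```

Every element runs the same tiny program and only ever talks to its neighbors, yet the ensemble ends up with enough geometry to lay out a display.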
A little further out, you grow a sheet of cells - just feed 'em some sugar and have them grow. They're all doing the same little computation - communicating with their neighbors by diffusing lactone molecules - and you have them self-organize and understand their spatial structure. Thirty years from now, instead of growing a tree, cutting down the tree and building this wooden table, we would be able to just place some DNA in some living cells, and grow the table, because they self-organize. They know where to grow and how to change their production depending on where they are. This is going to be a key to this new industrial infrastructure of biomaterials - a little bit of computation inside each cell, and self-organization.
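A rough illustration of that kind of chemical coordination - a classic one-dimensional morphogen-gradient toy model with invented parameters, not Knight's actual cell programs - is a row of cells sharing a diffusing signal, each deciding its role purely from the concentration it senses locally:

```python
# Toy morphogen gradient: a row of "cells" shares one diffusing, decaying
# signal secreted at the left end, and each cell picks a role purely from the
# concentration it senses locally. Parameters are invented for illustration.
N_CELLS, STEPS = 60, 5000
D, DECAY, SOURCE = 0.2, 0.01, 1.0      # diffusion rate, decay rate, secretion

conc = [0.0] * N_CELLS
for _ in range(STEPS):
    conc[0] += SOURCE                  # cell 0 keeps secreting the signal
    new = conc[:]
    for i in range(N_CELLS):
        left = conc[i - 1] if i > 0 else conc[i]
        right = conc[i + 1] if i < N_CELLS - 1 else conc[i]
        new[i] += D * (left + right - 2 * conc[i]) - DECAY * conc[i]
    conc = new

# Each cell thresholds its own local reading; a global banded pattern emerges
# even though no cell was ever told where it sits in the row.
def role(c):
    return "head" if c > 10 else "body" if c > 1 else "tail"
print("".join(role(c)[0] for c in conc))
```

The banding appears even though no cell knows its index; position is inferred from local chemistry, which is exactly the trick engineered cells would need in order to "know where to grow."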
We've come a long way since the early AI stuff. In the '50s, when John McCarthy had that famous six-week meeting up in Dartmouth where he coined the term "artificial intelligence," people got together and thought that the key to understanding intelligence was being able to reproduce the stuff that those MIT and Carnegie Tech graduates found difficult to do. Al Newell and Herb Simon, for example, built some programs that could start to prove some of the theorems in Russell and Whitehead's Principia. Other people, like Turing and Wiener, were interested in playing chess, which was the kind of thing that people with a technical degree still found difficult to do. The concentration was really on those intellectual pursuits. Herb Simon thought that they would be the key to understanding thinking.
What they missed was how important our embodiment and our perception of the world are as the basis for our thinking.
But we still cannot do basic object recognition. We can't have a system look at a table and identify a cassette recorder or a pair of eyeglasses, which is stuff that a 3-year-old can do. In the early days that stuff was viewed as being so easy, and because everyone could do it no one thought that it could be the key. Over time there's been a realization that vision, sound processing, and early language are maybe the keys to how our brain is organized, and that everything built on top of that is what makes us human and gives us our intellect. There's a whole other approach to getting to intellectual robots, if you like - one based on perception and language - which was not there in the early days. I used to carry this paper around from 1967: MIT Artificial Intelligence Memo #100. It was written by Seymour Papert. He assigned Gerry Sussman, who was an undergraduate at the time, a summer project of solving vision. They thought it must be easy and that an undergraduate should be able to knock it off in three months. It didn't quite turn out that way.