Optimism exists on a continuum between confidence and hope. Let me take these in order.
I am confident that the acceleration and expanding purview of information technology will solve within twenty years the problems that now preoccupy us.
Consider energy. We are awash in energy (10,000 times more than required to meet all our needs falls on Earth) but we are not very good at capturing it. That will change with the full nanotechnology-based assembly of macro objects at the nano scale, controlled by massively parallel information processes, which will be feasible within twenty years. Even though our energy needs are projected to triple within that time, we’ll capture that .0003 of the sunlight needed to meet our energy needs with no use of fossil fuels, using extremely inexpensive, highly efficient, lightweight, nano-engineered solar panels, and we’ll store the energy in highly distributed (and therefore safe) nanotechnology-based fuel cells. Solar power is now providing 1 part in 1,000 of our needs, but that percentage is doubling every two years, which means multiplying by 1,000 in twenty years.
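The arithmetic behind these figures is easy to check. Here is a minimal sketch in Python; the numbers are simply the ones quoted above, not new data.

```python
# Rough check of the solar-energy arithmetic quoted above.
current_share = 1 / 1000            # solar supplies about 1 part in 1,000 of demand today
doubling_period_years = 2
horizon_years = 20

doublings = horizon_years / doubling_period_years    # 10 doublings
growth_factor = 2 ** doublings                       # 2**10 = 1024, i.e. roughly 1,000x
future_share = current_share * growth_factor         # about 1.0, i.e. all of demand

# Sunlight reaching Earth is said to carry about 10,000x current demand,
# so even a tripled demand needs only about 3/10,000 of that sunlight.
fraction_of_sunlight_needed = 3 / 10_000

print(growth_factor, future_share, fraction_of_sunlight_needed)   # 1024.0 1.024 0.0003
```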
Almost all the discussions I’ve seen about energy and its consequences (such as global warming) fail to consider the ability of future nanotechnology-based solutions to solve this problem. This development will be motivated not just by concern for the environment but also by the $2 trillion we spend annually on energy. This is already a major area of venture funding.
Consider health. As of just recently, we have the tools to reprogram biology. This is also at an early stage but is progressing through the same exponential growth of information technology, which we see in every aspect of biological progress. The amount of genetic data we have sequenced has doubled every year, and the price per base pair has come down commensurately. The first genome cost a billion dollars. The National Institutes of Health is now starting a project to collect a million genomes at $1,000 apiece. We can turn genes off with RNA interference, add new genes (to adults) with new reliable forms of gene therapy, and turn on and off proteins and enzymes at critical stages of disease progression. We are gaining the means to model, simulate, and reprogram disease and aging processes as information processes. In ten years, these technologies will be 1,000 times more powerful than they are today, and it will be a very different world, in terms of our ability to turn off disease and aging.
Consider prosperity. The 50-percent deflation rate inherent in information technology and its growing purview is causing the decline of poverty. The poverty rate in Asia, according to the World Bank, declined by 50 percent over the past ten years due to information technology and will decline at current rates by 90 percent in the next ten years. All areas of the world are affected, including Africa, which is now undergoing a rapid invasion of the Internet. Even sub-Saharan Africa has had an average annual 5 percent economic growth rate in the last few years.
OK, so what am I optimistic (but not necessarily confident) about?
All of these technologies have existential downsides. We are already living with enough thermonuclear weapons to destroy all mammalian life on this planet, weapons that are still on a hair trigger. Remember these? They’re still there, and they represent an existential threat.
We have a new existential threat, which is the ability of a destructively minded group or individual to reprogram a biological virus to be more deadly, more communicable, or (most daunting of all) more stealthy (that is, having a longer incubation period, so that the early spread is undetected). The good news is that we have the tools to set up a rapid-response system like the one we have for software viruses. It took us five years to sequence HIV, but we can now sequence a virus in a day or two. RNA interference can turn viruses off, since viruses are genes, albeit pathological ones. Sun Microsystems co-founder Bill Joy and I have proposed setting up a rapid-response system that could detect a new virus, sequence it, design an RNAi (RNA-mediated interference) medication, or a safe antigen-based vaccine, and gear up production in a matter of days. The methods exist, but as yet a working rapid-response system does not. We need to put one in place quickly.
So I’m optimistic that we will make it through without suffering an existential catastrophe. It would be helpful if we gave the two aforementioned existential threats a higher priority.
And, finally, what am I hopeful, but not necessarily optimistic, about?
Who would have thought right after September 11, 2001, that we would go five years without another destructive incident at that or greater scale? That seemed unlikely at the time, but despite all the subsequent turmoil in the world, it has happened. I am hopeful that this respite will continue.
© Ray Kurzweil 2007
This visionary speech, which Richard Feynman gave on December 29, 1959, at the annual meeting of the American Physical Society at the California Institute of Technology, helped give birth to the now-exploding field of nanotechnology.
I imagine experimental physicists must often look with envy at men like Kamerlingh Onnes, who discovered a field like low temperature, which seems to be bottomless and in which one can go down and down.
Such a man is then a leader and has some temporary monopoly in a scientific adventure. Percy Bridgman, in designing a way to obtain higher pressures, opened up another new field and was able to move into it and to lead us all along. The development of ever higher vacuum was a continuing development of the same kind.
I would like to describe a field, in which little has been done, but in which an enormous amount can be done in principle. This field is not quite the same as the others in that it will not tell us much of fundamental physics (in the sense of, “What are the strange particles?”) but it is more like solid-state physics in the sense that it might tell us much of great interest about the strange phenomena that occur in complex situations. Furthermore, a point that is most important is that it would have an enormous number of technical applications.
What I want to talk about is the problem of manipulating and controlling things on a small scale.
As soon as I mention this, people tell me about miniaturization, and how far it has progressed today. They tell me about electric motors that are the size of the nail on your small finger. And there is a device on the market, they tell me, by which you can write the Lord’s Prayer on the head of a pin. But that’s nothing; that’s the most primitive, halting step in the direction I intend to discuss. It is a staggeringly small world that is below. In the year 2000, when they look back at this age, they will wonder why it was not until the year 1960 that anybody began seriously to move in this direction.
Why cannot we write the entire 24 volumes of the Encyclopedia Brittanica on the head of a pin?
Let’s see what would be involved. The head of a pin is a sixteenth of an inch across. If you magnify it by 25,000 diameters, the area of the head of the pin is then equal to the area of all the pages of the Encyclopaedia Brittanica. Therefore, all it is necessary to do is to reduce in size all the writing in the Encyclopaedia by 25,000 times. Is that possible? The resolving power of the eye is about 1/120 of an inch–that is roughly the diameter of one of the little dots on the fine half-tone reproductions in the Encyclopaedia. This, when you demagnify it by 25,000 times, is still 80 angstroms in diameter–32 atoms across, in an ordinary metal. In other words, one of those dots still would contain in its area 1,000 atoms. So, each dot can easily be adjusted in size as required by the photoengraving, and there is no question that there is enough room on the head of a pin to put all of the Encyclopaedia Brittanica.
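Feynman's figures here can be reproduced with a few lines of arithmetic; the sketch below assumes an atomic spacing of roughly 2.5 angstroms for an ordinary metal, which is the only number not stated in the text.

```python
# Checking the pin-head arithmetic: a halftone dot, demagnified 25,000 times.
ANGSTROMS_PER_INCH = 2.54e8        # 1 inch = 2.54 cm = 2.54e8 angstroms
ATOM_SPACING_ANGSTROMS = 2.5       # assumed spacing of atoms in an ordinary metal

demagnification = 25_000
eye_resolution_inch = 1 / 120      # resolving power of the eye, roughly one halftone dot

dot_diameter_angstroms = eye_resolution_inch / demagnification * ANGSTROMS_PER_INCH
dot_diameter_atoms = dot_diameter_angstroms / ATOM_SPACING_ANGSTROMS
atoms_in_dot_area = 3.14159 / 4 * dot_diameter_atoms ** 2

print(round(dot_diameter_angstroms))   # ~85 angstroms (the text rounds to 80)
print(round(dot_diameter_atoms))       # ~34 atoms across (roughly the 32 quoted)
print(round(atoms_in_dot_area))        # ~900 atoms in its area (roughly 1,000)
```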
Furthermore, it can be read if it is so written. Let’s imagine that it is written in raised letters of metal; that is, where the black is in the Encyclopedia, we have raised letters of metal that are actually 1/25,000 of their ordinary size. How would we read it?
If we had something written in such a way, we could read it using techniques in common use today. (They will undoubtedly find a better way when we do actually have it written, but to make my point conservatively I shall just take techniques we know today.) We would press the metal into a plastic material and make a mold of it, then peel the plastic off very carefully, evaporate silica into the plastic to get a very thin film, then shadow it by evaporating gold at an angle against the silica so that all the little letters will appear clearly, dissolve the plastic away from the silica film, and then look through it with an electron microscope!
There is no question that if the thing were reduced by 25,000 times in the form of raised letters on the pin, it would be easy for us to read it today. Furthermore, there is no question that we would find it easy to make copies of the master; we would just need to press the same metal plate again into plastic and we would have another copy.
The next question is: How do we write it? We have no standard technique to do this now. But let me argue that it is not as difficult as it first appears to be. We can reverse the lenses of the electron microscope in order to demagnify as well as magnify. A source of ions, sent through the microscope lenses in reverse, could be focused to a very small spot. We could write with that spot like we write in a TV cathode ray oscilloscope, by going across in lines, and having an adjustment which determines the amount of material which is going to be deposited as we scan in lines.
This method might be very slow because of space charge limitations. There will be more rapid methods. We could first make, perhaps by some photo process, a screen which has holes in it in the form of the letters. Then we would strike an arc behind the holes and draw metallic ions through the holes; then we could again use our system of lenses and make a small image in the form of ions, which would deposit the metal on the pin.
A simpler way might be this (though I am not sure it would work): We take light and, through an optical microscope running backward, we focus it onto a very small photoelectric screen. Then electrons come away from the screen where the light is shining. These electrons are focused down in size by the electron microscope lenses to impinge directly upon the surface of the metal. Will such a beam etch away the metal if it is run long enough? I don’t know. If it doesn’t work for a metal surface, it must be possible to find some surface with which to coat the original pin so that, where the electrons bombard, a change is made which we could recognize later.
There is no intensity problem in these devices–not what you are used to in magnification, where you have to take a few electrons and spread them over a bigger and bigger screen; it is just the opposite. The light which we get from a page is concentrated onto a very small area so it is very intense. The few electrons which come from the photoelectric screen are demagnified down to a very tiny area so that, again, they are very intense. I don’t know why this hasn’t been done yet!
That’s the Encyclopaedia Brittanica on the head of a pin, but let’s consider all the books in the world. The Library of Congress has approximately 9 million volumes; the British Museum Library has 5 million volumes; there are also 5 million volumes in the National Library in France. Undoubtedly there are duplications, so let us say that there are some 24 million volumes of interest in the world.
What would happen if I print all this down at the scale we have been discussing? How much space would it take? It would take, of course, the area of about a million pinheads because, instead of there being just the 24 volumes of the Encyclopaedia, there are 24 million volumes. The million pinheads can be put in a square of a thousand pins on a side, or an area of about 3 square yards. That is to say, the silica replica with the paper-thin backing of plastic, with which we have made the copies, with all this information, is on an area of approximately the size of 35 pages of the Encyclopaedia. That is about half as many pages as there are in this magazine. All of the information which all of mankind has ever recorded in books can be carried around in a pamphlet in your hand–and not written in code, but a simple reproduction of the original pictures, engravings, and everything else on a small scale without loss of resolution.
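Again, the numbers check out; a short sketch of the arithmetic, taking the pin head to be the 1/16 inch mentioned earlier:

```python
# How much area do 24 million volumes occupy at pin-head scale?
volumes_in_world = 24_000_000
volumes_per_pinhead = 24                 # one pin head holds the 24-volume Encyclopaedia
pinheads_needed = volumes_in_world // volumes_per_pinhead   # one million pin heads

pins_per_side = int(pinheads_needed ** 0.5)    # a square a thousand pins on a side
pin_diameter_inch = 1 / 16
side_inch = pins_per_side * pin_diameter_inch  # 62.5 inches
area_square_yards = (side_inch / 36) ** 2      # a yard is 36 inches

print(pinheads_needed, pins_per_side, round(area_square_yards, 1))   # 1000000 1000 3.0
```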
What would our librarian at Caltech say, as she runs all over from one building to another, if I tell her that, ten years from now, all of the information that she is struggling to keep track of–120,000 volumes, stacked from the floor to the ceiling, drawers full of cards, storage rooms full of the older books–can be kept on just one library card! When the University of Brazil, for example, finds that their library is burned, we can send them a copy of every book in our library by striking off a copy from the master plate in a few hours and mailing it in an envelope no bigger or heavier than any other ordinary air mail letter.
Now, the name of this talk is “There is Plenty of Room at the Bottom”–not just “There is Room at the Bottom.” What I have demonstrated is that there is room–that you can decrease the size of things in a practical way. I now want to show that there is plenty of room. I will not now discuss how we are going to do it, but only what is possible in principle–in other words, what is possible according to the laws of physics. I am not inventing anti-gravity, which is possible someday only if the laws are not what we think. I am telling you what could be done if the laws are what we think; we are not doing it simply because we haven’t yet gotten around to it.
Suppose that, instead of trying to reproduce the pictures and all the information directly in its present form, we write only the information content in a code of dots and dashes, or something like that, to represent the various letters. Each letter represents six or seven “bits” of information; that is, you need only about six or seven dots or dashes for each letter. Now, instead of writing everything, as I did before, on the surface of the head of a pin, I am going to use the interior of the material as well.
Let us represent a dot by a small spot of one metal, the next dash, by an adjacent spot of another metal, and so on. Suppose, to be conservative, that a bit of information is going to require a little cube of atoms 5 times 5 times 5–that is 125 atoms. Perhaps we need a hundred and some odd atoms to make sure that the information is not lost through diffusion, or through some other process.
I have estimated how many letters there are in the Encyclopaedia, and I have assumed that each of my 24 million books is as big as an Encyclopaedia volume, and have calculated, then, how many bits of information there are (10^15). For each bit I allow 100 atoms. And it turns out that all of the information that man has carefully accumulated in all the books in the world can be written in this form in a cube of material one two-hundredth of an inch wide–which is the barest piece of dust that can be made out by the human eye. So there is plenty of room at the bottom! Don’t tell me about microfilm!
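The size of that cube follows from the stated 10^15 bits and 100 atoms per bit; a quick check, again assuming about 2.5 angstroms between atoms:

```python
# All the world's books at 100 atoms per bit: how big a cube of material?
bits_total = 1e15                      # the estimate quoted above for 24 million volumes
atoms_per_bit = 100
atoms_total = bits_total * atoms_per_bit       # 1e17 atoms

atoms_per_edge = atoms_total ** (1 / 3)        # about 4.6e5 atoms along each edge
ATOM_SPACING_ANGSTROMS = 2.5                   # assumed atomic spacing
edge_angstroms = atoms_per_edge * ATOM_SPACING_ANGSTROMS
edge_inches = edge_angstroms / 2.54e8          # 1 inch = 2.54e8 angstroms

print(round(1 / edge_inches))   # ~220, i.e. a cube roughly 1/200 of an inch on a side
```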
This fact–that enormous amounts of information can be carried in an exceedingly small space–is, of course, well known to the biologists, and resolves the mystery which existed before we understood all this clearly, of how it could be that, in the tiniest cell, all of the information for the organization of a complex creature such as ourselves can be stored. All this information–whether we have brown eyes, or whether we think at all, or that in the embryo the jawbone should first develop with a little hole in the side so that later a nerve can grow through it–all this information is contained in a very tiny fraction of the cell in the form of long-chain DNA molecules in which approximately 50 atoms are used for one bit of information about the cell.
If I have written in a code, with 5 times 5 times 5 atoms to a bit, the question is: How could I read it today? The electron microscope is not quite good enough; with the greatest care and effort, it can only resolve about 10 angstroms. I would like to try and impress upon you, while I am talking about all of these things on a small scale, the importance of improving the electron microscope by a hundred times. It is not impossible; it is not against the laws of diffraction of the electron. The wave length of the electron in such a microscope is only 1/20 of an angstrom. So it should be possible to see the individual atoms. What good would it be to see individual atoms distinctly?
We have friends in other fields–in biology, for instance. We physicists often look at them and say, “You know the reason you fellows are making so little progress?” (Actually I don’t know any field where they are making more rapid progress than they are in biology today.) “You should use more mathematics, like we do.” They could answer us–but they’re polite, so I’ll answer for them: “What you should do in order for us to make more rapid progress is to make the electron microscope 100 times better.”
What are the most central and fundamental problems of biology today? They are questions like: What is the sequence of bases in the DNA? What happens when you have a mutation? How is the base order in the DNA connected to the order of amino acids in the protein? What is the structure of the RNA; is it single-chain or double-chain, and how is it related in its order of bases to the DNA? What is the organization of the microsomes? How are proteins synthesized? Where does the RNA go? How does it sit? Where do the proteins sit? Where do the amino acids go in? In photosynthesis, where is the chlorophyll; how is it arranged; where are the carotenoids involved in this thing? What is the system of the conversion of light into chemical energy?
It is very easy to answer many of these fundamental biological questions; you just look at the thing! You will see the order of bases in the chain; you will see the structure of the microsome. Unfortunately, the present microscope sees at a scale which is just a bit too crude. Make the microscope one hundred times more powerful, and many problems of biology would be made very much easier. I exaggerate, of course, but the biologists would surely be very thankful to you–and they would prefer that to the criticism that they should use more mathematics.
The theory of chemical processes today is based on theoretical physics. In this sense, physics supplies the foundation of chemistry. But chemistry also has analysis. If you have a strange substance and you want to know what it is, you go through a long and complicated process of chemical analysis. You can analyze almost anything today, so I am a little late with my idea. But if the physicists wanted to, they could also dig under the chemists in the problem of chemical analysis. It would be very easy to make an analysis of any complicated chemical substance; all one would have to do would be to look at it and see where the atoms are. The only trouble is that the electron microscope is one hundred times too poor. (Later, I would like to ask the question: Can the physicists do something about the third problem of chemistry–namely, synthesis? Is there a physical way to synthesize any chemical substance?)
The reason the electron microscope is so poor is that the f-value of the lenses is only 1 part to 1,000; you don’t have a big enough numerical aperture. And I know that there are theorems which prove that it is impossible, with axially symmetrical stationary field lenses, to produce an f-value any bigger than so and so; and therefore the resolving power at the present time is at its theoretical maximum. But in every theorem there are assumptions. Why must the field be symmetrical? I put this out as a challenge: Is there no way to make the electron microscope more powerful?
The biological example of writing information on a small scale has inspired me to think of something that should be possible. Biology is not simply writing information; it is doing something about it. A biological system can be exceedingly small. Many of the cells are very tiny, but they are very active; they manufacture various substances; they walk around; they wiggle; and they do all kinds of marvelous things–all on a very small scale. Also, they store information. Consider the possibility that we too can make a thing very small which does what we want–that we can manufacture an object that maneuvers at that level!
There may even be an economic point to this business of making things very small. Let me remind you of some of the problems of computing machines. In computers we have to store an enormous amount of information. The kind of writing that I was mentioning before, in which I had everything down as a distribution of metal, is permanent. Much more interesting to a computer is a way of writing, erasing, and writing something else. (This is usually because we don’t want to waste the material on which we have just written. Yet if we could write it in a very small space, it wouldn’t make any difference; it could just be thrown away after it was read. It doesn’t cost very much for the material).
I don’t know how to do this on a small scale in a practical way, but I do know that computing machines are very large; they fill rooms. Why can’t we make them very small, make them of little wires, little elements–and by little, I mean little. For instance, the wires should be 10 or 100 atoms in diameter, and the circuits should be a few thousand angstroms across. Everybody who has analyzed the logical theory of computers has come to the conclusion that the possibilities of computers are very interesting–if they could be made to be more complicated by several orders of magnitude. If they had millions of times as many elements, they could make judgments. They would have time to calculate what is the best way to make the calculation that they are about to make. They could select the method of analysis which, from their experience, is better than the one that we would give to them. And in many other ways, they would have new qualitative features.
If I look at your face I immediately recognize that I have seen it before. (Actually, my friends will say I have chosen an unfortunate example here for the subject of this illustration. At least I recognize that it is a man and not an apple.) Yet there is no machine which, with that speed, can take a picture of a face and say even that it is a man; and much less that it is the same man that you showed it before–unless it is exactly the same picture. If the face is changed; if I am closer to the face; if I am further from the face; if the light changes–I recognize it anyway. Now, this little computer I carry in my head is easily able to do that. The computers that we build are not able to do that. The number of elements in this bone box of mine is enormously greater than the number of elements in our “wonderful” computers. But our mechanical computers are too big; the elements in this box are microscopic. I want to make some that are submicroscopic.
If we wanted to make a computer that had all these marvelous extra qualitative abilities, we would have to make it, perhaps, the size of the Pentagon. This has several disadvantages. First, it requires too much material; there may not be enough germanium in the world for all the transistors which would have to be put into this enormous thing. There is also the problem of heat generation and power consumption; TVA would be needed to run the computer. But an even more practical difficulty is that the computer would be limited to a certain speed. Because of its large size, there is finite time required to get the information from one place to another. The information cannot go any faster than the speed of light–so, ultimately, when our computers get faster and faster and more and more elaborate, we will have to make them smaller and smaller.
But there is plenty of room to make them smaller. There is nothing that I can see in the physical laws that says the computer elements cannot be made enormously smaller than they are now. In fact, there may be certain advantages.
How can we make such a device? What kind of manufacturing processes would we use? One possibility we might consider, since we have talked about writing by putting atoms down in a certain arrangement, would be to evaporate the material, then evaporate the insulator next to it. Then, for the next layer, evaporate another position of a wire, another insulator, and so on. So, you simply evaporate until you have a block of stuff which has the elements–coils and condensers, transistors and so on–of exceedingly fine dimensions.
But I would like to discuss, just for amusement, that there are other possibilities. Why can’t we manufacture these small computers somewhat like we manufacture the big ones? Why can’t we drill holes, cut things, solder things, stamp things out, mold different shapes all at an infinitesimal level? What are the limitations as to how small a thing has to be before you can no longer mold it? How many times when you are working on something frustratingly tiny like your wife’s wrist watch, have you said to yourself, “If I could only train an ant to do this!” What I would like to suggest is the possibility of training an ant to train a mite to do this. What are the possibilities of small but movable machines? They may or may not be useful, but they surely would be fun to make.
Consider any machine–for example, an automobile–and ask about the problems of making an infinitesimal machine like it. Suppose, in the particular design of the automobile, we need a certain precision of the parts; we need an accuracy, let’s suppose, of 4/10,000 of an inch. If things are more inaccurate than that in the shape of the cylinder and so on, it isn’t going to work very well. If I make the thing too small, I have to worry about the size of the atoms; I can’t make a circle of “balls” so to speak, if the circle is too small. So, if I make the error, corresponding to 4/10,000 of an inch, correspond to an error of 10 atoms, it turns out that I can reduce the dimensions of an automobile 4,000 times, approximately–so that it is 1 mm. across. Obviously, if you redesign the car so that it would work with a much larger tolerance, which is not at all impossible, then you could make a much smaller device.
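The factor of 4,000 drops out of the tolerance argument directly; a small check, once more taking an atom to be roughly 2.5 angstroms across:

```python
# If a 4/10,000-inch tolerance must correspond to an error of only 10 atoms,
# how far can the automobile be shrunk?
ANGSTROMS_PER_INCH = 2.54e8
ATOM_SPACING_ANGSTROMS = 2.5            # assumed atomic spacing

tolerance_inch = 4 / 10_000
ten_atoms_inch = 10 * ATOM_SPACING_ANGSTROMS / ANGSTROMS_PER_INCH

reduction_factor = tolerance_inch / ten_atoms_inch   # about 4,000
tiny_car_mm = 4000 / reduction_factor                # a ~4 m car shrinks to about 1 mm

print(round(reduction_factor), round(tiny_car_mm, 2))   # 4064 0.98
```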
It is interesting to consider what the problems are in such small machines. Firstly, with parts stressed to the same degree, the forces go as the area you are reducing, so that things like weight and inertia are of relatively no importance. The strength of material, in other words, is very much greater in proportion. The stresses and expansion of the flywheel from centrifugal force, for example, would be the same proportion only if the rotational speed is increased in the same proportion as we decrease the size. On the other hand, the metals that we use have a grain structure, and this would be very annoying at small scale because the material is not homogeneous. Plastics and glass and things of this amorphous nature are very much more homogeneous, and so we would have to make our machines out of such materials.
There are problems associated with the electrical part of the system–with the copper wires and the magnetic parts. The magnetic properties on a very small scale are not the same as on a large scale; there is the “domain” problem involved. A big magnet made of millions of domains can only be made on a small scale with one domain. The electrical equipment won’t simply be scaled down; it has to be redesigned. But I can see no reason why it can’t be redesigned to work again.
Lubrication involves some interesting points. The effective viscosity of oil would be higher and higher in proportion as we went down (and if we increase the speed as much as we can). If we don’t increase the speed so much, and change from oil to kerosene or some other fluid, the problem is not so bad. But actually we may not have to lubricate at all! We have a lot of extra force. Let the bearings run dry; they won’t run hot because the heat escapes away from such a small device very, very rapidly.
This rapid heat loss would prevent the gasoline from exploding, so an internal combustion engine is impossible. Other chemical reactions, liberating energy when cold, can be used. Probably an external supply of electrical power would be most convenient for such small machines.
What would be the utility of such machines? Who knows? Of course, a small automobile would only be useful for the mites to drive around in, and I suppose our Christian interests don’t go that far. However, we did note the possibility of the manufacture of small elements for computers in completely automatic factories, containing lathes and other machine tools at the very small level. The small lathe would not have to be exactly like our big lathe. I leave to your imagination the improvement of the design to take full advantage of the properties of things on a small scale, and in such a way that the fully automatic aspect would be easiest to manage.
A friend of mine (Albert R. Hibbs) suggests a very interesting possibility for relatively small machines. He says that, although it is a very wild idea, it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and “looks” around. (Of course the information has to be fed out.) It finds out which valve is the faulty one and takes a little knife and slices it out. Other small machines might be permanently incorporated in the body to assist some inadequately-functioning organ.
Now comes the interesting question: How do we make such a tiny mechanism? I leave that to you. However, let me suggest one weird possibility. You know, in the atomic energy plants they have materials and machines that they can’t handle directly because they have become radioactive. To unscrew nuts and put on bolts and so on, they have a set of master and slave hands, so that by operating a set of levers here, you control the “hands” there, and can turn them this way and that so you can handle things quite nicely.
Most of these devices are actually made rather simply, in that there is a particular cable, like a marionette string, that goes directly from the controls to the “hands.” But, of course, things also have been made using servo motors, so that the connection between the one thing and the other is electrical rather than mechanical. When you turn the levers, they turn a servo motor, and it changes the electrical currents in the wires, which repositions a motor at the other end.
Now, I want to build much the same device–a master-slave system which operates electrically. But I want the slaves to be made especially carefully by modern large-scale machinists so that they are one-fourth the scale of the “hands” that you ordinarily maneuver. So you have a scheme by which you can do things at one-quarter scale anyway–the little servo motors with little hands play with little nuts and bolts; they drill little holes; they are four times smaller. Aha! So I manufacture a quarter-size lathe; I manufacture quarter-size tools; and I make, at the one-quarter scale, still another set of hands again relatively one-quarter size! This is one-sixteenth size, from my point of view. And after I finish doing this I wire directly from my large-scale system, through transformers perhaps, to the one-sixteenth-size servo motors. Thus I can now manipulate the one-sixteenth size hands.
Well, you get the principle from there on. It is rather a difficult program, but it is a possibility. You might say that one can go much farther in one step than from one to four. Of course, this has all to be designed very carefully and it is not necessary simply to make it like hands. If you thought of it very carefully, you could probably arrive at a much better system for doing such things.
If you work through a pantograph, even today, you can get much more than a factor of four in even one step. But you can’t work directly through a pantograph which makes a smaller pantograph which then makes a smaller pantograph–because of the looseness of the holes and the irregularities of construction. The end of the pantograph wiggles with a relatively greater irregularity than the irregularity with which you move your hands. In going down this scale, I would find the end of the pantograph on the end of the pantograph on the end of the pantograph shaking so badly that it wasn’t doing anything sensible at all.
At each stage, it is necessary to improve the precision of the apparatus. If, for instance, having made a small lathe with a pantograph, we find its lead screw irregular–more irregular than the large-scale one–we could lap the lead screw against breakable nuts that you can reverse in the usual way back and forth until this lead screw is, at its scale, as accurate as our original lead screws, at our scale.
We can make flats by rubbing unflat surfaces in triplicates together–in three pairs–and the flats then become flatter than the thing you started with. Thus, it is not impossible to improve precision on a small scale by the correct operations. So, when we build this stuff, it is necessary at each step to improve the accuracy of the equipment by working for awhile down there, making accurate lead screws, Johansen blocks, and all the other materials which we use in accurate machine work at the higher level. We have to stop at each level and manufacture all the stuff to go to the next level–a very long and very difficult program. Perhaps you can figure a better way than that to get down to small scale more rapidly.
Yet, after all this, you have just got one little baby lathe four thousand times smaller than usual. But we were thinking of making an enormous computer, which we were going to build by drilling holes on this lathe to make little washers for the computer. How many washers can you manufacture on this one lathe?
When I make my first set of slave “hands” at one-fourth scale, I am going to make ten sets. I make ten sets of “hands,” and I wire them to my original levers so they each do exactly the same thing at the same time in parallel. Now, when I am making my new devices one-quarter again as small, I let each one manufacture ten copies, so that I would have a hundred “hands” at the 1/16th size.
Where am I going to put the million lathes that I am going to have? Why, there is nothing to it; the volume is much less than that of even one full-scale lathe. For instance, if I made a billion little lathes, each 1/4000 of the scale of a regular lathe, there are plenty of materials and space available because in the billion little ones there is less than 2 percent of the materials in one big lathe.
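Both the two-percent figure and the growth of the cascade are simple to verify; a quick sketch, using the ten-copies-per-stage rule described in the preceding paragraphs:

```python
# Material for a billion lathes, each 1/4000 of full size: volume scales as the cube.
n_lathes = 1_000_000_000
linear_scale = 1 / 4000
volume_fraction = n_lathes * linear_scale ** 3
print(volume_fraction)                    # 0.015625, i.e. less than 2 percent

# Growth of the master-slave "hands": ten copies per stage, each a quarter the size.
hands, scale_factor = 1, 1
for stage in range(6):                    # 4**6 = 4096, roughly the 1/4000 scale above
    hands *= 10
    scale_factor *= 4
print(hands, scale_factor)                # 1000000 4096
```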
It doesn’t cost anything for materials, you see. So I want to build a billion tiny factories, models of each other, which are manufacturing simultaneously, drilling holes, stamping parts, and so on.
As we go down in size, there are a number of interesting problems that arise. All things do not simply scale down in proportion. There is the problem that materials stick together by the molecular (Van der Waals) attractions. It would be like this: After you have made a part and you unscrew the nut from a bolt, it isn’t going to fall down because the gravity isn’t appreciable; it would even be hard to get it off the bolt. It would be like those old movies of a man with his hands full of molasses, trying to get rid of a glass of water. There will be several problems of this nature that we will have to be ready to design for.
But I am not afraid to consider the final question as to whether, ultimately–in the great future–we can arrange the atoms the way we want; the very atoms, all the way down! What would happen if we could arrange the atoms one by one the way we want them (within reason, of course; you can’t put them so that they are chemically unstable, for example).
Up to now, we have been content to dig in the ground to find minerals. We heat them and we do things on a large scale with them, and we hope to get a pure substance with just so much impurity, and so on. But we must always accept some atomic arrangement that nature gives us. We haven’t got anything, say, with a “checkerboard” arrangement, with the impurity atoms exactly arranged 1,000 angstroms apart, or in some other particular pattern.
What could we do with layered structures with just the right layers? What would the properties of materials be if we could really arrange the atoms the way we want them? They would be very interesting to investigate theoretically. I can’t see exactly what would happen, but I can hardly doubt that when we have some control of the arrangement of things on a small scale we will get an enormously greater range of possible properties that substances can have, and of different things that we can do.
Consider, for example, a piece of material in which we make little coils and condensers (or their solid state analogs) 1,000 or 10,000 angstroms in a circuit, one right next to the other, over a large area, with little antennas sticking out at the other end–a whole series of circuits. Is it possible, for example, to emit light from a whole set of antennas, like we emit radio waves from an organized set of antennas to beam the radio programs to Europe? The same thing would be to beam the light out in a definite direction with very high intensity. (Perhaps such a beam is not very useful technically or economically.)
I have thought about some of the problems of building electric circuits on a small scale, and the problem of resistance is serious. If you build a corresponding circuit on a small scale, its natural frequency goes up, since the wave length goes down as the scale; but the skin depth only decreases with the square root of the scale ratio, and so resistive problems are of increasing difficulty. Possibly we can beat resistance through the use of superconductivity if the frequency is not too high, or by other tricks.
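One way to make this scaling argument explicit is the following sketch, which assumes that every linear dimension of the circuit shrinks by the same factor s < 1 and that the operating frequency rises accordingly; here ℓ is a wire length, p its cross-sectional perimeter, δ the skin depth, and L an inductance.

```latex
\omega \propto \frac{1}{s}, \qquad
\delta \propto \frac{1}{\sqrt{\omega}} \propto \sqrt{s}, \qquad
R \propto \frac{\ell}{p\,\delta} \propto \frac{s}{s\,\sqrt{s}} = \frac{1}{\sqrt{s}}, \qquad
\omega L \propto \frac{1}{s}\cdot s = 1
```

Under those assumptions the reactance stays roughly constant while the resistance grows, so the quality factor Q = ωL/R falls off as the square root of s; that is the increasing difficulty referred to above, and superconductivity attacks the R term directly.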
When we get to the very, very small world–say circuits of seven atoms–we have a lot of new things that would happen that represent completely new opportunities for design. Atoms on a small scale behave like nothing on a large scale, for they satisfy the laws of quantum mechanics. So, as we go down and fiddle around with the atoms down there, we are working with different laws, and we can expect to do different things. We can manufacture in different ways. We can use, not just circuits, but some system involving the quantized energy levels, or the interactions of quantized spins, etc.
Another thing we will notice is that, if we go down far enough, all of our devices can be mass produced so that they are absolutely perfect copies of one another. We cannot build two large machines so that the dimensions are exactly the same. But if your machine is only 100 atoms high, you only have to get it correct to one-half of one percent to make sure the other machine is exactly the same size–namely, 100 atoms high!
At the atomic level, we have new kinds of forces and new kinds of possibilities, new kinds of effects. The problems of manufacture and reproduction of materials will be quite different. I am, as I said, inspired by the biological phenomena in which chemical forces are used in repetitious fashion to produce all kinds of weird effects (one of which is the author).
The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It is not an attempt to violate any laws; it is something, in principle, that can be done; but in practice, it has not been done because we are too big.
Ultimately, we can do chemical synthesis. A chemist comes to us and says, “Look, I want a molecule that has the atoms arranged thus and so; make me that molecule.” The chemist does a mysterious thing when he wants to make a molecule. He sees that it has got that ring, so he mixes this and that, and he shakes it, and he fiddles around. And, at the end of a difficult process, he usually does succeed in synthesizing what he wants. By the time I get my devices working, so that we can do it by physics, he will have figured out how to synthesize absolutely anything, so that this will really be useless.
But it is interesting that it would be, in principle, possible (I think) for a physicist to synthesize any chemical substance that the chemist writes down. Give the orders and the physicist synthesizes it. How? Put the atoms down where the chemist says, and so you make the substance. The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed–a development which I think cannot be avoided.
Now, you might say, “Who should do this and why should they do it?” Well, I pointed out a few of the economic applications, but I know that the reason that you would do it might be just for fun. But have some fun! Let’s have a competition between laboratories. Let one laboratory make a tiny motor which it sends to another lab which sends it back with a thing that fits inside the shaft of the first motor.
Just for the fun of it, and in order to get kids interested in this field, I would propose that someone who has some contact with the high schools think of making some kind of high school competition. After all, we haven’t even started in this field, and even the kids can write smaller than has ever been written before. They could have competition in high schools. The Los Angeles high school could send a pin to the Venice high school on which it says, “How’s this?” They get the pin back, and in the dot of the “i” it says, “Not so hot.”
Perhaps this doesn’t excite you to do it, and only economics will do so. Then I want to do something; but I can’t do it at the present moment, because I haven’t prepared the ground. It is my intention to offer a prize of $1,000 to the first guy who can take the information on the page of a book and put it on an area 1/25,000 smaller in linear scale in such manner that it can be read by an electron microscope.
And I want to offer another prize–if I can figure out how to phrase it so that I don’t get into a mess of arguments about definitions–of another $1,000 to the first guy who makes an operating electric motor–a rotating electric motor which can be controlled from the outside and, not counting the lead-in wires, is only 1/64 inch cube.
I do not expect that such prizes will have to wait very long for claimants.
Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
Q. Isn’t there a solid definition of intelligence that doesn’t depend on relating it to human intelligence?
A. Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.
Q. Is intelligence a single thing so that one can ask a yes or no question “Is this machine intelligent or not?”?
A. No. Intelligence involves mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If doing a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on these tasks. Such programs should be considered “somewhat intelligent”.
Q. Isn’t AI about simulating human intelligence?
A. Sometimes but not always or even usually. On the one hand, we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.
Q. What about IQ? Do computer programs have IQs?
A. No. IQ is based on the rates at which intelligence develops in children. It is the ratio of the age at which a child normally makes a certain score to the child’s age. The scale is extended to adults in a suitable way. IQ correlates well with various measures of success or failure in life, but making computers that can score high on IQ tests would be weakly correlated with their usefulness. For example, the ability of a child to repeat back a long sequence of digits correlates well with other intellectual abilities, perhaps because it measures how much information the child can compute with at once. However, “digit span” is trivial for even extremely limited computers.
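In its classical ratio form (the standard formula, stated here for concreteness rather than taken from the text):

```latex
\mathrm{IQ} = 100 \times \frac{\text{mental age}}{\text{chronological age}}
```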
However, some of the problems on IQ tests are useful challenges for AI.
Q. What about other comparisons between human and computer intelligence?
A. Arthur R. Jensen [Jen98], a leading researcher in human intelligence, suggests “as a heuristic hypothesis” that all normal humans have the same intellectual mechanisms and that differences in intelligence are related to “quantitative biochemical and physiological conditions”. I see them as speed, short term memory, and the ability to form accurate and retrievable long term memories.
Whether or not Jensen is right about human intelligence, the situation in AI today is the reverse.
Computer programs have plenty of speed and memory but their abilities correspond to the intellectual mechanisms that program designers understand well enough to put in programs. Some abilities that children normally don’t develop till they are teenagers may be in, and some abilities possessed by two year olds are still out. The matter is further complicated by the fact that the cognitive sciences still have not succeeded in determining exactly what the human abilities are. Very likely the organization of the intellectual mechanisms for AI can usefully be different from that in people.
Whenever people do better than computers on some task or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently.
Q. When did AI research start?
A. After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many researchers on AI, and most of them were basing their work on programming computers.
Q. Does AI aim to put the human mind into the computer?
A. Some researchers say they have that objective, but maybe they are using the phrase metaphorically. The human mind has a lot of peculiarities, and I’m not sure anyone is serious about imitating all of them.
Q. What is the Turing test?
A. Alan Turing’s 1950 article Computing Machinery and Intelligence [Tur50] discussed conditions for considering a machine to be intelligent. He argued that if the machine could successfully pretend to be human to a knowledgeable observer then you certainly should consider it intelligent. This test would satisfy most people but not all philosophers. The observer could interact with the machine and a human by teletype (to avoid requiring that the machine imitate the appearance or voice of the person), and the human would try to persuade the observer that it was human and the machine would try to fool the observer.
The Turing test is a one-sided test. A machine that passes the test should certainly be considered intelligent, but a machine could still be considered intelligent without knowing enough about humans to imitate a human.
Daniel Dennett’s book Brainchildren [Den98] has an excellent discussion of the Turing test and the various partial Turing tests that have been implemented, i.e. with restrictions on the observer’s knowledge of AI and the subject matter of questioning. It turns out that some people are easily led into believing that a rather dumb program is intelligent.
Q. Does AI aim at human-level intelligence?
A. Yes. The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans. However, many people involved in particular research areas are much less ambitious.
Q. How far is AI from reaching human-level intelligence? When will it happen?
A. A few people think that human-level intelligence can be achieved by writing large numbers of programs of the kind people are now writing and assembling vast knowledge bases of facts in the languages now used for expressing knowledge.
However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human level intelligence will be achieved.
Q. Are computers the right kind of machine to be made intelligent?
A. Computers can be programmed to simulate any kind of machine.
Many researchers invented non-computer machines, hoping that they would be intelligent in different ways than the computer programs could be. However, they usually simulate their invented machines on a computer and come to doubt that the new machine is worth building. Because many billions of dollars have been spent on making computers faster and faster, another kind of machine would have to be very fast to perform better than a program on a computer simulating the machine.
Q. Are computers fast enough to be intelligent?
A. Some people think much faster computers are required as well as new ideas. My own opinion is that the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster.
Q. What about parallel machines?
A. Machines with many processors are much faster than single processors can be. Parallelism itself presents no advantages, and parallel machines are somewhat awkward to program. When extreme speed is required, it is necessary to face this awkwardness.
Q. What about making a “child machine” that could improve by reading and by learning from experience?
A. This idea has been proposed many times, starting in the 1940s. Eventually, it will be made to work. However, AI programs haven’t yet reached the level of being able to learn much of what a child learns from physical experience. Nor do present programs understand language well enough to learn much by reading.
Q. Might an AI system be able to bootstrap itself to higher and higher level intelligence by thinking about AI?
A. I think yes, but we aren’t yet at a level of AI at which this process can begin.
Q. What about chess?
A. Alexander Kronrod, a Russian AI researcher, said “Chess is the Drosophila of AI.” He was making an analogy with geneticists’ use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs.
Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races.
Q. What about Go?
A. The Chinese and Japanese game of Go is also a board game in which the players take turns moving. Go exposes the weakness of our present understanding of the intellectual mechanisms involved in human game playing. Go programs are very bad players, in spite of considerable effort (not as much as for chess). The problem seems to be that a position in Go has to be divided mentally into a collection of subpositions which are first analyzed separately followed by an analysis of their interaction. Humans use this in chess also, but chess programs consider the position as a whole. Chess programs compensate for the lack of this intellectual mechanism by doing thousands or, in the case of Deep Blue, many millions of times as much computation.
Sooner or later, AI research will overcome this scandalous weakness.
Q. Don’t some people say that AI is a bad idea?
A. The philosopher John Searle says that the idea of a non-biological machine being intelligent is incoherent. The philosopher Hubert Dreyfus says that AI is impossible. The computer scientist Joseph Weizenbaum says the idea is obscene, anti-human and immoral. Various people have said that since artificial intelligence hasn’t reached human level by now, it must be impossible. Still other people are disappointed that companies they invested in went bankrupt.
Q. Aren’t computability theory and computational complexity the keys to AI? [Note to the layman and beginners in computer science: These are quite technical branches of mathematical logic and computer science, and the answer to the question has to be somewhat technical.]
A. No. These theories are relevant but don’t address the fundamental problems of AI.
In the 1930s mathematical logicians, especially Kurt Gödel and Alan Turing, established that there did not exist algorithms that were guaranteed to solve all problems in certain important mathematical domains. Whether a sentence of first order logic is a theorem is one example, and whether a polynomial equation in several variables has integer solutions is another. Humans solve problems in these domains all the time, and this has been offered as an argument (usually with some decorations) that computers are intrinsically incapable of doing what people do. However, people can’t guarantee to solve arbitrary problems in these domains either.
In the 1960s computer scientists, especially Steve Cook and Richard Karp developed the theory of NP-complete problem domains. Problems in these domains are solvable, but seem to take time exponential in the size of the problem. Which sentences of propositional calculus are satisfiable is a basic example of an NP-complete problem domain. Humans often solve problems in NP-complete domains in times much shorter than is guaranteed by the general algorithms, but can’t solve them quickly in general.
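To make the satisfiability example concrete, here is a minimal brute-force checker (a sketch only, not any standard solver); it tries every assignment, so its running time grows as 2^n in the number of variables, which is exactly the exponential behavior at issue.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force satisfiability: each clause is a list of literals, where 3 means
    variable x3 and -3 means NOT x3. Tries all 2**n_vars assignments."""
    for assignment in product([False, True], repeat=n_vars):
        def holds(literal):
            value = assignment[abs(literal) - 1]
            return value if literal > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(satisfiable([[1, 2], [-1, 3], [-2, -3]], 3))   # True
# x1 and (not x1): no assignment works
print(satisfiable([[1], [-1]], 1))                   # False
```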
What is important for AI is to have algorithms as capable as people at solving problems. The identification of subdomains for which good algorithms exist is important, but a lot of AI problem solvers are not associated with readily identified subdomains.
The theory of the difficulty of general classes of problems is called computational complexity. So far this theory hasn’t interacted with AI as much as might have been hoped. Success in problem solving by humans and by AI programs seems to rely on properties of problems and problem solving methods that the neither the complexity researchers nor the AI community have been able to identify precisely.
Algorithmic complexity theory as developed by Solomonoff, Kolmogorov and Chaitin (independently of one another) is also relevant. It defines the complexity of a symbolic object as the length of the shortest program that will generate it. Proving that a candidate program is the shortest or close to the shortest is an unsolvable problem, but representing objects by short programs that generate them should often be illuminating even when you can’t prove that the program is the shortest.
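A toy illustration of the idea, with invented strings (real algorithmic complexity is defined relative to a universal machine and is not computable in general):

```python
# Two strings of the same length; one has an obviously short generating program.
regular = "ab" * 500                 # 1,000 characters
short_program = '"ab" * 500'         # a ten-character description suffices

import random
random.seed(0)
scrambled = "".join(random.choice("ab") for _ in range(1000))
# scrambled looks incompressible, yet it too came from a short (seeded) program;
# deciding whether a string truly has no short description is the unsolvable part.

print(len(regular), len(short_program))   # 1000 10
print(len(scrambled))                     # 1000
```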
Q. What are the branches of AI?
A. Here’s a list, but some branches are surely missing, because no-one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches.
Logical AI. What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96] lists some of the concepts involved in logical AI. [Sha97] is an important text.
Search. AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem-proving program. Discoveries are continually made about how to do this more efficiently in various domains.
Pattern recognition. When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event, are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.
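As a toy illustration of the simple kind of matching described above, the sketch below slides a small “face” template over a character-grid “scene”; the grids and names are invented purely for the example.

```python
def find_pattern(scene, pattern):
    """Slide `pattern` over `scene` and return top-left corners of exact matches.
    Both are lists of equal-length strings; a toy stand-in for matching eyes and a nose."""
    hits = []
    ph, pw = len(pattern), len(pattern[0])
    for r in range(len(scene) - ph + 1):
        for c in range(len(scene[0]) - pw + 1):
            if all(scene[r + i][c:c + pw] == pattern[i] for i in range(ph)):
                hits.append((r, c))
    return hits

face = ["o o",
        " ^ "]            # two "eyes" over a "nose"
scene = ["........",
         "..o o...",
         ".. ^ ...",
         "........"]
print(find_pattern(scene, face))   # [(1, 2)]
```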
Representation. Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.
Inference. From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises.
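A minimal sketch of the bird example follows; real non-monotonic logics are far more general, and the fact names here are invented purely for illustration.

```python
def can_fly(animal_facts: set) -> bool:
    """Default reasoning sketch: conclude 'flies' from 'bird' by default,
    but withdraw the conclusion if an exception (e.g. 'penguin') is known."""
    if "bird" in animal_facts and "penguin" not in animal_facts:
        return True    # default conclusion
    return False

print(can_fly({"bird"}))              # True: inferred by default
print(can_fly({"bird", "penguin"}))   # False: adding a premise removed a conclusion,
                                      # which ordinary (monotonic) logic never does
```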
Common sense knowledge and reasoning. This is the area in which AI is farthest from human level, in spite of the fact that it has been an active research area since the 1950s. There has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, but yet more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.
Learning from experience. Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
Planning. Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation, and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
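The following is a minimal sketch of that idea, assuming a toy domain invented for the example: each action is given as preconditions, facts it adds, and facts it deletes, and a breadth-first search over world states returns a sequence of actions achieving the goal.

```python
from collections import deque

# Each action: (name, preconditions, facts added, facts removed)
ACTIONS = [
    ("go-to-door",  {"at-desk"},                 {"at-door"},   {"at-desk"}),
    ("pick-up-key", {"at-door", "key-at-door"},  {"has-key"},   {"key-at-door"}),
    ("unlock-door", {"at-door", "has-key"},      {"door-open"}, set()),
]

def plan(initial: frozenset, goal: set):
    """Breadth-first search over world states; returns a list of action names."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan(frozenset({"at-desk", "key-at-door"}), {"door-open"}))
# ['go-to-door', 'pick-up-key', 'unlock-door']
```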
Epistemology. This is a study of the kinds of knowledge that are required for solving problems in the world.
Ontology. Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology begins in the 1990s.
Heuristics. A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful. [My opinion].
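To make the role of a heuristic function concrete, here is a hedged sketch of greedy best-first search on a small invented grid, where the heuristic estimates the remaining distance to the goal; a heuristic predicate would instead compare two candidate nodes using the same kind of estimate.

```python
import heapq

GRID = ["....#....",
        ".##.#.##.",
        ".#.....#.",
        ".#.###.#.",
        "....#...."]          # '#' marks walls

def heuristic(node, goal):
    """Estimate of remaining distance: Manhattan distance ignoring walls."""
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

def best_first(start, goal):
    """Greedy best-first search: always expand the node the heuristic likes most."""
    frontier = [(heuristic(start, goal), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                heapq.heappush(frontier,
                               (heuristic((nr, nc), goal), (nr, nc), path + [(nr, nc)]))
    return None

print(len(best_first((0, 0), (4, 8))))   # number of cells on the path found
```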
Genetic programming. Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations.
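The sketch below is a loose, illustrative miniature of that idea (small arithmetic expression trees rather than full Lisp programs, a small population, and far fewer than millions of generations); the target function, operators, and parameters are all invented for the example.

```python
import random
random.seed(1)

# Expressions are nested tuples such as ("+", "x", ("*", "x", "x")); target behavior: x*x + x.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_expr(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1])
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(e, x):
    if e == "x":
        return x
    if isinstance(e, int):
        return e
    op, a, b = e
    return OPS[op](evaluate(a, x), evaluate(b, x))

def size(e):
    return 1 if not isinstance(e, tuple) else 1 + size(e[1]) + size(e[2])

def fitness(e):
    """Lower is better: squared error against the target, plus a small penalty for bloat."""
    return sum((evaluate(e, x) - (x * x + x)) ** 2 for x in range(-5, 6)) + 0.1 * size(e)

def random_subtree(e):
    if not isinstance(e, tuple) or random.random() < 0.5:
        return e
    return random_subtree(random.choice(e[1:]))

def crossover(a, b):
    """Mate two expressions by grafting a random subtree of b into a random spot in a."""
    if not isinstance(a, tuple) or random.random() < 0.3:
        return random_subtree(b)
    op, left, right = a
    if random.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))

population = [random_expr() for _ in range(200)]
for generation in range(20):
    population.sort(key=fitness)
    parents = population[:40]                     # selection of the fittest
    children = []
    while len(children) < 160:
        child = crossover(random.choice(parents), random.choice(parents))
        if size(child) <= 50:                     # guard against runaway bloat
            children.append(child)
    population = parents + children

best = min(population, key=fitness)
print(best, fitness(best))
```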
Q. What are the applications of AI?
A. Here are some.
Game playing. You can buy machines that can play master-level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute force computation, looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.
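A hedged miniature of the brute-force idea: exhaustive minimax on tic-tac-toe, a game small enough to search completely, while counting how many positions are examined. Champion-level chess programs add deep pruning and hand-tuned evaluation on top of raw speed, none of which is shown here.

```python
positions_examined = 0

def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of `board` for X (+1 win, 0 draw, -1 loss), searching exhaustively."""
    global positions_examined
    positions_examined += 1
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    values = []
    for i, cell in enumerate(board):
        if cell == " ":
            values.append(minimax(board[:i] + player + board[i + 1:],
                                  "O" if player == "X" else "X"))
    return max(values) if player == "X" else min(values)

print(minimax(" " * 9, "X"), positions_examined)
# 0 (a draw with perfect play), roughly 550,000 positions examined
```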
Speech recognition. In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines has replaced its keyboard tree for flight information by a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.
Understanding natural language. Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.
Computer vision. The world is composed of three-dimensional objects, but the inputs to the human eye and computers’ TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.
Expert systems. A “knowledge engineer” interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense.
Heuristic classification. One of the most feasible kinds of expert system, given the present knowledge of AI, is to put some information in one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment, and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment).
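A minimal sketch of such a classifier, with entirely invented weights and thresholds: several independent pieces of evidence each adjust a score, and the total places the purchase in one of a fixed set of categories.

```python
def classify_purchase(card_holder, purchase):
    """Toy heuristic classifier: combine evidence into 'accept', 'refer', or 'reject'.
    All weights and thresholds here are invented for illustration."""
    score = 0
    if card_holder["missed_payments"] == 0:
        score += 2                                          # good payment record
    if purchase["amount"] > card_holder["typical_amount"] * 5:
        score -= 2                                          # unusually large purchase
    if purchase["merchant_fraud_reports"] > 0:
        score -= 3                                          # merchant has prior fraud reports
    if purchase["country"] != card_holder["home_country"]:
        score -= 1                                          # purchase far from home
    if score >= 2:
        return "accept"
    if score >= 0:
        return "refer to a human reviewer"
    return "reject"

print(classify_purchase(
    {"missed_payments": 0, "typical_amount": 40.0, "home_country": "US"},
    {"amount": 90.0, "merchant_fraud_reports": 0, "country": "US"}))   # accept
```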
Q. How is AI research done?
A. AI research has both theoretical and experimental sides. The experimental side has both basic and applied aspects.
There are two main lines of research. One is biological, based on the idea that since humans are intelligent, AI should study humans and imitate their psychology or physiology. The other is phenomenal, based on studying and formalizing common sense facts about the world and the problems that the world presents to the achievement of goals. The two approaches interact to some extent, and both should eventually succeed. It is a race, but both racers seem to be walking.
Q. What should I study before or while learning AI?
A. Study mathematics, especially mathematical logic. The more you learn about science in general the better. For the biological approaches to AI, study psychology and the physiology of the nervous system. Learn some programming languages: at least C, Lisp, and Prolog. It is also a good idea to learn one basic machine language. Jobs are likely to depend on knowing the languages currently in fashion. In the late 1990s, these include C++ and Java.
Q. What is a good textbook on AI?
A. Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (Prentice Hall) is the most commonly used textbook as of 1997. The general views expressed there do not exactly correspond to those of this essay. Artificial Intelligence: A New Synthesis by Nils Nilsson (Morgan Kaufmann) may be easier to read.
Q. What organizations and publications are concerned with AI?
A. The American Association for Artificial Intelligence (AAAI), the European Coordinating Committee for Artificial Intelligence (ECCAI) and the Society for Artificial Intelligence and Simulation of Behavior (AISB) are scientific societies concerned with AI research. The Association for Computing Machinery (ACM) has a special interest group on artificial intelligence, SIGART.
The International Joint Conference on AI (IJCAI) is the main international conference. The AAAI runs a US National Conference on AI. Electronic Transactions on Artificial Intelligence, Artificial Intelligence, Journal of Artificial Intelligence Research, and IEEE Transactions on Pattern Analysis and Machine Intelligence are four of the main journals publishing AI research papers. I have not yet found everything that should be in this paragraph.
Page of Positive Reviews lists papers that experts have found important.
Funding a Revolution: Government Support for Computing Research, by a committee of the National Research Council, covers support for AI research in Chapter 9.
Den98 Daniel Dennett. Brainchildren: Essays on Designing Minds. MIT Press, 1998.
Jen98 Arthur R. Jensen. Does IQ matter? Commentary, pages 20-21, November 1998. The reference is just to Jensen’s comment-one of many.
McC59 John McCarthy. Programs with Common Sense. In Mechanisation of Thought Processes, Proceedings of the Symposium of the National Physics Laboratory, pages 77-84, London, U.K., 1959. Her Majesty’s Stationery Office. Reprinted in McC90.
McC89 John McCarthy. Artificial Intelligence, Logic and Formalizing Common Sense. In Richmond Thomason, editor, Philosophical Logic and Artificial Intelligence. Kluwer Academic, 1989.
McC96 John McCarthy. Concepts of Logical AI, 1996. Web only for now but may be referenced.
Mit97 Tom Mitchell. Machine Learning. McGraw-Hill, 1997.
Sha97 Murray Shanahan. Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia. MIT Press, 1997.
The version that appears on Vernor Vinge’s website can be read here.
Vernor Vinge is a retired San Diego State University math professor, computer scientist, and science fiction author. He is best known for his Hugo Award-winning works A Fire Upon the Deep, A Deepness in the Sky, Rainbows End, Fast Times at Fairmont High, and The Cookie Monster, as well as for his 1993 essay “The Coming Technological Singularity,” in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended,” such that no current models of reality are sufficient to predict beyond it.
What Is the Singularity?
The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth.
The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.
There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
- The development of computers that are “awake” and superhumanly intelligent.
- Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
- Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
- Biological science may provide means to improve natural human intellect.
The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [20] has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I’m not guilty of a relative-time ambiguity, let me be more specific: I’ll be surprised if this event occurs before 2005 or after 2030.)
What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities, on a still-shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work; the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct “what if’s” in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.
From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in “a million years” (if ever) will likely happen in the next century. (In [5], Greg Bear paints a picture of the major changes happening in a matter of hours.)
I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it: Stan Ulam [28] paraphrased John von Neumann as saying:
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
Von Neumann even uses the term singularity, though it appears he is thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed (see [25]).)
In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote [11]:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. … It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.
Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees.
Through the ’60s and ’70s and ’80s, recognition of the cataclysm spread [29] [1] [31] [5]. Perhaps it was the science-fiction writers who felt the first concrete impact. After all, the “hard” science-fiction writers are the ones who try to write specific stories about all that technology may do for us. More and more, these writers felt an opaque wall across the future. Once, they could put such fantasies millions of years in the future [24]. Now they saw that their most diligent extrapolations resulted in the unknowable … soon. Once, galactic empires might have seemed a Post-Human domain. Now, sadly, even interplanetary ones are.
What about the ’90s and the ’00s and the ’10s, as we slide toward the edge? How will the approach of the Singularity spread across the human world view? For a while yet, the general critics of machine sapience will have good press. After all, till we have hardware as powerful as a human brain it is probably foolish to think we’ll be able to create human equivalent (or greater) intelligence. (There is the far-fetched possibility that we could make a human equivalent out of less powerful hardware, if we were willing to give up speed, if we were willing to settle for an artificial being who was literally slow [30]. But it’s much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans’ natural equipment.)
But as time passes, we should see more symptoms. The dilemma felt by science fiction writers will be perceived in other creative endeavors. (I have heard thoughtful comic book writers worry about how to have spectacular effects when everything visible can be produced by the technologically commonplace.) We will see automation replacing higher and higher level jobs. We have tools right now (symbolic math programs, cad/cam) that release us from most low-level drudgery. Or put another way: The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of true technological unemployment finally come true.
Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace. When I began writing science fiction in the middle ’60s, it seemed very easy to find ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like eighteen months. (Of course, this could just be me losing my imagination as I get old, but I see the effect in others too.) Like the shock in a compressible flow, the Singularity moves closer as we accelerate through the critical speed.
And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected — perhaps even to the researchers involved. (“But all our previous models were catatonic! We were just tweaking some parameters…”) If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened.
And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Post-Human era. And for all my rampant technological optimism, sometimes I think I’d be more comfortable if I were regarding these transcendental events from one thousand years remove … instead of twenty.
Can the Singularity be Avoided?
Well, maybe it won’t happen at all: Sometimes I try to imagine the symptoms that we should expect to see if the Singularity is not to develop. There are the widely respected arguments of Penrose [19] and Searle [22] against the practicality of machine sapience. In August of 1992, Thinking Machines Corporation held a workshop to investigate the question “How We Will Build a Machine that Thinks” [27]. As you might guess from the workshop’s title, the participants were not especially supportive of the arguments against machine intelligence. In fact, there was general agreement that minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds. However, there was much debate about the raw hardware power that is present in organic brains. A minority felt that the largest 1992 computers were within three orders of magnitude of the power of the human brain. The majority of the participants agreed with Moravec’s estimate [17] that we are ten to forty years away from hardware parity. And yet there was another minority who pointed to [7] [21], and conjectured that the computational competence of single neurons may be far higher than generally believed. If so, our present computer hardware might be as much as ten orders of magnitude short of the equipment we carry around in our heads. If this is true (or for that matter, if the Penrose or Searle critique is valid), we might never see a Singularity. Instead, in the early ’00s we would find our hardware performance curves beginning to level off — this because of our inability to automate the design work needed to support further hardware improvements. We’d end up with some very powerful hardware, but without the ability to push it further. Commercial digital signal processing might be awesome, giving an analog appearance even to digital operations, but nothing would ever “wake up” and there would never be the intellectual runaway which is the essence of the Singularity. It would likely be seen as a golden age … and it would also be an end of progress. This is very like the future predicted by Gunther Stent. In fact, on page 137 of [25], Stent explicitly cites the development of transhuman intelligence as a sufficient condition to break his projections.
But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the “threat” and be in deadly fear of it, progress toward the goal would continue. In fiction, there have been stories of laws passed forbidding the construction of “a machine in the likeness of the human mind” [13]. In fact, the competitive advantage — economic, military, even artistic — of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first.
Eric Drexler [8] has provided spectacular insights about how far technical improvement may go. He agrees that superhuman intelligences will be available in the near future — and that such entities pose a threat to the human status quo. But Drexler argues that we can confine such transhuman devices so that their results can be examined and used safely. This is I. J. Good’s ultraintelligent machine, with a dose of caution. I argue that confinement is intrinsically impractical. For the case of physical confinement: Imagine yourself locked in your home with only limited data access to the outside, to your masters. If those masters thought at a rate — say, one million times slower than you, there is little doubt that over a period of years (your time) you could come up with “helpful advice” that would incidentally set you free. (I call this “fast thinking” form of superintelligence “weak superhumanity”. Such a “weakly superhuman” entity would probably burn out in a few weeks of outside time. “Strong superhumanity” would be more than cranking up the clock speed on a human-equivalent mind. It’s hard to say precisely what “strong superhumanity” would be like, but the difference appears to be profound. Imagine running a dog mind at very high speed. Would a thousand years of doggy living add up to any human insight? (Now if the dog mind were cleverly rewired and then run at high speed, we might see something different….) Many speculations about superintelligence seem to be based on the weakly superhuman model. I believe that our best guesses about the post-Singularity world can be obtained by thinking on the nature of strong superhumanity. I will return to this point later in the paper.)
Another approach to confinement is to build rules into the mind of the created superhuman entity (for example, Asimov’s Laws [3]). I think that any rules strict enough to be effective would also produce a device whose ability was clearly inferior to the unfettered versions (and so human competition would favor the development of those more dangerous models). Still, the Asimov dream is a wonderful one: Imagine a willing slave, who has 1000 times your capabilities in every way. Imagine a creature who could satisfy your every safe wish (whatever that means) and still have 99.9% of its time free for other activities. There would be a new universe we never really understood, but filled with benevolent gods (though one of my wishes might be to become one of them).
If the Singularity cannot be prevented or confined, just how bad could the Post-Human era be? Well … pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet…. In a Post-Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now. (I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a “Meta-Golden Rule”, which might be paraphrased as “Treat your inferiors as you would be treated by your superiors.” It’s a wonderful, paradoxical idea (and most of my friends don’t believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)
I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of the humans’ natural competitiveness and the possibilities inherent in technology. And yet … we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, make things happen in ways that are less inimical than others. Of course (as with starting avalanches), it may not be clear what the right guiding nudge really is:
Other Paths to the Singularity: Intelligence Amplification
When people speak of creating superhumanly intelligent beings, they are usually imagining an AI project. But as I noted at the beginning of this paper, there are other paths to superhumanity. Computer networks and human-computer interfaces seem more mundane than AI, and yet they could lead to the Singularity. I call this contrasting approach Intelligence Amplification (IA). IA is something that is proceeding very naturally, in most cases not even recognized by its developers for what it is. But every time our ability to access information and to communicate it to others is improved, in some sense we have achieved an increase over natural intelligence. Even now, the team of a PhD human and good computer workstation (even an off-net workstation!) could probably max any written intelligence test in existence.
And it’s very likely that IA is a much easier road to the achievement of superhumanity than pure AI. In humans, the hardest development problems have already been solved. Building up from within ourselves ought to be easier than figuring out first what we really are and then building machines that are all of that. And there is at least conjectural precedent for this approach. Cairns-Smith [6] has speculated that biological life may have begun as an adjunct to still more primitive life based on crystalline growth. Lynn Margulis (in [15] and elsewhere) has made strong arguments that mutualism is a great driving force in evolution.
Note that I am not proposing that AI research be ignored or less funded. What goes on with AI will often have applications in IA, and vice versa. I am suggesting that we recognize that in network and interface research there is something as profound (and potentially wild) as Artificial Intelligence. With that insight, we may see projects that are not as directly applicable as conventional interface and network design work, but which serve to advance us toward the Singularity along the IA path.
Here are some possible projects that take on special significance, given the IA point of view:
The above examples illustrate research that can be done within the context of contemporary computer science departments. There are other paradigms. For example, much of the work in Artificial Intelligence and neural nets would benefit from a closer connection with biological life. Instead of simply trying to model and understand biological life with computers, research could be directed toward the creation of composite systems that rely on biological life for guidance or for providing features we don’t understand well enough yet to implement in hardware. A long-time dream of science fiction has been direct brain-to-computer interfaces [2] [29]. In fact, there is concrete work that can be done (and is being done) in this area:
Originally, I had hoped that this discussion of IA would yield some clearly safer approaches to the Singularity. (After all, IA allows our participation in a kind of transcendence.) Alas, looking back over these IA proposals, about all I am sure of is that they should be considered, that they may give us more options. But as for safety … well, some of the suggestions are a little scary on their face. One of my informal reviewers pointed out that IA for individual humans creates a rather sinister elite. We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light. Much of that deadliness may not be necessary in today’s world, one where losers take on the winners’ tricks and are coopted into the winners’ enterprises. A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon. And even the egalitarian view of an Internet that wakes up along with all mankind can be viewed as a nightmare [26].
The problem is not simply that the Singularity represents the passing of humankind from center stage, but that it contradicts our most deeply held notions of being. I think a closer look at the notion of strong superhumanity can show why that is.
Strong Superhumanity and the Best We Can Ask for
Suppose we could tailor the Singularity. Suppose we could attain our most extravagant hopes. What then would we ask for: That humans themselves would become their own successors, that whatever injustice occurs would be tempered by our knowledge of our roots. For those who remained unaltered, the goal would be benign treatment (perhaps even giving the stay-behinds the appearance of being masters of godlike slaves). It could be a golden age that also involved progress (overleaping Stent’s barrier). Immortality (or at least a lifetime as long as we can make the universe survive [10] [4]) would be achievable.
But in this brightest and kindest world, the philosophical problems themselves become intimidating. A mind that stays at the same capacity cannot live forever; after a few thousand years it would look more like a repeating tape loop than a person. (The most chilling picture I have seen of this is in [18].) To live indefinitely long, the mind itself must grow … and when it becomes great enough, and looks back … what fellow-feeling can it have with the soul that it was originally? Certainly the later being would be everything the original was, but so much vastly more. And so even for the individual, the Cairns-Smith or Lynn Margulis notion of new life growing incrementally out of the old must still be valid.
This “problem” about immortality comes up in much more direct ways. The notion of ego and self-awareness has been the bedrock of the hardheaded rationalism of the last few centuries. Yet now the notion of self-awareness is under attack from the Artificial Intelligence people (“self-awareness and other delusions”). Intelligence Amplification undercuts our concept of ego from another direction. The post-Singularity world will involve extremely high-bandwidth networking. A central feature of strongly superhuman entities will likely be their ability to communicate at variable bandwidths, including ones far higher than speech or written messages. What happens when pieces of ego can be copied and merged, when the size of a self-awareness can grow or shrink to fit the nature of the problems under consideration? These are essential features of strong superhumanity and the Singularity. Thinking about them, one begins to feel how essentially strange and different the Post-Human era will be, no matter how cleverly and benignly it is brought to be.
From one angle, the vision fits many of our happiest dreams: a time unending, where we can truly know one another and understand the deepest mysteries. From another angle, it’s a lot like the worst-case scenario I imagined earlier in this paper.
Which is the valid viewpoint? In fact, I think the new era is simply too different to fit into the classical frame of good and evil. That frame is based on the idea of isolated, immutable minds connected by tenuous, low-bandwidth links. But the post-Singularity world does fit with the larger tradition of change and cooperation that started long ago (perhaps even before the rise of biological life). I think there are notions of ethics that would apply in such an era. Research into IA and high-bandwidth communications should improve this understanding. I see just the glimmerings of this now [32]. There is Good’s Meta-Golden Rule; perhaps there are rules for distinguishing self from others on the basis of bandwidth of connection. And while mind and self will be vastly more labile than in the past, much of what we value (knowledge, memory, thought) need never be lost. I think Freeman Dyson has it right when he says [9]: “God is what mind becomes when it has passed beyond the scale of our comprehension.”
[I wish to thank John Carroll of San Diego State University and Howard Davidson of Sun Microsystems for discussing the draft version of this paper with me.]
Just about everyone found something to dislike about 2016, from wars to politics and celebrity deaths. But hidden within this year’s news feeds were some really exciting news stories. And some of them can even give us hope for the future.
Though concerns about the future of AI still loom, 2016 was a great reminder that, when harnessed for good, AI can help humanity thrive.
Some of the most promising and hopefully more immediate breakthroughs and announcements were related to health. Google’s DeepMind announced a new division that would focus on helping doctors improve patient care. Harvard Business Review considered what an AI-enabled hospital might look like, which would improve the hospital experience for the patient, the doctor, and even the patient’s visitors and loved ones. A breakthrough from MIT researchers could see AI used to more quickly and effectively design new drug compounds that could be applied to a range of health needs.
More specifically, Microsoft wants to cure cancer, and the company has been working with research labs and doctors around the country to use AI to improve cancer research and treatment. But Microsoft isn’t the only company that hopes to cure cancer. DeepMind Health also partnered with University College London’s hospitals to apply machine learning to diagnose and treat head and neck cancers.
Other researchers are turning to AI to help solve social issues. While AI has what is known as the “white guy problem” and examples of bias cropped up in many news articles, Fei-Fei Li has been working with STEM girls at Stanford to bridge the gender gap. Stanford researchers also published research that suggests artificial intelligence could help us use satellite data to combat global poverty.
It was also a big year for research on how to keep artificial intelligence safe as it continues to develop. Google and the Future of Humanity Institute made big headlines with their work to design a “kill switch” for AI. Google Brain also published a research agenda on various problems AI researchers should be studying now to help ensure safe AI for the future.
Even the White House got involved in AI this year, hosting four symposia on AI and releasing reports in October and December about the potential impact of AI and the necessary areas of research. The White House reports are especially focused on the possible impact of automation on the economy, but they also look at how the government can contribute to AI safety, especially in the near future.
And of course there was AlphaGo. In January, Google’s DeepMind published a paper, which announced that the company had created a program, AlphaGo, that could beat one of Europe’s top Go players. Then, in March, in front of a live audience, AlphaGo beat the reigning world champion of Go in four out of five games. These results took the AI community by surprise and indicate that artificial intelligence may be progressing more rapidly than many in the field realized.
And AI went beyond research labs this year to be applied practically and beneficially in the real world. Perhaps most hopeful was some of the news that came out about the ways AI has been used to address issues connected with pollution and climate change. For example, IBM has had increasing success with a program that can forecast pollution in China, giving residents advanced warning about days of especially bad air. Meanwhile, Google was able to reduce its power usage by using DeepMind’s AI to manipulate things like its cooling systems.
And speaking of addressing climate change…
With recent news from climate scientists indicating that climate change may be coming on faster and stronger than previously anticipated and with limited political action on the issue, 2016 may not have made climate activists happy. But even here, there was some hopeful news.
Among the biggest news was the ratification of the Paris Climate Agreement. But more generally, countries, communities and businesses came together on various issues of global warming, and Voice of America offers five examples of how this was a year of incredible, global progress.
But there was also news of technological advancements that could soon help us address climate issues more effectively. Scientists at Oak Ridge National Laboratory have discovered a way to convert CO2 into ethanol. A researcher from UC Berkeley has developed a method for artificial photosynthesis, which could help us more effectively harness the energy of the sun. And a multi-disciplinary team has genetically engineered bacteria that could be used to help combat global warming.
Biotechnology, with fears of designer babies and manmade pandemics, is easily one of the most feared technologies. But rather than causing harm, the latest biotech advances could help to save millions of people.
In the course of about two years, CRISPR-Cas9 went from a new development to what could become one of the world’s greatest advances in biology. Results of studies early in the year were promising, but as the year progressed, the news just got better. CRISPR was used to successfully remove HIV from human immune cells. A team in China used CRISPR on a patient for the first time in an attempt to treat lung cancer (treatments are still ongoing), and researchers in the US have also received approval to test CRISPR cancer treatment in patients. And CRISPR was also used to partially restore sight to blind animals.
Where CRISPR could have the most dramatic, life-saving effect is in gene drives. By using CRISPR to modify the genes of an invasive species, we could potentially eliminate the unwelcome plant or animal, reviving the local ecology and saving native species that may be on the brink of extinction. But perhaps most impressive is the hope that gene drive technology could be used to end mosquito- and tick-borne diseases, such as malaria, dengue, Lyme, etc. Eliminating these diseases could easily save over a million lives every year.
The year saw other biotech advances as well. Researchers at MIT addressed a major problem in synthetic biology in which engineered genetic circuits interfere with each other. Another team at MIT engineered an antimicrobial peptide that can eliminate many types of bacteria, including some of the antibiotic-resistant “superbugs.” And various groups are also using CRISPR to create new ways to fight antibiotic-resistant bacteria.
If ever there was a topic that does little to inspire hope, it’s nuclear weapons. Yet even here we saw some positive signs this year. The Cambridge City Council voted to divest their $1 billion pension fund from any companies connected with nuclear weapons, which earned them an official commendation from the U.S. Conference of Mayors. In fact, divestment may prove a useful tool for the general public to express displeasure with nuclear policy, and one cause for hope is that growing awareness of the nuclear weapons situation will help stigmatize the new nuclear arms race.
In February, Londoners held the largest anti-nuclear rally Britain had seen in decades, and the following month MinutePhysics posted a video about nuclear weapons that’s been seen by nearly 1.3 million people. In May, scientific and religious leaders came together to call for steps to reduce nuclear risks. And all of that pales in comparison to the attention the U.S. elections brought to the risks of nuclear weapons.
As awareness of nuclear risks grows, so do our chances of instigating the change necessary to reduce those risks.
But if awareness alone isn’t enough, then recent actions by the United Nations may instead be a source of hope. As October came to a close, the United Nations voted to begin negotiations on a treaty that would ban nuclear weapons. While this might not have an immediate impact on nuclear weapons arsenals, the stigmatization caused by such a ban could increase pressure on countries and companies driving the new nuclear arms race.
The U.N. also announced recently that it would officially begin looking into the possibility of a ban on lethal autonomous weapons, a cause that’s been championed by Elon Musk, Steve Wozniak, Stephen Hawking and thousands of AI researchers and roboticists in an open letter.
And why limit our hope and ambition to merely one planet? This year, a group of influential scientists led by Yuri Milner announced Breakthrough Starshot, a plan to send a fleet of tiny, light-propelled space probes to Alpha Centauri, our nearest star system. Elon Musk later announced his plans to colonize Mars. And an MIT scientist wants to make all of these trips possible for humans by using CRISPR to reengineer our own genes to keep us safe in space.
Yet for all of these exciting events and breakthroughs, perhaps what’s most inspiring and hopeful is that this represents only a tiny sampling of all of the amazing stories that made the news this year. If trends like these keep up, there’s plenty to look forward to in 2017.
Reprinted from Future Of Life website with permission. Ariel Conn specializes in all forms of online science communication, including writing, social media and web design. She has bachelors degrees in English and physics and a masters in geophysics. She created a got milk? commercial, interned with NASA, researched induced seismology at both Virginia Tech and the National Energy Technology Laboratory, and worked as a science writer for the Idaho National Laboratory.
Perhaps our questions about artificial intelligence are a bit like inquiring after the temperament and gait of a horseless carriage.
—K. Eric Drexler
Now we will classify the different stages AI might go through using Greek prepositions. These have been adopted into English as prefixes, particularly in scientific usage. In some cases the concepts have been applied to advancing AI before, and in other cases not. The reason for introducing these new terms is that they provide a framework that puts any given level of expected AI capability in perspective vis-à-vis the other levels, and in comparison to human intelligence.
Hypo means below or under (think hypodermic, under the skin; hypothermia or hypoglycemia, below normal temperature or blood sugar), including, in the original Greek, under the moral or legal subjection of. Isaac Asimov’s robots are (mostly) hypohuman, in both senses of hypo: they are not quite as smart as humans, and they are subject to our rule. Most existing AI is arguably hypohuman, as well (Deep Blue to the contrary notwithstanding). As long as it stays that way, the only thing we have to worry about is that there will be human idiots putting their AI idiots in charge of things they both don’t understand. All the discussion of formalist float applies, especially the part about feedback.
Dia means through or across in Greek (diameter, diagonal), and the Latin trans means the same thing, but the commonly heard transhuman doesn’t apply here. Transhuman refers to humans as opposed to AIs, humans who have been enhanced (by whatever means) and are in a transitional state between human and fully posthuman, whatever that may be. Neither concept is very useful here.
By diahuman, I mean AIs in the stage where AI capabilities are crossing the range of human intelligence. It’s tempting to call this human-equivalent, but the idea of equivalence is misleading. It’s already apparent that some AI abilities (e.g., chess playing) are beyond the human scale, while others (e.g., reading and writing) haven’t reached it yet.
Thus diahuman refers to a phase of AI development (and only by extension to an individual AI in that phase), and this is fuzzy because the limits of human (and AI) capability are fuzzy. It’s hard to say which capabilities are important in the comparison. I would claim that AI is entering the early stages of the diahuman phase right now; there are humans who, like today’s AIs, don’t learn well and who function competently only at simple jobs for which they must be trained.
The core of the diahuman phase, however, will be the development of autogenous learning. In the latter stages, AIs, like the brightest humans, will be completely autonomous, not only learning what they need to know but also deciding what they need to learn.
Diahuman AIs will be valuable and will undoubtedly attract significant attention and resources to the AI enterprise. They are likely to cause something of a stir in philosophy and perhaps religion, as well. However, they will not have a significant impact on the human condition. (The one exception might be economically, in the case that diahuman AI lingers so long that Moore’s law makes human-equivalent robots very cheap compared to human labor. But I’m assuming that we will probably have advanced past the diahuman stage by then.)
Para means alongside (paralegal, paramedic). The concept of designing a system that a human is going to be part of dates back to cybernetics (although all technology throughout history had to be designed so that humans could operate it, in some sense).
Parahuman AI will be built around more and more sophisticated theories of how humans work. The PC of the future ought to be a parahuman AI. MIT roboticist Cynthia Breazeal’s sociable robots are the likely forerunners of a wide variety of robots that will interact with humans in many kinds of situations.
The upside of parahuman AI is that it will enhance the interface between our native senses and abilities, adapted as they are for a hunting and gathering bipedal ape, and the increasingly formalized and mechanized world we are building. The parahuman AI should act like a lawyer, a doctor, an accountant, and a secretary, all with deep knowledge and endless patience. Once AI and cognitive science have acquired a solid understanding of how we learn, parahuman AI teachers could be built which would model in detail how each individual student was absorbing the material, ultimately finding the optimal presentation for understanding and motivation.
The downside is simply the same effect, put to work with slimier motives: the parahuman advertising AI, working for corporations or politicians, could know just how to tweak your emotions and gain your trust without actually being trustworthy. It would be the equivalent of an individualized artificial con man. Note by the way that of the two human elements that were part of the original cybernetic anti-aircraft control theory, one of them, the pilot of the plane being shot at, didn’t want to be part of the system but was, willy-nilly.
Parahuman is a characterization that does not specify a level of intellectual capability compared to humans; it can be properly applied to AIs at any level. Humans are fairly strongly parahuman intelligences as well; many of our innate skills involve interacting with other humans. Parahuman can be largely contrasted with the following term, allohuman.
Allo means other or different (allomorph, allonym, allotrope). Although I have argued that human intelligence is universal, there remains a vast portion of our minds that is distinctively human. This includes the genetically programmed representation modules, the form of our motivations, and the sensory modalities, of which several are fairly specific to running a human body.
It will certainly be possible to create intelligences that while being universal nevertheless have different lower-level hardwired modalities for sense and representation, and different higher-level motivational structure. One simple possibility is that universal mechanism may stand in for a much greater portion of the cognitive mechanism so that, for example, the AI would use learned physics instead of instinctive concepts and learned psychology instead of our folk models.
Such differences could reasonably make the AI better at certain tasks; consider the ability to do voluminous calculations in your head. However, if you have ever watched an experienced accountant manipulate a calculator, you can see that the numbers almost flow through his fingers. Built-in modalities may provide some increment of effectiveness compared to learned ones, but not as much as you might think. Consider reading—it’s a learned activity, and unlike talking, we don’t just “pick it up.” But with practice, we read much faster than we can talk or understand spoken language.
Motivations and the style and the volume of communication could also differ markedly from the human model. The allohuman AI might resemble Mr. Spock, or it might resemble an intelligent ant. This likely will form the bulk of the difference between allohuman AIs and humans rather than the varying modalities.
Like parahuman, allohuman does not imply a given level of intellectual competence. In the fullness of time, however, the parahuman/allohuman distinction will make less and less difference. More advanced AIs, whether they need to interact with humans or to do something weirdly different, will simply obtain or deduce whatever knowledge is necessary and synthesize the skills on the fly.
Epi means upon or after (epidermis, epigram, epitaph, epilogue). I’m using it here in a combination of senses to mean AI that is just above the range of individual human capabilities but that still forms a continuous range with them, and also in the sense of what comes just after diahuman AI. That gives us what can be a useful distinction versus further-out possibilities. (See hyper below.)
Science fiction writer Charles Stross introduced the phrase “weakly godlike AI.” Weakly presumably refers to the fact that such AIs would still be bound by the laws of physics—they couldn’t perform miracles, for example. As a writer, I’m filled with admiration for the phrase, since weakly and godlike have such contrasting meanings that it forces you to think when you read it for the first time, and the term weakly is often used in a similar way, with various technical meanings, in scientific discourse, giving a vague sense of rigor (!) to the phrase.
The word posthuman is often used to describe what humans may be like after various technological enhancements. Like transhuman, posthuman is generally used for modified humans instead of synthetic AIs.
My model for what an epihuman AI would be like is to take the ten smartest people you know, remove their egos, and duplicate them a hundred times, so that you have a thousand really bright people willing to apply themselves all to the same project. Alternatively, simply imagine a very bright person given a thousand times as long to do any given task. We can straightforwardly predict, from Moore’s law, that ten years after the advent of a learning but not radically self-improving human-level AI, the same software running on machinery of the same cost would do the same human-level tasks a thousand times as fast as we can. It could, for example:
A thousand really bright people are enough to do some substantial and useful work. An epihuman AI could probably command an income of $100 million or more in today’s economy by means of consulting and entrepreneurship, and it would have a net present value in excess of $1 billion. Even so, it couldn’t take over the world or even an established industry. It could probably innovate well enough to become a standout in a nascent field, though, as in Google’s case.
A thousand top people is a reasonable estimate for what the current field of AI research is applying to the core questions and techniques—basic, in contrast to applied, research. Thus an epihuman AI could probably improve itself about as fast as current AI is improving. Of course, if it did that, it wouldn’t be able to spend its time making all that money; the opportunity cost is pretty high. It would need to make exactly the same kind of decision that any business faces with respect to capital reinvestment.
Whichever it may choose to do, the epihuman level characterizes an AI that is able to stand in for a given fairly sizeable company or for a field of academic inquiry. As more and more epihuman AIs appear, they will enhance economic and scientific growth so that by the later stages of the phase the total stock of wealth and knowledge will be significantly higher than it would have been without the AIs. AIs will be a significant sector, but no single AI would be able to rock the boat to a great degree.
Hyper means over or above. In common use as an English prefix, hyper tends to denote a greater excess than super, which means the same thing but comes from Latin instead of Greek. (Contrast, e.g., supersonic, more than Mach 1, and hypersonic, more than Mach 5.)
In the original Singularity paper, “The Coming Technological Singularity,” Vernor Vinge used the phrase superhuman intelligence. Nick Bostrom has used the term superintelligence. Like some of the terms above, however, superhuman has a wide range of meanings (think about Kryptonite), and most of them are not applicable to the subject at hand. We will stay with our Greek prefixes and finish the list with hyperhuman.
Imagine an AI that is a thousand epihuman AIs, all tightly integrated together. Such an intellect would be capable of substantially outstripping the human scientific community at any given task and of comprehending the entirety of scientific knowledge as a unified whole. A hyperhuman AI would soon begin to improve itself significantly faster than humans could. It could spot the gaps in science and engineering where there was low-hanging fruit and instigate rapid increases in technological capability across the board.
It is as yet poorly understood even in the scientific community just how much headroom remains for improvement with respect to the capabilities of current physical technology. A mature nanotechnology, for example, could replace the entire capital stock—all the factories, buildings, roads, cars, trucks, airplanes, and other machines—of the United States in a week. And that’s just using currently understood science, with a dollop of engineering development thrown in.
Any sufficiently advanced technology, Arthur Clarke wrote, is indistinguishable from magic. Although, I believe, any specific thing the hyperhuman AIs might do could be understood by humans, the total volume of work and the rate of advance would become harder and harder to follow. Please note that any individual human is already in a similar relationship with the whole scientific community; our understanding of what is going on is getting more and more abstract. The average person understands cell phones at a level of knowing that batteries have limited lives and coverage has gaps, but not at the level of field-effect transistor gain figures and conductive trace electromigration phenomena. Ten years ago the average scientist, much less the average user, could not have predicted that most cell phones would contain cameras and color screens today. But we can follow, if not predict, by understanding things at a very high level of abstraction, as if they were magic.
Any individual hyperhuman AI would be productive, intellectually or industrially, on the scale of the human race as a whole. As the number of hyperhuman AIs increased, our efforts would shrink to more and more modest proportions of the total.
Where does an eight-hundred-pound gorilla sit? According to the old joke, anywhere he wants to. Much the same thing will be true of a hyperhuman AI, except in instances where it has to interact with other AIs. The really interesting question then will be, what will it want?
©2007 J. Storrs Hall
I send you greetings and good wishes at the beginning of another year. I’ll be celebrating (?) my 90th birthday in December, a few weeks after the Space Age completes its first half century.
When the late and unlamented Soviet Union launched Sputnik 1 on 4 October 1957, it took only about five minutes for the world to realise what had happened. And although I had been writing and speaking about space travel for years, the moment is still frozen in my own memory: I was in Barcelona attending the 8th International Astronautical Congress. We had retired to our hotel rooms after a busy day of presentations when the news broke; I was awakened by reporters seeking comments on the Soviet feat. Our theories and speculations had suddenly become reality!
Notwithstanding the remarkable accomplishments during the past 50 years, I believe that the Golden Age of space travel is still ahead of us. Before the current decade is out, fee-paying passengers will be experiencing sub-orbital flights aboard privately funded passenger vehicles, built by a new generation of engineer-entrepreneurs with an unstoppable passion for space (I’m hoping I could still make such a journey myself). And over the next 50 years, thousands of people will gain access to the orbital realm, and then to the Moon and beyond.
During 2006, I followed with interest the emergence of this new breed of Citizen Astronauts and private space enterprise. I am very encouraged by the widespread acceptance of the Space Elevator, which can make space transport cheap and affordable to ordinary people. This daring engineering concept, which I popularised in The Fountains of Paradise (1978), is now taken very seriously, with space agencies and entrepreneurs investing money and effort in developing prototypes. A dozen of these parties competed for the NASA-sponsored US$ 150,000 prize at the X Prize Cup, which took place in October 2006 at the Las Cruces International Airport, New Mexico.
The Arthur Clarke Foundation continues to recognise and cheer-lead men and women who blaze new trails to space. A few days before the X Prize Cup competition, my old friend Walter Cronkite received the Foundation’s Lifetime Achievement Award. I have known Walter for over half a century, and my commentary with him during the heady days of the Apollo Moon landings now belongs to another era. A space pathfinder of the Twenty-First Century, Bob Bigelow, was presented with the Arthur C. Clarke Innovator Award for his work in the development of space habitats. With the successful launch of Bigelow Aerospace’s Genesis 1, Bob is leading the way for private individuals willing to advance space exploration with minimum reliance on government programmes.
Meanwhile, planning and fund-raising work continued for the Arthur C Clarke Centre "to investigate the reach and impact of human imagination", to be set up in partnership with the University of Nevada, Las Vegas. Objective: to identify young people with robust imagination, to help their parents and teachers make the most of that talent, and to accord imagination as much regard as high academic grades in the classroom, anywhere in the world. The Board members of the Clarke Foundation, led by its indefatigable Chairman Tedson Meyers, have taken on the challenge of raising US$ 70 million for this project. I’m hopeful that the billion-dollar communications satellite industry I founded 60 years ago with my Wireless World paper (October 1945), for which I received the astronomical sum of £15, will be partners in this endeavour.
I’ve only been able to make a few encouraging noises from the sidelines for these and other worthy projects, as I’m now very limited in time and energy owing to Post Polio. But I’m happy to report that my health remains stable, and I’m in no discomfort or pain. Being completely wheel-chaired helps me to concentrate on my reading and writing, which I can once again engage in, with the second cataract operation having restored my eyesight.
During the year, I wrote a number of short articles, book reviews and commentaries for a variety of print and online outlets. I also did a few carefully chosen media interviews, and filmed several video greetings to important scientific or literary gatherings in different parts of the world.
I was particularly glad to find a co-author to complete my last novel, The Last Theorem, which remained half-written for a couple of years. I had mapped out the entire story, but then found I didn’t have the energy to work on the balance of the text. Accomplished American writer Frederik Pohl has now taken up the challenge. Meanwhile, co-author Stephen Baxter has completed Firstborn, the third novel in our collaborative Time Odyssey series, to be published in 2007.
Members of my adopted family (Hector, Valerie, Cherene, Tamara and Melinda Ekanayake) are keeping well. Hector has been looking after me since 1956, and with his wife Valerie has made a home for me at 25, Barnes Place, Colombo. Hector continued to rebuild the diving operation that was wiped out by the Indian Ocean Tsunami of December 2004. Sri Lanka’s tourist sector, still recovering from the mega-disaster, weathered a further crisis as the long-drawn civil conflict ignited again after more than three years of relative peace and quiet. I remain hopeful that a lasting solution will be worked out by the various national and international players engaged in the peace process.
I’m still missing and mourning my beloved Chihuahua Pepsi, who left us more than a year ago. I’ve just heard that dogs aren’t allowed in Heaven, so I’m not going there.
Brother Fred, Chris Howse, Angie Edwards and Navam Tambayah look after my affairs in England. My agents David Higham Associates and Scovil, Chichak & Galen Literary Agency deal with rapacious editors and media executives. They both follow my general directive: No reasonable offer will even be considered.
I am well supported by my staff and take this opportunity to thank them all:
Executive Officer: Nalaka Gunawardene
Personal Assistant: Rohan De Silva
Secretary: Dottie Weerasooriya
Valets: Titus, Saman, Chandra, Sunil
Drivers: Lalith & Anthony
Domestic Staff: Kesavan, Jayasiri & Mallika
Gardener: Jagath
Let me end with an extract from my tribute to Star Trek on its 40th anniversary; this message is more relevant today than when the series first aired in the heady days of Apollo: Appearing at such a time in human history, Star Trek popularised much more than the vision of a space-faring civilisation. In episode after episode, it promoted the then unpopular ideals of tolerance for differing cultures and respect for life in all forms, without preaching, and always with a saving sense of humour.
Colombo, Sri Lanka
28 January 2007
© Sir Arthur C. Clarke 2007.
The Web is entering a new phase of evolution. There has been much debate recently about what to call this new phase. Some would prefer not to name it at all, while others suggest continuing to call it “Web 2.0.” However, this new phase of evolution has quite a different focus from what Web 2.0 has come to mean.
John Markoff of the New York Times recently suggested naming this third generation of the Web “Web 3.0.” This suggestion has led to quite a bit of debate within the industry. Those who are attached to the Web 2.0 moniker have reacted by claiming that such a term is not warranted, while others have responded positively to the term, noting that there is indeed a characteristic difference between the coming new stage of the Web and what Web 2.0 has come to represent.
The term Web 2.0 was never clearly defined and even today if one asks ten people what it means one will likely get ten different definitions. However, most people in the Web industry would agree that Web 2.0 focuses on several major themes, including AJAX, social networking, folksonomies, lightweight collaboration, social bookmarking, and media sharing. While the innovations and practices of Web 2.0 will continue to develop, they are not the final step in the evolution of the Web.
In fact, there is a lot more in store for the Web. We are starting to witness the convergence of several growing technology trends that are outside the scope of what Web 2.0 has come to mean. These trends have been gestating for a decade and will soon reach a tipping point. At this juncture the third generation of the Web will start.
More intelligent Web
The threshold to the third-generation Web will be crossed in 2007. At this juncture the focus of innovation will start to shift back from front-end improvements towards back-end, infrastructure-level upgrades to the Web. This cycle will continue for five to ten years, and will result in making the Web more connected, more open, and more intelligent. It will transform the Web from a network of separately siloed applications and content repositories into a more seamless and interoperable whole.
Because the focus of the third-generation Web is quite different from that of Web 2.0, this new generation of the Web probably does deserve its own name. In keeping with the naming convention established by labeling the second generation of the Web as Web 2.0, I agree with John Markoff that this third generation of the Web could be called Web 3.0.
A more precise timeline and definition might go as follows:
Web 1.0. Web 1.0 was the first generation of the Web. During this phase the focus was primarily on building the Web, making it accessible, and commercializing it for the first time. Key areas of interest centered on protocols such as HTTP, open standard markup languages such as HTML and XML, Internet access through ISPs, the first Web browsers, Web development platforms and tools, Web-centric software languages such as Java and Javascript, the creation of Web sites, the commercialization of the Web and Web business models, and the growth of key portals on the Web.
Web 2.0. According to Wikipedia, "Web 2.0, a phrase coined by O’Reilly Media in 2004, refers to a supposed second generation of Internet-based services, such as social networking sites, wikis, communication tools, and folksonomies, that emphasize online collaboration and sharing among users." I would also add to this definition another trend that has been a major factor in Web 2.0: the emergence of the mobile Internet and mobile devices (including camera phones) as a major new platform driving the adoption and growth of the Web, particularly outside of the United States.
Web 3.0. Using the same pattern as the above Wikipedia definition, Web 3.0 could be defined as: "Web 3.0, a phrase coined by John Markoff of the New York Times in 2006, refers to a supposed third generation of Internet-based services that collectively comprise what might be called ‘the intelligent Web’, such as those using semantic web, microformats, natural language search, data-mining, machine learning, recommendation agents, and artificial intelligence technologies, which emphasize machine-facilitated understanding of information in order to provide a more productive and intuitive user experience."
Web 3.0 Expanded Definition. I propose expanding the above definition of Web 3.0 to be a bit more inclusive. There are actually several major technology trends that are about to reach a new level of maturity at the same time. The simultaneous maturity of these trends is mutually reinforcing, and collectively they will drive the third-generation Web. From this broader perspective, Web 3.0 might be defined as a third generation of the Web enabled by the convergence of several key emerging technology trends:
Ubiquitous Connectivity
Network Computing
Open Technologies
Open Identity
The Intelligent Web
© 2006 Nova Spivack.
BROOKS: This is a double-headed event today. We’re going to start off with a debate. Then we’re going -- maybe it’s a triple-headed event. We’re going to start off with a debate, then we’re going to have a break for pizza and soda -- pizza lover here -- outside, and then we’re going to come back for a lecture.
The event that this is around is the 70th anniversary of a paper by Alan Turing, "On Computable Numbers," published in 1936, which one can legitimately, I think -- I think one can legitimately think of that paper as the foundation of computer science. It included the invention of the Turing -- what we now call the Turing Machine. And Turing went on to have lots of contributions to our field, we at the Computer Science and Artificial Intelligence Lab. In 1948, he had a paper titled "Intelligent Machinery," which I think is really the foundation of artificial intelligence.
So in honor of that 70th anniversary, we have a workshop going on in the next couple days and this event tonight. This event is sponsored by the Templeton Foundation. Charles Harper of the Templeton Foundation is here, and so is Mary Ann Meyers and some other people sponsoring this event. And Charles, I have to ask you one question: A or B? You have to say. You have to choose. This is going to choose who goes first, but I’m not telling you who A or B is.
HARPER: A.
BROOKS: OK. So we’re going to start this debate between Ray Kurzweil and David Gelernter. And it turns out that Ray is going to go first. Thanks, Charles. So I’m first going to introduce Ray and David. I will point out that after we finish and after the break, we’re going to come back at 6:15, and Jack Copeland, who’s down here, will then give a lecture on Turing’s life. And Jack has been -- he runs AlanTuring.net, the Alan Turing archives in New Zealand, and he’s got a wealth of material, and new material that’s being declassified over time, and he’ll be talking about some of Alan Turing’s contributions.
But the debate that we’re about to have is really about the AI side of Alan Turing and the limits that we can expect or that we might be afraid of or might be celebrating of whether we can build superintelligent machines, or are we limited to building just superintelligent zombies. We’re pretty sure we can build programs with intelligence, but will they just be zombies that don’t have the real oomph of us humans? Will it be possible or desirable for us to build conscious, volitional, and perhaps even spiritual machines?
So we’re going to have a debate. Ray is going to speak for five minutes and then David is going to speak for five minutes -- opening remarks. Then Ray will speak for ten minutes, David for ten minutes; that’s a total of 30 minutes, and I’m going to time them. And then we’re going to have a 15-minute interplay between the two of them. They get to use as much time as they can get from the other one during that. And then we’re going to open up to some questions from the audience. But I do ask that when we have the questions, the questions shouldn’t be for you to enter the debate. It would be better if you can come up with some question which you think they can argue about, because that’s what we’re here to see.
Ray Kurzweil has been a well-known name since his -- in artificial intelligence since his appearance on Steve Allen’s show in 1965, where he played a piano piece that a computer he had built had composed. Ray has gone on to
KURZWEIL: I was three years old.
BROOKS: He was three years old, yes. Ray has gone on to build the Kurzweil synthesizers that many musicians use, the Kurzweil reading machines, and many other inventions that have gone out there and are in everyday use. He’s got prizes and medals up the wazoo. He won the Lemelson-MIT Prize, he won the National Medal of Technology, presented by President Clinton in 1999. And Ray has written a number of books that have been -- come out and been very strong sellers on all sorts of questions about our future and the future of robot kind.
David Gelernter is a professor at Yale University, professor of computer science, but he’s sort of a strange professor of computer science, in the sense that he writes essays for Weekly Standard, Time, Wall Street Journal, Washington Post, Los Angeles Times, and many other sorts of places. And I see a few of my colleagues here, and I’m glad they don’t write columns for all those places. His research interests include AI, philosophy of mind, parallel distributed systems, visualization, and information management. And you can read all about them with Google if you want to get more details. Both very distinguished people, and I hope we have some interesting things to hear from them. So we’ll start with Ray. And five minutes, Ray.
KURZWEIL: OK. Well, thanks, Rodney. You’re very good at getting a turnout. That went quickly. [laughter] So there’s a tie-in with my tie -- this was given to me by Intel. It’s a photomicrograph of the Pentium, which I think symbolizes the progress we’ve made since Turing’s relay-based computer Ultra that broke the Nazi Enigma code and enabled Britain to win the Battle of Britain. But we’ve come a long way since then.
And in terms of this 70th anniversary, the course I enjoyed the most here at MIT, when I was here in the late ’60s, was 6.253 -- I don’t remember all the numbers, and numbers are important here -- but that was theoretical models of computation, and it was about that paper and about the Turing Machine and what it could compute and computable functions and the busy beaver function, which is non-computable, and what computers can do, and it really established computation as a sub-field of mathematics and, arguably, mathematics as a sub-field of computation.
So in terms of the debate topic, I thought it was interesting that there’s an assumption in the title that we will build superintelligent machines -- the question is whether we’ll build superintelligent machines that are conscious or not conscious. And it brings up the issue of consciousness, and I want to focus on that for a moment, because I think we can define consciousness in two ways. We can define apparent consciousness, which is an entity that appears to be conscious -- and I believe, in fact, you have to be apparently conscious to pass the Turing test, which means you really need a command of human emotion. Because if you’re just very good at doing mathematical theorems and making stock market investments and so on, you’re not going to pass the Turing test. And in fact, we have machines that do a pretty good job with those things. Mastering human emotion and human language is really key to the Turing test, which has held up as our exemplary assessment of whether or not a non-biological intelligence has achieved human levels of intelligence.
And that will require a machine to master human emotion, which in my view is really the cutting edge of human intelligence. That’s the most intelligent thing we do. Being funny, expressing a loving sentiment -- these are very complex behaviors. And we have characters in video games that can try to do these things, but they’re not very convincing. They don’t have the complex, subtle cues that we associate with those emotions. They don’t really have emotional intelligence. But emotional intelligence is not some sideshow to human intelligence. It’s really the cutting edge. And as we build machines that can interact with us better and really master human intelligence, that’s going to be the frontier. And in the ten minutes, I’ll try to make the case that we will achieve that. I think that’s more of a 45-minute argument, but I’ll try to summarize my views on that.
I will say that the community, the AI community and myself, have gotten closer in our assessments of when that will be feasible. There was a conference on my 1999 book, Spiritual Machines, at Stanford, and there were AI experts. And the consensus then -- my feeling then was we would see it in 2029. The consensus in the AI community was, oh, it’s going to -- it’s very complicated, it’s going to take hundreds of years, if we can ever do it. I gave a presentation -- I think you were there, Rodney, as well -- at AI50, on the 50th anniversary of the Dartmouth Conference that gave AI its name in 1956. And we had these instant polling devices, and they asked ten different ways when a machine would pass the Turing test -- when will we know enough about the brain, when will we have sophisticated enough software, when will a computer actually pass the Turing test. They got the same answer -- it was basically the same question, and they got the same answer. And of course it was a bell curve, but the consensus was 50 years, which, at least if you think logarithmically, as I do, is not that different from 25 years.
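A quick way to make the logarithmic comparison concrete: measured in doublings rather than in years, the two estimates differ by only a single doubling, since

    \log_2 50 - \log_2 25 = \log_2(50/25) = \log_2 2 = 1.

That is, on a logarithmic scale 50 years sits just one octave above 25 years, even though it is twice as long on a linear scale.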
So I haven’t changed my position, but the AI community is getting closer to my view. And I’ll try to explain why I think that’s the case. It’s because of the exponential power of growth in information technology, which will affect hardware, but also will affect our understanding of the human brain, which is at least one source of getting the software of intelligence.
The other definition of consciousness is subjectivity. Consciousness is a synonym for subjectivity and really having subjective experience, not just an entity that appears to have subjective experience. And fundamentally -- and I’ll try to make this point more fully in my ten-minute presentation -- that’s not a scientific concept. There’s no consciousness detector we can imagine creating, that you’d slide an entity in -- green light goes on, OK, this one’s conscious, no, this one’s not conscious -- that doesn’t have some philosophical assumptions built into it. So John Searle would make sure that it’s squirting human neurotransmitters
BROOKS: Time’s up.
KURZWEIL: OK. And Dan Dennett would make sure it’s self-reflexive. But we’ll return to this.
[applause]
BROOKS: David?
GELERNTER: Let’s see. First, I’d like to say thanks for inviting me. My guess is that the position I’m representingthe anti-cognitivist position, broadly speakingis not the overwhelming favorite at this particular site. But I appreciate your willingness to listen to unpopular opinions, and I’ll try to make the most of it by being as unpopular as I can. [Laughter]
First, it seems to me we won’t even be able to build superintelligent zombies unless we attack the problem right, and I’m not sure we’re doing that. I’m pretty sure we’re not. We need to understand, it seems to me, and model thought as a whole -- the cognitive continuum. Not merely one or a discrete handful of cognitive styles: the mind supports a continuum or spectrum of thought styles reaching from focused analytical thought at one extreme, associated with alertness or wide-awakeness, toward steadily less-focused thought, in which our tendency to free-associate increases. Finally, at the other extreme, that tendency overwhelms everything else and we fall asleep.
So the spectrum reaches from focused analysis to unfocused continuous free association and the edge of sleep. As we move down-spectrum towards free association, naturally our tendency to think analogically increases. As we move down-spectrum, emotion becomes more important. I have to strongly agree with Ray on the importance of emotion. We speak of being coldly logical on the one hand, but dreaming on the other is an emotional experience. Is it possible to simulate the cognitive continuum in software? I don’t see why not. But only if we try.
Will we ever be able to build a conscious machine? Maybe, but building one out of software seems to me virtually impossible. First, of course, we have to say what conscious means. For my purpose, consciousness means subjectivity -- Ray’s second sense -- the presence of mental states that are strictly private, with no visible functions or consequences. A conscious entity can call up some thought or memory merely to feel happy, to enjoy the memory, be inspired or soothed or angered by the thought, get a rush of adrenaline from the thought. And the outside world needn’t see any evidence at all that this act of thought or remembering is taking place.
Now, the reason I believe consciousness will never be built out of software is that where software is executing, by definition we can separate out, peel off, a portable layer that can run in a logically identical way on any computing platform -- for example, on a human mind. I know what it’s like to be a computer executing software, because I can execute that separable, portable set of instructions just as an electronic digital computer can and with the same logical effect. If you believe that you can build consciousness out of software, you believe that when you execute the right sort of program, a new node of consciousness gets created. But I can imagine executing any program without ever causing a new node of consciousness to leap into being. Here I am evaluating expressions, loops, and conditionals. I can see this kind of activity producing powerful unconscious intelligence, but I can’t see it creating a new node of consciousness. I don’t even see where that new node would be -- floating in the air someplace, I guess.
And of course, there’s no logical difference between my executing the program and the computer’s doing it. Notice that this is not true of the brain. I do not know what it’s like to be a brain whose neurons are firing, because there is no separable, portable layer that I can slip into when we’re dealing with the brain. The mind cannot be ported to any other platform or even to another instance of the same platform. I know what it’s like to be an active computer in a certain abstract sense. I don’t know what it’s like to be an active brain, and I can’t make those same statements about the brain’s creating or not creating a new node of consciousness.
Sometimes people describe spirituality -- to move finally to the last topic -- as a feeling of oneness with the universe or a universal flow through the mind, a particular mode of thought and style of thought. In principle, you could get a computer to do that. But people who strike me as spiritual describe spirituality as a physical need or want. My soul thirsteth for God, for the living God, as the Book of Psalms says. Can we build a robot with a physical need for a non-physical thing? Maybe, but don’t count on it. And forget software.
Is it desirable to build intelligent, conscious computers, finally? I think it’s desirable to learn as much as we can about every part of the human being, but assembling a complete conscious artificial human is a different project. We might easily reach a state someday where we prefer the company of a robot from Wal-Mart to our next door neighbors or roommates or whatever, but it’s sad that in a world where we tend to view such a large proportion of our fellow human beings as useless, we’re so hot to build new ones. [laughter]
In a Western world that no longer cares to have children at the replacement rate, we can’t wait to make artificial humans. Believe it or not, if we want more complete, fully functional people, we could have them right now, all natural ones. Consult me afterwards, and I’ll let you know how it’s done. [laughter]
BROOKS: OK, great.
GELERNTER: Thank you.
KURZWEIL: You heard glimpses in David’s presentation of both of these concepts of consciousness, and we can debate them both. I think principally he was talking about a form of performance that incorporates emotional intelligence. Because emotional intelligence, even though it seems private -- and we assume that there is someone actually home there experiencing the emotions that are apparently being felt -- we can’t really tell that when we look at someone else. In fact, all that we can discuss scientifically is objective observation, and science is really a synonym for objectivity, and consciousness is a synonym for subjectivity, and there is an inherent gulf between them.
So some people feel that actual consciousness doesn’t exist, since it’s not a scientific concept, it’s just an illusion, and we shouldn’t waste time talking about it. That’s not fully satisfactory, in my view, because our whole moral and ethical and legal system is based on consciousness. If you cause suffering to some other conscious entity -- that’s the basis of our legal code and ethical values. Some people ascribe some magical or mystical property to consciousness. There were some elements of that in David’s remarks, say, in terms of talking about a new node of consciousness and how that would suddenly emerge from software.
My view is it’s an emergent property of a complex system. It’s not dependent on substrate. But that is not a scientific view, because there’s really no way to talk about or to measure the subjective experience of another entity. We assume that each other are conscious. It’s a shared human assumption. But that assumption breaks down when we go out of shared human experience. The whole debate about animal rights has to do with whether these entities are actually conscious. Some people feel that animals are just machines in the old-fashioned sense of that term -- that there’s nobody really home. Some people feel that animals are conscious. I feel that my cat’s conscious. Other people don’t agree. They probably haven’t met my cat, but (laughter)
But then the other view is apparent consciousness, an entity that appears to be conscious, and that will require emotional intelligence. There are several reasons why I feel that we will achieve that in a machine, and that has to do with the acceleration of information technology -- and this is something I’ve studied for several decades. And information technology, not just computation, but in all fields, is basically doubling every year in price-performance, capacity, and bandwidth. We certainly can see that in computation, but we can also see it in other areas: the resolution of brain scanning in 3D volume is doubling every year, and the amount of data we are gathering on the brain is doubling every year. And we’re showing that we can actually turn this data into working models and simulations of brain regions. There are about 20 regions of the brain that have already been modeled and simulated.
And I’ve actually had a debate with Tomaso Poggio as to whether this is useful, because he kept saying, well, OK, we’ll learn how the visual cortex works, but that’s really not going to be useful in creating artificial vision systems. And I said, well, when we got these early transformations of the auditory cortex, that actually did help us in speech recognition. It was not intuitive, we didn’t expect it, but when we plugged it into the front-end transformations of speech recognition, we got a big jump in performance. They hadn’t done that yet with modeling of the visual cortex. And I saw him recently -- in fact, at AI50 -- and he said, you know, you were right about that, because now they’re actually getting these early models of how the visual cortex works, and that has been helpful in artificial vision systems.
I make the case in chapter four of my book that we will have models and simulations of all the several hundred regions of the human brain within 20 years. And you have to keep in mind that the progress is exponential. So it’s very seductive. It looks like nothing is happening. People dismissed the genome project. Now we think of it as a mainstream project, but halfway through the project, only 1% of it had been done; yet the amount of genetic data doubled smoothly every year and the project was done on time. If you can factor in this exponential pace of progress, I believe we will have models and simulations of these different brain regions -- IBM is already modeling a significant slice of the cerebral cortex. And that will give us the templates of intelligence, it will expand the AI toolkit, and it’ll also give us new insights into ourselves. And we’ll be able to create machines that have more facile emotional intelligence and that really do have the subtle cues of emotional intelligence, and that will be necessary to passing the Turing test.
But that still doesn’t -- that still begets the key question as to whether or not those entities just appear to be conscious and feeling emotion or whether they really have emotional subjective experiences. David, I think, was giving a sophisticated version of John Searle’s Chinese room argument, where -- I don’t have time to explain the whole argument, but for those of you familiar with it, you’ve got a guy who’s just following some rules on a piece of paper and he’s answering questions in Chinese, and John says, well, isn’t it ridiculous to think that that system is actually conscious? Or he has a mechanical typewriter which types out answers in Chinese, but it’s following complex rules. The premise seems absurd that that system could actually have true understanding and be conscious when it’s just following a simple set of rules on a piece of paper.
Of course, the sleight of hand in that argument is that this set of rules would be immensely complex, and the whole premise is unrealistic that such a simple system could, in fact, realistically answer unanticipated questions in Chinese or any language. Because basically what the man is doing in the Chinese room, in John Searle’s argument, is passing a Turing test. And that entity would have to be very complex. And in that complexity is a key emergent property. So David says, well, it seems ridiculous to think that software could be conscious or even -- and I’m not sure which flavor of consciousness he’s using there, the true subjectivity or just apparent consciousness -- but in either case it seems absurd that a little software program could display that kind of complexity and self-emergent awareness.
But that’s because you’re thinking of software as you know it today -- not of a massively parallel system, as the brain is, with 100 trillion internal connections, all of which are computing simultaneously, and whose internal connections and neurons we can in fact model quite realistically in some cases today. We’re still in the early part of that process. But even John Searle agrees that a neuron is basically a machine and can be modeled and simulated, so why can’t we do that with a massively parallel system with 100 trillion-fold parallelism? And if that seems ridiculous, it is ridiculous today, but it’s not ridiculous with the kind of technology we’ll have with 30 more doublings of price-performance, capacity, and bandwidth of information technology -- the kind of technology we’ll have around 2030.
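For scale, 30 doublings is roughly a billion-fold improvement. A minimal back-of-the-envelope sketch, assuming only the one-doubling-per-year rate Kurzweil cites (the variable names are illustrative):

    # Rough arithmetic behind "30 more doublings" of price-performance,
    # capacity, and bandwidth; assumes one doubling per year, as Kurzweil asserts.
    doublings = 30
    factor = 2 ** doublings
    print(f"{doublings} doublings -> {factor:,}x improvement (about a billion-fold)")
    # 30 doublings -> 1,073,741,824x improvement (about a billion-fold)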
These massively parallel systems will have the complexity of the human brain, which is a moderate level of complexity, because the design of the human brain is in the genome, and the genome has 800 million bytes. But that’s uncompressed; it has massive redundancies -- the Alu sequence alone is repeated 300,000 times. If you apply lossless compression to the genome, you can reduce it to 30-50 million bytes, which is not simple, but it’s a level of complexity we can manage.
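The 800-million-byte figure is consistent with a simple encoding estimate. A back-of-the-envelope sketch, assuming the textbook values of roughly 3 billion base pairs and 2 bits per base; the 30-50 million byte compressed figure is Kurzweil’s own estimate of the non-redundant content, not something this arithmetic derives:

    # Back-of-the-envelope size of the uncompressed human genome.
    # Assumes ~3 billion base pairs and 2 bits per base (A, C, G, T).
    base_pairs = 3_000_000_000
    bits = base_pairs * 2            # 2 bits select one of four bases
    bytes_uncompressed = bits // 8   # 750,000,000 bytes, close to the cited 800 million
    print(f"~{bytes_uncompressed / 1e6:.0f} million bytes uncompressed")
    # Reducing this to 30-50 million bytes implies roughly a 15-25x compression,
    # from removing redundancy such as the heavily repeated Alu sequence.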
BROOKS: Ray, the logarithm of your remaining time is one. [laughter]
KURZWEIL: So the -- we’ll be able to achieve that level of complexity. We are making exponential progress in reverse engineering the brain. We’ll have systems that have the suppleness of human intelligence. This will not be conventional software as we understand it today. There is a difference in the (inaudible) field of technology when it achieves that level of parallelism and that level of complexity, and I think we’ll achieve that if you consider these exponential progressions. And it still doesn’t penetrate the ultimate mystery of how consciousness can emerge, true subjectivity. We assume that each other are conscious, but that assumption breaks down in the case of animals, and we’ll have a vigorous debate when we have these machines. But I’ll make one point. We will -- I’ll make a prediction that we will come to believe these machines, because they’ll be very clever and they’ll get mad at us if we don’t believe them, and we won’t want that to happen. So thank you.
BROOKS: OK. David?
GELERNTER: Well, thank you for those very eloquent remarks. And I want to say, first of all, many points were raised. The premise of John Searle’s Chinese room, and of the related thought experiment that I outlined, is certainly unrealistic. Granted, the premise is unrealistic. That’s why we have thought experiments. If the premise were not unrealistic, if it were easy to run in a lab, we wouldn’t need to have a thought experiment.
Now, the fact remains that when we conduct a thought experiment, any thought experiment needs to be evaluated carefully. The fact that we can imagine something doesn’t mean that what we imagine is the case. We need to know whether our thought experiment is based on experience. I would say the thought experiment of imagining that you’re executing the instructions that constitute a program or that realize a virtual machine is founded on experience, because we’ve all had the experience of executing algorithms by hand. It isn’t any -- there’s no exotic ingredient in executing instructions. I may be wrong. I don’t know for sure what would happen if I executed a truly enormous program that went on for billions of pages. But I don’t have any reason for believing that consciousness would emerge. It seems to me a completely arbitrary claim. It might be true. Anything might be true. But I don’t see why you make the claim. I don’t see what makes it plausible.
You mentioned massive parallelism, but massive parallelism, after all, adds absolutely zero in terms of expressivity. You could have a billion processors going, or ten billion or ten trillion or 10^81, and all those processors could be simulated on a single jalopy PC. I could run all those processes asynchronously on one processor, as you know, and what I get from parallelism is performance, obviously, and a certain amount of cleanliness and modularity when I write the program, but I certainly don’t get anything in terms of expressivity that I didn’t have anyway.
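Gelernter’s claim here is the standard observation that parallel computation can always be interleaved on a single processor: parallelism changes speed and program structure, not what can be computed. A minimal sketch of that interleaving, in which the generator-based "processes" and the round-robin scheduler are purely illustrative:

    # Any number of "parallel" processes can be simulated on one processor by
    # interleaving their steps. Each process is a generator; the scheduler
    # advances them round-robin. Same results as true parallelism, just slower.
    def process(pid, steps):
        for t in range(steps):
            yield f"process {pid}, step {t}"   # one unit of work per time slice

    def run_serially(processes):
        results = []
        while processes:
            still_running = []
            for p in processes:                # one step from each live process
                try:
                    results.append(next(p))
                    still_running.append(p)
                except StopIteration:
                    pass
            processes = still_running
        return results

    for line in run_serially([process(pid, steps=2) for pid in range(3)]):
        print(line)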
You mentioned consciousness, which is the key issue here. And you pointed out consciousness is subjective. I’m only aware of mine, you’re only aware of yours, granted. You say that consciousness is an emergent property of a complex system. Granted, of course, the brain is obviously a complex system and consciousness is clearly an emergent property. Nobody would claim that one neuron tweezed out of the brain was conscious. So yes, it is an emergent property. The business about animals and people denying animal consciousness -- I haven’t really heard that since the 18th century, but who knows, maybe there are still Cartesians out there -- raise your hands.
But in the final analysis, although it’s true that consciousness is irreducibly subjective, you can’t possibly claim to understand the human mind if you don’t understand consciousness. It’s true that I can’t see yours and you can’t see mine. It doesn’t change the fact that I know I’m conscious and you know that you are. And I’m not going to believe that you understand the human mind unless you can explain to me what consciousness is, how it’s created and how it got there. Now, that doesn’t mean that you can’t do a lot of useful things without being -- creating consciousness. You certainly can. If your ultimate goal is utilitarian, forget about consciousness. But if your goals are philosophical and scientific and you want to understand how the mind really operates, then you must be able to tell me how consciousness works, or you don’t have a theory of the human mind.
One element that I think you left out in your discussion of the thought experiment, and of the fact that, granted, we’re able to build more and more complex systems and they are more and more powerful, and we’re able to build more and more accurate and effective simulations of parts of the brain and indeed of other parts of the body -- because keep in mind that when we allow the importance of emotion in thinking, it’s clear that you don’t just think with your brain, you think with your body. When you feel an emotion, when you have an emotion, the body acts as a resonator or a sounding board or an amplifier, and you need to understand how the body works, as well as how the brain works, if you’re going to understand emotion. But granted, we’re getting -- we’re able to build more complex and more and more effective simulators.
What isn’t clear is the role of the brain’s chemical structure. The role of the brain stuff itself, of course, is a point that Searle harps on, but it goes back to a paper by Paul Ziff in the late 1950s, and many people have remarked on this point. We don’t have the right to dismiss out of hand the role of the actual chemical makeup of the brain in creating the emergent property of consciousness. We don’t know whether it can be created using any other substance. Maybe it can’t and maybe it can. It’s an empirical question.
One is reminded of the famous search that went on for so many centuries for a substitute source of the pigment ultramarine. Ultramarine is a tremendously important pigment for any painter. You get it from lapis lazuli, and there are not very many sources of lapis lazuli. It’s very expensive, and it’s a big production number to get it and grind it down and turn it into ultramarine. So ultramarine paint used to be as expensive as gold leaf. People wanted to know, where else can I get ultramarine? And they went to the scientific community, and the scientific community said, we don’t know. There’s no law that says there is some other way to get ultramarine than from lapis lazuli, but we’ll try. And at a certain point in the late 19th century, a team of French chemists did succeed in producing a fake ultramarine pigment which was indeed much cheaper than lapis lazuli. And the art world rejoiced.
The moral of the story? If you can do it, great, but you have no basis for insisting on an a priori assumption that you can do it. I don’t know whether there is a way to achieve consciousness in any way other than living organisms achieve it. If you think there is, you’ve got to show me. I have no reason for accepting that a priori. And I think I’m finished.
BROOKS: I can’t believe it. Everyone stopped -- Ray, I think -- stay up there, and we’ll -- now we’ll go back and forth in terms of, Ray, maybe you want to answer that.
KURZWEIL: So I’m struggling as I listen to your remarks, David, to really tell what you mean by consciousness. I’ve tried to distinguish these two different ways of looking at it -- the objective view, which is usually what people lapse into when they talk about consciousness. They talk about some neurological property, or they talk about self-reflection: an entity that can create models of its own intelligence and behavior and model itself, or run what-if experiments in its mind, or have imagination, thinking about itself and transforming models of itself -- this kind of self-reflection, they say, is consciousness. Or maybe it has to do with mirror neurons and the fact that we can empathize -- that is to say, understand the consciousness or the emotions of somebody else.
But that’s all objective performance. And these -- our emotional intelligence, our ability to be funny or be sad or express a loving sentiment -- those are things that the brain does. And I’d make the case that we are making progress, exponential progress, in understanding the human brain and its different regions, and modeling them in mathematical terms and then simulating them and testing those simulations. And the precision of those simulations is gearing up. We can argue about the timeframe. I think, though, within a quarter century or so, we will have detailed models -- and simulations -- that can then do the same things that the brain does, apparently. And we won’t be able to really tell them apart.
That is what the Turing test is all about -- this machine will pass the Turing test. But that is an objective test. We could argue about the rules. Mitch Kapor and I argued for three months about the rules. Turing wasn’t very specific about them. But it is an objective test and it’s an objective property. So I’m not sure if you’re talking about that or talking about the actual sense one has of feeling, your apparent feelings, the subjective sense of consciousness. And so you talk about
GELERNTER: (inaudible), could I answer that question?
BROOKS: Yeah, let (inaudible).
GELERNTER: You say there are two kinds of consciousness, and I think you’re right. I think most people, when they talk about consciousness, think of something that’s objectively visible. As I said, for my purposes, I want consciousness to mean mental states -- specifically, a mental state that has no external functionality.
KURZWEIL: But that’s still
GELERNTER: You know that you are capable of feeling or being happy. You know you’re capable of thinking of something good that makes you feel good, of thinking of something bad that makes you depressed, or thinking of something outrageous that makes you angry. You know you’re capable of mental states that are your property alone. As you say, there’s objective -- absolutely
KURZWEIL: But these mental states do have
GELERNTER: That’s what I mean by consciousness.
KURZWEIL: But these mental states still have objective neurological correlates. And in fact, we now have means whereby we can begin to look inside the brain with increasing resolution -- doubling in 3D volume every year -- to actually see what’s going on in the brain. So sitting there quietly, thinking happy thoughts and making myself happy -- you can -- there are actually things going on inside the brain, and we’re able to see them. And so now this supposedly subjective mental state is, in fact, becoming an objective behavior. Not
GELERNTER: Can I comment on that? I think you’re -- I think the idea that you’re arguing with Descartes is a straw-man approach. I don’t think anybody argues anymore that the mind is a result of mind stuff, some intangible substance that has no relation to the brain. By arguing that consciousness is objective -- I’m agreeing with you that consciousness is objective -- I’m certainly not denying that it’s created by physical mechanisms. I’m not claiming there’s some magical or transcendental metaphysical property. But that doesn’t change the fact that in terms of the way you understand it and perceive it, your experience of it is subjective. That was your term, and I’m agreeing with you. And that doesn’t change the fact that it is created by the brain.
Clearly, we’re reaching better and better understandings of the brain and of everything else. You’ve said that a few times, and I certainly don’t disagree. The fact that we’re getting better and better doesn’t mean that necessarily we’re going to reach any arbitrary goal. It depends on our methods. It depends if we understand the problem the right way. It depends if we’re taking the right route. It seems to me that consciousness is necessary. Unless we understand consciousness as this objective phenomenon that we’re all aware of, our brain simulators haven’t really told us anything fundamental about the human mind. Haven’t told us what I want to know.
KURZWEIL: I think our brain simulators are going to have to work not just at the level of the Turing test, but at the level of measuring the objective neurological correlates of these supposedly internal mental states. And there’s some information processing going on when we daydream and we think happy thoughts or sad thoughts or worry about something. The same kinds of things are going on as when we do more visibly intelligent tasks. We’re, in fact, more and more able to penetrate that by seeing what’s going on and modeling these different regions of the brain, including, say, the spindle cells and the mirror neurons, which are involved with things like empathy and emotion -- which are uniquely human, although a few other animals have some of them -- and really beginning to model this.
We’re at an early stage, and it’s easy to ridicule the primitiveness of today’s technology, which will always appear primitive compared to what will be feasible, given the exponential progression. But these internal mental states are, in fact, objective behaviors, because we will need to expand our definition of objective behavior to include the kinds of things that we can see when we look inside the brain.
GELERNTER: If I could comment on that? If your tests are telling us that they are unable to distinguish -- that the same thing creates, on the one hand, a sharply focused mental state, in which I’m able to concentrate on a problem without my mind drifting and solve it, and, on the other hand, a mental state in which my mind is wandering, I am unable to focus or concentrate on what I’m doing, and then I start dreaming -- in fact, cognitive psychologists have found that we start dreaming and then we fall asleep. If your tests or your simulators are unable to distinguish between the mental state of dreaming or continuous free association on the one hand and focused logical analytic problem-solving on the other, then I think you’re just telling us that your tests have failed, because we know that these states are different and we want to know why they’re different. It doesn’t do any good to say, well, they’re caused in the same way. We need to explain the difference that we can observe.
BROOKS: Can I ask a question which I think gets at what this disagreement is about? Then I’ll ask you two different questions. The question for David is, what would it take to convince you so that you would accept that you could build a conscious computer built on a digital substrate? And Ray, what would it take to convince you that digital stuff isn’t good enough, that we need some other chemicals or something else that David talked about?
KURZWEIL: To answer it myself, I wouldn’t get too hung up on digital, because, in fact, the brain is not digital. The neurotransmitters are kind of a digitally controlled analog phenomenon. But when we figure out the salient -- the important thing is to figure out what is salient and how information is modeled and what these different regions are actually doing to transform information.
The actual neurons are very complex. There’s lots of things going on, but we find out that one region of the auditory cortex is basically conducting a certain type of algorithm -- the information is represented perhaps by the location of certain neurotransmitters in relation to one another -- whereas in another case it has to do with the production of some unique neurotransmitter. There are different ways in which the information is represented. And these are chemical processes, but we can model really anything like that at whatever level of specificity is needed digitally. We know that. We can model it analog
BROOKS: OK, so you didn’t answer the question. Can you then answer the question? (laughter)
GELERNTER: I will continue in exactly the same spirit, by not answering the question. I wish I could answer the question. It is a very good question and a deep question. Given the fact that mental states that are purely private are also purely subjective, how can we tell when they are present? And the fact is, just as you don’t know how to produce them, I don’t know how to tell whether they are there. It’s a research question, it’s a philosophical question.
It’s -- we know how to understand particular technologies. That is, we say I’ve created consciousness and I’ve done it by running software on a digital computer. I can think about that and say I don’t buy that, I don’t believe there’s consciousness there. If you wheel in some other technology, my only stratagem is to try to understand that new technology. I need to understand what you’re doing, I need to understand what moves you’re making, because unfortunately I don’t know of any general test. The only test that one reads about or hears about philosophically is relevant similarity -- that is, we assume that our fellow human beings are conscious, because we can see they’re people like us. We assume that if I have mental states, other similar creatures have mental states. And we make that same assumption about animals. And the more similar to us they seem, the more we assume their mental states are like ours.
How are we going to handle creatures -- or things or entities, objects -- that are radically unlike us and are not organic? It’s a hard question and an interesting question. I’d like to see more work done on it.
KURZWEIL: In some ways, they’ll be more like us than animals are, because animals are not perfect models of humans, either medically or mentally. Whereas if we really reverse-engineer what’s going on -- the salient processes -- and learn what’s important in the different regions of the brain, and recreate those properties and abilities to transform information in similar ways, and then get an entity that in fact acts very human-like, a lot more human-like than an animal, and can, for example, pass a Turing test, which involves mastery of language, which animals for the most part basically don’t have, then they will be closer to humans than animals are.
If we really model -- take an extreme case. I don’t think it is necessary to model neuron by neuron and neurotransmitter by neurotransmitter, but one could in theory do that. And we do, in fact, have simulations of neurons that are highly detailed already -- of one neuron or a cluster of three or four of them. So why not extend that to 100 billion neurons? It’s theoretically possible, and it’s a different substrate, but it’s really doing the same things. And it’s closer to humans than animals are.
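As a very small illustration of what modeling a neuron digitally can mean at the crudest level of specificity, here is a leaky integrate-and-fire unit, one standard simplified abstraction from the simulation literature; the parameter values are illustrative defaults, not drawn from the detailed simulations Kurzweil mentions:

    # A leaky integrate-and-fire neuron: a deliberately simple digital model.
    # tau (leak time constant), threshold, and reset values are illustrative only.
    def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                     v_threshold=1.0, v_reset=0.0):
        """Return the membrane-potential trace and spike times for an input trace."""
        v = v_rest
        trace, spikes = [], []
        for step, drive in enumerate(input_current):
            v += dt * (-(v - v_rest) / tau + drive)   # leak toward rest, plus input
            if v >= v_threshold:                      # fire and reset
                spikes.append(step * dt)
                v = v_reset
            trace.append(v)
        return trace, spikes

    trace, spikes = simulate_lif([0.08] * 200)        # constant input for 200 steps
    print(f"{len(spikes)} spikes, first few at t = {spikes[:3]}")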
BROOKS: So while David responds, if people who want to ask questions can come to the two microphones. Go ahead.
GELERNTER: When you say act very human-like, this is a key issue. You have to keep in mind that the Turing test is rejected by many people, and has been from the very beginning, as a superficial test of performance, a test that fails to tell us anything about mental states, fails to tell us the things that we really most want to know. So when you say something acts very human-like, that’s exactly what we don’t do when we attribute the presence of consciousness on the basis of relevant similarity.
When I see somebody, even if he isn’t acting human-like at all, if he’s fast asleep, even if he’s out cold, I don’t need to see him do anything, I don’t need to have him answer any fancy questions on the Turing test. I can see he’s a creature like I am, and I therefore attribute to him a mind and believe he’s capable of mental states. On the other hand, the Turing test, which is a test of performance rather than states of being, has been -- has certainly failed to convince people who are interested in what you would call the subjective kind of consciousness.
KURZWEIL: Well, I think now we’re
GELERNTER: That doesn’t tell me anything about
KURZWEIL: Well, now I think we’re getting somewhere, because I would agree. The Turing test is an objective test. And we can argue about making it super-rigorous and so forth, but -- and if an entity passed that test, the super-rigorous one, it is really convincingly human. It’s convincingly funny and sad, and we really -- it is really displaying those emotions in a way that we cannot distinguish from human beings. But you’re right -- I mean, this gets back to a point I made initially. That doesn’t prove that that entity is conscious, and we don’t absolutely know that people are conscious. I think we will come to accept them as conscious. That’s a prediction I can make. But fundamentally, this is the underlying ontological question.
There is actually a role for philosophy, because it’s not fundamentally a scientific question. If you reject the Turing test or any variant of it, then we’re just left with this philosophical issue. My own philosophical take is if an entity seems to be conscious, I would accept its consciousness. But that’s a philosophical and not a scientific position.
BROOKS: So I think we’ll take the first question. And remember, not a monologue, something to provoke discussion.
M: Yeah, no problem. Let’s see. What if everything is conscious and connected, and it’s just a matter of us learning how to communicate and connect with it?
KURZWEIL: That’s a good point, because we can communicate with other humans, to some extent -- although history is full of examples where we dehumanize a certain portion of the population and don’t really accept their conscious experience -- and we have trouble communicating with animals, so that really underlies the whole animal rights debate. What’s it like to be a giant squid? Their behavior seems very intelligent, but it’s also very alien, and we don’t -- there’s no way we can even have the terminology to express that, because those are not experiences that are human. And that is part of the deep mystery of consciousness and gets at the subjective aspects of it.
But as we do really begin to model our own brain and then extend that to other species, as we’re doing with the genome (we’re now trying to reverse-engineer the genome in other species, and we’ll do the same thing ultimately with the brain), that will give us more insight. We can translate into our own human terms the kinds of mental states we see manifested, as we really come to understand how to model other brains.
GELERNTER: If we think we are communicating with a software-powered robot, we’re kidding ourselves, because we’re using words in a fundamentally different way. To use an example that Turing himself discusses, we could ask the computer or the robot, do you like strawberries, and the computer could lie and say yes or it could, in a sense, tell the truth and say no. But the more fundamental issue is that not only does it not like strawberries, it doesn’t like anything. It’s never had the experience of liking, it’s never had the experience of eating. It doesn’t know what a strawberry is or any other kind of berry or any other kind of fruit or any other kind of food item. It doesn’t know what liking is, it doesn’t know what hating is. It’s using words in a purely syntactic way with no meanings behind them.
KURZWEIL: This is now the Searlean argument, and John Searle’s argument can be really rephrased to prove that the human being has no understanding and no consciousness, because each neuron is just a machine. Instead of just shuffling symbols, it’s just shuffling chemicals. And obviously, just shuffling chemicals around is no different than shuffling symbols around. And if shuffling chemicals and symbols around doesn’t really lead to real understanding or consciousness, then why isn’t that true for a collection of 100 neurons, which are all just little machines, or 100 billion?
GELERNTER: There’s a fundamental distinction, which is software. Software is the distinction. I can’t download your brain onto the computer up there
KURZWEIL: Well, that’s just a limitation of my brain, because we don’t have quick downloading ports.
GELERNTER: You need somebody else’s brain in the audience?
KURZWEIL: No, that’s something that biology left out. We’re just not going to leave that out of our non-biological base.
GELERNTER: It turns out to be an important point. It’s the fundamental issue
KURZWEIL: It’s a limitation, not
GELERNTER: I think there’s a very big difference whether I can take this computer and upload it to a million other computers or to machines that are nothing like this digital computer, to a Turing machine, to an organic computer, to an optical computer. I can upload it to a class full of freshmen, I can upload it to all sorts of things. But your mind is yours and will never be downloaded (multiple conversations; inaudible)
KURZWEIL: That’s just because we left
GELERNTER: It’s stuck to your brain.
KURZWEIL: We left out the
GELERNTER: And I think that’s a thought-provoking fact. I don’t think you can just dismiss it as an
KURZWEIL: You’re posing that as a
GELERNTER: ...an environmental, a developmental accident. Maybe it is, but
KURZWEIL: You’re posing that as a benefit and advantage of biological intelligence, that we don’t have these quick downloading ports to access information
GELERNTER: Not an advantage. It’s just a fact.
KURZWEIL: But that’s not an advantage. If we added quick downloading ports, which we will add to our non-biological brain emulations, that’s just an added feature. We could leave it out. But if we put it in there, that doesn’t deprive it of any capability that it would otherwise have.
GELERNTER: You think you could upload your mind to somebody with a different body, with a different environment, who had a different set of experiences, who had a different set of books, feels things in a different way, has a different set of likes, responds in a different kind of way, and get an exact copy of you? I think that’s a naïve idea. I don’t think there’s any way to upload your mind anywhere else unless it lets you upload your entire being, including your body.
KURZWEIL: Well, it’s hard to upload to another person who already has a brain and a body; that’s like trying to upload to a machine that’s incompatible. But ultimately we will be able to gather enough data on a specific brain and simulate that, including our body and our environmental influences.
BROOKS: Next question.
M: Thanks. If we eventually develop a machine which appears intelligent, and let’s say give it an appropriate body so that it can answer meaningful questions about how a strawberry tastes or something like that, or whether it likes strawberries, and if we are wondering whether this machine is actually experiencing consciousness the same way that we do, why not just ask it? It’ll presumably have no reason to lie if you haven’t specifically gone out of your way to program that in.
KURZWEIL: Well, that doesn’t tell us anything, because we can ask it today. You can ask a character in a video game and it will say, well, I’m really angry or I’m sad or whatever. And we don’t believe it, because it’s not very convincing yet, because it doesn’t have the subtle cues and it’s not as complex and not a realistic emulation of
M: Well, if we built 1000 of them, let’s say
GELERNTER: I strongly agree with (inaudible)
M: ...presumably they wouldn’t all agree to lie ahead of time. Somebody, one of them, might tell us the truth if the answer is no.
BROOKS: We’ll finish that question (multiple conversations; inaudible)
GELERNTER: I strongly agree. Keep in mind that the whole basis of the Turing test is lying. The computer is instructed to lie and pass itself off as a human being. Turing assumes that everything it says will be a lie. He doesn’t talk about the real deep meaning of lying, or he doesn’t care about that, and that’s fine, that’s not his topic. But it’s certainly not the case that the computer is in any sense telling the truth. It’s telling you something about its performance, not something about facts or reality or the way it’s made or what its mental life is like.
KURZWEIL: John Searle, by the way, thinks that a snail could be conscious if it had this magic property, which we don’t understand, that causes consciousness. And when we figure it out, we may discover that snails have it. That’s his view. So I do think that
GELERNTER: Do you think it’s inherently implausible that we should need a certain chemical to produce a certain result? Do you think chemical structure is irrelevant?
KURZWEIL: No, but we can simulate chemical interactions. We just simulated the other day something that people said will never be able to be simulated, which is protein folding. And we can now take an arbitrary amino acid sequence and actually simulate and watch it fold up, and it’s an accurate simulation (multiple conversations; inaudible)
GELERNTER: You understand it, but you don’t get any amino acids out. As Searle points out, if you want to talk Searlean, you can simulate photosynthesis and no photosynthesis takes place. You can simulate a rainstorm, nobody gets wet. There’s an important distinction. Certainly you’re going to understand the process, but you’re not going to produce the result
KURZWEIL: Well, if you simulate creativity, you’ll get real ideas out.
BROOKS: Next... sure.
M: So up until this point, there seems to have been a lot of discussion about either just software or just a human or whatnot. But I’m kind of curious about your thoughts towards more of a gray area, if it’s possible. That is, if we in some way augment the brain with some sort of electronic component, or somebody has some sort of operation to add something to them. I don’t think it’s been done yet today, but is it possible to have what you would consider to be a fully conscious human take part of the brain out, say, replace it with something to do a similar function, and then obviously have the person still survive? Is that person conscious? Is it (inaudible)?
KURZWEIL: Absolutely. And we’ve done things like that, which I’ll mention. But I think, in fact, that is the key application, or one key application, of this technology. We’re not just going to create these superintelligent machines to compete with us from over the horizon. We’re going to enhance our own intelligence, which we do now with the machines in our pockets, and when we put them in our bodies and brains, we’ll enhance our bodies and brains with them.
But we are applying this to medical problems. You can get a pea-sized computer placed in your brain or placed at biological neurons (inaudible) Parkinson’s disease. And in fact, the latest generation now allows you to download new software to your neural implant from outside the patient, and that does replace the function of the corpus of biological neurons. And now you’ve got biological neurons in the vicinity getting signals from this computer where they used to get signals from the biological neurons, and this hybrid works quite well. And there are about a dozen neural implants, some of which are getting more and more sophisticated, in various stages of development.
So right now we’re trying to bring back "normal" function, although normal human function is in fact a wide range. But ultimately we will be sending blood cell-sized robots into the bloodstream non-invasively to interact with our biological neurons. And that sounds very fantastic, but I point out there are already four major conferences on blood cell-sized devices that can produce therapeutic functions in animals, and we don’t have time to discuss all that, but we will
BROOKS: Let’s hear David’s response.
GELERNTER: When you talk about technological interventions that could change the brain, it’s a remarkable, a fascinating topic, and it can do a lot of good. And one of the really famous instances of that is the frontal lobotomy, an operation invented in the 1950s or maybe the late 1940s. It made people feel a lot better, but somehow it didn’t really catch on, because it bent their personality out of shape. So the bottom line is that not everything we do, not every technological intervention that affects your mental state, is necessarily going to be good.
Now, it is a great thing to be able to come up with something that cures a disease, makes somebody feel better. We need to do as much of that as we can, and we are. But it’s impossible to be too careful when you fool around with consciousness. You may make a mistake that you will regret. And a lobotomy can’t be undone.
BROOKS: I’m afraid this is going to be the last question.
M: How close do the brain simulation people think they are to the right architecture, and how do they know it? You made the assertion that you don’t need to simulate the neurons in detail, and that the IBM people are simulating a slice of neocortex and that’s good. And I think that is good, but do they have a theory that says this architecture is good, this architecture is not good enough? How do they measure it?
KURZWEIL: Well, say, in the case of the simulation of a dozen regions of the auditory cortex done on the West Coast, they’ve applied sophisticated psychoacoustic tests to the simulation and they get very similar results as applying the same test to human auditory perception. There’s a simulation of the cerebellum where they apply skill formation tests. It doesn’t prove that these are perfect simulations, but it does show it’s on the right track. And the overall performance of these regions appears to be doing the kinds of things that we can measure, that the biological versions do. And the scale and sophistication and resolution of these simulations is scaling up.
The IBM one on the cerebral cortex is actually going to do it neuron by neuron and ultimately at the chemical level, which I don’t believe is actually necessary. Ultimately, to actually create those functions, when we learn the salient algorithms, we can basically implement them using our computer science methods more efficiently. But that’s a very useful project to really understand how the brain works.
GELERNTER: I’m all in favor of neural simulations. I think one should keep in mind that we don’t think just with our brains, we think with our brains and our bodies. Ultimately, we’ll have to simulate both. And we also have to keep in mind that unless our simulators can tell us not only what the input/output behavior of the human mind is, but how it understands and how it produces consciousness, unless they can tell us where consciousness comes from, it’s not enough to say it’s an emergent phenomenon. Granted, but how? How does it work? Unless those questions are answered, we don’t understand the human mind. We’re kidding ourselves if we think otherwise.
BROOKS: So with that, I think I’d like to thank both Ray and David. [applause]
Today nature has slipped, perhaps finally, beyond our field of vision.
–O. B. Hardison Jr.
Now after six million years of evolution, where do we go next? How will evolution, our newly arrived intellect, our primal drives and the powerful technologies we continually create, change us?
Our current situation is unlike anything nature has seen before because we are not simply a by-product of evolution, we are ourselves now an agent of evolution. We are this animal, filled with ancient emotions and needs, amplified by our intellects and a conscious mind, embarking on a new century where we are creating fresh tools and technologies so rapidly that we are struggling to keep pace with the very changes we are bringing to the table.
Where will this lead? Will we develop new brain modules, new appendages, revamped capabilities just as we have over the past six million years? Absolutely, but probably not in the way we suspect. It appears, if we look closely, that the DNA that has been such a perfect ally in the evolution of life, may itself be in for a revamping. Evolution may be prowling for a new partner. And the partner may be us, or at least the technologies we make possible.
The irony is that it takes a being like us, a human being, to bring about change this fundamental. The job requires an amalgamation of high intelligence and emotion, conscious intent, primal drives and great quantities of knowledge made possible by minds that can communicate in highly complex ways. If you pulled any one of these out, the future, at least one involving intelligent, conscious creatures like us, would fall apart. It takes not just cleverness, but passion, sometimes fear, fired by focused intention, to create and invent. Without this combination there would be no technologies, no wheels or steam engines or nuclear bombs or computers. And there would be nothing like the world we live in today. At best we would still be huddled in the black African night, eking out whatever existence the predators waiting in the darkness around us would allow. Not even fire would be our friend.
But the traits that have shaped us into the human beings we are have endowed us with strange abilities, and they are hurtling us into a future radically unlike the past out of which we have emerged. And that future will be profoundly different from anything most of us can imagine.
Take the thinking of Hans Moravec as an object lesson. Moravec is a highly respected robotics scientist at Carnegie Mellon University. In the late 1980s, he quietly passed his spare time writing a book that predicted the end of the human race. The book, entitled Mind Children, didn’t predict that we would destroy ourselves with nuclear weapons or rampant, self-inflicted diseases, or undo the species with self-replicating nanotechnology. Instead, Moravec, who had an abiding and life-long fascination with intelligent machines, predicted we would invent ourselves out of existence, and robots would be the technology of choice.
In a subsequent book (Robot: Mere Machine to Transcendent Mind) Moravec explained that this transformation would unfold one technological generation at a time, and, because of the blistering rate of change today, would pretty much run its course by the middle of the 21st century. We would manage this by boosting robots up the evolutionary ladder, roughly in decade-long increments, making them smarter, more mobile, more like us. First they would be as intelligent as insects or a simple guppy (we are about there right now), then lab rats, then monkeys and chimps until, finally, one day, the machines would become more adept and adaptive than their makers. That, of course, would quickly raise the question: Now who is in charge? Would Homo sapiens, after some 200,000 years living on top of the planet’s food chain, no longer rule the roost? Would we, in the cramped space of this evolutionary ellipsis, find ourselves playing Neanderthal to technologies that had become, like us, self-aware: the first conscious tools built by a conscious toolmaking creature?
The unavoidable answer would be yes. Evolution will have found through us a new way to make a new creature, one that could forsake its ladders of DNA and the fragile, carbon-based biology that nature had been using for nearly four million millennia to manage the job.
The end would not come in the form of a Terminator-style invasion, it would simply unfold in the natural course of evolutionary events where one species, better adapted to its environment, replaces another that is no longer very fit to continue. Except the new species wouldn’t be cobbled out of DNA, it would be fashioned from silicon, alloy, and who knows what else, invented by us. But once successfully invented, we wouldn’t be necessary any more.
Whether events will play out like this or not remains to be seen. But Moravec’s scenario makes a point: the world and the life upon it change, and simply because we are the agents of change doesn’t mean we won’t be affected by it.
***
It is strange to think of the invention of machines, even robotic ones, as having anything to do with Darwin’s natural selection. We usually regard evolution as biological: a world of cells, DNA and living creatures. And we think of our machines as unalive, unintelligent and shifted by economic forces more than natural ones. But it isn’t written anywhere that evolution has to be constrained by what we traditionally think of as biology. In fact, each day the lines between biology and technology, between humans and the machines we create, are blurring. We are already part and parcel of our technology.
Since the day Homo habilis whacked his first flint knife out of flakes of flint, it has been difficult to know whether we invented our tools or our tools invented us. The world economy would crash if its computer systems failed. We can’t live without laptops, palmtops, cell phones or iPods, which grow continually smaller and more powerful. We regularly engineer genes, despite the raging debates over stem cell therapy. A human being will very likely be cloned within the next five years. We now have computer processors working at the nano (molecular) level and microelectromechanical machines (MEMS) that operate at cellular dimensions. Already electronic prosthetics make direct connections with human nerves, and electronic brain implants for Parkinson’s disease and weak hearts are commonplace. Scientists are even experimenting with electronic, implantable eyes. New clothing weaves digital technologies into its fiber and brings them a step closer to being a part of us. The military are working on battlesuits that will fit like gloves, a kind of second skin, and amplify a soldier’s senses, strength and ability to communicate, even triangulate the direction of a bullet headed his or her way.
What next? Speech, writing and art enabled us to share inner feelings in new and powerful ways. But it takes months or years to learn a new language or how to play the piano or master the art of engineering bridges and buildings. Will new technologies that accelerate communication (virtual reality, telepresence, digital implants, nanotechnology) create new ways to communicate that can by-pass speech? Will we someday communicate by a kind of digital telepathy, downloading information, experiences, skills, even emotions the way we download a file from the Internet to our laptop? Will we become machines, or will machines become more powerful versions of us? And if any of this comes to pass, what ethical issues do we face? At what point do we stop being human?
Lynn Margulis, probably the world’s leading microbiologist, has argued that this blurring of technology and biology isn’t really all that new. She has observed1 that the shells of clams and snails are a kind of technology dressed in biological clothing. Is there really that much difference between the vast skyscrapers we build or the malls in which we shop, even the cars we drive around, and the hull of a seed? Seeds and clam shells, which are not alive, hold in them a little bit of water and carbon and DNA, ready to replicate when the time is right, yet we don’t distinguish them from the life they hold. Why should it be any different with office buildings, hospitals and space shuttles?
Put another way, we may make a distinction between living things and the tools those things happen to create, but nature does not. The processes of evolution simply witness new adaptations and preserve those that perform better than others. That would make Homo habilis’s first flint knife a form of biology as sure as a clamshell, one that set our ancestors on a fresh evolutionary path just as if their DNA had been tweaked to create a new, physical mutation, say an opposable thumb or a big toe.
Even if these technological adaptations were outside what we might consider normal biological bounds, the effect was just as profound, and far more rapid. In an evolutionary snap, that first flint knife changed what we ate and how we interacted with the world and one another. It enhanced our chances of survival. It accelerated our brain growth, which in turn allowed us to create still more tools, which led to yet bigger brains. And on we went, continually and with increasing speed and sophistication, fashioning progressively more complex technologies right up to the genetic techniques that enable us to fiddle with the self-same ribbons of our chromosomes that made the brains that conceived tools in the first place. If this is true, all of our technologies are an extension of us, and each human invention is really another expression of biological evolution.
Moravec and Margulis aren’t alone in asking questions that force us to bend our traditional thinking about evolution. Scientist and inventor Ray Kurzweil has, like Moravec, pointed out that the rate of technological change is increasing at an exponential rate. Also like Moravec, he foresees machines as intelligent as we are evolving by mid-century. Unlike Moravec he doesn’t necessarily believe they will arrive in the form of robots.
Initially Kurzweil sees us reengineering ourselves genetically so that we will live longer and healthier lives than the DNA we were born with might normally allow. We will first rejigger genes to reduce disease, grow replacement organs, and generally postpone many of the ravages of old age. This, he says, will get us to a time late in the 2020s when we can create molecule-sized nanomachines that we will program to tackle jobs our DNA never evolved naturally to undertake.
Once these advances are in place we will not simply slow aging, but reverse it, cleaning up and rebuilding our bodies molecule by molecule. We will also use them to amplify our intelligence, nestling them among the billions of neurons that already exist inside our brains. Our memories will improve; we will create entirely new, virtual experiences, on command, and take human imagination to levels our currently unenhanced brains can’t begin to conceive.2 In time (but pretty quickly) we will reverse-engineer the human brain into a vastly more powerful, digital version.
This view of the future isn’t fundamentally different from Moravec’s brain-to-robot download, except that it is more gradual. Either way we will have melded with our technology, if, in fact, those barriers ever really existed in the first place, and in the end erased the lines between bits, bytes, neurons and atoms.
Or looked at another way, we will have evolved into another species. We will no longer be Homo sapiens, but Cyber sapiens: a creature part digital and part biological that will have placed more distance between its DNA and the destinies it forces upon us than any other animal. And we will have become a creature capable of steering its own evolution (cyber derives from kybernetes, the Greek word for a ship’s steersman or navigator). The world will face an entirely new state of affairs.
Why would we allow ourselves to be displaced? Because in the end, we won’t really have a choice. Our own inventiveness has already unhinged our environment so thoroughly that we are struggling to keep up. In a supreme irony we have created a world fundamentally different from the one into which we originally emerged. A planet with six and a half billion creatures on it, traveling in flying machines every day by the millions, their minds roped together by satellites and fiber optic cable, rearranging molecules on the one hand and leveling continents of rain forest on the other, growing food and shipping it overnight by the trillions of tons: all of this is a far cry from the hunter-gatherer, nomadic life for which evolution had fashioned us 200,000 years ago.
So it seems the long habit of our inventiveness has placed us in a pickle. In the one-upmanship of evolution, our tools have rendered the world more complex and that complexity requires the invention of still more complex tools to help us keep it all under control. Our new tools enable us to adapt more rapidly, but one advance begs the creation of another, and each increasingly powerful suite of inventions shifts the world around us so powerfully that still more adaptation is required.
The only way to survive is to move faster, get smarter, change with the changes, and the best way to do that is to amplify ourselves, eventually right out of our own DNA, so we can survive the new environments, physical, emotional and mental, that we keep recreating.
Is all of this too implausible to consider? Will Homo sapiens really give way to Cyber sapiens that seamlessly integrate the molecular and digital worlds just as our ancestors merged the technological and biological worlds two million years ago? Evolution has presided over stranger things. It took billions of years before the switching and swapping of genes brought us into existence. Our particular brain then took 200,000 years to get us from running around in skins with stone weapons to the world we live in today. Evolution is all about the implausible. And the drive to survive is a relentless shaper of the seemingly impossible. We ourselves are the best proof.
If all of this should happen, if DNA itself goes the way of the dinosaur, what sort of creature will Cyber sapiens be? In some ways we can’t know the answer any more than Homo erectus could imagine how his successors would someday create movies, invent computers and write symphonies. Our progeny, our mind children, will certainly be more intelligent, with brains that are both massively parallel, like the current version we have, and unimaginably fast. But what of those primal drives that we carry inside our skulls, and those non-verbal, unconscious ways of communicating? What of laughter and crying and kissing? Will Cyber sapiens know a good joke when he hears one, or smile appreciatively at a fine line of poetry? Will he tousle the machine-made hair of his offspring, hold the hand of the one he loves, kiss soulfully, wantonly and uncontrollably? Will there be a difference between the brains and behaviors of him and her? Will there even be a he and a she? And what of pheromones and body language and nervous giggles? Maybe they will have served their purpose and gone away. Will Cyber sapiens sleep, and if they do, will they dream? Will they connive and gossip, grow mad with jealousy, plot and murder? Will they carry with them a deep, if machine-made, unconscious that is the dark matter of the human mind, or will all of those primeval secrets be revealed in the bright light cast by their newly minted brains?
We may face these questions sooner than we imagine. The future gathers speed every day.
I’d like to think the evolutionary innovations and legacies that have combined to make us so remarkable, and so human, won’t be left entirely behind as we march ahead. Perhaps they can’t be. After all, evolution does have a way of working with what is already there, and even after six million years of wrenching change, we still carry with us the echoes of our animal ancestors. Maybe the best of those echoes will remain. After all, as heavy as some baggage can be, preserving a few select pieces might be a good thing, even if we are freaks of nature.
1. This was during a conversation with Professor Margulis at her home in western Massachusetts.
2. Note: the current version of a creature can never comprehend the experience of the creature that will follow, because it does not yet have the evolved capacity (whatever it is) that will make that experience possible. We cannot accurately imagine what a digitally enhanced brain will conceive any more than Homo erectus could imagine our experience of the world.
© 2006 Chip Walter. Reprinted with permission.
Interview by Gregory T. Huang
Technology doesn’t make everyone happy. Just ask computer scientist Bill Joy, who has pioneered everything from operating systems to networking software. These days the Silicon Valley guru is best known for preaching about the perils of technology with a gloom that belies his name. Joy’s message is simple: limit access to information and technologies that could put unprecedented power into the hands of malign individuals (what is sometimes called asymmetric warfare). He is also translating that message into action: earlier this year, his venture-capital firm announced a $200 million initiative to fund projects in biodefence and preparation for pandemics. Gregory T. Huang caught up with Joy at the recent Technology Entertainment Design conference in Monterey, California.
Do you think your fears about technological abuse have been proven right since your Wired essay?
When I wrote that essay in 2000, I was very concerned about the potential for abuse. Throughout history, we dealt with individuals through the Ten Commandments, cities through individual liberty, and nation states through mutual non-aggression plus an international bargain to keep the peace. Now we face an asymmetric situation where technology is so powerful that it extends beyond nations to individuals — some with revenge on their minds. On 11 September 2001 I was living in New York City. Our company had a floor in a building that went down. I had a friend on a plane that crashed. That was a huge warning about asymmetric warfare and terrorism.
Did we learn the right lesson?
We can’t give up the rule of law to fight an asymmetric threat, which is what we seem to be doing at the moment, because that is to give up what makes us a civilisation. A million-dollar act causes a billion dollars’ damage and then a trillion-dollar response that makes the problem worse. September 11 was essentially a collision of early 20th-century technology: the aeroplane and the skyscraper. We don’t want to see a collision of 21st-century technology.
What would that sort of collision look like?
A recent article in Science said the 1918 flu is too dangerous to FedEx: if you want to work on it in a lab, just reconstruct it yourself. We can do this because new technologies tend to be digital. You can download gene sequences of pathogens from the internet. So individuals and small groups super-empowered by access to self-replicating technologies are clearly a danger. They can cause a pandemic.
Why do pandemics pose such a huge danger?
AIDS is a sort of pandemic, but it moves slowly. We don’t have much experience with the fast-moving varieties. We are not very good as a society at adapting to things we don’t have gut-level experience with. People don’t understand the magnitude of the problem: in terms of the number of deaths, there’s a factor of 1000 between a pandemic and a normal flu season. Public policy has not been constructive, and scientists continue to publish pathogen sequences, which is really quite dangerous.
Why is it so dangerous?
If in turning AIDS into a chronic disease, or making cocktails of antivirals for flu, or using systems biology to construct broad-spectrum cures for many diseases, we make the tools universally available to people of bad intent, I don’t know how we will defend ourselves. We have only a certain amount of time to come to our senses and realise some information has to be handled in a different way. We can reduce the risk greatly without losing much of our ability to innovate. I understand why scientists are reluctant, but it’s the only ethically responsible thing to do.
So more technology is making the problem worse?
Unfortunately, yes. We need more policy.
What would that look like?
We could use the very strong force of markets. Rather than regulate things, we could price catastrophe into the cost of doing business. Right now, if you want approval for things, you go through a regulatory system. If we used insurance and actuaries to manage risk, we might have a more rational process. Things judged to be dangerous would be expensive, and the most expensive would be withdrawn. Drugs would make it to market on economic estimates of risk, not regulatory evaluations of safety. This process could also be used to make companies more liable for the environmental consequences of their products. It’s both less regulation and more accountability.
How are you combating the threat of pandemics?
We recently raised $200 million for biodefence and pandemic preparedness. We have started out focusing on bird flu. We need several antivirals, better surveillance, rapid diagnostics and new kinds of vaccines that can be manufactured quickly. If we fill these gaps, we can reduce the risk of a pandemic.
Do other technological advances excite you?
I have great confidence that we will extend the limits of Moore’s law to give us another factor of 100 in what computer chips can do. If a computer costs $1000 today, we can have that for $10 in 2020. The challenge is: will we develop educational tools to take advantage of such devices? That’s a great force for peace.
Another area that gives us hope is new materials. The world’s urban population is expected to more than double to 6 billion this century. We need clean water, energy and transportation. Carbon nanotubes have incredible properties, and can be applied to develop fuel cells, make clean water, or make ethanol for electric-powered transport. My company has dedicated $100 million to this.
How do you see the increasing connectedness of human societies affecting innovation?
It’s diffusing ideas at an incredible rate. You can use communications and search tools and find out incredible things. You see companies doing interesting things, and you can find out huge amounts very quickly. We can write a worldwide research briefing paper in an hour if we shut the door and unplug the telephone. That’s something you couldn’t do before.
What’s the downside?
It’s like putting a stick in a hornet’s nest. We have religious and secular societies coming into contact, pre-Enlightenment values conflicting with Enlightenment values. It will be a messy process of change. Technology has brought western pop culture to the rest of the world. I’m not a fan of it, but the values it has brought to the world actually offend people in cultures that have been around for longer than my particular set of world views.
Will the human race survive the next 100 years?
We have to make it through a pandemic to understand the nature of that sort of threat. Whether we do that before we unleash the technology, I’m not sure. Either way, I don’t believe we will become extinct this century, though we could make a pretty big mess. I hope we can do some sensible things. It is not enough to do great science and technology, we need sensible policy. We still think that if we find true things and publish them, good things happen. We should not be that naive.
If you could ask the god of technology one question, what would it be?
It seems that a perfect immune system is a disadvantage. If you are perfectly immune, you cannot evolve. A lot of evolution occurs because of selective pressure that your perfect immune system would prevent. This would leave the abusers of biotechnology with the advantage over the defenders, because society needs to be vulnerable so it can evolve. My question is, is that true, because it would prove that we had better limit access to some information. It would mean not only that we cannot make a perfect immune system, but that it would be a bad idea.
© 2006 New Scientist
The development of molecular nanotechnology (MNT) promises to lead rapidly to cheap superior replacements for a large majority of durable goods, a substantial fraction of all non-durable goods, all existing utilities, and some services. For this reason, and due to the relatively low expected cost of developing nanofactories,1 MNT represents the largest commercial opportunity of all time. Unfortunately, the very size of the opportunity, combined with its extreme suddenness, military significance, potential for disruption of existing institutions, and ease of duplication, creates certain severe complications that lead to difficulties in capturing the value created.
MNT also has the potential to impact the timeframes and severities of a number of major global risks such as those of terrorism, emergent disease, global warming, omnicidal war, and human extinction due to competition by either intelligent or unintelligent robotic competitors, for which reason there are important non-commercial motivations for preventing its unrestricted utilization. As a result of these difficulties and of the intrinsic uncertainty associated with any particular attempt to develop MNT, commercial development of MNT is likely to be much less rapid than would be predicted from a simple consideration of the value to be created, relevant time horizon, and risk adjusted discount rate.
Despite this, it remains highly probable that MNT will first be realized by a commercial project for the simple reason that probabilistic priors so strongly favor commercial development of new technologies. A slew of militarily relevant technologies were developed by the US, German, and Russian governments during the Second World War and in its aftermath, but that was at a time when the commercial and public sectors were far more fully integrated than they are today and when the external pressures forcing governmental efficacy were greater. By contrast, over the last few decades, virtually every significant technological development has been commercial in origin (or even recreational, e.g. the Open Source movement and SpaceShip One) rather than public. Governmental R&D initiatives, such as those aimed at curing cancer and AIDS and at developing space travel and fusion power, have tended to fail totally or almost totally during the past 30+ years.
Given that an important subset of possible scenarios is driven by commercial development, it seems prudent to examine in some detail the major features of most commercial scenarios and to identify the ways in which developers may experience unique difficulties distinct from those associated with the development of other products and the ways in which they may manage those difficulties. This paper will attempt to do that, examining the probable implications of both relatively open and relatively secretive development programs in the event of successful development of MNT. It will be assumed that the developers are highly rational and informed, and that they are attempting to maximize profit in the relatively short term while avoiding the most serious risks of MNT. Development will be assumed to occur within the next 20 years, over the backdrop of a world politically and technologically fairly similar to our own, and with a historically typical gap of a few years between the initial development of the technology and its successful imitation or implementation by competing projects. It also will be assumed that the more powerful MNT applications, such as those in intelligence amplification, neuroscience, extremely powerful distributed robotic systems, and artificial intelligence (AI), will take some time to emerge even given nanofactories and massive funding.
The simplest and most traditional of the problems facing MNT developers is competitive pricing. Setting the prices of MNT goods close to the cost of production provides little profit with which to expand or compensate for risk undertaken, while setting prices too high threatens both to unnecessarily reduce consumption below the optimal level and to draw both legal and illegal competitors into the field. In addition, given the number of industries in which MNT products are likely to compete and the political clout of many of those industries, either high or low prices could motivate antitrust concerns. Theoretically, a higher price is indicative of a monopoly while a lower price indicates competition, but a lower price will also lead to more successful and rapid competition with existing companies and to greater market share, and this could be seen as evidence of monopoly status or of anticompetitive tactics.
Motivating competitors to develop MNT is probably the most serious risk associated with high pricing. In order to minimize this risk it will be necessary for prices to be relatively low, and also for expenses to appear as great as possible. It will be particularly desirable (from the commercial developer’s point of view) that the apparent cost of developing MNT be as great as possible, as this is the expense that can most easily be inflated. One way in which this can be done is to publicly spend as much money as possible on research ostensibly aimed at developing nanofactories over a fairly long period of time after nanofactories actually have been developed. Money can soundly be borrowed in order to fund this research, even at high interest rates, due to the certainty of eventual success. Meanwhile, profits can be generated via the sale of supposedly incremental results of the nanofactory research such as gem quality or better diamonds, doped silicon computers modestly more powerful than those otherwise available at a given price, and inexpensive carbon nanotubes.
Once the nanofactories are publicly acknowledged to exist, the apparent low hanging fruit associated with the supposed development trajectory will be depleted, and a substantial fraction of the global pool of technical experts plausibly capable of relevant work will have already been recruited, discouraging imitation. In addition, the creditors will constitute a class of stakeholders in the new technology who are nonetheless integrated into the existing economic system. Loan repayment will contribute to the justification of profit to the public and to the government. In general, the public appears to accept the legitimacy of high profit margins most readily when the product in question is an extremely expensive luxury, an extremely inexpensive everyday item, or a new product with an explicit need to amortize development costs. It is important to point out that it is excessive profit margins, not excessive profits that usually are considered objectionable. For this reason, actual profits will be greater if expenses can be increased, because the dollar value of a 200% markup is larger on a product costing $100 to produce than on one costing $10. Wasteful expenditures on supposed inputs also can create stakeholders.
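As a rough illustration of the margins point (toy numbers, not figures from this paper), the absolute profit yielded by a fixed percentage markup scales with the cost base, which is why inflating apparent expenses can raise total profit even while the margin itself stays "reasonable":

```python
# Toy illustration: absolute profit from a fixed markup grows with the cost base.
def unit_profit(cost, markup=2.0):
    """Profit per unit at a given markup (2.0 = a 200% markup over cost)."""
    price = cost * (1 + markup)
    return price - cost

for cost in (10, 100):
    print(f"cost ${cost}: 200% markup -> ${unit_profit(cost):.0f} profit per unit")
# cost $10  -> $20 profit per unit
# cost $100 -> $200 profit per unit
```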
Like software, restricted versions of MNT products can easily be designed and can be sold for lower prices than unrestricted versions. For instance, less expensive copies of a given product can be sold to less wealthy countries, or even less wealthy regions within a country. This might be accomplished without competing with the products sold to wealthier regions by installing GPS or inertial locators to monitor product location and disable them from functioning outside of their licensed area. In this manner, profitability can be maximized by selling to all potential customers for prices that constitute a reasonable fraction of their willingness to pay. With built-in biometric sensors, some MNT devices could even be assigned prices based on the personal characteristics of their purchaser. In addition to maximizing profit, this sort of strategy should greatly reduce any humanitarian concerns regarding the distribution of MNT products. The public generally accepts the existence of restricted software without resentment. Nanostructured physical objects can be made more difficult to hack than either software or contemporary hardware, so the restrictions on use built into MNT products can be more robust than those built into today’s printers or software.
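The kind of location-based restriction sketched above could be enforced with a very simple check; the following is a hypothetical sketch only, with a region, radius, and function names that are illustrative rather than taken from this paper:

```python
# Hypothetical sketch of a region-locked MNT product: the device disables
# itself whenever its GPS fix falls outside the licensed area.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

LICENSED_CENTER_LAT_LON = (6.5244, 3.3792)   # illustrative licensed region
LICENSED_RADIUS_KM = 500

def product_enabled(gps_fix):
    """Return True only while the device remains inside its licensed region."""
    lat, lon = gps_fix
    return haversine_km(lat, lon, *LICENSED_CENTER_LAT_LON) <= LICENSED_RADIUS_KM
```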
The most likely outcome of patenting nanofactories in any given country would be widespread patent violation both by other countries and by many criminal organizations. This would probably be followed by the slew of problems2 that long have been predicted to accompany uncontrolled MNT development, such as unstable arms races, malicious grey goo, and massively oppressive MNT empowered governments. In addition, pirate nanofactories would be used to build nanofactories of unpatented design, which then would be patented.
All this does not mean that IP law cannot contribute some value to an MNT “first mover.” A large number of patents of variable scope can be produced to restrict the products that a competing MNT developer can produce legally. Patents on key components can obstruct possible commercial efforts to develop competing nanofactories without revealing too much about the workings of existing nanofactories. In a field as large and as unexplored as nanotechnology, there surely will be room for a number of extremely broad patents that can be used to slow down competitors. In such a fast moving field, even a patent that delays competition by a few months before being overturned could be extremely valuable. Potential patents might include mechanochemistry, carbon mechanochemistry, self-replicating machines, self-replicating programmable productive systems, diamondoid nanoscale machines, and more, but should be chosen to avoid revealing too much about how a nanofactory can be built.
Governments may attempt to force developers to share MNT production capabilities or may simply steal such capabilities. When high-level officials finally begin to distinguish between reality and science fantasy and to recognize the technology’s potential, they rightly will see MNT as a national security issue. However, preventing simple theft is relatively easy. Nanofactories can be made large enough that they can’t be stolen covertly and/or lost. They can also be networked wirelessly or otherwise equipped for easy inventory. It would add little complexity to equip all nanofactories with oxidative self-destruction systems. The best way to resist forceful interrogation is probably to not have any individuals within the company who know everything or almost everything that is needed in order to build a new nanofactory, and to hold out the threat of not doing business with countries that violate the company’s rights. Directly threatening a country like the United States in this manner would be unwise. Rather than doing that, an indirect threat could be delivered by setting up production facilities in some high political risk countries with little respect for private property. If this is done, it is likely that one of these countries will attempt to steal MNT production capabilities prior to any developed country doing so. If the company responds by destroying all stolen assets, not sharing information, and refusing to trade with that country, this will deter other nations from repeating their mistake, at least in the short term. The desire not to imitate the behavior of disreputable states will be another incentive for developed countries to respect the rights of the developing company.
Throughout the early commercialization of MNT, the continual borrowing of as much money as possible will be a major imperative. This is true for several reasons. The first of these is that it is important to retain control of the company and associated technology in order to implement a relatively long-term plan rather than one that might maximize shareholder profits in the very short term, for which reason stock should not be sold to raise capital. The second is that over the first decade or so, the scale of operation associated with the developing company will be continually increasing at such a rate as to make even ludicrous debts from a few years back trivial. The third reason is to acquire the previously mentioned sets of justificatory expenses and of influential stake-holding creditors. A fourth reason will become relevant later in development, once the potential of MNT is well established and the broader public and public intellectuals become hostile. Hostility is a nearly certain early result of any massive technological disruption regardless of the quality of life improvements it makes available (aging reversal technologies may turn out to be an exception to this generalization, since their psychological impact will be unprecedented in scope and is not easily predicted, but thus far even aging reversal seems to fit this generalization). As hostility develops in response to massive technological impact, it may be both possible and desirable to slow governmental activity by reducing governmental access to funds. This might be accomplished by competing with the government to drive up the price of debt and by releasing products which make an attractive lifestyle achievable on the interest payments from a moderate amount of high yield debt, reducing the size of the work-force and thus increasing the cost of running a large bureaucracy. Such actions should be undertaken gradually so that they are not interpreted as an attack on borrowers and bureaucracies, as that would lead to escalation. By raising both the interest rate and the wages of skilled labor, potential competitors can be further prevented from developing MNT independently.
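The claim that early debts become trivial rests on the gap between the rate at which the debt compounds and the rate at which the operation scales. A minimal sketch with assumed numbers, none of which appear in the paper:

```python
# Assumed figures only: $1B borrowed at 15% interest, against revenue that
# doubles every year for a decade once MNT-derived products start selling.
debt, interest = 1e9, 0.15
revenue, growth = 1e8, 2.0
for year in range(10):
    debt *= 1 + interest
    revenue *= growth
print(f"after 10 years: debt ~${debt/1e9:.1f}B, annual revenue ~${revenue/1e9:.0f}B")
# after 10 years: debt ~$4.0B, annual revenue ~$102B
```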
Due to the potential for economic and social disruption, some countries may refuse to allow the import of MNT-derived products. This is not a serious problem for an MNT producer. A general boycott by all major nations is extremely unlikely, especially considering the magnitude of the benefits that MNT will make available. Tariffs would take some time to put into effect and whatever nation stood to improve its trade balance via MNT exports would petition the WTO for tariff elimination. In addition, MNT can be used to produce traditional capital for the production of non-MNT products.4
One of the earliest products released by an MNT developer is likely to be inexpensive hydrocarbons for fuel and other applications. These can be made by harvesting solar energy over the oceans, using it to electrolyze water, and using the hydrogen to reduce atmospheric or other (limestone?) CO2. The machinery for all of this can be produced quickly in any quantity with MNT. Floating solar platforms can be made with either hydrocarbon production or MNT manufacturing capabilities. The manufacturing centers should be designed to utilize the hydrocarbons as feedstock and solar energy as a power source in order to rapidly produce more platforms of both types. Design and control for such platforms should be non-problematic, and their products could be sold on the global petrochemicals and natural gas market. In this case, there would be no practical difference between a country that chooses to purchase oil from traditional sources and one that purchases MNT-derived oil, as both would apply demand to the same pool of global production and impact the same global price, making boycotts ineffective unless they were extremely broad. Hydrocarbon storage facilities probably will have to conform to all normal laws regarding the storage and transport of hydrocarbons, complicating implementation somewhat. However, simply violating regulations and hiring legal teams to delay the imposition of fines until they are no longer relevant may be an acceptable strategy for faster implementation if the regulatory framework would otherwise slow development overly much.
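A back-of-envelope check of the scale involved, using assumed efficiencies and round figures that are mine rather than the paper's, suggests the required ocean area is large but not absurd:

```python
# Rough scale check with assumed numbers (not from the paper):
AVG_INSOLATION_W_M2 = 200     # assumed time-averaged solar power at the ocean surface
SOLAR_TO_FUEL_EFF   = 0.20    # assumed overall solar-to-hydrocarbon efficiency
BARREL_ENERGY_J     = 6.1e9   # approximate energy content of a barrel of oil
BARRELS_PER_DAY     = 85e6    # rough mid-2000s world oil demand

fuel_power_w = BARRELS_PER_DAY * BARREL_ENERGY_J / 86400   # ~6 TW of fuel output
area_km2 = fuel_power_w / (AVG_INSOLATION_W_M2 * SOLAR_TO_FUEL_EFF) / 1e6
print(f"~{area_km2:,.0f} km^2 of floating platforms")      # ~150,000 km^2
```

On these assumptions, matching all of today's oil demand would take on the order of 150,000 square kilometres of platforms, a few hundredths of a percent of the ocean's surface.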
While MNT will accelerate the development of new products, it will reduce the time required to build new capital even more. As a result, production capabilities sufficient to satisfy global petrochemical demand should take much less time to develop than designs capable of competing in a wide variety of industries. The revenue generated via the initial products will be an important part of what enables the rapid development of newer products.
The revenue from this early activity will be more than sufficient to hire as many researchers and administrators as can be productively utilized to develop new MNT designs. Integrating so many new employees without critical security risks will be a difficult problem, but it should be a manageable one as there are already many companies that face similar difficulties. At this point, the MNT developers also should have enough money to purchase both public opinion and political influence in so far as these goods can be rapidly purchased.
In order to minimize opposition it will be critically important for the developers not to be seen as a non-competitive monolith. This will be particularly difficult if MNT development is overt as opposed to remaining a secret, but it is probably possible under either secret or public development. The company may be best able to avoid conveying the impression of monopoly if it carefully and legally shares its technology with a few select partners who thoroughly appreciate the dangers associated with MNT (especially the critical dangers of uncontrolled AI and unstable arms races), the need to avoid them, and the consequent need to avoid further disseminating the basic technology. If these partners compete in the production and sale of relatively safe MNT products, it is possible that the market generally will be seen as saturated and further entrants will be discouraged. This decision would constitute a non-secretive alternative to the earlier prospect of inflating the apparent cost and difficulty of MNT development, although both strategies could be pursued sequentially. In the case of such a strategy, as in contemporary oligopoly arrangements, branding will become an extremely important part of profit maximization. A more trusted brand probably would be able to charge a substantial premium, especially for nanomedical products and services once those are developed.
d) First Mover Advantages
A large fraction of the profitability associated with nanomedicine, and to a lesser degree that associated with any new MNT product, is likely to occur during the period of initial release. This is true because MNT products often will solve problems cleanly and completely, leaving no significant vestigial market. For instance, one of the first novel nanomedical devices produced using MNT is likely to be a powder of biocompatible glucose oxygen fuel cells with internal temperature sensors to avoid excess waste heat and a binding site for later removal from the bloodstream. The purpose of this device would be simply to burn fuel, producing waste heat. From the public’s perspective it will be a rapid weight loss infusion capable of safely producing one to two pounds of weight loss per day (or several times that in extremely cold weather or while the body is immersed in cool water). Once this system is safely developed and successfully marketed, the market effectively will be gone. People may continue to become overweight, but the world’s accumulated pool of overweight people willing to use nanomedicine will be expended. Those overweight people who are reluctant to use new medical technologies will surely still prefer, when they eventually decide to use one, to use the established brand even if it costs somewhat more than its competition, as its safety will have been more thoroughly established. Furthermore, later nanomedical devices will incorporate the weight loss function as a mere side effect of their other capabilities, making this design obsolete. In other fields, the advantages from safety, branding, superior R&D, and expansion into a technological frontier will not favor the first mover as completely, but it is a basic economic result that, all else being equal, oligopoly quantity competition leaves first movers with dominant market share even in the long run.3
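The "basic economic result" invoked at the end of this paragraph is the standard Stackelberg leadership outcome. A minimal sketch, assuming linear demand P = a - b(q1 + q2) and a constant marginal cost c; the parameter values are illustrative:

```python
# Stackelberg vs. Cournot with linear demand and constant marginal cost.
a, b, c = 100.0, 1.0, 20.0

def follower_best_response(q1):
    """Quantity the follower chooses once the leader has committed to q1."""
    return (a - c - b * q1) / (2 * b)

q_leader = (a - c) / (2 * b)                    # leader commits first: 40.0
q_follower = follower_best_response(q_leader)   # follower reacts: 20.0
print(q_leader / (q_leader + q_follower))       # leader's share: 2/3

q_cournot = (a - c) / (3 * b)   # simultaneous competition: 26.67 each, a 1/2 share
```

Under these assumptions the first mover produces twice the follower's quantity and keeps two-thirds of the market, compared with an even split under simultaneous (Cournot) competition.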
Given the above result, are competing MNT producers likely to engage in the alternation of de facto collusion and quantity or monopolistic competition typical of contemporary oligopolies? The simple answer is yes, at least in the short term, as this behavior maximizes short-run profits for all competitors under the constraints imposed by antitrust law and prisoner's dilemmas. However, MNT will be associated with novel productive powers that may call the default assumption into doubt. For instance, the traditional MNT vision of home manufacturing, the software metaphor of unlimited manufacturing capacity matching production precisely to demand, and even the growing paradigm of online agent-based purchasing all suggest price competition as a plausible alternative. Still, there seem to be few large examples of actual price competition in retail, even where it would be most expected, such as in the sale of bottled water, public domain IP, internet retailing, and the like. Even freelance service work such as housekeeping, therapy, and tutoring, along with most other work by the self-employed, is far from perfectly competitive: agencies match consumers to producers and keep large commissions, and many producers spend more time searching for clients than working while demanding far more for an hour of work than the value of an hour of their time.
By reducing the scale of manufacture, in addition to improving the ability to match supply to demand, MNT and nanoblock4 assembly seem likely to produce a world where retail is relatively more important and wholesale less so. Wal-Mart or its successor may still sell MNT-built products, but if it does, it probably will sell them primarily through large factory/grocery stores rather than giant wholesale stores, as the combination of a nanofactory with virtual reality environments for trying out products will greatly reduce the necessary floor space and inventory space. It is also reasonable to suggest that members of a much wealthier society will be less inclined to travel substantial distances in order to shop, and less likely to accept uninteresting work for under ten dollars an hour. Smaller stores that offer a better atmosphere and knowledgeable service thus will have both more customers and less difficulty finding employees. As a result, brands will be easily differentiated and price competition will be even less prevalent than it is today.
The sale of energy will provide the first MNT mover with yet another advantage over later competitors. If claims can be established to solar energy streams sufficient to satisfy global energy demand, and environmental laws can be passed to restrict the utilization of solar energy streams other than those initially tapped, competitors may have to pay more for solar energy inputs than first movers do.
At this point, it is still far from clear whether the developers of MNT will or should choose to publicize their achievement. Their decision probably will be driven in part by the nature of the company that makes the final enabling innovations, and in part by the intensity of the technological competition. If MNT is developed in a world where it is still widely considered a retro-futurist fantasy, competition will be much less intense than in one where it is developed as the result of intense international competition. I personally expect a scenario reminiscent of the birth pangs of the airplane: many competitors all over the world, but no very large and competent concerted efforts aimed at a technology still taken by consensus to be impossible, despite a technological infrastructure that was making its achievement noticeably less difficult every year. In such a scenario, a private company that wishes to utilize MNT productive capabilities will be able to do so rather overtly without creating widespread awareness of what is happening. Inexpensive solar panels are surely within the range of what such a company can publicly produce, but rapidly deployed macroscale floating solar oil factories are not. In a world where MNT is seen as completely discredited, or in one where ubiquitous but mundane "nanotechnology" had made Drexlerian predictions seem as quaint as those once made about nuclear energy or space travel, even the solar oil factories might not lead to widespread correct conclusions without an accurate explanation; conversely, if MNT were the 21st century's space race, there would be little point in secrecy and every reason to develop and market all important applications as quickly as possible.
Unfortunately, it is hard to imagine a world where the replacement of traditional industry by molecular manufacturing is taken for granted by everyone even moderately future-oriented, in the same way that today all such people see as inevitable the digital replacement of analogue film-making, Chinese dominance of durable goods manufacture, or the transition to HDTV. The economic and political havoc that would be expected to result from a widespread belief in truly radical near-future change is difficult to calculate, and might even be sufficient to make such a prophecy self-preventing. For this reason among others, it is fair to say that even weeks after the development of MNT is announced, the majority of investors still will not know about it. Even those who do will probably understand it less well than today's typical science fiction author, and will thus not base any informed investment decisions on their knowledge of MNT. It is also easy to imagine a near-future world filled with constant inaccurate claims of MNT breakthroughs, such that accurate information would not trigger immediate market adjustments upon its release.
Much has been made of the large number of jobs that might be eliminated with the advent of molecular manufacturing. If all or nearly all jobs were to rapidly become unnecessary, the resulting economic disruption would not necessarily cause the major hardship that some have feared. However, most work is not associated with the production of products that can easily be replaced by MNT. Instead, early MNT products will almost eliminate certain sectors, such as manufacturing; will greatly reduce the need for workers in some others, such as mining, utilities, construction, and transportation/warehousing of goods; will have little direct impact on the demand for work in some fields, such as educational services, management, and food services; and will greatly increase the demand for a few professions, especially information technology and possibly scientific and technical services. Theoretically, capital can be substituted for most varieties of labor, and MNT also will greatly expand the ease of creating capital while devaluing existing capital, but it will take time for new capital to replace most workers. For instance, in the short term, trash-collecting robots are unlikely, but in the long term, home recycling and incineration units are likely.
I estimate that MNT will make 10–20% of all current US jobs obsolete within a year of development, 20–40% within two years, and, in the absence of strong AI, 60–80% of current work unnecessary within a decade of development, as more powerful tools multiply the capabilities of service workers in fields like waste management and accommodations/food services. Many workers probably will be retained by their employers for months or years after their services are no longer necessary, due either to contractual stipulations or simply to slow managerial reaction times. In addition, laws may be passed further restricting the elimination of jobs, but ultimately obsolete industries will disappear even with government life support, and will eliminate jobs by closing if they cannot do so with layoffs.
At the same time that many jobs disappear, so will many workers. Great uncertainty, high discount rates, high interest rates, and novel low cost lifestyle options will provide many workers with strong incentives to leave their jobs and either retire or try to found businesses more suited to the new economy. This will drive the expenses faced by many employers upwards, as noted earlier, but will do little to mitigate the problem of unemployment, as the workers who have the capital to invest and retire are by definition not those most threatened by the loss of their jobs and typically cannot be easily replaced by even larger numbers of inappropriately trained workers.
Most of the neediest workers will be covered by state unemployment insurance, which will have the added benefit of increasing non-discretionary governmental spending. Increases in the duration of unemployment payouts should be lobbied for, but even if these are successful, more will be needed. Further subsidies for the unemployed may be possible through investments in companies (such as MyRichUncle.com) that give loans in exchange for a fraction of the borrower's future earnings. However, several million people still will be in need of both money to live on and meaningful work that they are not able to find for themselves. Dealing with those people is not a core business function, but providing low-cost goods to any agencies that show competence in doing so (groups such as Habitat for Humanity, etc.) probably will be a very sound investment in goodwill.
By contrast, although it would be possible to support all of the displaced people or hire them for make-work, spending money directly to do so generally would be expected to aggravate the resentment that was supposed to be mitigated. One of the most important things to do when mitigating resentment is to work hard to fight the impression that people with MNT can do anything and that all remaining problems are therefore their fault. For PR purposes, it is probably best to downplay what the technology is capable of. This also will tend to reduce governmental fear, public paranoia, and pressure to share dangerous technologies with militaries that cannot be trusted with them.
The second major class of risk that must be avoided is that associated with intentional abuse. This includes everything from the production of self-replicating robots to rapid military build-ups to universal intrusive surveillance (even, possibly, surveillance of brain activity, and hence of thoughts). The extreme number of potentially disastrous abuses that MNT lends itself to is a very strong argument for making every possible effort either to maintain secrecy regarding MNT techniques or at least to limit access to extremely trustworthy parties. Many other essays in this collection will discuss the consequences of failing to maintain secrecy, but for the purposes of this paper, it should suffice to assert that so long as MNT remains tightly controlled, these risks should be manageable.
The final and most critical danger associated with MNT is that it will lead to the release of massive computing power and the acquisition of neurological knowledge that will make it easier to develop AI (artificial intelligence) than to control it, leading to a total loss of control and human extinction. It is obviously best to respond to this by being extremely judicious with respect to the distribution of devices for studying the brain, and by limiting the computing power available per dollar to a level significantly greater than that offered by competing companies but far less than what could be made available. It is best if the gap between available MNT computers and traditional computers is great enough to dominate the market and end incremental development of computing power, but small enough not to contribute substantially to reducing the cost of parallel projects aimed at developing MNT or AI. Despite such precautions, MNT development will accelerate AI development in many ways. The most significant of these may be the increased ability to spend time on long-term personal projects resulting from increased personal freedom.
The largest risks are likely to be of internal origin, as some of the thousands of researchers in the company may attempt to evolve an AI on internal nanocomputers. An obvious way to ameliorate this problem is to limit design and production to low-power computers, or to computers dedicated to running molecular simulations, designing products, or other very specific purposes. In the long run, though, this is a stopgap measure. Some strategy must be developed for ensuring that mankind is not accidentally wiped out by an AI. The scope of this problem goes beyond that of this paper, but it is probably a good starting place to assert the desirability of doing whatever is possible to direct global R&D towards the development of technology for making people more intelligent and away from technology for making machines more intelligent.
Ultimately, it does appear that AI can be developed safely and that preventing unsafe AI permanently should be possible, but it also appears that the level of intelligence required to safely develop AI is approximately independent of the available level of computing power, while that required to unsafely develop AI decreases with computing power. For this reason, increasing intelligence and reducing available computing power both contribute to risk reduction. Anti-aging technology also may contribute, because it provides a de facto increase in the amount of thought that a person can ultimately apply to any given problem, although the development of anti-aging technology will be strongly commercially and PR driven in any event, and thus requires no further justification.
1. “Molecular Manufacturing: What, Why and How” by Chris Phoenix (http://wise-nano.org/w/Doing_MM)
2. See “Dangers of Molecular Manufacturing” (http://www.crnano.org/dangers.htm)
3. In price competition, producers compete to sell at the lowest possible price. They choose the price at which they will sell and then sell as many units as the public demands at that price. In practice, this requires that the company be able to match supply precisely to demand. Economically this is equivalent to perfect competition and eliminates all profit. In quantity competition, producers sell undifferentiated products to wholesalers, setting the quantity sold so as to maximize profits. As the number of competitors increases, this becomes more like perfect competition, because each producer has increasingly little incentive to restrict quantity in order to maintain demand. By committing to a particular level of production in advance, earlier entrants can establish equilibria in which they sell larger volumes than later entrants. With a linear demand curve, each entrant will sell half the volume of its predecessor. In monopolistic competition, companies sell similar but branded goods and use marketing and reputation to maintain consumers' willingness to pay a premium over the market price for branded products. Branded goods are imperfect substitutes with high cross-elasticities of demand, so as the price of one brand increases, consumers gradually switch to its competitors.
4. For an explanation of nanoblock manufacturing, see “Safe Utilization of Advanced Nanotechnology” by Chris Phoenix and Mike Treder (http://www.crnano.org/safe.htm).
Most students of artificial intelligence are familiar with this forecast made by Vernor Vinge in 1993:1 "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
That was thirteen years ago. Many proponents of super-intelligence say we are on track for that deadline, due to the rate of computing and software advances. Skeptics argue this is nonsense and that we’re still decades away from it.
But fewer and fewer argue that it won’t happen by the end of this century. This is because history has shown the acceleration of technology to be exponential, as explained in well-known works by inventors such as Ray Kurzweil and Hans Moravec, some of which are elucidated in this volume of essays.
A classic example of technology acceleration is the mapping of the human genome, which achieved most of its progress in the late stages of a multi-year project that critics wrongly predicted would take decades. The rate of mapping at the end of the project was exponentially faster than at the beginning, due to rapid automation that has since transformed the biotechnology industry.
The same may be true of molecular manufacturing (MM) as self-taught machines learn via algorithms to do things faster, better, and cheaper. I won’t describe the technology of MM here because that is well covered in other essays by more competent experts.
MM is important to super-intelligence because it will revolutionize the processes required to understand our own intelligence, such as neural mapping via neural probes that non-destructively map the brain. It also will accelerate three-dimensional computing, where the space between computing units is reduced and efficiency multiplied in the same way that our own brains have done it. Once this happens, the ability to mimic the human brain will accelerate, and self-aware intelligence may follow quickly.
This type of acceleration suggests that Vinge’s countdown to the beginning of the end of the human era must be taken seriously.
The pathways by which super-human intelligence could evolve have been well explained by others and include: computer-based artificial intelligence, bioelectronic AI that develops super-intelligence on its own, or human intelligence that is accelerated or merged with AI. Such intelligence might be an enhancement of Homo sapiens, i.e. part of us, or completely separate from us, or both.
Many experts argue that each of these forms of super-intelligence will enhance humans, not replace them, and although they might seem alien to unenhanced humans, they will still be an extension of us because we are the ones who designed them.
The thought behind this is that we will go on as a species.
Critics, however, point to a fly in that ointment. If the acceleration of computing and software continues apace, then super-intelligence, once it emerges, could outpace Homo sapiens, with or without piggybacking on human intelligence.
This would see the emergence of a new species, perhaps similar in some ways, but in other ways fundamentally different from Homo sapiens in terms of intelligence, genetics, and immunology.
If that happens, the gap between Homo sapiens and super-intelligence could quickly become as wide as the gap between apes and Homo sapiens.
Optimists say this won’t happen, because everybody will get an upgrade simultaneously when super-intelligence breaks out.
Pessimists say that just a few humans or computers will acquire such intelligence first, and then use it to subjugate the rest of us Homo sapiens.
For clues as to who might be right, let’s look at outstanding historical examples of how we’ve used technology and our own immunology in relation to less technologically adept societies, and in relation to other species.
When technologically superior Europeans arrived in North and South America, the indigenous populations didn't have much time to contemplate such implications because in just a few years, most who came in contact with Europeans were dead from disease. Many who died never laid eyes on a European, as death spread so quickly ahead of the conquerors through unknowing victims.
Europeans at first had no idea that their own immunity to disease would give them such an advantage, but when they realized it, they did everything they could to use it as a weapon. They did the same with technologies that they consciously invented and knew were superior.
The rapid death of these ancient civilizations, numbering in the tens of millions of persons across two continents, is not etched into the consciousness of contemporary society because those cultures left few written records and had scant time to document their own demise. Most of what they put to pictures or symbols was destroyed by religious zealots or wealth-seeking exploiters.
And so, these civilizations passed quietly into history, leaving only remnants.
By inference, enhanced intelligence easily could take choices about our future out of our hands, and may also be immune to hazards such as mutating viruses that pose dire threats to human society.
Annihilation of Homo sapiens could occur in one of many ways:
If Vernor Vinge is right, we have 18 years before we will face such realities. Centuries ago, the fate of Indian civilizations in North and South America was decided in a similar time span. So, the time to address such risks is now.
This is especially true because paradigms shift more quickly now; therefore, when the event occurs we’ll have less time, perhaps five years or even just one, to consider our options.
What might we use as protection against these multi-factorial threats?
Sun Microsystems' cofounder Bill Joy's April 2000 treatise, "Why the future doesn't need us,"2 summarized one school of thought, arguing the case for relinquishment: eschewing certain technologies due to their inherent risks.
Since that time, most technology proponents have been arguing why relinquishment is impractical. They contend that the march of technology is relentless and we might as well go along for the ride, but with safeguards built in to make sure things don’t get too crazy.
Nonetheless, just how we build safeguards into something smarter than us, including an upgraded version of ourselves, has as yet gone unanswered. To see where the solutions might lie, let’s again look at the historical perspective.
If we evaluate the arguments between technology optimists and relinquishment pessimists in relation to the history of the natural world, it becomes apparent that we are stuck between a rock and a hard place.
The rock in this case could be an asteroid or comet. If we were to relinquish our powerful new technologies, chances are good that an asteroid would eventually collide with Earth, as has occurred before, thus throwing human civilization back to the dark ages or worse.
For those who scoff at this as an astronomical long shot, be reminded that Comet Shoemaker-Levy 9 punched Earth-sized holes in Jupiter less than a decade after the space tools necessary to witness such events were launched, and just when most experts were forecasting such occurrences to be once-in-a-million-year events that we would likely never see.
Or perhaps we would be thrown back by other catastrophic events that have occurred historically, such as naturally induced climate changes triggered by super-volcanoes, collapse of the magnetosphere, or an all-encompassing supernova.
Due to those natural risks, I argue in my book, Our Molecular Future, that we may have no choice but to proceed with technologies that could just as easily destroy us as protect us.
Unfortunately, as explained in the same book, an equally bad "hard place" sits opposite the onrushing "rock" that threatens us. The hard place is our social ineptness.
In the 21st century, despite tremendous progress, we still do amazingly stupid things. We prepare poorly for known threats including hurricanes and tsunamis. We go to war over outdated energy sources such as oil, and some of us increasingly overfeed ourselves while hundreds of millions of people ironically starve. We often value conspicuous consumption over saving impoverished human lives, as low income victims of AIDS or malaria know too well.
Techno-optimists use compelling evidence to argue that we are vanquishing these shortcomings and that new technologies will overcome them completely. But one historical trend bodes against this: emergence of advanced technologies has been overwhelmingly bad for many of the less intelligent species on Earth.
To cite a familiar refrain: We are massacring millions of wild animals and destroying their habitat. We keep billions of domestic farm animals, in ever-increasing numbers, under inhumane, painful, plague-breeding conditions.
The depth and breadth of this suffering is so vast that we often ignore it, perhaps because it is too terrible to contemplate. When it gets too bothersome, we dismiss it as animal rights extremism. Some of us rationalize it by arguing that nature has always extinguished species, so we are only fulfilling that natural role.
But at its core lies a searing truth: our behavior as guardians of less intelligent species, which we know feel pain and suffering, has been and continues to be atrocious.
If this is our attitude toward less intelligent species, why would the attitude of superior intelligence toward us be different? It would be foolish to assume that a more advanced intelligence than our own, whether advanced in all or in only some ways, will behave benevolently toward us once it sees how we treat other species.
We therefore must consider that a real near-term risk to our civilization is that we invent something which looks at our ways of treating less intelligent species and decides we're not worth keeping, or that, if we are worth keeping, we should be placed in zoos in small numbers where we can't do more harm. Resulting questions:
These questions have been debated, but no broad-based consensus has emerged. Instead, as the discussions run increasingly in circles, they suggest that we as a species might be comparable to ‘apes designing humans’.
The ape-like ancestors of Homo sapiens had no idea they were contributing DNA to a more intelligent species. Nor could they hope to comprehend it. Likewise, can we Homo sapiens expect to comprehend what we are contributing to a super-intelligent species that follows us?
As long as we continue to exercise callous neglect as guardians of species less intelligent than ourselves, it could be argued that we are much like our pre-human ancestors: incapable of consciously influencing what comes after us.
The guardianship issue leads to another question: How well are we balancing technology advantages against risks?
In the mere 60 years since our most powerful weapons, nuclear bombs, were invented, we've kept them mostly under wraps and congratulated ourselves for that, but we have also seen them proliferate from at first just one country to at least ten, with some of those balanced on the edge of chaos.
Likewise, in the nanoscale technology world that precedes molecular manufacturing, we’ve begun assessing risks posed to human health by engineered nanoparticles, but those particles are already being put into our environment and into us.
In other words, we are still closing the proverbial barn doors after the animals have escaped. This limited level of foresight is light years away from being able to assess how to control the onrushing risks of molecular manufacturing or of enhanced intelligence.
Many accomplished experts have pointed out that the same empowerment of individuals by technologies such as the Internet and biotech could make unprecedented weapons available to small disaffected groups.
Technology optimists argue that this has occurred often in history: new technologies bring new pros and cons, and after we make some awful mistakes with them, things get sorted out.
However, in this case the acceleration rate by its nature puts these technologies in a class of their own, because the evidence suggests they are running ahead of our capacities to contain or balance them. Moreover, the number of violently disaffected groups in our society who could use them is substantial.
To control this, do we need a “pre-crime” capacity as envisaged in the film Minority Report, where Big Brother methods are applied to anticipate crime and strike it down preemptively?
The pros and cons of preemptive strikes have been well elucidated recently. The idea of giving up our freedom in order to preserve our freedom from attack by disaffected groups is being heavily debated right now, without much agreement.
However, one thing seems to have been under-emphasized in these security debates:
Until we do the blatantly positive things such as eliminate widespread diseases, feed the starving, house the homeless, disenfranchise dictators, stop torture, stop inhumane treatment of less intelligent species, and other do-good things that are treated today like platitudes, we will not get rid of violently disaffected groups.
By doing things that are blatantly humane (despite the efforts of despots and their extremist anti-terrorist counterparts to belittle them as wimpy), we might accomplish two things at once: greatly reduce the numbers of violently disaffected groups, and present ourselves to super-intelligence as being enlightened guardians.
Otherwise, if we continue along the present path, we may someday seem to superintelligence what our ape-like ancestors seem to us: primitive.
In deciding what to do about Homo sapiens, a superior form of intelligence might first evaluate our record as guardians, such as how we treat species less intelligent than ourselves, and how we treat members of our same species that are less technologically adept or just less fortunate.
Why might super-intelligences look at this first? Because just as we are guardians of those less intelligent or fortunate than us, so super-intelligences will be the guardians of us and of other less intelligent species. Super-intelligences will have to decide what to do with us, and with them.
If Vinge is accurate in his forecast, we don’t have much time to set these things straight before someone or something superior to us makes a harsh evaluation.
Being nice to dumb animals or poor people is by no means the only way of assuring survival of our species in the face of something more intelligent than us. Using technology to massively upgrade human intelligence is also a prerequisite. But that, on its own, may not be sufficient.
Compassion by those who possess overwhelming advantages over others is one of the special characteristics that Homo sapiens (along with a few other mammals) brings to this cold universe. It is what separates us from an asteroid or super-nova that doesn’t care whether it wipes us out.
Further, compassionate behavior is something most of us could agree on, and while it is often misinterpreted by some as a weakness, it is also what makes us human, and what most of us would want to contribute to future species.
If that is so, then let’s take the risk of being compassionate and put it into practice by launching overarching works that demonstrate the best of what we are.
For example, use molecular manufacturing and its predecessor nanotechnologies to eliminate the disease of aging, instead of treating the symptoms. That is what I personally have decided to focus on, but there are many other good examples out there, including synthesized meat that eliminates inhumane treatment of billions of animals, and cheap photovoltaic electricity that could slash our dependence on oil and end wars over it.
Such works are not hard to identify. We just have to give them priority. Perhaps then we will seem less like our unwitting ancestors and more like enlightened guardians.
1. Vernor Vinge, "The Coming Technological Singularity: How to Survive in the Post-Human Era," http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html
2. Bill Joy, "Why the future doesn't need us," Wired, April 2000, http://www.wired.com/wired/archive/8.04/joy.html
© 2006 Douglas Mulhall. Reprinted with permission.
Those responsible for the safety of a nation—leaders and military and police forces—might be hard pressed to deal with a world in which any weapon or dangerous device could be manufactured in large quantities at the press of a button, at the same time that economic and social norms are being overthrown by rapid change.
We can expect that—by default—authorities will want molecular manufacturing (MM) to be tightly restricted—kept out of private hands, and limited to the few nations that initially have it. That approach might provide some added security—or it might simply create such incredible pent-up demand that any barriers and restrictions are quickly overcome by black markets, intellectual property piracy, rogue-nation programs to duplicate MM, etc.
This essay attempts to chart a middle path for the early years of MM availability—one that allows most of the benefits of MM to be widely available to all individuals and nations, while maintaining some control over key elements. I will not go into who will hold that control, other than to suggest the obvious—that those nations that hold the reins of world power are likely to exercise it to retain power, by delegating it in a controlled fashion to cooperative nations and subordinate authorities.
It is not the objective of this essay to look at radical social changes that might arise due to molecular manufacturing, but rather to see how well MM can fit with existing forms.
Atom Precise – each atom and bond between atoms in an object is as planned in a design. Also used to describe the process or capability of making atom precise objects.
Nanoblocks – atom precise constructs with size on the order of 100 nanometers that can be mechanically connected to form larger objects. Each nanoblock would have one or more functions—as simple as providing physical strength and support, or as complex as digital computation and communication.
Fabber – a device that automatically assembles individual products for human use. In the context of this essay, it will refer to a device that constructs products out of nanoblocks, specifically excluding atom precise nanofactories – those that build products directly atom-by-atom.
Nanoblock-based fabbers will have a number of technical advantages over direct atom precise molecular manufacturing. Even their disadvantages (less precision, lower strength in products) can be considered advantages for purposes of security.
Standardization of nanoblocks—their modes of interconnection and interaction, their functions, and so on—can greatly simplify the process of designing atom precise products. Use of nanoblocks raises the level of design above the point that requires deep understanding of nanoscale physics and chemistry, to the point where anyone could use automated software tools to design simple but useful products, and expert engineers could reasonably design extremely complex and capable products. For example, there would be no need to re-design a nanoscale computer out of individual atoms every time one wished to incorporate information processing into a product, or to re-invent means of digital communication throughout a product.
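As a purely illustrative sketch (the block names and catalog below are invented for this example, not drawn from any proposed standard), designing with nanoblocks would look less like chemistry and more like assembling a bill of materials from a library of pre-verified parts:

```python
# Toy illustration of designing at the nanoblock level: a product is a composition of
# standard, pre-verified block types, so a nanoscale computer is reused as a catalog
# part rather than redesigned atom by atom for every new product. All names invented.

STANDARD_BLOCKS = {
    "STRUT": "physical strength and support",
    "CPU": "digital computation",
    "RADIO": "digital communication",
    "THERMO": "temperature sensing",
}

def design(name, blocks):
    """Validate a design against the standard catalog and return its bill of blocks."""
    unknown = sorted(set(blocks) - STANDARD_BLOCKS.keys())
    if unknown:
        raise ValueError(f"design {name!r} uses non-standard blocks: {unknown}")
    return {"name": name, "blocks": blocks}

print(design("smart thermostat", ["STRUT", "CPU", "RADIO", "THERMO"]))
```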
While the amount of energy expended to form a single atom-to-atom bond and the waste heat generated are tiny, the number of atoms and bonds in a typical finished product for human use is so large that energy and heat issues will be non-trivial when constructing human-scale products. The energy used and heat released to build things out of nanoblocks should be orders of magnitude smaller, as most of the energy is consumed and heat released in the process of making the nanoblocks themselves. Energy supply and heat removal will be much easier for nanoblock fabbers, allowing them to be more compact and operate much faster—though, of course, they still will need a supply of "raw materials"—a store of nanoblocks rather than whatever atomic or molecular feedstock atom precise nanofactories may use.
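A rough back-of-envelope calculation, with an assumed dissipation of about one electron-volt per bond-forming operation (an illustrative figure, not a measured one), shows why per-bond energies that are individually tiny still add up at human scale:

```python
# Back-of-envelope estimate of heat dissipated building 1 kg of a carbon-like solid
# atom by atom. The ~1 eV per bond-forming operation is an assumed illustrative value.

AVOGADRO = 6.022e23        # atoms per mole
EV_IN_JOULES = 1.602e-19   # joules per electron-volt

def assembly_heat_per_kg(molar_mass_g=12.0, bonds_per_atom=2.0, ev_per_bond=1.0):
    """Joules dissipated per kilogram, given a per-bond dissipation in electron-volts."""
    atoms = 1000.0 / molar_mass_g * AVOGADRO   # atoms in one kilogram
    bonds = atoms * bonds_per_atom             # bond-forming operations required
    return bonds * ev_per_bond * EV_IN_JOULES

print(f"{assembly_heat_per_kg() / 1e6:.0f} MJ per kg")  # roughly 16 MJ/kg under these assumptions
```

Tens of megajoules per kilogram is comparable to the chemical energy content of common fuels, so removing that heat quickly from a desktop device is a genuine engineering constraint; pre-making the nanoblocks elsewhere spreads that burden out, as the next paragraph notes.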
The nanoblocks needed by a fabber could be made in advance. Energy consumption and heat dissipation would be spread over time, with nanoblocks being stored in the fabber for later quick construction of finished products. Alternatively, nanoblocks could be produced in bulk by centralized nanofactories near convenient energy supplies, to be distributed and sold to owners of fabbers. The energy required to ship a kilogram of nanoblocks, even halfway around the world, should be a fraction of the energy required to produce them.
It should be possible to design nanoblocks to allow controlled disassembly—i.e. recycling of products made out of reusable nanoblocks. Each nanoblock could have an ID embedded that specifies its type—reliably sorting nanoblocks would be far more efficient than sorting atoms. This would mean that the energy that goes into producing them would not be wasted when one no longer needs or wants the product they compose. Instead, the unwanted object could be taken apart, and the nanoblocks sorted for re-use in making new objects. This would save energy and avoid the massive production of junk that could result from large-scale use of inexpensive manufacturing.
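A toy sketch of the recycling idea (the identifier format is invented here): because each nanoblock carries a readable type ID, a disassembler only needs to bin recovered blocks by ID rather than separate materials atom by atom:

```python
# Toy sketch of nanoblock recycling: bin recovered blocks by their embedded type ID.
# The "TYPE-serial" identifier format is invented for illustration.

from collections import defaultdict

def sort_for_reuse(recovered_ids):
    """Group recovered nanoblock IDs into bins keyed by block type."""
    bins = defaultdict(list)
    for block_id in recovered_ids:
        block_type, serial = block_id.split("-", 1)
        bins[block_type].append(serial)
    return dict(bins)

print(sort_for_reuse(["STRUT-00042", "CPU-00007", "STRUT-00043"]))
# {'STRUT': ['00042', '00043'], 'CPU': ['00007']}
```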
A related concept—utility fog1—would be a programmable substance consisting of "foglets." Each foglet would be a tiny simple robot, able to interact with vast numbers of other foglets to form nearly any shape imaginable, including objects that are able to move and react to human beings. One might be able to re-create the Star Trek "holodeck" using foglets—an environment in which almost anything becomes possible. The flexibility that makes this idea attractive also creates the risk that the utility fog might be infected with an information virus designed to take it over for malicious purposes, harming or killing or simply trapping a human in the utility fog environment. The fixed-function approach of building things out of nanoblocks and recycling things when they are no longer needed seems safer, at least for the early days of molecular manufacturing.
The use of nanoblocks creates opportunities to make molecular manufacturing safer.
With a careful selection of the types of nanoblocks made available, a fabber should not be able to build an atom precise nanofactory out of nanoblocks, nor devices that will be a significant help in any attempt to "bootstrap" production of an atom precise nanofactory, reducing the risk of proliferation of atom precise MM to "rogue nations" or terrorists.
A nanoblock-only fabber (i.e. one which cannot produce its own nanoblocks, and so requires a supply of nanoblocks as input) could be distributed world-wide without releasing atom precise MM to everyone, avoiding any risk that anyone could start using it to produce massive quantities of dangerous products out of freely available atoms. Yet it would allow construction of almost as wide a range of products as an atom precise nanofactory, for not much more cost—reducing demand for atom precise MM.
There would be products that could not be made out of nanoblocks, of course—such as nanoblocks themselves. This fact could give official security forces with access to atom precise nanofactories an advantage, as weapons and systems made with atom precise nanofactories will be somewhat more capable than any created using nanoblock fabbers.
Products of commercial or security value that cannot be made out of nanoblocks and require atom precise assembly could be made in centralized plants where security measures could be taken. One simple security measure would be to have such products made by dedicated function nanofactories, with the design built in at the lowest level and unable to be altered without destroying the nanofactory. These dedicated function nanofactories would be produced using general-purpose programmable nanofactories in a few extremely high security plants.
Anyone familiar with the "grey goo" exponential self-replication scenario might ask whether a device made of nanoblocks might disassemble objects made of recyclable nanoblocks and re-use those nanoblocks to produce copies of the device—a "lumpy goo" scenario.
To prevent this, one solution would be to design nanoblocks to require use of a key-like manipulator—too small to be made of or emulated by nanoblocks—to lock blocks together in order to fabricate objects. So long as the key-like manipulator is only built into fabbers, and never made part of or attached to a commonly available nanoblock, only fabbers will be able to build things from those nanoblocks—eliminating much of the potential to build a malicious self-replicator out of nanoblocks. The same key would be required to disassemble objects for recycling—preventing malicious disassembly of objects made of nanoblocks, outside of dedicated recycling devices.
One could object that preventing the fabber from making copies of itself would eliminate a potentially major advantage. A fabber that can make copies of itself could be distributed very rapidly, creating a huge market for nanoblocks and nanoblock-based designs in a very short period of time. That should be a significant advantage for a manufacturer willing to give up income from the fabber and focus on selling nanoblocks. So long as the nanoblocks were non-reusable, the risk of exponential self-replication would be minimized—and the manufacturer could expect their fabber to become a universal standard before competitors got to market, making their nanoblock business quite profitable.
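To see how fast "very rapidly" could be, a trivial doubling calculation suffices; the one-day copy time is purely an assumption for illustration, not a prediction:

```python
# Trivial illustration of exponential spread from a single self-copying fabber.
# The 24-hour copy time is an assumed figure, not a prediction.

def fabber_population(days, hours_per_copy=24):
    """Fabbers after `days`, starting from one, if each copies itself every `hours_per_copy` hours."""
    doublings = days * 24 // hours_per_copy
    return 2 ** doublings

print(fabber_population(30))  # 1,073,741,824 -- over a billion units within a month, under these assumptions
```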
However, other companies would very quickly begin producing reverse-engineered "clone" and improved nanoblocks, cutting into the original manufacturer’s revenues. It would likely not be long before someone offered re-usable nanoblocks, opening the way to exponentially self-replicating systems.
Given the value of recyclable nanoblocks for energy and cost savings and convenient disposal, and the security risks of self-copying fabber components, it seems wisest to allow recyclable nanoblocks but prohibit fabbers that can self-copy. Very likely the cost of fabbers will fall rapidly in any case, since they would themselves be made with atom precise MM.
The above assumes a relatively free market in fabber and nanoblock designs. That may not be the case if the government is involved and sets a single standard that all manufacturers must follow. In that case, one might see a "utility" model, where nanoblock prices are controlled to allow manufacturers a "reasonable" profit. This scenario would be likely to slow innovation—but, of course, that might be exactly the effect desired by the government. Non-self-copying fabbers with recyclable nanoblocks seem the most likely choice in such a standard.
Fabbers will very likely be targeted with the equivalent of computer viruses—malware designs that will attempt to infect fabbers and transmit copies of themselves, and probably use the fabber to produce something annoying or dangerous. The greatest danger would be if fabbers were connected directly to the Internet, allowing very rapid spread of such a virus without human intervention.
One way to fight this would be to keep all fabbers "offline"—designed to only allow loading new designs by manually transferring a design on a physically separate storage medium. This should slow the spread of malware down to human speeds, allowing humans a chance to become aware of the problem and deal with it.
It may prove useful to establish a program that allows anyone with an interest in "clever fabber hacks" or atom precise molecular manufacturing to exercise their curiosity in a safe, controlled environment. This would help reduce the incidence of 'experiments' analogous to releasing computer viruses and worms into the wild, by giving hackers an alternative and encouraging environment. Their creative—or potentially destructive—ideas could benefit society or help plan defenses against potential dangers. It also provides an opportunity to catch the few who are going down the wrong path and turn them around, or at least keep track of who they are if they seem inclined to persist in dangerous pursuits.
Malicious users could produce dangerous or otherwise undesirable nanoblock-based products. For example, a murderer might create a knife, kill someone, and disassemble the evidence. Or perhaps create a household robot—but program it to wreak havoc. Defenses against such abuses should be taken into consideration. There are several approaches that might be helpful.
Since recyclable nanoblocks would have a readable type-ID built in, it would be trivial to extend that to a unique ID, making it possible to backtrack the source of an otherwise anonymous malicious automated device, or obtain a clue from nanoblocks torn off a more mundane object such as a knife. With users knowing this, fewer will seriously contemplate engaging in malicious production.
The use of nanoblock-limited fabbers (i.e., those which cannot make their own nanoblocks) has some likely implications for society. Certainly costs of many material goods should fall, raising the standard of living of many people around the world.
If instead, self-copying fabbers and non-recyclable nanoblocks are available, benefits for less developed nations may arrive a bit sooner, but the need to continually buy more nanoblocks will limit their long-term impact.
Some visions of life with atom precise MM have people going "off the grid"—quitting their jobs, setting up independent solar powered homesteads, and ending capitalism and perhaps economics as we know them. That scenario would be very unlikely with non-recyclable nanoblocks, and limited with recyclable nanoblocks, as people would still need to engage in productive economic activity in order to have money to buy replacement nanoblocks.
With most jobs in manufacturing and distribution eliminated, people would largely find jobs in the service sector. Service jobs will shift even more to specialization, due to increased competition. Developed nations have already gone far in this direction, and other nations will likely be forced to follow suit. This will be a difficult transition for nations that have only recently begun developing and have been heavily dependent upon manufacturing for export—services will be more difficult to export, and local consumers may not be as used to consuming services.
Another common vision of life after the arrival of atom precise MM has a tension between free "open source" designs and commercially available designs. The greater ease of designing with nanoblocks instead of atoms would likely give the open source approach extra impetus. Still, there will also be a fair number of things that people will not trust to be made from nanoblocks, and conventional commerce in those products will continue. Also, as always, there will be elements of style and usage that will cause people to pay for things even though free alternatives are available, just as people today will pay more for a real Rolex than a fake, or pay for a commonly used operating system even though free operating systems are available.
With so many choices, and so many people seeking employment in services, it seems likely that many stores will focus on personal service and product advice. Goods purchased in a shop will be priced based on a combination of service and the prestige of certain designers, with a very small component of the cost of the nanoblocks used in product construction. There still will be "big chain" stores with vast showrooms filled with goods, but even there, the key will be the service of providing one place to go see and compare a huge variety of goods. They may make some goods while you wait, others they’ll have available off the shelf, still others—especially larger goods—they’ll make and deliver to your home. Likely, there also will be a way to buy "limited uses" designs for home production.
Making nanoblock-limited fabbers available to everyone promises to provide most of the easily imaginable benefits of unrestricted atom precise MM, with significantly fewer risks. Fabbers can provide useful advantages of speed, efficiency, and safety. Certainly, they are not a cure-all, creating a perfect utopia—but the problems remaining may be humanly manageable.
Perhaps fabbers would only be a transition phase before a shift to a more liberal availability of atom precise MM, but given all the risks and uncertainties raised by molecular manufacturing, this more controlled introduction seems warranted. The most likely alternative is not free release of atom precise MM, but even tighter restrictions. Fabbers limited to constructing things out of nanoblocks seem like a reasonable compromise approach, and one that government authorities and others may consider acceptable.
1. J. Storrs Hall, "Utility Fog: The Stuff that Dreams are Made Of," http://discuss.foresight.org/%7Ejosh/Ufog.html
In one of those essays, The Need for Limits, Chris Phoenix speaks of the Enlightenment in terms of a synergy: enhanced human productivity with machines, partially supporting a philosophical examination of the human condition. Though certainly that, the Enlightenment also was a watershed period when the foundations of the European economy changed, and the authority of Revealed Truth was forced to contend with the authority of Rational Thought and its practical cousin, Scientific Inquiry. The shifts in the economy created a massive transformation of social life, from agrarian to urban. The current era has parallels to all of these forces, movements already in play but not yet complete…and in some cases not fully articulated.
As a peripheral member of a futurists group2 in my professional field (policing, and more broadly, criminal justice), I have noticed that futurists tend to be concerned with the end results of trends, the state of things ten, twenty, or fifty years from now. By contrast, I am more concerned with the collateral damage we may sustain in the process of getting to those future states from where we are now.
This essay approaches that interstitial state in four sections. The first section looks at the control of the technology; the second, at the criminal potentials inherent in it. Using the template of the Enlightenment, the third section looks at the darker channels of social transformation, particularly the impact on work and social worth. The fourth section draws an admittedly leap-of-faith parallel between the Enlightenment's impact on religious authority and technology's impact upon the authority of economic capital and law.
Nanotechnology holds remarkable potential to change the world, but like most recent technologies, it emerges within a larger system of laws, codes of conduct, and social expectations developed for previous capacities. Those mechanisms will shape its emerging uses, possibly retarding or constraining the applications of the technology in undesirable ways. At issue is whether micro-level processing will be merely one more tool (and thus alter our lives incrementally), or a Promethean breakthrough that will alter human existence in profound ways. My interest, as one who stands outside the Halls of Science looking in, tends to center on the possibilities that I can understand from a layman's perspective.
Trying to grasp in layman's terms the implications of a new and only marginally understood technology leads to a search for analogies, framing the new in terms of the familiar (for good or ill).
As a non-scientist, the most salient question for me is, "When do I get to play with the new toy?" Given the general limits of corporate use of nanotechnology, the first new toy that will become available to me most likely will be the desktop assembler, or personal nanofactory (PN).
The most knowledgeable members of CRN's Global Task Force3 have engaged in a lengthy discussion about desktop manufacturing and its social consequences, and as of this writing, there seems to be a lack of consensus about the capacity, and thus the full impact, of PNs. If we accept the position of the optimists, and expect fully-capable devices to be available in the not-too-distant future, secondary questions arise: Will the devices be provided in fully-capable form (probably transformative), or will their functionality be curtailed in defense of the corporate profits to be derived from them? If the latter, how will control be maintained? Some answers are perhaps to be found in current trends, since the courts often look to historical analogs in dealing with new issues.
If we posit that desktop manufacturing becomes widely available, as seems inevitable, the dominant forces of the economy have two avenues of recourse to maintain control over the new technology for monetary benefit. The first will be the control of raw materials for molecular assembly, which appears to share the delivery profile of heating fuels in contemporary life. More important is the second area, already suggested by Phoenix: patents and copyrights.4
The development of nanotechnology is taking place within a corporate nest of ideas and resources (much like licensed computer software development), with some independent researchers and consortia operating on a freeware basis. Molecular assembly at any sort of commercial or individual level will require patterns to guide assembly, and these are likely to be controlled by patents. The majority of patents are almost certain to be controlled by corporate interests. Renewable user site licenses, comparable to commercial software packages, are the most likely form of retaining economic benefit for a corporate entity. One of the possible ways of maintaining economic control over site licenses would be some form of cyber-degradable program that self-destructs after a finite period, and must be renewed. For example, a user could download (or purchase on a one-use or renewable-use media platform) the code that would allow the manufacture of only a certain number of rolls of toilet paper by a personal nanofactory.
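As a hypothetical sketch of the limited-use license idea (all names and fields below are invented, and real enforcement would require the cryptographic signing and tamper resistance that this toy omits), a pattern could simply carry a use counter and an expiry date that the personal nanofactory checks before each build:

```python
# Hypothetical sketch of a renewable, limited-use assembly pattern license.
# Names and fields are invented; real enforcement would need signed, tamper-resistant code.

from dataclasses import dataclass
from datetime import date

@dataclass
class LicensedPattern:
    product: str
    uses_remaining: int
    expires: date

    def authorize_build(self, today: date) -> bool:
        """Permit a build only while uses remain and the license has not lapsed."""
        if today > self.expires or self.uses_remaining <= 0:
            return False
        self.uses_remaining -= 1
        return True

toilet_paper = LicensedPattern("toilet paper roll", uses_remaining=24, expires=date(2030, 1, 1))
print(toilet_paper.authorize_build(date(2029, 6, 1)))  # True, and one use is consumed
```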
Patents and the fundamental premises of intellectual property are already under challenge, but the challenges have been met with an equally strong legal response anchored in precedent. The courts have handed the reins of control over digital recordings of music to the star-making machinery behind the popular songs through conservative interpretation of intellectual property statutes. The huge profits to be made from licensing technological advancements for industry virtually assures that the field of nanotechnology will be similarly bound.
The most recent Promethean technology, file-sharing, theoretically stood to liberate music from the chains of capital. However, Napster, Kazaa, Grokster, and their lesser clones have lost the legal battles, and the technology has been co-opted by industry giants into new distribution-for-profit mechanisms. Corporations and universities alike write eminent domain over patents and patentable discoveries into their employment contracts, and genetic patterns and discoveries are subject to copyright. Unknown garage bands and the metaphorical garage workshops of independent researchers still can be found beyond the current reach of over-grasping capital, but only until they become good or useful enough to attract attention.
As new genetic building block discoveries and other chemical compounds are placed under patent, the copyright has become the new castle moat or the new dog in the manger (depending upon one's perspective), intended to keep easily-duplicated properties under the control of their owners. Paradoxically, only those products deemed legitimate are defended so vigorously by patents and lawsuits; illegal products and contraband are not. Corporate interests have far deeper pockets and a true metric for measuring loss and injury. There is greater freedom in the illicit trades, where control of trafficked, harmful artifacts rests with hugely inefficient, underfunded, and understaffed public enforcement agencies.
The exponential explosion of child pornography (and its hate- and racial supremacy-based counterparts) over the Internet is a cautionary tale in its own right. Like the illicit drug trade in the physical world, neither child porn nor hate-mongering is impervious to law enforcement efforts, but the occasional victories of enforcement seem to have little long-term effect on the larger industry or movement. The underground distribution of molecular patterns for assembly might easily be accomplished by the same mechanisms, like the basic virus codes that any script-kiddie can download, tinker with, and release back into the wild.
While the first generation of personal nanofactories probably will come with a fixed number of pre-programmed patterns, market forces will demand versatility. Units will need a capacity to acquire new assembly patterns as they are developed, and there seem to be few options beyond what is now available for computer data. Patterns may be downloaded over hardwired or Wi-Fi networks, or be manually transferred by whatever media replace the current disk drives and flash memory sticks. Each format would spawn a black market of unknown proportions, and with the black markets come the accompanying risks of epidemic and pandemic consequences of criminal use.
We should anticipate that a new drug industry will piggyback on the basic molecular assembly phenomenon, and the potential implications for the social fabric are enormous. One of the most desirable benefits of nanotechnology is that of precise targeting of therapeutic drugs; however, the same technology will have associated benefits for the illegal pharmacopoeia. While the complexity of the patterns most likely will delay this until a second or third level of PN development, once the basic patterns for psychotropic drugs are understood and the assembly technology sufficiently enabled, individual drug manufacture is almost certain to become a social tsunami. There are strong analogies to the current methamphetamine epidemic: less than two decades ago, the manufacture of crystal methamphetamine required a well-equipped clandestine lab, a chemist, and a criminal organization for protection and distribution. Today, meth is the new bathtub gin, easily made in any number of Rube Goldberg processes in basements, trailers, campers, garages, or pickup trucks.
Unlike methamphetamine, a micro-assembly drug manufacture process would need only the basic molecular components, not the more elaborate precursor chemicals (like pseudoephedrine) whose control is now part of our anti-drug strategy. That suggests a much greater availability, with corollary hazards of greater social experimentation and conceivably even poly-drug experimentation. The toxic byproducts of meth labs are threats to law enforcement agencies, the families of meth addicts, and neighborhoods. We do not yet know the degree to which micro-manufacture byproducts will be toxic, if at all.
Illicit micro-manufacture may be a mixed blessing. On the one hand, effectively eliminating organized crime from the market may lessen the toxic effects of the war on drugs: the corruption involved with importation of drugs, and the violence of competing drug markets. At least potentially, even the criminogenic nature of drug dependency may be lessened: since the base materials would likely be the same as for legitimate micro-manufacture, it is less likely that a specialized, higher-priced supply chain would be necessary. The dynamics of that supply chain create additional crimes: violence among criminal enterprises competing for turf, and both personal and property crime committed by addicts desperate to meet the dealer’s price. Absent the supply market, the cost of personally manufactured drugs would be lower, and the risks of their creation considerably lower in terms of legal discovery and interdiction. However, the potential free access to addictive and mind-altering substances will almost certainly exacerbate the social problems associated with the addictions and dependencies that result. The same delivery method could surreptitiously create markets for new designer drugs, addictive and involuntarily piggybacked on legitimately disseminated nanoproduct codes. The number of what-ifs that need to be resolved before either scenario happens leaves the possibilities within the realm of fiction for now, but if the analogies to the Internet hold true, they must be anticipated as a contingency.
Should we ever develop a drug-based cure for the addictions, of course, it might be to our collective advantage to attempt to disseminate it via whatever outlaw networks and mechanisms develop, the angelic counterpart to the demonic assault-by-micro-drugs of the original scenario. Therapeutic nano-rehab, even at the time of a medical crisis, may not be sufficient to stem the drug crisis, however. Involuntary detoxification has a poor history of neutralizing the psychological dependencies that drive post-sobriety returns to addictive substances. The evil twin of involuntary detoxification is involuntary addiction.
Lurking beyond therapeutic use is the possibility of totalitarian control using the same methods. The Promethean paradox that attends all new technologies is even more pronounced for those that escape Newtonian-level detection. Medical research is racing ahead in its understanding of neural processes, including the sites in the brain responsible for certain behaviors. As nanomedicine develops capacities for intervening in psychological dependencies or other maladies, it also develops the capacity for inducing mind control or other forms of incapacitation.
Downstream, there is also the potential for mass murder via compromised assembly codes. In the physical world, tainting a medicine with poison can only be done efficiently at the factory source, and even then must bypass or defeat stringent quality control measures. Any other corruption can take place only on a relatively small scale. The introduction of a virulent and unsuspected corruption of a drug assembly code is not so limited. It shares more in common with the computer virus than the Tylenol poisoner. Since black market codes originate and enter the data stream outside the domain of legitimate quality control measures, and the drug-using community is unlikely to give designer drug codes great scrutiny (at least in the initial rounds), massassination (mass assassination or pharmaceutical cleansing) via bogus codes is a distinct possibility in a networked distribution system. It would challenge both medical institutions and law enforcement agents. It is admittedly an outside possibility, requiring a rare combination of technological savvy and social alienation, but the world since September 2001 has been dealing with more and more one in a bazillion scenarios. Nothing should be taken off the table in terms of exploring, and preparing for, unpleasant misappropriation of technology.
To a certain degree, the massassination scenario depends upon the nature of the dissemination of manufacture codes. The most logical assumption is that distribution of product blueprints for desktop manufacturing will be done via the Internet or its successor entity. The current attempts to defeat music and film pirate copies would have serious analogs in any new process that challenged traditional sources of corporate and investment income, especially unrestricted use of molecular assembly technology. The Spy vs. Spy battle between corporate interests and hacktivism will doubtless continue in the nano- and micro-arenas as in cyberspace. Even if controls evolve another way, such as physical distribution of codes on one-use portable media like the flash memory stick, markets for stolen and counterfeit products will emerge, just as the current computer viruses and malware are piggybacked on the legitimate use of the Internet. Beating security encryptions to transform a one-use code into a version capable of electronic dissemination will be an instant challenge for the criminal and black-hat hacking communities.
There are some differences, though. While the viruses and Trojan horses that hector cyberspace have consequences ranging from irritating (the Blue Screen of Death) to life-changing (severe financial crises resulting from identity theft), it is only at the most extreme range that they could be considered life-threatening. Identity theft that labels an innocent citizen as a dangerous criminal has some potential for creating life-threatening situations, but most of the jeopardy is financial or social. Viruses and worms may take down a network or three, or transform the World Wide Web into the World Wide Wait with deleterious consequences for commerce, but they do not directly assault the networks users. A corrupted, mislabeled, or maliciously designed micro-manufacture code could break the fourth wall, crossing out of cyberspace into the physical realm.
The closest parallel in the physical world, the batch of bad heroin that kills users in clusters, does not really provide an accurate analog for a malicious assembly code incident. Relatively few seek heroin under any circumstances, and no one but the most desperate heroin addict would seek out bad heroin (as has happened in some isolated cases). The first killer virus loose in whatever network provides product codes for PNs will affect hundreds and perhaps thousands of innocents, whether it comes as a terrorist strike or an unintended consequence of a hacking adventure. No one will have to seek it: once in the wild, it will arrive unbidden in the In Box.
Defenses to such a scenario potentially exist, but security measures are one of the most attractive fruits of the Tree of Knowledge. Like contemporary Internet defenses, and the laws passed to outlaw new designer drugs, defensive maneuvers almost always stimulate new offensive attacks. Any combination of zeros and ones, in any transportation medium, can be hijacked and compromised: the track record of Internet security does not bode well for the free and easy commercial transfer of assembly codes for the molecules-up creation of products.
During the Industrial Revolution in England, improved agricultural efficiencies accelerated the process of enclosure, dislocating the rural population no longer needed for raising and harvesting crops. Simultaneous improvements in the production of iron and steel, in weaving, and other areas began to transform cottage industries into factory-based industries, and urbanization rapidly changed the face of the country. The nature of trade shifted from one-off mercantile ventures and royal charters to stable capital for long-term ventures. Factory industries supplanted cottage industries, local artisans, and craft guilds, but the concentration of work in brick-and-mortar containers still left some out of work: the notorious surplus labor that kept wages low. The expansion of the new manufacturing base managed to absorb surplus labor for some time, until the advent of widespread robotics in the second half of the twentieth century.
A robust generation of personal nanofactories may very well bifurcate commerce into those items that can be manufactured at home and those which still must be purchased through the familiar retail supply chains. While a certain number of jobs will be created around the transportation of raw materials for PNs, they will be paltry in comparison to the jobs the devices displace in manufacture, transport, and sales. Globalization has already imposed a certain amount of social dislocation in the manufacturing sectors; a maturing nanotechnology could very well trigger a long-term social dislocation not seen since the English migration from the newly-enclosed farmlands to the new factories of the Industrial Revolution.
The need for human labor seems to be diminishing at an accelerated rate inverse to Ray Kurzweil’s description5 of the advance of technology. The shift from human muscle to animal muscle took millennia; from animal to human-guided mechanical, centuries; from human-guided to robotic, decades; and the emergence of computer-directed manufacture seems measured in years if not months. Human society, however, still is anchored in a near-medieval paradigm where social worth is measured by the type and extent of work one engages in. The pecking order of work starts at the menial and dirty level, maids and animal rendering and manual labor (the province of illegal immigrants and paroled convicts) comparable to carrying the hod. The next step up is the marginally cleaner and less taxing service economy of McJobs, which jousts with the decline of blue-collar union-affiliated manufacturing jobs for the next higher rung (salaries and benefits alone give the advantage to unionized jobs, regardless of the decades-long decline in union membership, though the recent perturbations in the airline and automobile industries in particular, and corporate pension plans generally, leave even that in doubt). Above that are the traditional white-collar jobs, but the new aristocracy—sharply defined by the accelerating concentration of wealth in at least American society—is composed of those who let their money work for them, the investing class, the owners of the means of production.
Work is devalued in other ways: in the symbolic change of language in which employees are now called “associates,” with a presumed stake in the corporate success that is not mirrored anywhere in the reward system; in the stock market rewarding corporate actions that trim the workforce; and in the precipitous erosion of industry-sponsored pensions. Human labor has been, or is in the process of being, effectively decoupled from the part of the economy that is valued. The long-term consequences of this are by no means clear, but the advent of personal nanofactories will not necessarily create a widespread leisure class.
Another of the volumes on my physical desktop is William Julius Wilson’s When Work Disappears: The World of the New Urban Poor. It deals with the “left behind” problem of those under a double burden of low social status and of being dependent upon jobs in industries that have moved elsewhere (to Alabama, to Mexico, or to China). While the analogy to a nanotechnology shift need not be exact, Wilson’s depictions and analyses offer a powerful warning we may need to confront within a generation: what are the social consequences when there are no alternative employment outlets for surplus labor? American history of the 20th century holds small hope that our social attitudes will change rapidly: the unemployed, underemployed, and idle always have been despised for not somehow rising above the crushing weight of social and economic forces beyond their control. Revolution traditionally has been pointless or counterproductive, and Cité Soleil endures in its multiple forms around the globe despite the potential and promises of globalization, the Green Revolution, and countless other advances.
It is tempting to suggest that nano-communes, with an internal self-sufficiency that leavens the worst effects of industrial-era unemployment, will free the human spirit for more cerebral endeavors. Futures are almost never equally distributed when they arrive, and Utopian dreams of that kind have a history of being measured in months rather than decades or eras. It is difficult to envision the rise of a labor movement comparable to those of nineteenth-century Britain and the United States; it is almost easier to predict the widespread distribution of limited-capacity PNs as a form of social welfare (and social placation of the underclass).
Larger questions arise out of this potential for increased social marginality. The income gap between rich and poor has been widening for more than two decades. Globalization has transformed the American economy, and the household economy has suffered as a result. The degree to which nanotechnology, the Internet, and other technologies accelerate or buffer the social decoupling of work and status is still an undiscovered country. If the cumulative effect is acceleration, we need to anticipate the range of human adaptations that will follow. If one no longer is attached in any meaningful way to an economy and the political ideology that supports it, how long can that authority hold one’s allegiance? And what are the alternatives if the allegiance cannot otherwise be reinforced?
Although it is a commonplace to think of religious worship as timeless, it actually undergoes periodic major shifts, often triggered by secular events. In the first century of the Common Era, the nature of revelation itself was transformed from the direct presence of a transcendent deity to the interpretation of a written Scripture. For Jews, the destruction of the Holy of Holies in the Second Temple ended the traditional direct contact of the High Priests. For Christians, the sudden absence of their Messiah from the streets of Jerusalem transformed the Judaic concept of messianic return into an entirely new understanding of the relationship between human beings and their Creator.
The struggle for primacy between the Catholic Church and secular governments began soon after Christianity was adopted as the official religion of the Roman Empire. It continued through the Investiture Controversy of the Middle Ages, and was the decisive factor in the success of the Reformation. However, the waning of the dominance of religion was a process begun centuries earlier by resistance (heresy) within the Church itself, beginning with the Great Schism of the Eastern Orthodox traditions. The purification movements that created monastic orders within the Church presaged the later coming of the Reformation, which relocated purifying reform outside the Church and ended the sole authority of Rome to arbitrate Christian salvation. The secular challenges arising from the Enlightenment remain at play in the contemporary questions of Church and State, Science and Belief, and authority to define human relations. Increasing secularity jousts with the rise of fundamentalism and of sects, undermining traditional mainstream churches.
Whether the maturing of nanotechnology will impact the continuing struggle of religious authority is unclear. The potential is there, certainly, as the manipulation of matter at the molecular level comes perilously close to playing God, especially where it might affect what it means to be human. Artificial intelligence, genetic engineering, and cybernetic enhancements pose imminent challenges to religious understandings of the human, and nanotechnology bids to play a major role within each of those technologies. Public discourse in areas where the definitions of life are most contended is fueled as much by symbolism and metaphor as by science; misapprehensions and misunderstandings about nanotechnology may well be fuel for new battlefronts in what has been dubbed the culture wars.
During the Reformation, the monolithic authority of the Church of Rome was transformed into a limited number of Protestant denominations. The existence of each one allowed anyone to resist the Authority of the Catholic Church, and beyond that, the authority of any other church. (The earliest attempt to incorporate a denial of secular authority, under the banner of No Bishops, No Barons, was ruthlessly suppressed by secular forces, whose worldly enforcement had more immediate clout than the afterlife of religion.) The transformation of monolithic Authority into micro-authority created a market for allegiances. The old concept of rules defined and enforced by a monopolistic Church—enforced by excommunication, the denial of sacraments, and the resulting condemnation to an infernal afterlife—gave way to a free market of ideas and selection of, rather than submission to, authority that continues to this day. Catholic priests who wish to marry may find refuge in the Anglican communion. Protestant churches may fracture over rules of control and worship, and denominations may enter schism over ecclesiastical matters, as witness the current strain in the Anglican communion over the issue of gay bishops and clergy, and the social acceptance of homosexuality. Other issues less anchored in scriptural interpretation, like finances, may also trigger the sundering of ways for a congregation.
Using this as an analogy for secular considerations, it is an interesting exercise in speculation to consider whether nanotechnology generally, and desktop manufacturing in particular, will lead to nano-communes that eventually decouple individuals from the larger economy and the political system so closely tied to it. Such communities would be the natural descendants of the self-sufficient medieval monastic orders, the utopian communities of the mid-1800s, and the communes of the 1960s and beyond. Unlike their predecessors, they could be off the grid in important ways, but not necessarily withdrawn from the larger society.
In other realms, there is some additional promise in the potential for using nanotechnology as a recycling outlet. Molecular disassembly as a precursor to molecular assembly may be a completely different set of technological difficulties, and raises a series of questions about disposal of nonessential elements. The Newtonian-world vision of a methane burnoff is impractical at the molecular level, and the state of byproduct disposal is unclear at this point. If unwanted matter can be converted to energy, and stored for use, nanotechnology could change the nature of both recycling and of power. If each household ran on a green power combination of solar energy and molecular conversions, entire industries might be transformed. It stretches the imagination a bit to think that factories could be powered with wind, solar, and nano power, so the traditional power industries might not disappear, but important sectors might achieve relative independence from them.
At the same time, the intellectual property forces would still work to bind nanobased anything to the existing corporate world. If nano goes into the wild, via bootleg or Robin Hood dissemination, it could weaken the corporate hold, inspire a widespread law enforcement crackdown on piracy, or dissolve society into above-ground and Morlock-like subcultures that coexist because they have little reason to compete. In any of these scenarios, nanotechnology by itself is not an actor: it is a tool of other interests, and its impacts are dampened or enhanced by the decisions of social engineering and politics. But if the end result is the alienation of large masses of citizens from the engines of the economy and the icons of government, the costs and secondary developments will be far ranging.
Nanotechnology has its own limits. A host of major decisions in the social realm will not be changed to any great degree by nanotechnology. It will not protect the Arctic National Wildlife Refuge (indeed, if natural gas is the first and basic fuel for desktop manufacturing, it may exacerbate the pressures on the ANWR), nor will it stop the denuding of the Amazon rain forest. It will not eliminate prejudice, nor resolve the multiple questions of authority and Authority that attend the modern estate of humankind. We can predict safely that when this particular future of mature nanotechnology arrives, it will not be equally distributed, and may easily be a weapon of social dominance rather than the delivery vehicle of social equity. Even the utopian visions of Gene Roddenberry included a period of troubled dystopia, which Alvin Toffler captured in Future Shock: the premature arrival of the future ... the imposition of a new culture on an old one that results in human beings ... increasingly disoriented, progressively incompetent to deal with their environments.
Which leaves me almost where I began: What do I make of this nanotechnology thing? I suspect it will be very much like its predecessors, a potentially transformative technology that will be bound to the bed of Procrustes by the older social and economic systems that midwifed it. Because of that, it has considerable potential to be more Pandora’s Box than Holy Grail in the early going. Assuming that its byproducts do not poison the groundwater or become an airborne grey goo, it will almost have to achieve an outlaw status (or have its more egalitarian potential championed by those who will be deemed outlaws) before it reaches a socially transformative cusp. In the near term, whether I buy it in a store or make it with my nanofactory, I will still have to pay for toilet paper.
Michael Buerger, an Associate Professor of Criminal Justice at Bowling Green State University and a former police officer, is a member of the Futures Working Group, a collaboration between the FBI and the Society of Police Futurists International. His broad interests mainly concern the impact of large-scale social changes and reactions to them.
1 Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology (Collegium Basilea, Basel, Switzerland), Volume 2, Number 1a
2 The Futures Working Group, a collaboration between the FBI and the Society of Police Futurists International (http://www.policefuturists.org/futures/fwg.htm)
3 Global Task Force on Implications and Policy (http://www.crnano.org/CTF.htm), organized by the Center for Responsible Nanotechnology
4 The Need For Limits (http://www.thekurzweillibrary.com/the-need-for-limits)
5 The Singularity is Near (http://singularity.com/)
For centuries, we have built cultures and economies around scarcity. Economics is the “study of how human beings allocate scarce resources”1 in the most efficient way, and conventional wisdom agrees that regulated capitalism results in the most efficient allocation of those scarce resources.
But what happens if resources are not scarce? What economic system would we use to allocate plentiful resources? Is there even a point to talking about the “economics of abundance” in a culture where economic equations are entirely oriented around scarcity? As Chris Anderson, editor of Wired magazine, says, “My college textbook, Gregory Mankiw’s otherwise excellent Principles of Economics, doesn’t mention the word abundance. And for good reason: If you let the scarcity term in most economic equations go to nothing, you get all sorts of divide-by-zero problems. They basically blow up.”2
We are on the cusp of a new era that has the potential to be an era of abundance. In the coming decades, molecular manufacturing will be a reality. The Nanotechnology Glossary3 defines molecular manufacturing as “the automated building of products from the bottom up, molecule by molecule, with atomic precision. This will make products that are extremely lightweight, flexible, durable, and potentially very ‘smart’.”
And cheap. Just as Apple enabled personal publishing by marrying the PostScript language with the Macintosh interface and an inexpensive LaserWriter printer, so will the coupling of molecular manufacturing with appropriate programming tools bring about a revolution we might call “personal manufacturing.” Such personal nanofactories (PNs) already have been envisioned and are likely to be similar in look and ease of use to a printer or microwave oven. Indeed, an artist’s conception can be seen at http://www.foresight.org/nano/nanofactory.html
The advent of PNs should bring the cost of most nonfood necessities to near zero. Much of the raw material for most objects we commonly use can be found in air and dirt, with a few fortified materials thrown in. If we build things from the molecules up (and conversely, break things down into their component molecules for reuse), materials cost will nearly disappear. Information would then become the most expensive resource.
Meanwhile, computing power—information management—continues to expand exponentially even as its cost drops precipitously. Furthermore, as true artificial intelligence (AI) approaches, computers will become self-programming, and information cost may drop even more dramatically. It’s already happening. Today, most of our products contain greater and greater information content (technology) at lesser and lesser cost. It appears that even food eventually could be manufactured on the kitchen countertop at practically no materials cost.
However, if history is a guide, the “haves” will always want to have more and the “have-nots” will end up getting relatively less. That is the way many people keep score — as the bumper sticker wisdom goes, “He who dies with the most toys wins.” It’s not just a silly ditty. It is a frank statement of the mindset of many individuals. And it is the “haves” that possess easy access to the levers of power and legislation.
In a system based on scarcity, those holding the levers of production will not easily give them up. In domestic and international markets based on scarcity, the function and responsibility of directors and officers is to maximize shareholder value — at nearly any cost that does not fall afoul of laws, or at least not so far afoul that the penalties exceed the financial gain resulting from illegal actions.
So, what kind of culture do we want? In a system of plenty, will we continue to keep score by maintaining the preponderance of benefits inside corporate walls and coffers? Will we continue to stifle the spread of benefits through secrecy and protectionism? Unless something changes, history suggests that laws, regulations, and protections will continue to be designed for the exact purpose of directing all profits, and virtually all of the benefits, to shareholders.
Is it possible to change this historical trend? Is it desirable? What would an economy based on abundance look like? What would we call it? Could we convince the lawmakers, the regulators, and those who currently benefit most from a system based on scarcity to relinquish what has worked so well for them?
I maintain that it is desirable and that we must drive toward an outcome whereby the benefits of molecular manufacturing accrue to the greatest number of people. War, poverty, and business drive my reasoning.
To date, all our technological and economic progress has produced a world at war and in poverty. War is largely fought over scarce resources. Widespread wealth (through universal distribution of PNs) would remove the apparent fuel for most wars.4
The World Bank estimates that 2.7 billion humans live below a level necessary to meet basic needs. The organization says that this kind of poverty includes hunger, lack of shelter, no access to medicines, and losing a child to illness brought about by unclean water.5 Few would argue that human misery is desirable. PNs could be programmed to provide basic building supplies, medicine, foodstuffs, and clean water.
As regards business, I believe we can convince a wide range of enterprises, from local to transnational, that maximizing the benefits for billions of people (read: “customers”) simultaneously maximizes value for shareholders… in the long run.
However, nearly all businesses act primarily in the interest of the short term. Corporate directors cannot allow a departure from known short-term profit centers in the market without assistance from legislation and regulators to flatten the playing field for all. Even Bill Ford, chairman of the Ford Motor Company, is calling for government to incentivize his industry to produce environmentally friendly technology6 — ostensibly, so his firm can afford to produce such vehicles while staying competitive with other auto manufacturers.
We must incentivize, strongly encourage, or require the broad sharing of the benefits of early-onset molecular manufacturing advances and breakthroughs so that the long-term benefits can be realized. This discussion needs to happen now, before entrenched interests develop protections and harden regulations adapted for maximum short-term profits while stifling innovation. Market forces can be too slow. What’s needed is a means to produce broad and inexpensive licensing so that early breakthroughs in molecular manufacturing can quickly benefit a broad swath of humanity.
Over hundreds of years, we have developed the skills of how to allocate things in short supply. For widespread abundance, we have no experience, no projections, and no economic calculations. Abundance, paradoxically, could be highly disruptive. It is time to design a new economics of abundance, so that abundance can be enjoyed in a society that is prepared for it.
1. (2003) The Columbia Electronic Encyclopedia, Sixth Edition, Columbia University Press
2. Anderson, Chris (2005) “The Tragically Neglected Economics of Abundance” http://longtail.typepad.com/the_long_tail/2005/03/the_tragically_.html
3. Burgess, Steve; Holister, Paul; Keiper, Adam; Swartz Esq., MPA, Jonathan S.; Wang, Rosa (2004) “Nanotechnology Glossary” http://www.nanotech-now.com/nanotechnology-glossary-N.htm
4. Burgess, Steve and Treder, Mike (2005) “Policy Debate” http://crnano.typepad.com/crnblog/2005/05/policy_debate.html
5. The World Bank (2006) “Poverty Analysis—Overview” http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTPOVERTY/EXTPA/0,,contentMDK:20153855~menuPK:435040~pagePK:148956~piPK:216618~theSitePK:430367,00.html
6. (2005) “Bill Ford’s Address at the National Press Club,Washington, DC” http://www.theautochannel.com/news/2005/11/22/148983.html
1. Some eleven thousand years ago, in the neighborhood of Mesopotamia, some of our ancestors took up agriculture, thereby beginning the end of the hunter-gatherer existence that our species had lived ever since it first evolved. Population exploded even as nutritional status and quality of life declined, at least initially. Eventually, greater population densities led to greatly accelerated cultural and technological development.
In 1448, Johannes Gutenberg invented the movable type printing process in Europe, enabling copies of the Bible to be mass-produced. Gutenberg’s invention became a major factor fueling the Renaissance, the Reformation, and the scientific revolution, and helped give rise to mass literacy. A few hundred years later, Mein Kampf was mass-produced using an improved version of the same technology.
Work in atomic physics and quantum mechanics in the first three decades of the 20th century laid the foundation for the subsequent Manhattan Project during World War II, which raced to beat Hitler to the nuclear bomb.
In 1957, Soviet scientists launched Sputnik 1. In the following year, the US created the Defense Advanced Research Projects Agency to ensure that the US would keep ahead of its enemies in military technology. DARPA began developing a communication system that could survive nuclear bombardment by the USSR. The result, ARPANET, later became the Internet—the long-term consequences of which remain to be seen.
2. Suppose you are an individual involved in some way in what may become a technological revolution. You might be an inventor, a funder of research, a user of a new technology, a regulator, a policy-maker, an opinion leader, or a voting citizen. Suppose you are concerned with the ethical issues that arise from your potential involvement. You want to act responsibly and with moral integrity. What does morality require of you in such a situation? What does it permit but does not require? What questions do you need to find answers to in order to determine what you ought to do?
If you consult the literature on applied ethics, you will not find much advice that applies directly to this situation. Ethicists have written at length about war, the environment, our duties towards the developing world; about doctor-patient relationships, euthanasia, and abortion; about the fairness of social redistribution, race and gender relations, civil rights, and many other things. Arguably, nothing humans do has such profound and wide-ranging consequences as technological revolutions. Technological revolutions can change the human condition and affect the lives of billions. Their consequences can be felt for hundreds if not thousands of years. Yet, on this topic, moral philosophers have had precious little to say.
3. In recent years, there have been increasing efforts to evaluate the ethical, social, and legal implications (“ELSI”) of important new technologies ahead of time. Much attention has been focused on ethical issues related to the human genome project. Now there is a push to look at the ethics of advances in information technology (information and computer ethics), brain science (neuroethics), and nanotechnology (nanoethics).
Will “ELSI” research produce any important findings? Will it have any significant effects on public policy, regulation, research priorities, or social attitudes? If so, will these effects be for the better or for the worse? It is too early to tell.
But if we believe that nanotechnology will eventually amount to a technological revolution, and if we are going to attempt nanoethics, then we might do well to consider some of the earlier technological revolutions that humanity has undergone. Perhaps there are hidden features of our current situation with regard to nanotechnology that would become more easily visible if we considered how our moral principles and technology impact assessment exercises would have fared if they had been applied in equivalent circumstances in any of the preceding technological revolutions.
If such a comparison were made, we might (for example) become more modest about our ability to predict or anticipate the long-term consequences of what we were about to do. We might become sensitized to certain kinds of impacts that we might otherwise overlook—such as impacts on culture, geopolitical strategy and balance of power, people’s preferences, and on the size and composition of the human population. Perhaps most importantly, we might be led to pay closer attention to what impacts there might be in terms of further technological developments that the initial revolution would enable. We might also become more sophisticated, and perhaps more humble, in our thinking about how individuals or groups might exert predictable positive influence on the way things develop. Finally, we might be led to focus more on systems-level aspects, such as institutions and technologies for aggregating and processing information, for making decisions regarding, e.g., regulations and funding priorities, and for implementing these decisions.
Allow me to clarify the metaphor implied by the term “singularity.” As applied to future human history, the metaphor is not a point of infinity, but rather the event horizon surrounding a black hole. Densities are not infinite at the event horizon but merely large enough such that it is difficult to see past the event horizon from outside.
I say difficult rather than impossible because the Hawking radiation emitted from the event horizon is likely to be quantum entangled with events inside the black hole, so there may be ways of retrieving the information. This was the concession made recently by Hawking. However, without getting into the details of this controversy, it is fair to say that seeing past the event horizon is difficult (impossible from a classical physics perspective) because the gravity of the black hole is strong enough to prevent classical information from inside the black hole getting out.
We can, however, use our intelligence to infer what life is like inside the event horizon even though seeing past the event horizon is effectively blocked. Similarly, we can use our intelligence to make meaningful statements about the world after the historical singularity, but seeing past this event horizon is difficult because of the profound transformation that it represents.
So discussions of infinity are not relevant. You are correct that exponential growth is smooth and continuous. From a mathematical perspective, an exponential looks the same everywhere and this applies to the exponential growth of the power (as expressed in price-performance, capacity, bandwidth, etc.) of information technologies. However, despite being smooth and continuous, exponential growth is nonetheless explosive once the curve reaches transformative levels. Consider the Internet. When the Arpanet went from 10,000 nodes to 20,000 in one year, and then to 40,000 and then 80,000, it was of interest only to a few thousand scientists. When ten years later it went from 10 million nodes to 20 million, and then 40 million and 80 million, the appearance of this curve looks identical (especially when viewed on a log plot), but the consequences were profoundly more transformative. There is a point in the smooth exponential growth of these different aspects of information technology when they transform the world as we know it.
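To make the point concrete, here is a minimal sketch (assuming Python and illustrative round numbers rather than historical Arpanet figures) showing how successive doublings look identical on a logarithmic scale while the absolute growth per doubling explodes:

```python
# Illustrative sketch only: round numbers are assumed, not historical data.
# Each doubling adds the same 0.30 to log10(value) -- the curve is "smooth
# and continuous" on a log plot -- yet the absolute increase per doubling
# is a thousand times larger in the later phase.
import math

def doublings(start, steps):
    """Yield (new_value, absolute_increase, log10_of_value) per doubling."""
    value = start
    for _ in range(steps):
        new_value = value * 2
        yield new_value, new_value - value, math.log10(new_value)
        value = new_value

for label, start in [("early phase", 10_000), ("later phase", 10_000_000)]:
    print(label)
    for value, added, log_value in doublings(start, 3):
        print(f"  nodes={value:>12,}  added={added:>12,}  log10={log_value:.2f}")
```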
You cite the extension made by Kevin Drum of the log-log plot that I provide of key paradigm shifts in biological and technological evolution (which appears on page 17 of The Singularity Is Near). This extension is utterly invalid. You cannot extend a log-log plot in this way, for just the reasons you cite. The only straight line that is valid to extend on a log plot is one representing exponential growth, when the time axis is on a linear scale and the value (such as price-performance) is on a log scale. Then you can extend the progression, but even here you have to make sure that the paradigms to support this ongoing exponential progression are available and will not saturate. That is why I discuss at length the paradigms that will support ongoing exponential growth of both hardware and software capabilities. But it is not valid to extend the straight line when the time axis is on a log scale. The only point of these graphs is that there has been acceleration in paradigm shift in biological and technological evolution.
If you want to extend this type of progression, then you need to put time on a linear x axis and the number of years (for the paradigm shift or for adoption) as a log value on the y axis. Then it may be valid to extend the chart. I have a chart like this on page 50 of the book.
This acceleration is a key point. These charts show that technological evolution emerges smoothly from the biological evolution that created the technology creating species. You mention that an evolutionary process can create greater complexity—and greater intelligence—than existed prior to the process. And it is precisely that intelligence creating process that will go into hyper drive once we can master, understand, model, simulate, and extend the methods of human intelligence through reverse-engineering it and applying these methods to computational substrates of exponentially expanding capability.
That chimps are just below the threshold needed to understand their own intelligence is a result of the fact that they do not have the prerequisites to create technology. There were only a few small genetic changes, comprising a few tens of thousands of bytes of information, that distinguish us from our primate ancestors: a bigger skull (allowing a larger brain), a larger cerebral cortex, and a workable opposable appendage. There were a few other changes that other primates share to some extent, such as mirror neurons and spindle cells.
As I pointed out in my Long Now talk, a chimp’s hand looks similar, but the pivot point of the thumb does not allow facile manipulation of the environment. In contrast, our human ability to look inside the human brain and to model and simulate and recreate the processes we encounter there has already been demonstrated. The scale and resolution of these simulations will continue to expand exponentially. I make the case that we will reverse-engineer the principles of operation of the several hundred information-processing regions of the human brain within about twenty years and then apply these principles (along with the extensive tool kit we are creating through other means in the AI field) to computers that will be many times (by the 2040s, billions of times) more powerful than needed to simulate the human brain.
You write that “Kurzweil found that if you make a very crude comparison between the processing power of neurons in human brains and the processing powers of transistors in computers, you could map out the point at which computer intelligence will exceed human intelligence.” That is an oversimplification of my analysis. I provide in the book four different approaches to estimating the amount of computation required to simulate all regions of the human brain, based on actual functional recreations of brain regions. These all come up with answers in the same range, from 10^14 to 10^16 cps for creating a functional recreation of all regions of the human brain, so I’ve used 10^16 cps as a conservative estimate.
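As an aside, one hedged back-of-envelope way to see how estimates of this order of magnitude arise (an illustration with assumed round numbers, not a restatement of the four functional-recreation approaches described above) is to multiply neuron count, connections per neuron, and operations per connection per second:

```python
# Back-of-envelope sketch with assumed round numbers; only meant to show
# how estimates of this magnitude arise, not to reproduce the book's
# four derivations.
neurons = 1e11                    # rough count of neurons in a human brain
connections_per_neuron = 1e3      # order-of-magnitude synapse estimate
ops_per_connection_per_sec = 2e2  # assumed operations per synapse per second

cps = neurons * connections_per_neuron * ops_per_connection_per_sec
print(f"~{cps:.0e} calculations per second")  # ~2e+16, the top of the cited range
```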
The 10^16 cps figure refers only to the hardware requirement. As noted above, I have an extensive analysis of the software requirements. While reverse-engineering the human brain is not the only source of intelligent algorithms (and, in fact, has not been a major source at all until just recently, because we did not have scanners that could see into the human brain with sufficient resolution), my analysis of reverse-engineering the human brain is along the lines of an existence proof that we will have the software methods underlying human intelligence within a couple of decades.
Another important point in this analysis is that the complexity of the design of the human brain is about a billion times simpler than the actual complexity we find in the brain. This is due to the brain (like all biology) being a probabilistic recursively expanded fractal. This discussion goes beyond what I can write here (although it is in the book). We can ascertain the complexity of the design of the human brain because the design is contained in the genome and I show that the genome (including non-coding regions) only has about 30 to 100 million bytes of compressed information in it due to the massive redundancies in the genome.
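For readers who want the arithmetic behind the genome figure, here is a small hedged sketch: the raw size is computed from the approximate genome length, while the 30 to 100 million byte compressed estimate is the claim being described above, not something derived here.

```python
# Hedged sketch: approximate numbers assumed for illustration.
base_pairs = 3e9        # approximate length of the human genome
bits_per_base_pair = 2  # four possible bases (A, C, G, T) -> 2 bits each

raw_megabytes = base_pairs * bits_per_base_pair / 8 / 1e6
print(f"uncompressed genome: ~{raw_megabytes:.0f} million bytes")  # ~750

# The text's argument is that massive redundancy (repeated and non-unique
# sequences) compresses this design information to roughly 30-100 million
# bytes, which is what makes the brain's design far simpler than the
# structures that grow from it.
```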
So in summary, I agree that the singularity is not a discrete event. A single point of infinite growth or capability is not the metaphor being applied. Yes, the exponential growth of all facets of information technology is smooth, but it is nonetheless explosive and transformative.
© 2006 Ray Kurzweil
Concern: Convincing society that the brain needs to keep up with the changes ahead.
Each one of us has been entrusted with the care and nourishment of what might be the most extraordinary and complex creation in the universe. Home to mind and personality, the human brain archives cherished memories and hopes for the future. It arranges and coordinates the elements of consciousness that give us purpose, passion, motion, and emotion.
But the brain is too fragile. It is far too vulnerable to be allowed to continue in its current state. In order to properly sustain the brain, we need to know what it likes, the challenges it craves, the rest it requires, and the protection it deserves. In short, the brain must have a strategy for its future.
But is it really necessary to take action now? I submit that if events have altered the day-to-day operations of the brain, affecting how it performs its operations and whether it can sustain for the long haul, then now is the right time to take action.
Recently, there has been a series of technological events causing irrevocable changes in the external environment of the brain. People are living longer; there is a notable increase in the number of activists supporting life extension technologies; economic reporting predicts an increase in research and development of molecular manufacturing and nanotechnology; programming engineers are reveling in the increase in research and development of superintelligence; and conservative organizations are publishing warnings indicating an increased awareness of the potential threats of superintelligence. These events will directly or indirectly affect the brain, resulting in a set of expectations for the brain to function over a longer period of time and operate at a higher level of quality than it has ever achieved in the past.
To keep pace and sustain itself for the long haul, the brain needs a strategy that takes into account the present circumstances and what the future may hold. Currently, the brain is challenged by a demand to produce better cognitive capabilities more quickly and efficiently for a longer period of time. Simultaneously there is an increased rate of neurological degeneration of brain cells resulting from increased longevity. And even though it is not a current threat, soon there will be a need to keep up with the acceleration of competitive superintelligence.
Developing a strategy for the brain requires a balance of several elements: a compelling vision for its future, strategic goals, an action plan, and a means for measuring the success of the plan. But before we can develop a strategic plan for the brain, we have to know more about the brain’s ability to meet the needs of the contemporary mind. This may seem like an abstract project because it would require us to separate the brain as a functioning organization of cells, or agents, from the mind. Nevertheless, an effective way to do this is to fictionalize the brain: make it a character or a business entity.
If the brain had an executive statement, for example, it might read something like this:
Executive Statement of the Brain
The mission of the brain is to serve its cells by adopting the advantages of emerging technologies to ensure a smart, safe and sustainable environment.
The brain develops best practices for cognitive and creative processes. The brain’s central operating system is located in the neocortex, and has connections through the internal and external communications network.
The brain’s quality services are unique and exclusive, and its target supply chain is nerve cells and synapses with upper-end job-related responsibilities. The brain’s competitive "intelligence" edge is that its services are 100% man-made, unlike competitors, such as superintelligence and friendly artificial intelligences. By this fact, the brain’s mind hopes to attract inventors and investors that value the artistry of producing neurological connections and their emergent properties such as critical thinking, imagination, day-dreaming, problem-solving, humor and intellection. Since the brain’s responsibilities are mostly to serve the day-to-day functions of the mind, as well as to elaborate networking and communications assistance for the mind and body, it is considered to be in the communications market, although some mental personas use the end-result products, such as ideas, for themselves.
In the year 2006, the brain plans to develop strategic initiatives to protect its future and gain a competitive edge in the "intelligence" marketplace. Over the past few decades, the brain’s longevity has increased along with its competitors, necessitating a reevaluation of its position and its future.
The brain’s future is uncertain due to advancing cogitative systems such as AI and superintelligence. Adding to the external environment of the brain is the fact that new intelligence enterprises entering the marketplace are drawing business away from the brain. Encephalitis and other invasive viral infections, as well as dementia and neurological breakdowns, are eating away at the resources of the brain’s affiliates. This pending shortage has created an immense demand for increased memory.
Regardless of some of the internal flaws of the brain, there is great potential for its continued success. The brain will improve faltering memory by adding a backup system; will expand to direct mind-linkup ubiquitous computing networks; will add error-correction memory replay and a global Net connection with remote neural access, guarded by security protocol. The brain plans to support its entire system by eliminating degenerative processes that impede the ability for a healthy, vital life in its goal to keep up with the many changes ahead.
While the executive statement is a fictionalized story, it does contain tangible elements. The reality is that our brains need to be protected and improved upon. The brain’s future depends upon how we want our brains to perform in the coming years and how much augmentation is actually needed, both invasively and noninvasively, to satisfy this end. Since our brains contain our memories, and our memories build our identities, this is a serious matter. But because we cannot see it as clearly as we see our expanding or shrinking bodies, the brain is dismissed while our mind presses for more immediate attention, forgetting the hard fact that unless the brain is in good physical shape, the entire system will falter.
Today the brain is vulnerable. It is vulnerable because the axial skeleton’s skull that encloses and protects the brain is not built from impenetrable material; its command-and-control center, including the white matter in between, is in constant danger of breakdown, infection, and disease; and its cognitive processes are subject to loss of information.
Trends five to ten years in the future suggest an increase in technologies, including biotech and nanotech, for building better brains to operate with better bodies in meeting the needs of people living longer. Further future trends suggest people opting for the synthetic brain over a biological brain. Markets point to an expected increase in neurosurgery, neuroinformatics, neuromarketing, biotechnologies, and human performance enhancements with an explicit focus on nanotechnology. But the consequential inclination is that of machine intelligence challenging human intelligence. Lurking in the foreground of the future is whether or not the brain will be able to keep pace with new technologies that will otherwise outperform it.
Based on potential threats and opportunities, and on the brain’s mission to serve its cells by adopting the advantages of emerging technologies to ensure a smart, safe and sustainable environment, the brain’s strategy narrows down to: (1) enhancing its performance and sustainability in order to satisfy the needs of people living longer; (2) competing with emerging superintelligence; and (3) enhancing its cognitive capabilities in order to deal with the problems of an increasingly complex world.
With these issues on the table, the brain needs a practical approach hedged by a strong vision that helps society understand the opportunities and the threats that await all of us. This is not just an abstract discussion; it includes everyone, not a select few. It is not simply a matter of being smarter or more capable; it is a matter of healthy and vital living. It is a matter of being prepared for the challenges of the future, and a measurable goal of convincing others to be prepared as well.
Convincing people is not an easy task, especially when minds have already been made up. But I think that we must work toward convincing society that the brain needs to accelerate with the rate of technological change, as our vision and audition have through innovative corrective technologies, and our arms and legs have with robotic prosthetics, and as other parts of our bodies have transformed and renewed in working together to keep us alive.
© 2006 Natasha Vita-More
Human enhancement, our ability to use technology to enhance our bodies and minds as opposed to its application for therapeutic purposes, is a critical issue facing nanotechnology. It will be involved in some of the near-term applications of nanotechnology, with such research labs as MIT’s Institute for Soldier Nanotechnologies working on exoskeletons and other innovations that increase human strength and capabilities. It is also a core issue related to far-term predictions in nanotechnology, such as longevity, nanomedicine, artificial intelligence and other issues.
The implications of nanotechnology as related to human enhancement are perhaps some of the most personal and therefore passionate issues in the emerging field of nanoethics, forcing us to rethink what it means to be human or, essentially, our own identity. For some, nanotechnology holds the promise of making us superhuman; for others, it offers a darker path toward becoming Frankenstein’s monster.
Without advocating any particular side of the debate, this essay will look at a growing chorus of calls for human enhancement, especially in the context of emerging technologies, to be embraced and unrestricted. We will critically examine the recent “pro-enhancement” arguments articulated in More Than Human (2005) by Ramez Naam1, one of the most visible works on the subject today, and conclude that they ultimately need to be repaired if they are to be convincing.
Before we proceed, we should lay out a few actual and possible scenarios in order to be clear on what we mean by “human enhancement.” In addition to steroid use to become stronger and plastic surgery to become more attractive, people today also use drugs to boost creativity, attentiveness, perception, and more. In the future, nanotechnology might give us implants that enable us to see in the dark, or in currently non-visible spectrums such as infrared. As artificial intelligence advances, nano-computers might be embedded in our bodies in order to help process more information faster, even to the point where man and machine become indistinguishable.
These scenarios admittedly sound like science fiction, but with nanotechnology, we move much closer to turning them into reality. Atomically-precise manufacturing techniques continue to become more refined and will be able to build cellular-level sensors and other tools that can be integrated into our bodies. Indeed, designs have already been worked out for such innovations as a “respirocyte,” an artificial red blood cell that holds a reservoir of oxygen.2 A respirocyte would come in handy for, say, a heart attack victim, who could continue breathing for an extra hour until medical treatment is available, despite a lack of blood circulation to the lungs or anywhere else. But in an otherwise-healthy athlete, a respirocyte could boost performance by delivering extra oxygen to the muscles, as if the person were breathing from a pure oxygen tank.
What we do not mean by “human enhancement” is the mere use of tools, such as a hammer or Microsoft Word, to aid human activities, or “natural” improvements of diet and exercise, though, as we shall discuss later, agreeing on a definition may not be a simple matter. Further, we must distinguish the concept from therapeutic applications, such as using steroids to treat any number of medical conditions, which we take to be unobjectionable for the purposes of this essay.
Also, our discussion here can benefit from quickly noting some of the intuitions on both sides of the debate. The anti-enhancement camp may point to steroids in sports as an argument for regulating technology: that it corrupts the notion of fair competition. Also, some say, by condoning enhancement we are setting the wrong example for our children, encouraging risky behavior in bodies that are still developing. “Human dignity” is also a recurring theme for this side, believing that such enhancements pervert the notion of what it means to be human (with all our flaws).
On the pro-enhancement side, it seems obvious that the desire for self-improvement is morally laudable. Attempts to improve ourselves through, for example, education, hard work, and so on are uncontroversially good; why should technology-based enhancements be viewed any differently? In addition to virtue-based defenses of technological enhancement, we might also appeal to individual autonomy to defend the practice: so long as rational, autonomous individuals freely choose to participate in these projects, intervention against them is morally problematic.
In More Than Human, it is interesting to see that the debate is framed as a conservative (anti-enhancement) versus liberal (pro-enhancement) issue3. This proposed dichotomy is undoubtedly influenced by the creation and work of the U.S. President’s Council on Bioethics. Led by Leon Kass, M.D., PhD, the council released a report, Beyond Therapy, in 2004 that endorsed an anti-enhancement position; this report has become the prime target for both liberals and pro-enhancement groups. However, it would be a mistake to think that the issue necessarily follows political lines, since there may be good reason for a liberal to be anti-enhancement, as well as for a conservative to support it.
In his introductory chapter, Naam outlines the overarching theme that is supported by his research and analysis in subsequent chapters. He offers four distinct arguments in defense of the pro-enhancement position: first, there are pragmatic reasons for embracing enhancement; second, regulation will not work anyway; third, respect for our autonomy licenses the practices; and fourth, the desire to enhance is inherently human and therefore must be respected.
1. In his first argument, Naam points out that “scientists cannot draw a clear line between healing and enhancing.”4 The implied conclusion here is that, if no principled distinction can be made between two concepts, it is irrational to afford them different moral status. So, since there are no restrictions on therapy, in that we have a right to medical aid, there also should be no restrictions on human enhancement, i.e. using the same medical devices or procedures to improve our already-healthy bodies. In other words, there is no significant or moral difference between therapy and enhancement.
There are numerous problems with such a claim; we will herein elucidate two. The first problem can be illustrated by the famous philosophical puzzle called “The Paradox of the Heap”: given a heap of sand with N grains, if we remove one grain, we are still left with a heap of sand (that now has only N-1 grains). If we remove one more grain, we are again left with a heap of sand (that now has N-2 grains). If we extend this line of reasoning and continue to remove grains of sand, we find that there is no clear point at which we can definitely say that on side A we have a heap of sand, but on side B we have less than a heap. In other words, there is no clear distinction between a heap of sand and less than a heap, or even no sand at all. However, the wrong conclusion to draw here is that there is therefore no difference between them; likewise, it would be fallacious to conclude that there is no difference between therapy and enhancement. It may still be the case that there is no moral difference between the two, but we cannot arrive at that conclusion through the argument that there is no clear defining line.
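The fallacy can be made concrete with a small sketch. Any sharp cutoff we choose between “heap” and “not a heap” is arbitrary (the threshold below is an invented assumption), and removing a single grain never seems like it should change the answer; yet the two extremes, ten thousand grains and zero grains, remain plainly different.

```python
# A toy illustration of the Paradox of the Heap: any sharp threshold is
# arbitrary, yet the endpoints still differ. The cutoff value is an
# invented assumption with no principled justification.

ARBITRARY_THRESHOLD = 100

def is_heap(grains: int) -> bool:
    return grains >= ARBITRARY_THRESHOLD

grains = 10_000
while grains > 0:
    if is_heap(grains) != is_heap(grains - 1):
        # Any sharp predicate must flip somewhere, even though removing
        # one grain never intuitively "destroys" a heap.
        print(f"Removing one grain at n={grains} flips 'heap' to 'not a heap'")
    grains -= 1

print(is_heap(10_000), is_heap(0))  # True False: the extremes clearly differ
```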
But, second, there likely are principled distinctions that can be made between enhancement and therapy.5 For example, Norm Daniels has argued for the use of “quasi-statistical concepts of ‘normality’ to argue that any intervention designed to restore or preserve a species-typical level of functioning for an individual should count as [therapy]”6 and the rest as enhancement. Alternatively, Eric Juengst has proposed that therapies aim at pathologies which compromise health, whereas enhancements aim at improvements that are not health-related.7
Another pragmatic reason Naam gives is that “we cannot stop research into enhancing ourselves without also halting research focused on healing the sick and injured.”8 However, this claim seems to miss the point: anti-enhancement advocates can simply counter that it is not the research they want stopped or regulated, but rather the use of that research or its products for enhancement. For instance, we may want to ban steroids from sports, but no one is calling for an outright ban on all steroids research, much of which serves healing purposes.
Naam also puts the burden of proof (that regulation of enhancement is needed) on the anti-enhancement side, instead of offering an argument that enhancement need not be regulated.9 But it is unclear why we should abandon the principle of erring on the side of caution, particularly where human health may be at stake, along with other societal impacts. Further, both sides have already identified a list of benefits or harms that might arise from unregulated human enhancement. The problem now is to evaluate these benefits and harms against each other (e.g., increased longevity versus overpopulation), also factoring in any relevant human rights. If neither side is able to convincingly show that benefits outweigh harms, or vice versa, then burden of proof seems to be a non-issue.
2. In his second argument, Naam compares a ban on enhancement to the U.S. “War on Drugs," citing its ineffectiveness as well as externalities such as artificially high prices and increased safety risks (e.g., users having to share needles because they cannot obtain new or clean ones) for those who will use drugs anyway.10 If people are as avidly driven to enhancement as they are to drugs, then yes, this may be the case. But is that a good enough reason to not even try to contain a problem, whether it is drugs, prostitution, gambling, or whatever? While such laws may be paternalistic, they reflect the majority consensus that a significant number of people cannot act responsibly in these activities and need to be protected from themselves and from inevitably harming others. Even many liberals are not categorically opposed to these regulations and may see the rationale of “greater good” behind similar regulation of enhancement.
Further, the fact that we are unable to totally stop an activity does not seem to be a reason at all against prohibiting that activity. If it were, then we would not have any laws against murder, speeding, or “illegal” immigration; in fact, it is unclear what laws we would have left. Laws exist precisely because some people inevitably have tendencies toward the opposite of what is desired by society or government. Again, this is not to say that human enhancement should be prohibited, only that a stronger and more compelling argument is needed.
3. In his third argument, Naam ties human enhancement to the debate over human freedom: “Should individuals and families have the right to alter their own minds and bodies, or should that power be held by the state? In a democratic society, it’s every man and woman who should determine such things, not the state… Governments are instituted to secure individual rights, not to restrict them.”11
Besides politicizing a debate that need not be political, Naam’s arguments here are not so much anti-conservative as pro-libertarian. You would need to have already adopted the libertarian philosophy to accept this line of reasoning (as well as the preceding argument), since, again, even liberals can see that the state has a broader role in creating a functioning, orderly society. This necessarily entails reasonable limits to whatever natural rights we have and also implies new responsibilities; for example, we shouldn’t exercise our right to free speech by slandering others or by yelling “Fire!” in a crowded theater.
A democratic society is not compelled to endorse laissez-faire political philosophy and the minimal state, as some political philosophers have suggested.12 Nor would reasonable people necessarily want unrestricted freedom, e.g., no restrictions or background checks for gun ownership. Even in a democracy as liberal as ours in the United States, we understand the value of regulations as a way to enhance our freedom. For instance, our economic system is not truly a “free market”: though we advocate freedom in general, regulations exist not only to protect our rights, but also to create an orderly process that greases the economic wheels, accelerating both innovation and transactions. As a simpler example, by disciplining a dog to obey commands and not run around unchecked, we actually increase that pet’s freedom, since we can now take him or her on more walks, perhaps without a leash (not to compare people with dogs, or laws with behavioral conditioning).
4. Finally, Naam argues that people have been enhancing themselves from the start: “Far from being unnatural, the drive to alter and improve on ourselves is a fundamental part of who we humans are. As a species we’ve always looked for ways to be faster, stronger, and smarter and to live longer.”13 This seems to be an accurate observation, but it is an argumentative leap from this fact about the world, which is descriptive, to a moral conclusion about the world, which is normative. Or, as the philosophical saying goes, we cannot derive “ought” from “is,” meaning that just because something is a certain way doesn’t mean it should be that way or must continue to be that way. For instance, would the fact that we have engaged in wars, or slavery, or intolerance, across the entire history of civilization imply that we should continue with those activities?
More seriously, this argument seems to turn on an overly broad definition of “human enhancement,” such that it includes the use of tools, diet, exercise, and so on, or what we would intuitively call “natural” improvement. An objection to Naam’s first argument also applies here: just because we cannot clearly delineate between enhancement and therapy or tool use does not mean there is no line between them. We understand that steroid use by baseball players is a case of human enhancement; we also understand that using a rock to crack open a clam is not. Still, the fact that we have not arrived at a clear definition of “human enhancement” should not prevent us from using intuitive distinctions to discuss the issue meaningfully.
The point here is not that human enhancement should be restricted. It is simply that current arguments need to be more compelling and philosophically rigorous, if the pro-enhancement side is to be successful. There is admittedly a strong intuition driving the pro-enhancement movement, but it needs to be articulated more fully, resulting in an argument something like the following:
Who we are now seems to be a product of nature and nurture, most of which is beyond our control. So, if this genetic-environmental lottery is truly random, then why should we be constrained to its results? After all, we never agreed to such a process in the first place. Why not enhance ourselves to be on par with the capabilities of others? And if that is morally permissible, then why not go a little (or a lot) beyond the capabilities of others?
As suggested in the above analysis, one of the first steps in discussing human enhancement is to arrive at a better definition of what it is, perhaps by adopting that used by Daniels or Juengst, though these are still difficult issues. For instance, does it matter whether enhancements are worn outside our bodies as opposed to being implanted? Why should carrying around a Pocket PC or binoculars be acceptable, but having a computer or a “bionic eye” implanted in our bodies be subject to possible regulation? What is the moral difference between the two?
Further, there are societal and ethical implications that also need to be considered, apart from those already mentioned. Before we too quickly dismiss the idea of “human dignity” as romanticized and outdated, we need to give it full consideration and ask whether that concept would suffer if human enhancement were unrestricted. Is there an obligation to enhance our children, or will parents feel pressure to do so? Might there be an “Enhancement Divide,” similar to the Digital Divide, that significantly disadvantages those without? If some people can interact with the world in ways that are unimaginable to others (such as echolocation or seeing in infrared), will that create a further “Communication Divide” such that people no longer share the same basic experiences in order to communicate with each other?
In this essay, we have tried to detail some of the challenges that nanotechnology and nanoethics will confront as applications to human enhancement become technologically viable. This will not be in the distant future, but rather sooner than many of us might have expected. It seems to the authors that a balanced and reasonable perspective is more appropriate than either polarizing extreme, if we are to responsibly and productively advance nanotechnology and its applications, particularly in light of the challenges to the pro-enhancement position that we have described.
1. Ramez Naam, More Than Human (Broadway Books, New York: 2005). See also www.morethanhuman.org.
2. Robert A. Freitas Jr., “Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell,” Artificial Cells, Blood Substitutes, and Immobilization Biotechnology 26 (1998): 411-430.
3. Naam (2005), pp.3-5.
4. Naam (2005), p.5.
5. For more discussion of these ideas, see Fritz Allhoff, “Germ-Line Genetic Enhancement and Rawlsian Primary Goods,” Kennedy Institute of Ethics Journals 15.1 (2005): 43-60.
6. Norm Daniels, “Growth Hormone Therapy for Short Stature: Can We Support the Treatment/Enhancement Distinction?," Growth: Genetics & Hormones 8.S1 (1992): 46-8.
7. Eric Juengst, “Can Enhancement Be Distinguished from Prevention in Genetic Medicine?," Journal of Medicine and Philosophy 22 (1997): 125-42.
8. Naam (2005), p.5.
9. Naam (2005), p.5.
10. Naam (2005), p.6.
11. Naam (2005), pp.6-9.
12. See, for example, Robert Nozick, Anarchy, State, and Utopia (New York: Basic Books, 1974).
13. Naam (2005), p.9.
© 2006 Patrick Lin and Fritz Allhoff.
Can civil societies absorb the impact of MNT without degenerating almost instantly into Hobbesian micro-states, where the principal currency is direct power over other humans, expressed at best as involuntary personal service and at worst as sadistic or careless infliction of pain and the consequent brutalization of spirit in slaves and masters alike? It is a disturbing prospect, more worrying than crazed individuals or sectarian terrorists. Are we, indeed, doomed to this outcome through frailties in our evolved nature, unsuited to such challenges, or perhaps through the rapacity of the current global economy?
A deeper question might be this: even if we assume that rich consumerist and individualist First World cultures like the USA might be prone to such collapse, is that true of all extant societies? Might more rigid or authoritarian societies have an advantage, if their citizens or subjects are too cowed by existing power structures to dash headlong into lawlessness? Might technologically simpler and poorer societies, possessing fewer goods to begin with and perhaps having fewer rising expectations, rebuff the temptations of MNT? Or might they seize upon such machines eagerly, but distribute them and their cornucopia, if only locally, on models of community or tribe unfamiliar to us in the West?
These seem to me extremely important issues that will require concentrated and imaginative study by economists, sociologists and anthropologists. Nearly half a century ago, the brilliant science fiction writer Damon Knight (1922-2002) published a parable salient to one possible sheaf of outcomes arising from successful and cheaply available molecular nanotechnological compilation of goods from cheap feedstocks. In his brief novel A For Anything1, a radical device, the Gismo, duplicates any object within its field, including human beings. It needs no feedstock supply and draws power from batteries, thereby apparently breaching conservation laws. This premise, although invalid given our current understanding of physics, fails to dispel the force of Knight’s allegory, since when matter compilers eventually turn information and cheap feedstocks into virtually any desirable good, the more disastrous consequences portrayed by Knight will, unfortunately, become feasible.
Given the exponential proliferation of Gismos, which apparently provide everything people need without their working for it, including copies of the Gismo and its batteries, ordered Western society collapses almost instantly. Water can be produced out of nothing (the "quantum vacuum", perhaps), greening barren lands; plans to create spacecraft that generate their own fuel in flight seem set at first to remake the entire solar system. Melodramatically yet plausibly enough, alas, Knight projects an almost instant imposition of martial law and its failure, then, worse yet, general breakdown into lawlessness and the acquisition by the brutal and canny of slaves, or "slobs", who can be copied at will when they "wear out". Within half a century, America sinks into a kind of feudalism where nothing, in effect, ever again changes, and where innovation seems pointless if not intolerably disruptive.
Presciently, Knight realized that this kind of stable stagnation requires more than a simple duplicator, and added the proviso that Gismos can produce "protes" or "arrested prototypes", "a gnarled lump of quasi-matter that could be stored in a pigeonhole, and would keep forever" (27). When an "inhibitor" is activated, the prote provides the information necessary to generate a complete copy of the original. In effect, the Gismo is equivalent to a nanofactory, using storable algorithms, although protes have the disadvantage of not being digitized and hence transmissible information.
The question A For Anything raises is perhaps one for specialists in cultural change and diversity. My own specialties are discourse theory and science fiction, so all I can do here is suggest, diffidently, certain possibilities for analytical approaches that are currently unfashionable in the academy and in the business world, but might be of use in probing the unknown. In doing so, I draw upon schemata advanced equally diffidently in my book Theory and its Discontents (1997)2, and a range of overviews of individual and culture conveniently summarized in several books by Ken Wilber, Don Beck, PhD, and others of their school3. Leaving aside the more metaphysical/"mystical" aspects of his thought, Wilber has usefully condensed the work of some hundred specialists in a number of disciplines to yield a model of cultural phases.
To simplify brutally, Wilber and Beck propose that each society tends to segment, both through time and within a given period, according to a sequence of developmental stages. For shorthand, these are color-coded. The earliest (though not the "simplest", each being as complex as the rest) is Instinctive, directed to brute survival (beige), followed by tribal Animism (purple), impulsive Egocentrism (red), disciplined Authority (blue), managerial/scientific Strategic (orange), communitarian Consensus (green), multicultural Ecology (yellow), and a sort of new-age global Holism (turquoise), with perhaps several transcendent states beyond this highest level. These overlap to some degree at least with my own suggested cyclical cultural dominants, and several key stages match up with the "Three Systems of Action" described by Mike Treder and Chris Phoenix4.
Treder and Phoenix note three significantly different systems of response for social organization: Guardian, oriented principally around provision of security; Commercial, promoting science and trade; and Informational, devoted to abundance. It is easy to see that these Dominants (to borrow a term from the communications theory of Roman Jakobson) can be mapped against the most significant dynamics of certain periods, cultures, and elements of cultures. In Wilber’s terms, Guardian would be blue, and in the USA reflect Republican conservative family values; Commercial orange, representing scientific Enlightenment values; while Informational might perhaps be green, representing postmodern inclusive global or "holistic" values, enthusiasm for open source versus proprietary development of novelty, etc. The interactions between individuals and groups dominated by one mode or another can be troublesome and, indeed, mutually incomprehensible. Green, Wilber warns, tends to "dissolve blue", which can wreak catastrophic damage on prickly red (tribal/gang) cultures or subcultures struggling to shift "upward" toward Enlightenment/Commercial orange, by invalidating support for the intermediate "conservative" or blue Guardian stage in the interests of a premature holism.
My own analysis poses six sequential phases, each half a century long and comprising two generations, punctuated by wars. The 300 years can be graphed as a sine curve: an upward semicircle followed by a downward semicircle, each half comprising 150 years. (The full iterated sequence of roughly 50-year phases runs Algorithmic-We-I-It-Theory/Text-Code-Algorithmic….) I propose no numerology here, attempting rather to draw together a number of separate analyses that seem to find certain recurrences at certain intervals, not all of them compressible into a single algorithm; one influence might enhance another, while a third might tend to mute it. What’s more, recent human intervention on a planetary scale might be expected to have modified, extended or suppressed such cycles anyway, although some of the theorists I quote below do carry their schemata forward into the second half of the twentieth century.
A similar model has been suggested in Generations: The History of America’s Future, 1584 to 2069 by William Strauss and Neil Howe (New York: Morrow, 1991), whose parsed narrative discerns, like Modelski’s (below), a basic cycle four generations long, marked by disruptive "secular" and "spiritual" events. Cohorts (individuals born within a given time-frame) are said to resemble each other in temperament and trajectory more than they do those from earlier or later generations. The four phases, in order, are the Idealists (inner-driven, arrogant, creative), indulged in childhood after a secular event; the Reactives (disruptive in youth, pragmatic in maturity, uncultivated); the Civics (establishment figures); and the Adaptives (guilty conformists, aging into sensitive carers).5
The three phases or tonalities characterized by Treder and Phoenix match fairly well with the 150-year half cycle I discern between, say, 1850 and 2000, in which the doubled generations are characterized sequentially by the dominants I have dubbed IT (imperialism, Hot Peace, public art), THEORY (global war, religiosity, modernism) and CODE (Cold Peace, democracy, postmodernism). In tone, that half cycle begins with what Australian historian and entrepreneur J. Penman, Ph.D., calls High Vigor and moderate Stress, moves through Mid Vigor and High Stress, and arrives at Low Vigor today but only Medium Stress.6 These parameters are related to, and perhaps driven by, variations in child-rearing practices, and those in turn depend, historically, on the availability of adequate or abundant nutrients, levels of perceived threat and security, etc.; see note 6.
Very roughly, we might expect Guardian/IT cultural phases to attempt to impose strong centralized and hierarchical command over the ownership of nanofactories and any distribution of their socially disruptive cheap goods. Commercial/THEORY phases might use state power as well as conglomerate capital power to restrict or co-opt MNT. Informational/CODE phases will be likely to embrace MNT and attempt to spread its benefits widely, perhaps to the whole world, and to resist conservative "moral values" restraints, corporate ownership, and copyrights. It is obvious, despite the natural affiliation of computer-savvy members of the Code or green generations, that very powerful forces will be strongly motivated to restrict MNT for reasons of private gain and public security, even in those societies falling increasingly under this dominant in the last 50 years.
The problem foreshadowed by Knight’s novel is that resistance to the free development and distribution of MNT might elicit regression to earlier dominants. In Wilber’s terms these are beige (instinctual/subservience to parents), purple (magical thinking) and red (egocentric), which map moderately well with the earlier (and subsequent) 150-year semi-cycle I have proposed, summarized briefly as ALGORITHMIC (global conflict, classicism, aristocracy), WE (feudal disorder, formal religion at nadir, superstition at zenith), and finally I (romanticism, beginning with successful revolutions and perhaps global war and culminating in thwarted revolutions). Historically, in the West, these three dominants held sway between 1700 and 1850, continuing on into the three phases previously described. On this model, which is consistent with classic long-cycle analyses by G. Modelski7 and others, we are arguably heading right now into a new algorithmic or phatic phase, with its attendant risks of banality, degeneration toward superstition, and significant conflict (and perhaps the unexpected "War on Terror", waged by culturally motivated terrorists and hegemonists alike, is an index of this). Of course, such 300-year cycles, which I trace back through at least three iterations and probably much farther, would presumably be interrupted forever by a Singularity, especially one in which drastic life extension becomes possible, thereby upsetting the already muddled traditional replacement of generations raised under consecutively different conditions. Nanotechnology is clearly one of the driving forces thrusting advanced technological cultures toward just such a Singularity. One question, therefore, is whether Wilber’s orange and green phases or waves can be sustained in their dominant roles at a time when external and internal factors are arguably impelling Western cultures, as well as their foes, toward what one might regard as more primitive dominants.
Indeed, this kind of analysis might lend itself usefully to the study of contemporary cultures other than the Western. Should they all be regarded, however different they remain, as in some sense synchronized with the productive and informational drivers of the global economy? One suggestion I hesitantly made in my preliminary study is that societies throughout the world have been traditionally tied, far more than we might imagine, to a kind of global clock driven by variable insolation, and the impact of available solar energy upon climate and hence food supply. Again, even if this has been the case, it might no longer be so in an epoch where human-induced global warming is skewing traditional large-scale solar-modulated weather patterns, and in which global scientific production and transport of food and raw materials to a large extent obviates reliance upon local climatic conditions.8
In any event, it seems arguable that an analysis of cultural dominants of this kind, and their differential impact, might provide some general guidance in our expectations of the near-future impact of any truly radical and disruptive technology such as MNT.
1. Damon Knight, A For Anything, 1965, New York: Walker Publishing Co. 1970; as The People Maker 1959; short story "A for Anything", The Magazine of Fantasy and Science Fiction, Nov. 1957.
2. Damien Broderick, Theory and its Discontents, Melbourne: Deakin University Press, 1997.
3. Ken Wilber, A Theory of Everything: An Integral Vision for Business, Politics, Science and Spirituality, Boston: Shambala, 2000; Boomeritis, Boston: Shambala, 2002; I am grateful to futurist Professor Richard Slaughter for drawing my attention to Wilber’s work. See also the "Spiral Dynamics" of Don Beck, for example at http://www.integralworld.net/beck2.html
4. http://crnano.org/systems.htm
5. What drives this recurrence, in Strauss and Howe’s view, is a cycle of nurturant practice. Underprotection in childhood creates a tendency in the adults so formed to pay more attention to their own children, so the next generation shows increasing nurturance. The third step is a generation smothered by overprotection, and the reaction to such stifling is a fourth phase of decreasing nurturance, which in turn leads back to the start of the cycle.
It is interesting that the linear progression suggested by Strauss and Howe resembles a compressed version of my own model and Wilber’s, with their four-step periodicity folded into every pair of consecutive Dominant regimes in mine. Inner-driven Idealists correspond in character with my "I" generations, Reactives with "IT" empiricism, Civics with "THEORY/TEXT" governance, and Adaptives with "PHATIC/ALGORITHMIC" conformity. Two stages are elided: "CODE", following "THEORY", and "WE", following "PHATIC", but the two models operate at different scales. Neither is there a gross discord between the order of the two sequences. No doubt this is connected with the individual life-stage structure that also underlies each model: Youth (which conflates "WE" and "I" stages), Rising Adults ("IT"), Midlife Adults ("THEORY/TEXT" plus the shift to "CODE"), Elders (the transition from "CODE" to "PHATIC" or "RULE").
6. Jim Penman, The Hungry Ape, Melbourne, 1992, cited Broderick, 1997.
To sketch briefly the broad basis of Penman’s mechanism, operating on cultures via typical patterns for discipline of their infants: Societies using early control tend to develop a politics based on group loyalty; in a time-frame of low Restraint they produce feudalism, and during high Restraint, they produce stable city states and nation states. Their populations are open to change, and have elaborate economic skills. By contrast, societies lacking early control favor a politics based on personal, face-to-face authority; low Restraint stretches of the cycle are marked by unstable control over regions with shifting borders, while during high Restraint regimes they build large imperial dominions. Their populations are tradition-bound, and less skilled (Penman, p. 184).
7. George Modelski, Long Cycles in World Politics, Seattle: University of Washington Press, 1987.
If Modelski is correct, since 1494 the world system, parameterized in versions of the four Parsonian variables (economy, polity, societal community, and pattern maintenance or media/information apparatuses), has passed through five "long cycles", each with four generational phases. The cycles run to a little more than a century each, and climax in devastating contests for world leadership. These global conflicts last between 23 and 31 years, with the same average as his cycle generation, 27 years. The turn of the millennium marked the exhausted stage of an American century, and, if no better and more humane means is devised for adjudicating leadership, the world probably would be doomed to a new global war in perhaps 2030 (but not until then).
8. A somewhat different but arguably overlapping analysis was developed by Raymond H. Wheeler, a former professor of psychology at the University of Kansas and president of the Kansas Academy of Sciences, who constructed his own grand theory of cultural recurrence. Around the middle of the 20th century, Wheeler orchestrated a massive research project, drawing on up to two hundred co-workers, to reduce all of recorded history to coherent summary form. As the data from 2500 years of records were tabulated, he discerned a number of recurrent patterns world-wide. The most notable was a roughly 100-year climatic cycle, varying between 70 and 120 years, which seemed to fall into four predictable phases. From this periodicity, and drawing on then-prevalent doctrines of cultural and ethnic character, he theorized a regular swing of mass psychological emphasis between "classical" or "centralist" and "romantic" or "individualist" styles of community and culture, summarized in Ellsworth Huntington, Mainsprings of Civilization, [1945] 1959, New York: Mentor, 515-7. (Huntington was an explorer and Yale professor of geography and climatology whose books ranged from Civilization and Climate (1915) to his magnum opus, Mainsprings of Civilization, published two years before his death. His thesis of strong climatic determinism strikes us today as crankily ethnocentric at best, for he sought to discover why "vigorous" peoples like wealthy Euro-Americans were so much more successful than the "indolent", "feminized" races nearer the equator or otherwise trapped and stultified by debilitating circumstances. In the era of the Asian Tigers on the Pacific Rim, not to mention the historic defeat of American military efforts by tropical Vietnamese and the current imbroglio in Iraq, this claim seems not just racist but ludicrous. We should not be entirely distracted, however, by our legitimate distaste for colonial premises and rhetoric. Huntington’s comparative ethnography remains a rich trove of data, usefully categorized, on historical and environmental flows in the fortunes of nations.)
Obviously these climate-driven distinctions cannot be found literally everywhere simultaneously, because a global shift like the El Niño vacillation will bring unusually abundant rain to one region while filching it from another. Still, events like the Maunder Minimum suggest that at least some secular climatic variations on the order of a century can be due to changes in the sun’s internal clock. It is feasible that more subtle variations depend on more regular solar pacemakers, such as the deep processes that also cause the sunspot cycle and perhaps (even in the absence of human intervention) modulate global warming and cooling.
Wheeler and his team found their data was usefully schematized by a four-fold sequence: Warm-Wet, Warm-Dry, Cold-Wet, and Cold-Dry. Each contributed to a certain characteristic mode of collective behavior, so that "similar events have occurred throughout history during the same phases of the 100-year climate cycle" (Dewey and Mandino, Cycles, 1971, New York: Manor Books, 138). Adapting this model in brutally schematic form, and projecting 20 years (without taking account of drastic global climate change), we might map the 20th century thus (138-9):
WARM/WET 1900-24: early stability; nationalism; imperialist and expansionary wars; good crops; genius flourishes; prosperity.
WARM/DRY 1925-49: police states; introversion; surrealism; economic collapse; cruel mass war; crops recover; revival begins.
COLD/WET 1950-74: individualism; decentralized politics; emancipation; mechanical scholarship; shift to anarchistic tone.
COLD/DRY 1975-99: weakened government; migrations; race riots; class struggle; revolution; new leadership emerges.
WARM/WET 2000-24: early stability; nationalism; imperialist and expansionary wars; good crops; genius flourishes; prosperity.
Since Wheeler announced his model just prior to the mid-century, this makes a prescient cultural display, although he missed Greenhouse heating.
© 2006 Damien Broderick. Reprinted with permission.
In this essay, I wish to raise my concern over some of the problems of today’s world, and to suggest how they might be eliminated, or at least have their negative impact reduced, by developing operational worldwide molecular design and manufacturing capabilities.
The Unabomber Manifesto ("Industrial Society And Its Future") by Theodore Kaczynski is one of the most interesting documents of our times, in terms of both its history and its content. Thanks to the work of Information Technology pioneers such as some of the people he targeted, you can read the full text of the Unabomber Manifesto online.
Quoting from the Wikipedia article:
The main argument of Industrial Society and Its Future is that technological progress is undesirable, can be stopped, and in fact should be stopped in order to free people from the unnatural demands of technology, so that they can return to a happier, simpler life close to nature. Kaczynski argued that it was necessary to cause a "social crash", before society became any worse. He believes a collapse of civilization is likely to occur at some point in the future; thus, it is better to end things now, rather than later, because the further society develops, the more painful things will be when the collapse occurs. If it does not occur, he says, humans will have the freedom and significance of house pets, although they may be happy, in a society dominated by machines or an elite social class.
I am (and you are, I hope) definitely against Kaczynski’s final determinations. However, I have to agree with most critics who say that the Manifesto is very well written and that its conclusions, flawed as they are and despite the horrible acts of murder they spawned, are based on a well articulated analysis of some of the problems of today’s world.
One of Kaczynski’s central points is that the "natural" social and cultural environment for a human being is a relatively small community, not too dependent on the outside world for any necessary resource, where everyone has a chance to know everyone else and to actively contribute to the life of the community. He claims that an interconnected world in which the quality of each person’s life depends on things that take place far away is dehumanizing and cannot work without decreasing the freedom, the rights, and ultimately the happiness and well-being of people. He argues that the very technologies needed to sustain a globalized world contribute to creating more dehumanization. This produces a runaway feedback loop that can only result in an unnatural environment, putting far too much strain on our mental resources, and at some point something has to break.
So, Kaczynski wishes to go back to a world of loosely connected, relatively independent small communities. But this is difficult because in today’s world no small community could ever produce all that is needed to meet its own energy, food, communications, and health care requirements. Hence, Kaczynski proposes to break the technological foundations of our global civilization by any means, including murder.
The deep interconnectedness of today’s world also creates huge geopolitical tensions. The situation in the Middle East is a sad example of what can happen when the economy of one region is too strongly dependent on resources located in another region, and where too many players seek control over the complex planet-wide production and distribution networks crucial to the functioning of our global infrastructure.
(A big advantage of solar energy, and one of the main reasons why its deployment should be pursued much more aggressively, is that it can be produced locally by those who require it. A nation following this route would sharply reduce its vulnerability to hostile actions, and to blackmail by others based on threats to disrupt its energy supply. In addition, this would reduce that nation’s propensity to wage war against others for the control of energy supplies.)
I definitely do not want to go back to a pre-industrial age as Kaczynski proposes. Indeed, I like many aspects of globalization. I like that in some sense we can all regard ourselves as citizens of One World. I like that with the Internet I can know what happens and what people think on the other side of the planet, and that I can participate in virtual communities held together by common interests and values instead of geographic location. I like to see thinkers and doers from all over the world working together at near-thought speed to develop new ideas and goods.
So, I am definitely not a sympathizer of the anti-globalization movement. But I can see worth in some of the points they make, partly based on Kaczynski’s writings. Perhaps we can take their best arguments into account by recognizing that although the option of living in a globally interconnected world is good for many, nobody should be forced to do so, and a local community of like-minded people who wish to live their lives in relative isolation from the rest of the world (provided, of course, they do not oppress their citizens or threaten other communities) should have the opportunity and the means to do so. A good, albeit perhaps extreme, example is in Damien Broderick’s Transcension.
Another problem of the modern world is that it is very difficult to build effective supranational governance bodies, because existing nation-states, especially those with a long history, refuse to give up sovereignty and power. This difficulty is often seen in the United Nations and in other supranational bodies such as the European Union. Few, if any, of today’s nation-states would seriously consider allowing such organizations to have real and effective decision-making power, let alone the means to enforce the decisions made. It appears that a gradual breakup of existing nation states into smaller entities, relatively autonomous but co-operating when co-operation is necessary for all parties involved, will be a necessary prerequisite for the creation of supranational governance structures including regional and world "governments".
I have given two different but connected arguments for “small is beautiful.” And, speaking of small things, I believe that emerging NBIC* technologies, and in particular molecular nanotechnology, will offer the opportunity to retain the benefits of globalization while at the same time significantly reducing the dependence of local communities on the external world as far as the availability of material goods (food, medicines, energy, vehicles, toys, designer items, etc.) is concerned.
Richard Feynman was the first to articulate the possibility of molecular nanotechnology (although not by that name). In his 1959 talk, "There’s Plenty of Room at the Bottom," he argued that there is nothing in the laws of physics to prevent us from building molecular-scale machines able to precisely place individual atoms and molecules according to design specifications and build complex structures and chemical compounds one atom at a time. Feynman wrote:
It would be, in principle, possible (I think) for a physicist to synthesize any chemical substance that the chemist writes down. Give the orders and the physicist synthesizes it. How? Put the atoms down where the chemist says, and so you make the substance. The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed, a development which I think cannot be avoided.
Eric Drexler, who coined the term "nanotechnology" and popularized it in Engines of Creation: The Coming Era of Nanotechnology, was among the first to realize that nanotechnology will achieve its disruptive potential when molecular machines are able to build other molecular machines, assembling them from atoms and molecules available in their environment. Given such replicating nanotechnology, it is easy to see how, with suitable programming and assuming that all needed molecular "bricks" can be extracted from the environment (a safe assumption in most cases), it would be possible to assemble any substance or structure for which detailed design specifications are available. So, our future economy will not be based on material goods, but on design specifications for material goods. We already have examples of this today:
A document can be transmitted over the Internet and reproduced, on screen or on paper, by whomever has to read it. This technology is available to nearly all consumers, at least in the Western world, at the (relatively) low cost of a PC, a printer, and an Internet connection.
A VHDL (VHSIC hardware description language) design specification for an application specific integrated circuit is as good as the device itself in the sense that it can be taken to a suitable hardware foundry and used to reproduce the device with an automated process. The fundamental difference from the previous example is that today one needs very complex and expensive machinery and extensive know-how to generate a physical instantiation of the device. But I think we can safely predict that the costs will drop and circuit printing will become more and more like document printing.
Instead of shipping physical objects, their detailed design specification in a "Matter Description Language" or "Molecular Description Language" (MDL) will be transmitted over a global data grid evolved from today’s Internet and then physically instantiated ("printed") by "nano printers" at remote sites. The usage of nano printers, also called nanofactories, is described in Neal Stephenson’s The Diamond Age. The term “Matter Compiler” (MC) used by Stephenson in the novel is especially good as, by analogy with the software development process, it suggests the idea of organizing (compiling) matter from design specifications. Reading Stephenson’s descriptions of young Nell trying to use her mother’s cheap kitchen MC to compile clothes, toys, and mattresses makes it easier to understand the basic concepts of molecular manufacturing.
Assuming it still exists at that time, the Coca Cola Company will not sell physical cans, but will license the MDL description of its popular beverage for on-site compilation by customers. I assume Coca Cola and all other commercial companies will need some means to enforce their intellectual property rights to make sure that customers pay what they are supposed to pay. This probably will be done by a limit on the number of times a given MDL design can be assembled by a given user, with protection technologies conceptually similar to those used today for Digital Rights Management (DRM). Of course, there will be plenty of 15-year-old hackers willing and able to crack whatever DRM protection scheme manufacturers can think of, and then make cracked, DRM-free design specs available on the global data grid.
I do not see any reason why molecular nanotechnology should change the basic laws of economy, so I assume that the MDL description of an Armani suit will cost as much as the Armani suit costs today. And I believe tomorrow’s designers of luxury items will be perfectly entitled to charge a lot of money for their creations. But what happens if the MDL descriptions of basic goods that a local community needs are priced beyond their reach? And what happens if these licenses are withdrawn for political reasons, perhaps to force a community to submit to an aggressor community or to an overreaching central authority?
Basic goods should be free, or priced within the means of everyone. In other words, Coca Cola can be expensive, but water must be free. Armani suits can be expensive, but basic clothing must be free. Who will develop royalty-free MDL descriptions of basic goods that everyone on the planet can use? The answer, I think (or at least I hope), is that they will be developed with an Open Source development model by armies of MDL programmers.
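To make the licensing model sketched above a little more concrete, here is a purely hypothetical sketch of what a licensed MDL record with a compile-count limit might look like. Every name, field, and class in it is invented for illustration; no such format or API exists today.

```python
# Hypothetical sketch of a DRM-limited MDL ("Matter Description Language")
# record. All names and fields are invented for illustration only.

from dataclasses import dataclass

@dataclass
class MDLDesign:
    name: str
    spec: str                   # the molecular design specification itself
    royalty_free: bool = False
    compiles_allowed: int = 1   # license limit enforced by the matter compiler
    compiles_used: int = 0

    def compile_instance(self) -> str:
        """Simulate a nanofactory 'printing' one physical instance."""
        if not self.royalty_free and self.compiles_used >= self.compiles_allowed:
            raise PermissionError(f"License exhausted for {self.name}")
        self.compiles_used += 1
        return f"Compiled one instance of {self.name}"

water = MDLDesign("drinking water", spec="H2O...", royalty_free=True)
suit = MDLDesign("designer suit", spec="(proprietary)", compiles_allowed=1)

print(water.compile_instance())  # basic goods: royalty-free, unlimited
print(suit.compile_instance())   # luxury goods: pay-per-compile
# A second suit.compile_instance() call would raise PermissionError.
```

The same structure suggests where an Open Source alternative fits: a royalty-free design simply carries no compile limit, which is exactly the role proposed above for community-developed MDL specifications of basic goods.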
In the online version of this essay, I make frequent use of Wikipedia articles as references for two reasons: first, I am fond of Wikipedia as one of the best examples of Open Source development; and second, Wikipedia articles are as good as, and often better than, equivalent articles in expensive encyclopedias. I can rest assured that all Wikipedia references that I use in this article will be maintained under the spontaneous quality assurance and control processes that are emerging within the Wikipedia community, and will be further improved by countless users and experts. So, linking to Wikipedia is much safer than linking to a commercial website that may disappear if the owner goes out of business. (If you are reading a hardcopy version of this essay and wish to have further information on the terms and concepts mentioned, please go to the URL http://en.wikipedia.org/ and enter your search keywords.)
It seems likely that many of the arguments used today in favor of the Open Source movement will be applicable to tomorrow’s nanotech economy. The availability of Open Source MDL specifications for all basic goods will result, I believe, in a better world: a world where citizens and communities will be free to do their own thing (provided they do not reduce the right and ability of others to do the same) without having to give in to pressure and blackmail from hostile parties or meddlesome central authorities who threaten to disrupt their supply of basic material goods.
* Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science, Edited by Mihail C. Roco and William Sims Bainbridge, National Science Foundation, June 2002, http://www.wtec.org/ConvergingTechnologies/Report/NBIC_report.pdf
© 2006 Giulio Prisco
Conflicts, clashes, battles, and wars: this is the stuff of which history is made. The world as we know it today is largely a product of wars fought and peoples conquered.
We like to look back admiringly on other things our species has produced: great works of art, brilliant inventions, sage philosophers, brave explorers, and selfless peacemakers. But the real star of the human story is war. In fact, very often the things we admire (philosophy, technology, leadership, superb writing and speechmaking) are put to maximum use in the service of war.
The story is not yet over. Within our lifetimes, we are likely to witness battles on a scale never before seen. Powered by molecular manufacturing, an advanced form of nanotechnology, these near-future wars1 may threaten our freedom, our way of life, and even our survival.
Some wars are between opponents of roughly equal fighting ability. As a result, these conflicts tend to drag on, often lasting for years and killing millions, until finally one side emerges victorious. Recent examples include the American Civil War, World War I, and World War II.
Occasionally one adversary will possess huge advantages over the other, in which case the war typically is quite short. A famous instance is the spectacular one-sided victory of Spanish conquistador Francisco Pizarro over the Incan empire in 1532. What makes this story so remarkable is that an army of 80,000 soldiers was overwhelmed and decimated in one day by a force of only 169 men.
Normally we would expect that an aggressor facing such great numbers would be a decided underdog, virtually assured of defeat. Jared Diamond, in his book Guns, Germs, and Steel,2 analyzes this historic event (clearly a major turning point in the course of human civilization) and describes the elements that gave the Spaniards a stunningly easy victory.
Diamond lists superior military technology based on guns, steel weapons, and horses; infectious diseases; maritime technology; centralized political organization; and writing.
These advantages can be categorized as follows (with the items from 1532 in parentheses): weaponry (guns, steel weapons, horses); biology (infectious diseases to which the invaders had resistance); transportation (maritime technology); organization (centralized political structure); and communication (writing).
Looking forward, we can imagine a similar situation: an apparently strong nation, a superpower or empire within its realm, suddenly and overwhelmingly defeated by an adversary with superior technology and other advantages.
Molecular manufacturing (the ability to construct powerful, atomically precise products at an exponentially increasing pace) could provide the tools for a spectacular one-sided victory by an apparent underdog with superiority in the same five areas: weaponry, biology, transportation, organization, and communication.
Despite vastly greater numbers, the Incas (the most developed civilization in the Americas) were not able to mount a serious resistance against the advanced technology of Spain.
Could today’s most powerful civilization, the United States, be as easily conquered by a nano-enabled attacker? This appears possible, if molecular manufacturing does provide for huge gains in all five areas, as many analysts (including this author) believe it will.
No nation lacking the nanotech advantage will be able to resist a foe, no matter how small or weak in conventional terms, that wields the power of molecular manufacturing.3
It is not certain, of course, that large-scale war will occur within the next few decades. But if it does, and if both (or all) sides are nano-enabled, the conflict could last a relatively long time, and casualties could be in the billions. If, on the other hand, only one combatant possesses the awesome capabilities of nano-built weapons, computers, and infrastructure, the war might be over very quickly, and could leave the victor in total command of the world.
1. Treder, Mike (2005) “War, Interdependence, & Nanotechnology” (Future Brief) http://www.futurebrief.com/miketrederwar002.asp
2. Diamond, Jared (1997) Guns, Germs, and Steel (W. W. Norton, New York)
3. Phoenix, Chris (2003) “Molecular Manufacturing: Start Planning” (Public Interest Report, 56:2) http://www.fas.org/faspir/2003/v56n2/nanotech.htm
© 2006 Mike Treder
One of the fundamental questions driving any attempt at forecasting the future is: what kind of society do we want to live in? Or, for the farther future: what kind of society do we want our children to live in? How would widely available nanofactories change our lives and our world? Will multi-national corporations gain exclusive control of molecular manufacturing (MM), using it to dominate social institutions and dictate public policy from a purely capitalist and/or monopolist perspective? Will personal nanofactories foster global anarchy and create a form of modern tribalism based upon religion, ideology, or culture, and pit independent city-states or autonomous regions against one another? Will the world’s nations devolve further into a technologically driven arms race with the winner dominating or destroying the planet with powerful MM-enabled weapons? Will the world’s Big Brothers grow larger and more tyrannical, using advanced nanotechnology to "protect" their law-abiding masses through increasing surveillance, control and internal subjugation? Or, will personal freedom grow and evolve along with our technology, giving people and communities the ability to maintain their rights as individuals and protect the social welfare of their communities and nations while fostering global peace, security, and prosperity?
These questions and a host of others have no easy answers. One significant factor on the path to our future is our world as it exists today, a world largely dominated by governments and the forces they employ to maintain civil order and internal security. In today’s stable societies of the developed nations, government police and para-military forces provide the preponderance of domestic order maintenance services, enforcing criminal laws and ordinances, arbitrating physical disputes, investigating crimes, and responding to disasters: professional functions usually deemed appropriate in modern democracies to ensure the continued safety and security of a community or nation. These activities and the manner in which they are carried out will have a direct and profound impact on the kind of world we and our children will live in, particularly in regard to the maintenance of civil liberties and individual freedom.
It is important, therefore, to give careful consideration to the ways in which governments use technology today to provide for public safety and security, and how that might change as a result of new technological advances. We need to give close scrutiny to the capabilities afforded the civil police by modern technology (particularly the potential power bestowed by molecular nanotechnology and personal nanofactories) before these capabilities are realized. What capabilities do we want the police to have, and which do we want to restrict? How much capability do they need in order to provide for public order and safety in an age of advanced nanotechnology? Are they capable of wielding the power afforded them through augmented reality, unmanned aerial vehicles, robots, surveillance, data-mining, and biometrics, technologies that will be greatly enhanced and widely distributed by personal nanofactories? Can we afford to place such power in the hands of government? And if not, what is the alternative for ensuring peace and social stability for the world’s billions?
As we consider the appropriate limits on police surveillance and enforcement capabilities, we also need to consider the ways in which criminals and terrorists might exploit advanced technologies like personal nanofactories in carrying out their goals, and the impact their actions will have on liberty and democracy if they succeed. While government action can have dramatic and negative impacts on our ability to be and remain free, so too can the actions of a lone criminal or terrorist group armed with advanced technology have severe repercussions on the social psyche, and thereby on the economy and stability of a nation or the world. Successful terrorist attacks and chronic criminal activities in a globalized world have a fundamentally destabilizing effect on communities and nations, often fostering highly reactionary programs and policies aimed at providing short-term safety for the many at the expense of liberty for a perceived few.
In other words, simply limiting police use of technology is no guarantee that civil liberties will be maintained. On the contrary, the public’s perception of danger will inevitably drive policing and security operations within communities and nations whether the civil police are equipped and empowered to act or not. Recent activities along the border between the United States and Mexico demonstrate that in today’s world, with the ready availability of advancing technology, someone will end up conducting police operations when communities believe they face criminal and terrorist threats that remain unchallenged by the civil police. Groups such as the Minutemen and the American Border Patrol are non-government organizations formed by ordinary citizens frustrated by their government’s lack of response to illegal crossings of the national border. Armed with widely available technology not currently utilized by the civil police (unmanned aerial vehicles with video cameras and wireless links for surveillance), and probably more than a few weapons, these groups conduct border interdiction operations outside of government sanction.
In the wake of the September 11th terrorist attacks, the US has also experienced a growing involvement in domestic security by military and private security forces. After 9/11, the Pentagon formed the US Northern Command, a first-ever strategic military command whose primary mission is to conduct domestic military operations (essentially law enforcement and civil security missions) in response to terrorist events and natural disasters. Similarly, private security agencies such as DynCorp and SAIC have taken on a much broader role within communities to combat terrorism and cyber-crimes such as identity theft and credit card fraud, filling a law enforcement and civil security niche that state and local police departments are either ill-equipped or unable to deal with.
Life in the 21st Century is only getting more complex. Information technologies and mass media confront the populace daily with graphic real-time images of death and destruction, along with gripping narrative accounts of all the world’s problems, raising public fear and driving citizen demands for ever higher levels of security. The specter of technology out of control is a frequent topic of popular books, movies and television, causing many people to question the wisdom of continued technological advancement. Molecular manufacturing and personal nanofactories will raise the level of public fear even further and create new conflicts and opportunities for criminal and terrorist groups to exploit.
Advancing technology in general and molecular manufacturing in particular make predictions about the future difficult at best. Still, conceptualizing all the potential scenarios and contemplating new and appropriate strategies, programs and policies necessary to avoid a dystopian future is important, however imprecise. Regardless of the particulars, it seems clear that in a world of growing conflict and fear, policing and law enforcement will play a rather large role, for good or for ill. When communities and nations are threatened with or confronted by persistent criminal exploitation and catastrophic terrorist attacks, the public will demand action to prevent further personal danger, economic loss or social unrest.
The type of policing we end up with and its effectiveness at preventing significant harm while lowering public fear will be a factor governing the nature and extent of our civil liberties as MM and personal nanofactories become part of our world. What would our civil liberties look like after a major terrorist attack if the military, utilizing MM-enabled surveillance devices and weapons, is in the best position with the best capabilities to conduct domestic policing operations? What kind of society would ensue if all significant policing in our communities and nations is conducted by corporations and hired security guards? Whose civil liberties would be protected when concerned citizen groups and vigilantes take community security into their own hands and use personal nanofactories to arm themselves like the military?
Of all the organizations and entities capable and willing to conduct domestic policing and security missions, only the civil police are sworn to uphold the civil liberties of all people. The military is trained and equipped to defeat opposing armies on foreign battlefields, to seize objectives and kill anyone who stands in the way. Corporate security forces and privately paid police forces are focused on the bottom line and are loyal to those who pay them. Individual citizens, concerned citizen groups and vigilantes are concerned only with their own safety and the civil liberties of those within their own interest group. Nevertheless, each of the above groups will play a role in policing neighborhoods, enforcing laws, and providing domestic security. Each will be a necessary component for effectively securing our communities and nations from criminal and terrorist predators of the future. The challenge will be to create a model in which the actions of these groups complement one another, enhancing the collective effects of the whole, not working at cross purposes or creating additional conflicts that add to local, regional, national and global insecurity.
In a world of advanced technologies, molecular manufacturing capabilities, and personal nanofactories, an effective law enforcement process will be essential to peace and social stability. No single group can provide the right balance of domestic policing capabilities and each has dangerous tendencies that when employed in isolation can be detrimental to someone’s rights and freedoms. As with most of what troubles us in the information age, 20th Century solutions will not solve 21st Century problems. Centralization, parochialism and hierarchy are being replaced with distributed systems based upon collaboration across local, wide-area and global networks. The successful policing model of the future will need to move in this direction as well. To deal effectively with the challenges and dangers posed by tomorrow’s technologies, we must form a collaborative policing network, consisting of all citizens, agencies and forces with useful capabilities and appropriate law enforcement interest. A collective and collaborative effort will do a better job of upholding liberty for all people while providing the safety and security necessary for continued social and technological advancement.
© 2006 Thomas J. Cowper
Ray Kurzweil has consistently predicted 2029 as the year to expect truly Turing-test-capable machines. Kurzweil’s estimates1 are based on a broad assessment of the progress in computer hardware, software, and neurobiological science.
Kurzweil estimates that we need 10,000 teraops for a human-equivalent machine. Other estimates (e.g. Moravec2) range from a hundred to a thousand times less. The estimates actually are consistent, as Moravec’s involve modeling cognitive functions at a higher level with ad hoc algorithms, whereas Kurzweil is assuming we’ll have to simulate brain function at a more detailed level.
So, the best-estimate range for human-equivalent computing power is 10 to 10,000 teraops.
The Moore’s Law curve for processing power available for $1000 (in teraops) is:
2000: 0.001
2010: 1
2020: 1,000
2030: 1,000,000
Thus, sophisticated algorithmic AI becomes viable in the 2010s, and the brute-force version in the 2020s, as Kurzweil predicts. (Progress into atomically precise nanotechnology is expected to keep Moore’s Law on track throughout this period. Note that by the NNI definition, existing computer hardware with imprecise sub-100-nanometer feature sizes is already nanotechnology.)
However, a true AI would be considerably more valuable than $1000. To a corporation, a good decision-maker would be worth at least a million dollars. At a million dollars, the Moore’s Law curve looks like this:
2000: 1
2010: 1,000
2020: 1,000,000
In other words, based on processing power, sophisticated algorithmic AI is viable now. We only need to know how to program it.
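To make the arithmetic explicit, here is a small illustrative sketch (a toy model of my own, not part of Kurzweil’s or Moravec’s analyses) that extrapolates the cost-performance figures quoted above, assuming the roughly thousandfold improvement per decade they imply, and asks when a given teraops requirement becomes affordable at a given budget:

```python
# Toy extrapolation of the cost-performance figures quoted above.
# Assumption: roughly 1,000x improvement per decade (about a doubling per year),
# anchored at 0.001 teraops per $1,000 in the year 2000.

def teraops(year, budget_dollars=1000):
    """Teraops purchasable for a given budget in a given year (toy model)."""
    per_thousand_dollars = 0.001 * 1000 ** ((year - 2000) / 10)
    return per_thousand_dollars * (budget_dollars / 1000)

def year_affordable(required_teraops, budget_dollars=1000):
    """First year the requirement fits within the budget, under the toy model."""
    year = 2000
    while teraops(year, budget_dollars) < required_teraops:
        year += 1
    return year

# Human-equivalent estimates: ~10 teraops (Moravec-style, higher-level
# algorithms) up to ~10,000 teraops (Kurzweil-style detailed simulation).
for requirement in (10, 10_000):
    for budget in (1_000, 1_000_000):
        print(f"{requirement} teraops at ${budget:,}: {year_affordable(requirement, budget)}")
```

Under these assumptions, the 10-teraops estimate becomes affordable at a million-dollar budget around the mid-2000s and at a $1,000 budget in the mid-2010s, while the 10,000-teraops estimate arrives at $1,000 in the mid-2020s, consistent with the dates argued above.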
Current brain scanning tools recently have become able to see the firing of a single neuron in real time. Brain scanning is on a track similar to Moore’s law, in a number of critical figures of merit such as resolution and cost. Nanotechnology is a clear driver here, as more sophisticated analysis tools become available to observe brains in action at ever-higher resolution in real time.
Cognitive scientists have worked out diagrams of several of the brain’s functional blocks, such as auditory and visual pathways, and built working computational models of them. There are only a few hundred such blocks in the entire brain, so the task of modeling them all, while large, is finite.
In the meantime, purely synthetic computer-based artificial intelligence has been proceeding apace: in the past decade it has beaten Kasparov at chess, proved a thorny mathematical theorem that had eluded human mathematicians, and successfully driven off-road vehicles for 100 miles.
Existing AI software techniques can build programs that are experts in any well-defined field. The breakthroughs necessary for such a program to learn for itself could easily happen in the next decade. It is always difficult to predict breakthroughs, but it is just as much a mistake not to predict them. One hundred years ago, between roughly 1903 and 1907, the consensus of the scientific community was that powered heavier-than-air flight was impossible, even though the Wright brothers had already flown.
The key watershed in AI will be the development of a program that learns and extends itself. It’s difficult to say just how near such a system is, based on current machine learning technology, or to judge whether neuro- and cognitive science will produce the necessary sudden insight within the next decade. However, it would be foolish to rule out such a possibility: all the other pieces are essentially in place now. Thus, I see runaway AI as quite possibly the first of the "big" problems to hit, since it doesn’t require full molecular manufacturing to come online first.
A few points: The most likely place for strong AI to appear first is in corporate management; most other applications that make an economic difference can use weak AI (many already do); corporations have the necessary resources and clearly could benefit from the most intelligent management (the next most probable point of development is the military).
Initial corporate development could be a problem, however, because such AIs are very likely to be programmed to be competitive first, and worry about minor details like ethics, the economy, and the environment later, if at all. (Indeed, it could be argued that the fiduciary responsibility laws would require them to be programmed that way!)
A more subtle problem is that a learning system will necessarily be self-modifying. In other words, if we do begin by giving rules, boundaries, and so forth to a strong AI, there’s a good chance it will find its way around them (note that people and corporations already have demonstrated capabilities of that kind with respect to legal and moral constraints).
In the long run, what self-modifying systems will come to resemble can be described by the logic of evolution. There is serious danger, but also room for optimism if care and foresight are taken.
The best example of a self-creating, self-modifying intelligent system is children. Evolutionary psychology has some disheartening things to tell us about children’s moral development. The problem is that the genes, developed by evolution, can’t know the moral climate an individual will have to live in, so the psyche has to be adaptive on the individual level to environments ranging from inner-city anarchy to Victorian small town rectitude.
How it works, in simple terms, is that kids start out lying, cheating, and stealing as much as they can get away with. We call this behavior "childish" and view it as normal in the very young. They are forced into "higher" moral operating modes by demonstrations that they can’t get away with “immature” behavior, and by imitating ("imprinting on") the moral behavior of parents and high-status peers.
In March 2000, computer scientist Bill Joy published an essay3 in Wired magazine about the dangers of likely 21st-century technologies. His essay claims that these dangers are so great that they might spell the end of humanity: bio-engineered plagues might kill us all; super-intelligent robots might make us their pets; gray goo might destroy the ecosystem.
Joy’s article begins with a passage from the "Unabomber Manifesto," the essay by Ted Kaczynski that was published under the threat of murder. Joy is surprised to find himself in agreement, at least in part. Kaczynski wrote:
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case, presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
But that either/or distinction is a false one (Kaczynski is a mathematician, and commits a serious fallacy applying pseudo-mathematical logic to the real world in this case).
To understand just how complicated the issue really is, let’s consider a huge, immensely powerful machine we’ve already built, and see if the terms being applied here work in its context. The machine is the U.S. government and legal system. It is a lot more like a giant computer system than people realize. Highly complex computer programs are not sequences of instructions; they are sets of rules. This is explicit in the case of "expert systems" and implicit in the case of distributed, object-oriented, interrupt-driven, networked software systems. More to the point, sets of rules are programs.
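To illustrate the claim that a set of rules is itself a program, consider the following minimal sketch of a forward-chaining rule engine; the facts and rules are invented here purely for illustration and do not come from any real expert system.

```python
# A minimal forward-chaining rule engine: the "program" is nothing but
# a set of rules that fire whenever their conditions hold.
# The facts and rules below are invented purely for illustration.

facts = {"complaint filed"}

rules = [
    # (set of conditions, fact to conclude when all conditions are present)
    ({"complaint filed"}, "investigation opened"),
    ({"investigation opened", "evidence found"}, "case prosecuted"),
]

def run(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(run(set(facts), rules))
# Changing what the system does means adding or removing rules,
# not editing a sequence of instructions; the rule set is the program.
```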
Therefore, the government is a giant computer program, one with guns. The history of the twentieth century is a story of such giant programs going bad and turning on their creators (the Soviet Union) or their neighbors (Nazi Germany) in very much the same way that Kaczynski imagines computers doing.
Of course, you will say that the government isn’t just a program; it’s under human control, after all, and it’s composed of people. However, it is both the pride and the shame of the human race that we will do things as part of a group that we never would do on our own; think of Auschwitz. Yes, the government is composed of people, but the whole point of the rules is to make them do different things, or do things differently, than they would otherwise. Bureaucracies famously exhibit the same lack of common sense as do computer programs, and are just as famous for a lack of human empathy.
But, virtual cyborg though the government may be, isn’t it still under human control? In the case of the two horror stories cited above, the answer is: yes, under the control of Stalin and Hitler respectively. The U.S. government is much more decentralized in power; it was designed that way. Individual politicians are very strongly tied to the wishes of the voters; listen to one talk and you’ll see just how carefully they have to tread when they speak. The government is very strongly under the control of the voters, but no individual voter has any significant power. Is this "under human control"?
The fact is that life in the liberal western democracies is as good as it has ever been for anyone anywhere (for corresponding members of society, that is). What is more, I would argue vigorously that a major reason is that these governments are not in the control of individuals or small groups. In the 20th century, worldwide, governments killed upwards of 200 million humans. The vast majority of those deaths came at the hand of governments under the control of individuals or small groups. It did not seem to matter that the mechanisms doing the killing were organizations of humans; it was the nature of the overall system, and the fact that it was a centralized autocracy, that made the difference.
Are Americans as a people so much more moral than Germans or Russians? Absolutely not. Those who will seek and attain power in a society, any society, are quite often ruthless and sometimes downright evil. The U.S. seems to have constructed a system that somehow can be more moral than the people who make it up. (Note that a well-constructed system being better than its components is also a feature of the standard model of the capitalist economy.)
This emergent morality is a crucial property to understand if we are soon to be ruled, as Joy and Kaczynski fear, by our own machines. If we think of the government as an AI system, we see that it is not under direct control of any human, yet it has millions of nerves of pain and pleasure that feed into it from humans. Thus in some sense it is under human control, in a very distributed and generalized way. However, it is not the way that Kaczynski meant in his manifesto, and his analysis seems to miss this possibility completely.
Let me repeat the point: It is possible to create (design may be too strong a word) a system that is controlled in a distributed way by billions of signals from people in its purview. Such a machine can be of a type capable of wholesale slaughter, torture, and genocide, but if the system is properly controlled, people can live comfortable, interesting, prosperous, sheltered, and moderately free lives within it.
What about the individual, self-modifying, soon-to-be-superintelligent AIs? It shouldn’t be necessary to tie each one into the “will of the people”; just keep them under the supervision of systems that are tied in. This is a key point: the nature (and particularly intelligence) of government will have to change in the coming era.
Having morals is what biologist Richard Dawkins calls an "evolutionarily stable strategy." In particular, if you are in an environment where you’re being watched all the time, such as in a foraging tribal setting or a Victorian small town, you are better off being moral than just pretending, since the pretending is extra effort and involves a risk of getting caught. It seems crucial to set up such an environment for our future AIs.
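The underlying logic is an expected-payoff comparison, which can be made concrete with a few illustrative numbers; the values below are placeholders of my own choosing, not data.

```python
# Toy expected-payoff comparison of "be moral" vs. "pretend to be moral"
# in an environment of constant observation. All numbers are illustrative.

benefit_of_reputation = 10.0   # value of being treated as trustworthy
effort_of_pretending = 1.0     # ongoing cost of keeping up the act
p_caught = 0.2                 # chance the act is eventually exposed
penalty_if_caught = 20.0       # reputational cost of being exposed

payoff_moral = benefit_of_reputation
payoff_pretend = (benefit_of_reputation - effort_of_pretending
                  - p_caught * penalty_if_caught)

print(payoff_moral, payoff_pretend)  # 10.0 vs. 5.0: genuine morality dominates
```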
Back to Bill Joy’s Wired article: he next quotes from Hans Moravec’s book Robot: Mere Machine to Transcendent Mind,4 "Biological species almost never survive encounters with superior competitors." Moravec suggests that the marketplace is like an ecology where humans and robots will compete for the same niche, and he draws the inevitable conclusion.
What Moravec is describing here is not true biological competition; he’s just using that as a metaphor. He’s talking about economic displacement. We humans are cast in the role of the makers of buggy whips. The robots will be better than we are at everything, and there won’t be any jobs left for us poor incompetent humans. Of course, this sort of thing has happened before, and it continues to happen even as we speak. Moravec merely claims that this process will go all the way, displacing not just physical and rote workers, but everybody.
There are two separable questions here: Should humanity as a whole build machines that do all its work for it? And, if we do, how should the fruits of that productivity be distributed, if not by existing market mechanisms?
If we say yes to the first question, would the future be so bad? The robots, properly designed and administered, would be working to provide all that wealth for mankind, and we would get the benefit without having to work. Joy calls this "a textbook dystopia", but Moravec writes, "Contrary to the fears of some engaged in civilization’s work ethic, our tribal past has prepared us well for lives as idle rich. In a good climate and location the hunter-gatherer’s lot can be pleasant indeed. An afternoon’s outing picking berries or catching fish (what we civilized types would recognize as a recreational weekend) provides life’s needs for several days. The rest of the time can be spent with children, socializing, or simply resting."
In other words, Moravec believes that, in the medium run, handing our economy over to robots will reclaim the birthright of leisure we gave up in the Faustian bargain of agriculture.
As for the second question, about distribution, perhaps we should ask the ultra-intelligent AIs what to do.
1. Kurzweil, Ray (2005) The Singularity Is Near: When Humans Transcend Biology (Viking Adult)
2. Moravec, Hans (1997) “When will computer hardware match the human brain?” (Journal of Evolution and Technology) http://www.transhumanist.com/volume1/moravec.htm
3. Joy, Bill (2000) “Why the future doesn’t need us.” (Wired Magazine, Issue 8.04) http://www.wired.com/wired/archive/8.04/joy.html
4. Moravec, Hans (2000) Robot: Mere Machine to Transcendent Mind (Oxford University Press, USA)
© 2006 J. Storrs Hall
Nanotechnology, the precise engineering of tiny but powerful machines, is advancing quickly, leaping from the pages of science fiction into world-class research laboratories, and coming soon to a desktop near you.
Like electricity or computers before it, nanotechnology will bring greatly improved efficiency and productivity in many areas of human endeavor. In its mature form, known as molecular nanotechnology (MNT) or molecular manufacturing (MM), it will have significant impact on almost all industries and all parts of society. Personal nanofactories (PNs) may offer better built, longer lasting, cleaner, safer, and smarter products for the home, for communications, for medicine, for transportation, for agriculture, and for industry in general.
However, as a general-purpose technology, MM will be dual-use, meaning that in addition to its civilian applications, it will have military uses as well: making far more powerful weapons and tools of surveillance. Thus, it represents not only wonderful benefits for humanity, but also grave risks.
Progress toward developing the technical requirements for desktop molecular manufacturing is rapid. By reading the collection of essays in the March 27 issue of Nanotechnology Perceptions, or reading them here at KurzweilAI.net, you will learn how PNs will bring radical changes to society, and to your life.
Several factors will come together to make MM truly revolutionary.
In August 2005, the Center for Responsible Nanotechnology (CRN), a non-profit research and advocacy organization, announced the formation of a Task Force convened to study the societal implications of this rapidly emerging technology. Bringing together a diverse group of world-class experts from multiple disciplines, CRN is spearheading an historic, collaborative effort to develop comprehensive recommendations for the safe and responsible use of nanotechnology.
Many of the profound implications of molecular manufacturing are explored in an initial collection of 11 new essays, all written by members of the CRN Task Force and published in the March 24 issue of Nanotechnology Perceptions. From military and security issues to human enhancement, artificial intelligence, and more, we take a look under the lid of Pandora’s box to see what the future might hold. A second collection of essays exploring additional concerns will form the next issue of Nanotechnology Perceptions.
Reacting to the huge risks of MM, some advocate that all research be halted. Our first two essays, “Nanotechnology Dangers and Defenses” by inventor and author Ray Kurzweil and “Molecular Manufacturing: Too Dangerous to Allow?” by Nanomedicine author Robert A. Freitas Jr., explore these issues. They survey the dangers, discuss ways to mitigate them, and analyze the weaknesses of relinquishment.
“Nano-Guns, Nano-Germs, and Nano-Steel,” an essay by Mike Treder, explores the troubling topic of nanotech-enabled warfare. Tom Cowper, an expert in policing and criminology, offers his special perspective in “Molecular Manufacturing and 21st Century Policing.” In “The Need For Limits,” Chris Phoenix explains that we may face unprecedented risks as MM’s revolutionary potential dissolves the barriers that keep us safe.
After Giulio Prisco explores the real-world challenge of “Globalization and Open Source Nano Economy,” Damien Broderick provides a broad historical perspective of the relationship between society and technology in “Cultural Dominants and Differential MNT Uptake.”
Advanced nanotechnology could go well beyond making better consumer goods and better weapons. In “Nanoethics and Human Enhancement,” professional ethicists Patrick Lin and Fritz Allhoff look into the controversial aspects of using MM to change our bodies and minds. Noted futurist Natasha Vita-More then lays out the problems our grey matter could face in “Strategic Sustainable Brain.”
Computers built by nanofactories may be millions of times more powerful than anything we have today. The potential for creating world-changing artificial intelligence is examined by scientist J. Storrs Hall in “Is AI Near a Takeoff Point?” Finally, if some of our worst scenarios become real, we may face truly existential dilemmas. These are surveyed in depth by best-selling author David Brin in “Singularities and Nightmares: The Range of Our Futures.”
As editors of these essays, we will be pleased if you are entertained and informed. But we will be further gratified if you are inspired to learn more. We hope you’ll want to get involved in the vital work of raising awareness and finding effective solutions to the challenges presented to the world by advanced nanotechnology.
Mike Treder, Executive Director
Chris Phoenix, Director of Research
Center for Responsible Nanotechnology (www.CRNano.org)
Note: The opinions expressed in these essays are those of the individual authors and do not necessarily represent the opinions of the Center for Responsible Nanotechnology, nor of its parent organization, World Care.
One common argument against pursuing a molecular assembler or nanofactory design effort is that the end results are too dangerous. According to this argument [2, 3], any research into molecular manufacturing (MM) should be blocked because this technology might be used to build systems that could cause extraordinary damage. The kinds of concerns that nanoweapons systems might create have been discussed elsewhere, in both the nonfictional [4-6] and fictional [7] literature. Perhaps the earliest-recognized and best-known danger of molecular manufacturing [1] is the risk that self-replicating nanorobots capable of functioning autonomously in the natural environment could quickly convert that natural environment (e.g., “biomass”) into replicas of themselves (e.g., “nanomass”) on a global basis, a scenario often referred to as the “gray goo problem” but more accurately termed “global ecophagy” [4]. As Drexler first warned in Engines of Creation in 1986 [8]:
“Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous “bacteria” could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop, at least if we make no preparation…. We cannot afford certain kinds of accidents with replicating assemblers.
Such self-replicating systems, if not countered, could make the earth largely uninhabitable [4, 7-9]; these concerns motivated the drafting of the Foresight Guidelines for the safe development of nanotechnology [10]. But, as the Center for Responsible Nanotechnology explains [5] (reference annotations added):
Gray goo would entail five capabilities integrated into one small package. These capabilities are: Mobility: the ability to travel through the environment; Shell: a thin but effective barrier to keep out diverse chemicals and ultraviolet light; Control: a complete set of blueprints and the computers to interpret them (even working at the nanoscale, this will take significant space); Metabolism: breaking down random chemicals into simple feedstock; and Fabrication: turning feedstock into nanosystems. A nanofactory would use tiny fabricators, but these would be inert if removed or unplugged from the factory. The rest of the listed requirements would require substantial engineering and integration [4].
Although gray goo has essentially no military and no commercial value, and only limited terrorist value, it could be used as a tool for blackmail. Cleaning up a single gray goo outbreak would be quite expensive and might require severe physical disruption of the area of the outbreak (atmospheric and oceanic goos [4] deserve special concern for this reason). Another possible source of gray goo release is irresponsible hobbyists. The challenge of creating and releasing a self-replicating entity apparently is irresistible to a certain personality type, as shown by the large number of computer viruses and worms in existence. We probably cannot tolerate a community of “script kiddies” [11] releasing many modified versions of goo.
Development and use of molecular manufacturing poses absolutely no risk of creating gray goo by accident at any point. However, goo type systems do not appear to be ruled out by the laws of physics, and we cannot ignore the possibility that the five stated requirements could be combined deliberately at some point, in a device small enough that cleanup would be costly and difficult. Drexler’s 1986 statement can therefore be updated: We cannot afford criminally irresponsible misuse of powerful technologies. Having lived with the threat of nuclear weapons for half a century, we already know that.
Attempts to block or “relinquish” [3, 12] molecular manufacturing research will make the world a more, not less, dangerous place [13]. This paradoxical conclusion is founded on two premises. First, attempts to block the research will fail. Second, such attempts will preferentially block or slow the development of defensive measures by responsible groups. One of the clear conclusions reached by Freitas [4] was that effective countermeasures against self-replicating systems should be feasible, but will require significant effort to develop and deploy. (Nanotechnology critic Bill Joy, responding to this author, complained in late 2000 that any nanoshield defense to protect against global ecophagy “appears to be so outlandishly dangerous that I can’t imagine we would attempt to deploy it.” [12]) But blocking the development of defensive systems would simply ensure that offensive systems, once deployed, would achieve their intended objective in the absence of effective countermeasures. James Hughes [13] concurs: “The only safe and feasible approach to the dangers of emerging technology is to build the social and scientific infrastructure to monitor, regulate and respond to their threats.”
We can reasonably conclude that blocking the development of defensive systems would be an extraordinarily bad idea. Actively encouraging rapid development of defensive systems by responsible groups while simultaneously slowing or hindering development and deployment by less responsible groups (“nations of concern”) would seem to be a more attractive strategy, and is supported by the Foresight Guidelines [10]. As even nanotechnology critic Bill Joy [14] finally admitted in late 2003: “These technologies won’t stop themselves, so we need to do whatever we can to give the good guys a head start.”
While a 100% effective ban on development might in theory avoid the potential adverse consequences, blocking all groups for all time does not appear to be a feasible goal. The attempt would strip us of defenses against attack, increasing rather than decreasing the risks. In addition, blocking development would ensure that the substantial economic, environmental, and medical benefits [15] of this new technology would not be available.
Observes Glenn Reynolds [16]:
To the extent that such efforts [to ban all development] succeed, the cure may be worse than the disease. In 1875, Great Britain, then the world’s sole superpower, was sufficiently concerned about the dangers of the new technology of high explosives that it passed an act barring all private experimentation in explosives and rocketry. The result was that German missiles bombarded London rather than the other way around. Similarly, efforts to control nanotechnology, biotechnology or artificial intelligence are more likely to drive research underground (often under covert government sponsorship, regardless of international agreement) than they are to prevent research entirely. The research would be conducted by unaccountable scientists, often in rogue regimes, and often under inadequate safety precautions. Meanwhile, legitimate research that might cure disease or solve important environmental problems would suffer.
Finally, and as explained elsewhere [17], it is well known [18] that self-replication activities, as distinct from the inherent capacity for self-replication, are not strictly required to achieve the anticipated broad benefits of molecular manufacturing. By restricting the capabilities of nanomanufacturing systems simultaneously along multiple design dimensions such as control autonomy (A1), nutrition (E4), mobility (E10), immutability (L3, L4), etc. [19], molecular manufacturing systems, whether microscale or macroscale, can be made inherently safe.
As Phoenix and Drexler [20] noted in a 2004 paper:
In 1959, Richard Feynman pointed out that nanometer-scale machines could be built and operated, and that the precision inherent in molecular construction would make it easy to build multiple identical copies. This raised the possibility of exponential manufacturing, in which production systems could rapidly and cheaply increase their productive capacity, which in turn suggested the possibility of destructive runaway self-replication. Early proposals for artificial nanomachinery focused on small self-replicating machines, discussing their potential productivity and their potential destructiveness if abused…. [But] nanotechnology-based fabrication can be thoroughly non-biological and inherently safe: such systems need have no ability to move about, use natural resources, or undergo incremental mutation. Moreover, self-replication is unnecessary: the development and use of highly productive systems of nanomachinery (nanofactories) need not involve the construction of autonomous self-replicating nanomachines…. Although advanced nanotechnologies could (with great difficulty and little incentive) be used to build such devices, other concerns present greater problems. Since weapon systems will be both easier to build and more likely to draw investment, the potential for dangerous systems is best considered in the context of military competition and arms control.
Of course, it must be conceded that while nanotechnology-based manufacturing systems can be made safe, they also could be made dangerous. Just because free-range self-replicators may be “undesirable, inefficient and unnecessary” [20] does not imply that they cannot be built, or that nobody will build them. How can we avoid “throwing out the baby with the bathwater”? The correct solution, first explicitly proposed by Freitas in 2000 [21] and later partially echoed by Phoenix and Drexler in 2004 [22], starts with a carefully targeted moratorium or outright legal ban on the most dangerous kinds of nanomanufacturing systems, while still allowing the safe kinds of nanomanufacturing systems to be built, subject to appropriate monitoring and regulation commensurate with the lesser risk that they pose.
Virtually every known technology comes in “safe” and “dangerous” flavors, which necessarily must receive different legal treatment. For example, over-the-counter drugs are the safest and most difficult to abuse, hence are lightly regulated; prescription drugs, easier to abuse, are very heavily regulated; and other drugs, typically addictive narcotics and other recreational substances, are legally banned from use by anyone, even for medicinal purposes. Artificial chemicals can range from lightly regulated household substances such as Clorox or ammonia; to more heavily regulated compounds such as pesticides, solvents and acids; to the most dangerous chemicals such as chemical warfare agents which are banned outright by international treaties. Another example is pyrotechnics, which range from highway flares, which are safe enough to be purchased and used by anyone; to “safe and sane” fireworks, which are lightly regulated but still available to all; to moderately regulated firecrackers and model rocketry; to minor explosives and skyrockets, which in most states can be legally obtained and used only by licensed professionals who are heavily regulated; to high-yield plastic explosives, which are legally accessible only to military specialists; to nuclear explosives, the possession of which is strictly limited to a handful of nations via international treaties, enforced by an international inspection agency. Yet another example is aeronautics technology, which ranges from safe unregulated kites and paper airplanes; to lightly regulated powered model airplanes operated by remote control; to moderately regulated civilian aircraft, both small and large; to heavily regulated military attack aircraft such as jet fighters and bombers, which can only be purchased by approved governments; to intercontinental ballistic missiles, the possession of which is strictly limited to a handful of nations via international treaties.
Note that in all cases, the existence of a “safe” version of a technology does not preclude the existence of a “dangerous” version, and vice versa. The laws of physics permit both versions to exist. The most rational societal response has been to classify the various applications according to the risk of accident or abuse that each one poses, and then to regulate each application accordingly. The societal response to the tools and products of molecular manufacturing will be no different. Some MM-based tools and products will be deemed safe, and will be lightly regulated. Other MM-based tools and products will be deemed dangerous, and will be heavily regulated, or even legally banned in some cases.
Of course, the mere existence of legal restrictions or outright bans does not preclude the acquisition and abuse of a particular technology by a small criminal fraction of the population. For instance, in the high-risk category, drug abusers obtain and inject themselves with banned narcotics; outlaw regimes employ prohibited poison chemicals in warfare; and rogue nations seek to enter the “nuclear club” via clandestine atomic bomb development programs. Bad actors such as terrorists can also abuse less-heavily regulated products such as fully-automatic rifles or civilian airplanes (which are hijacked and flown into buildings). The most constructive response to this class of threat is to increase monitoring efforts to improve early detection and to pre-position defensive instrumentalities capable of responding rapidly to these abuses, as recommended in 2000 by this author [4] in the context of molecular manufacturing.
The risk of accident or malfunction is less problematic for new technologies than the dangers of abuse. Engineers generally try to design products that work reliably and companies generally seek to sell reliable products to maintain customer goodwill and to avoid expensive product liability lawsuits. But accidents do happen. Here again, our social system has established a set of progressive responses to deal efficiently with this problem. A good example is the ancient technology of fire. The uses of fire are widespread in society, ranging from lightly-regulated matchsticks, butane lighters, campfires, and internal combustion engines, to more heavily regulated home HVAC furnaces, municipal incinerators and industrial smelters. A range of methods are available to deal quickly and effectively with a fire that has accidentally escaped the control of its user. Home fires due to a smoldering cigarette or a blazing grease pan in the kitchen are readily doused using a common household fire extinguisher. Fires in commercial buildings (e.g., hotels) or industrial buildings (e.g., factories) are automatically quenched by overhead sprinkler systems. When these methods prove insufficient to snuff out the flames, the local fire department is called in to limit the damage to just a single building, using fire trucks, water hoses and hydrants. If many buildings are involved, more extensive fire suppression equipment and hundreds of firefighters can be brought in from all across town to hold the damage to a single city block. In the case of the largest accidental fires, like forest fires, vast quantities of heavy equipment are deployed including thousands of firefighters wielding specialized tools, bulldozers to dig firebreaks, helicopters with pendulous water buckets, and great fleets of air tankers dropping tons of fire retardants. (These progressive measures also protect the public in cases of deliberate arson.) The future emergency response hierarchy for dealing with MM-based accidents will be no less exhaustive and may be equally effective in preserving human life and property, while allowing us to enjoy the innumerable benefits of this new technology. Notes Steen Rasmussen of Los Alamos National Laboratory in New Mexico: “The more powerful technology you unleash, the more careful you have to be.” [23]
The study of the ethical [24], socioeconomic [25-28] and legal [29] impact of replication-capable machines such as molecular assemblers and machines such as nanofactories that could build replicators is still in its earliest stages, and there is additional discussion of safety issues elsewhere [30]. However, two important general observations about replicators and self-replication should be noted here.
First, replication is nothing new. Humanity has thousands, arguably even millions, of years of experience living with entities that are capable of kinematic self-replication. These replicators range from the macroscale (e.g., insects, birds, horses, other humans) on down to the microscale (e.g. bacteria, protozoa) and even the nanoscale (e.g., prions, viruses). As a species, we have successfully managed the eternal tradeoff between risk and reward, and have successfully negotiated the antipodes of danger and progress. There is every reason to expect this success to continue. (As shown by the problem of invasive species, the biosphere requires time to adapt to new replicators, so human intervention may be required to prevent severe damage.)
The technologies of engineered self-replication, even at the microscale, are already in wide commercial use throughout the world. Indeed, human civilization is utterly dependent on self-replication technologies. Many important foods including beer, wine, cheese, yogurt, and kefir (a fermented milk), along with various flavors, nutrients, vitamins and other food ingredients, are produced by specially cultured microscopic replicators such as algae, fungi (yeasts) and bacteria. Virtually all of the rest of our food is made by macroscale replicators such as agricultural crop plants, trees, and farm animals. Many of our most important drugs are produced using microscopic self-replicators, from penicillin produced by natural replicating molds starting in the 1940s [15] to the first use of artificial (engineered) self-replicating bacteria to manufacture human insulin by Eli Lilly in 1982 [31]. These uses continue today in the manufacture of many other important drug products such as: (a) human growth hormone (HGH) and erythropoietin (EPO), (b) precursors for antibiotics such as erythromycin [32], and (c) therapeutic proteins such as Factor VIII. A few species of self-replicating bacteria are even used directly as therapeutic medicines, such as the widely available swallowable pills containing bacteria (i.e., natural biological nanomachines) for gastrointestinal refloration, as for example Salivarex™, which “contains a minimum of 2.9 billion beneficial bacteria per capsule” [33], and Alkadophilus™, which “contains 1.5 billion organisms per capsule” [34], both at a 2005 price of ~$(0.1-0.2) x 10^-9 per microscale replicator (i.e., per bacterium). Some replicating viruses, notably bacteriophages, are used as therapeutic agents to combat and destroy unhealthful infectious bacterial replicators [35], and for decades viruses have served as transfer vectors to attempt gene therapies [36]. In industry, bacteria are already employed as “self-replicating factories” [37] for various useful products, and microorganisms are also used as workhorses for environmental bioremediation [38, 39], biomining of heavy metals [40], and other applications. In due course, we will learn to safely harness the abilities of nonbiological replication-capable machines for human benefit as well.
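The per-replicator figure quoted above is simply the price of a capsule divided by the number of bacteria it contains; the short calculation below, using only the counts and per-bacterium prices stated in the text, shows the implied capsule prices.

```python
# The per-replicator price quoted above, multiplied by the quoted bacteria
# counts, gives the implied price of one capsule. Only figures stated in the
# text are used here.

per_bacterium_low, per_bacterium_high = 0.1e-9, 0.2e-9  # dollars per bacterium

for product, count in (("Salivarex", 2.9e9), ("Alkadophilus", 1.5e9)):
    low, high = per_bacterium_low * count, per_bacterium_high * count
    print(f"{product}: implied capsule price ${low:.2f} to ${high:.2f}")
```

In other words, a per-bacterium cost on the order of 10^-10 dollars corresponds to an ordinary retail capsule price of roughly fifteen to sixty cents.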
Second, replicators can be made inherently safe. An “inherently safe” kinematic replicator is a replicating system that, by its very design, is inherently incapable of surviving mutation or of undergoing evolution (and thus evolving out of our control or developing an independent agenda), and that, equally importantly, does not compete with biology for resources (or worse, use biology as a raw materials resource [4]). One primary route for ensuring inherent safety is to combine the broadcast architecture for control [41] and the vitamin architecture for materials [42], which together eliminate the likelihood that the system can replicate outside of a very controlled and highly artificial setting. There are numerous other routes to this end [10, 19]. Many dozens of additional safeguards may be incorporated into replicator designs to provide redundant embedded controls and thus an arbitrarily low probability of replicator malfunctions of various kinds, simply by selecting the appropriate design parameters [19].
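As a purely conceptual illustration of how these two measures interlock, consider the toy sketch below. It is invented here to show the design logic only and does not describe the actual broadcast or vitamin architectures in any technical detail: a fabricator that stores no blueprint of its own refuses to act unless it receives both an externally broadcast instruction stream and a prefabricated “vitamin” part that cannot be scavenged from the natural environment.

```python
# Conceptual toy model of an "inherently safe" fabricator, invented for
# illustration only (not a description of any real nanofactory protocol):
#  - broadcast architecture: no onboard blueprint; instructions must be
#    streamed continuously from an external controller, so an isolated
#    unit can do nothing;
#  - vitamin architecture: every build step also requires a prefabricated
#    "vitamin" part that does not occur in the natural environment.

class SafeFabricator:
    def __init__(self):
        self.onboard_blueprint = None  # deliberately absent

    def build_step(self, broadcast_instruction, vitamin_part):
        if broadcast_instruction is None:
            return "idle: no broadcast signal, nothing to execute"
        if vitamin_part is None:
            return "halted: required vitamin part not supplied"
        return f"executed one externally specified step: {broadcast_instruction}"

fab = SafeFabricator()
print(fab.build_step(None, None))                        # isolated unit: inert
print(fab.build_step("place part at site 42", None))     # no vitamin: halted
print(fab.build_step("place part at site 42", "sealed bearing"))  # proceeds
```

Remove either the broadcast signal or the vitamin supply and the device is inert, which is the property that makes replication outside a controlled, highly artificial setting implausible.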
Artificial kinematic replication-capable systems which are not inherently safe should not be designed or constructed, and indeed should be legally prohibited by appropriate juridical and economic sanctions, with these sanctions to be enforced in both national and international regimes. In the case of individual lawbreakers or rogue states that might build and deploy unsafe artificial mechanical replicators, the defenses we have already developed against harmful biological replicators all have analogs in the mechanical world that should provide equally effective, or even superior, defenses. Molecular manufacturing will make possible ever more sophisticated methods of environmental monitoring, prophylaxis and safety. However, advance planning and strategic foresight will be essential in maintaining this advantage.
1. An earlier version of this essay appeared as portions of Sections 5.11 and 6.3.1 in: Robert A. Freitas Jr., Ralph C. Merkle, Kinematic Self-Replicating Machines, Landes Bioscience, Georgetown TX, 2004, p. 199 and pp. 204-206; http://www.MolecularAssembler.com/KSRM/5.11.htm#p44 and http://www.MolecularAssembler.com/KSRM/6.3.1.htm. Copyright 2006 Robert A. Freitas Jr.
2. Sean Howard, “Nanotechnology and mass destruction: The need for an inner space treaty,” Disarmament Diplomacy 65 (2002); http://www.acronym.org.uk/dd/dd65/65op1.htm; Lee-Anne Broadhead, Sean Howard, “The Heart of Darkness,” Resurgence #221, November/December 2003; http://resurgence.gn.apc.org/issues/broadhead221.htm.
3. Bill Joy, “Why the future doesn’t need us,” Wired 8(April 2000); http://www.wired.com/wired/archive/8.04/joy.html. Response by Ralph Merkle, “Text of prepared comments by Ralph C. Merkle at the April 1, 2000 Stanford Symposium organized by Douglas Hofstadter,” at: http://www.zyvex.com/nanotech/talks/stanford000401.html.
4. Robert A. Freitas Jr., “Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations,” Zyvex preprint, April 2000; http://www.rfreitas.com/Nano/Ecophagy.htm.
5. “Dangers of Molecular Manufacturing,” Center for Responsible Nanotechnology, 2004; http://crnano.org/dangers.htm.
6. K. Eric Drexler, “Chapter 11. Engines of Destruction,” Engines of Creation: The Coming Era of Nanotechnology, Anchor Press/Doubleday, New York, 1986; http://www.foresight.org/EOC/EOC_Chapter_11.html. Mark Avrum Gubrud, “Nanotechnology and international security,” paper presented at the 5th Foresight Conference, November 1997; http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/. Lev Navrozov, “Molecular nano weapons: Research in China and talk in the West,” NewsMax.com, 27 February 2004; http://www.newsmax.com/archives/articles/2004/2/27/101732.shtml. Jurgen Altmann, “Military uses of nanotechnology: Perspectives and concerns,” Security Dialogue 35(March 2004):61-79. Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology, Penguin Books, New York, 2005.
7. Michael Crichton, Prey, HarperCollins Publishers, New York, 2002. Britt D. Gillette, Conquest of Paradise: An End-times Nano-Thriller, Writers Club Press, New York, 2003. John Robert Marlow, Nano, St. Martin’s Press, New York, 2004.
8. K. Eric Drexler, Engines of Creation: The Coming Era of Nanotechnology, Anchor Press/Doubleday, New York, 1986; http://www.foresight.org/EOC/
9. Philip K. Dick, “Second Variety,” Space Science Fiction, May 1953; also available in: Philip K. Dick, Second Variety and Other Classic Stories by Philip K. Dick, Citadel Press, 1991. Greg Bear, The Forge of God, Gollancz, New York, 1987; http://www.wikipedia.org/wiki/The_Forge_of_God (brief summary). Greg Bear, Anvil of Stars, Century, London, U.K., 1992; http://postviews.editthispage.com/books/byTitle/AnvilOfStars (review).
10. Foresight Institute, “Molecular Nanotechnology Guidelines: Draft Version 3.7,” 4 June 2000; http://www.foresight.org/guidelines/. Extensive excerpt at: http://www.MolecularAssembler.com/KSRM/5.11.htm#p8.
11. According to cyberjournalist Clive Thompson [43], elite writers of software viruses openly publish their code on Web sites, often with detailed descriptions of how the program works, but don’t actually release them. The people who do release the viruses are often anonymous mischief-makers, or "script kiddies", a derisive term for aspiring young hackers, "usually teenagers or curious college students, who don’t yet have the skill to program computers but like to pretend they do. They download the viruses, claim to have written them themselves and then set them free in an attempt to assume the role of a fearsome digital menace. Script kiddies often have only a dim idea of how the code works and little concern for how a digital plague can rage out of control. Our modern virus epidemic is thus born of a symbiotic relationship between the people smart enough to write a virus and the people dumb enough, or malicious enough, to spread it."
Thompson goes on to describe his early 2004 visit to an Austrian programmer named Mario, who cheerfully announced that in 2003 he had created, and placed online at his website, freely available, a program called "Batch Trojan Generator" that autogenerates malicious viruses. Thompson described a demonstration of this program: "A little box appears on his laptop screen, politely asking me to name my Trojan. I call it the ‘Clive’ virus. Then it asks me what I’d like the virus to do. Shall the Trojan Horse format drive C:? Yes, I click. Shall the Trojan Horse overwrite every file? Yes. It asks me if I’d like to have the virus activate the next time the computer is restarted, and I say yes again. Then it’s done. The generator spits out the virus onto Mario’s hard drive, a tiny 3KB file. Mario’s generator also displays a stern notice warning that spreading your creation is illegal. The generator, he says, is just for educational purposes, a way to help curious programmers learn how Trojans work. But of course I could ignore that advice."
Apparently top "malware" writers do take some responsible precautions, notes Thompson. For example, one hacker’s "main virus-writing computer at home has no Internet connection at all; he has walled it off like an airlocked biological-weapons lab, so that nothing can escape, even by accident." Some writers, after finishing a new virus, "immediately e-mail a copy of it to antivirus companies so the companies can program their software to recognize and delete the virus should some script kiddie ever release it into the wild."
12. Bill Joy, “Act now to keep new technologies out of destructive hands,” New Perspectives Quarterly 17(Summer 2000); http://www.pugwash.org/reports/pim/pim18.htm.
13. James R. Hughes, “Relinquishment or Regulation: Dealing with Apocalyptic Technological Threats,” Trinity College, Fall 2001; http://www.changesurfer.com/Acad/RelReg.pdf.
14. Spencer Reiss, “Hope Is a Lousy Defense,” Wired, December 2003; http://www.wired.com/wired/archive/11.12/billjoy_pr.html.
15. Robert A. Freitas Jr., Nanomedicine, Volume I: Basic Capabilities, Landes Bioscience, Georgetown, TX, 1999; http://www.nanomedicine.com/NMI.htm. Robert A. Freitas Jr., Nanomedicine, Volume IIA: Biocompatibility, Landes Bioscience, Georgetown, TX, 2003; http://www.nanomedicine.com/NMIIA.htm. Robert A. Freitas Jr., “Current Status of Nanomedicine and Medical Nanorobotics (Invited Survey),” J. Comput. Theor. Nanosci. 2(March 2005):1-25; http://www.nanomedicine.com/Papers/NMRevMar05.pdf.
16. Glenn Harlan Reynolds, “Techno Worries Miss the Target,” SpeakOut.com, 8 June 2000; http://speakout.com/activism/opinions/5298-1.html.
17. Robert A. Freitas Jr., Ralph C. Merkle, Kinematic Self-Replicating Machines, Landes Bioscience, Georgetown TX, 2004; Sections 3.13.2.2, 4.9.3, 4.14, 4.17, 4.19, 5.7, 5.9.4; http://www.MolecularAssembler.com/KSRM.htm.
18. K. Eric Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, John Wiley & Sons, New York, 1992; http://www.zyvex.com/nanotech/nanosystems.html.
19. Robert A. Freitas Jr., Ralph C. Merkle, Kinematic Self-Replicating Machines, Landes Bioscience, Georgetown TX, 2004, Section 5.1.9; http://www.MolecularAssembler.com/KSRM/5.1.9.htm. The notations (A1, etc.) refer to specific sections in the cited literature.
20. Chris Phoenix, Eric Drexler, “Safe exponential manufacturing,” Nanotechnology 15(2004):869-872; http://www.iop.org/EJ/news/-topic=763/journal/0957-4484. See also: Paul Rincon, “Nanotech guru turns back on ‘goo’,” BBC News Online UK Edition, 9 June 2004; http://news.bbc.co.uk/1/hi/sci/tech/3788673.stm; and Liz Kalaugher, “Drexler dubs ‘grey goo’ fears obsolete,” Nanotechweb.org, 9 June 2004; http://www.nanotechweb.org/articles/society/3/6/1/1.
21. From Freitas (2000) [4]: “Specific public policy recommendations suggested by the results of the present analysis include: (1) an immediate international moratorium on all artificial life experiments implemented as nonbiological hardware. In this context, ‘artificial life’ is defined as autonomous foraging replicators, excluding purely biological implementations (already covered by NIH guidelines tacitly accepted worldwide) and also excluding software simulations which are essential preparatory work and should continue. Alternative ‘inherently safe’ replication strategies such as the broadcast architecture are already well-known….”
22. From Phoenix and Drexler (2004) [20]: “The construction of anything resembling a dangerous self-replicating nanomachine can and should be prohibited.”
23. Ronald Kotulak, “Science on verge of new ‘Creation’: Labs say they have nearly all the tools to make artificial life,” Sun-Sentinel Tribune, 28 March 2004; http://www.sun-sentinel.com/news/local/southflorida/chi-0403280359mar28,0,4395528.story?coll=sfla-home-headlines.
24. David S. Goodsell, Bionanotechnology: Lessons from Nature, John Wiley & Sons, New York, 2004.
25. Robert A. Freitas Jr., William P. Gilbreath, eds., Advanced Automation for Space Missions, NASA Conference Publication CP-2255 (N83-15348), 1982; http://www.islandone.org/MMSG/aasm and Robert A. Freitas Jr., Noninflationary Nanofactories, Nanotechnology Perceptions 2 (May 2006), http://www.rfreitas.com/Nano/NoninflationaryPN.pdf.
26. Murray Leinster, The Duplicators, Ace Books, New York, 1964; originally published as “The Lost Race,” Thrilling Wonder Stories, April 1949. Gerald D. Nordley, “On the socioeconomic impact of smart self-replicating machines,” CONTACT 2000, NASA/Ames Research Center; http://www.contact-conference.com/archive/00.html.
27. V. Weil, “Ethical Issues in Nanotechnology,” in M.C. Roco, W.S. Bainbridge, eds., Societal Implications of Nanoscience and Nanotechnology, Kluwer, Dordrecht, 2001, pp. 193-198. R.H. Smith, “Social, Ethical, and Legal Implications of Nanotechnology,” in M.C. Roco, W.S. Bainbridge, eds., Societal Implications of Nanoscience and Nanotechnology, Kluwer, Dordrecht, 2001, pp. 203-211. See also http://itri.loyola.edu/nano/societalimpact/nanosi.pdf.
28. “Task Area 3: Problems of Self-replication, Risk, and Cascading Effects in Nanotechnology: Analogies between Biological Systems and Nanoengineering,” in Philosophical and Social Dimensions of Nanoscale Research: From Laboratory to Society: Developing an Informed Approach to Nanoscale Science and Technology, Working Group for the Study of the Philosophy and Ethics of Complexity and Scale [SPECS], University of South Carolina NanoCenter, 17 March 2003; http://www.cla.sc.edu/cpecs/nirt/nirt.html.
29. Frederick A. Fiedler, Glenn H. Reynolds, “Legal Problems of Nanotechnology: An Overview,” Southern California Interdisciplinary Law Journal 3(1994):593-629. Ty S. Wahab Twibell, “Nano law: The legal implications of self-replicating nanotechnology,” Nanotechnology Magazine, 2000; http://www.irannano.org/English/publication/Articles/Nano-law.htm. John Miller, “Beyond Biotechnology: FDA Regulation Of Nanomedicine,” Columbia Science and Technology Law Review, Vol. IV, 2002-2003; http://www.stlr.org/html/volume4/miller.pdf. Glenn Harlan Reynolds, “Nanotechnology and regulatory policy: three futures,” Harv. J. Law & Technol. 17 (Fall 2003); http://instapundit.com/lawrev/HJOLTnano.pdf.
30. Robert A. Freitas Jr., Ralph C. Merkle, Kinematic Self-Replicating Machines, Landes Bioscience, Georgetown TX, 2004; Sections 2.1.5, 2.3.6, 5.1.9(L), 6.3.1, 6.4.4; http://www.MolecularAssembler.com/KSRM.htm.
31. “Milestones in Medical Research,” Eli Lilly; http://www.lilly.com/about/milestones.html.
32. B.A. Pfeifer, S.J. Admiraal, H. Gramajo, D.E. Cane, Chaitan Khosla, “Biosynthesis of complex polyketides in a metabolically engineered strain of E. coli,” Science 291(2 March 2001):1790-1792, 1683 (comment).
33. “L-Salivarius Plus Other Beneficial Microflora,” Product Information Sheet No. 8058, Life Plus, 1996, at http://www.lightplus.com/lifeplus/8058.html; “Life Plus Vitamin/Herbal Answer For a Healthy Digestive Tract,” at http://members.aol.com/probb0254/salivrex.html; “Support Digestion Naturally: Salivarex,” at http://www.healthyway.net/products/digestion.htm.
34. “Alkadophilus: The Non-Refrigerated Acidophilus,” at: http://www.morter.com/HTML-FILES/ALKAdophilus.HTM, http://www.backcare-center.com/NC-AlkaLine.htm, and http://www.nutritionforhealth.com/herbalformulas.htm.
35. R.J. Payne, D. Phil, V.A. Jansen, “Phage therapy: the peculiar kinetics of self-replicating pharmaceuticals,” Clin. Pharmacol. Ther. 68(September 2000):225-230.
36. Michael G. Kaplitt, Arthur D. Loewy, eds., Viral Vectors: Gene Therapy and Neuroscience Applications, Academic Press, New York, 1995. Angel Cid-Arregui, Alejandro Garcia-Carranca, eds., Viral Vectors: Basic Science and Gene Therapy, Eaton Publishing Co., 2000. David Latchman, Viral Vectors for Treating Diseases of the Nervous System, Academic Press, New York, 2003. Curtis A. MacHida, Jules G. Constant, eds., Viral Vectors for Gene Therapy: Methods and Protocols, Humana Press, 2003.
37. Jonathan King, “Chapter 9. The biotechnology revolution: self-replicating factories and the ownership of life forms,” in Jim Davis, Thomas A. Hirschl, Michael Stack, eds., Cutting Edge: Technology, Information Capitalism and Social Revolution, Verso Books, 1997. M. Kleerebezemab, P. Hols, J. Hugenholtz, “Lactic acid bacteria as a cell factory: rerouting of carbon metabolism in Lactococcus lactis by metabolic engineering,” Enzyme Microb. Technol. 26(1 June 2000):840-848. J. Hugenholtz, M. Kleerebezem, M. Starrenburg, J. Delcour, W. de Vos, P. Hols, “Lactococcus lactis as a cell factory for high-level diacetyl production,” Appl. Environ. Microbiol. 66 (September 2000):4112-4114; http://aem.asm.org/cgi/content/full/66/9/4112. Bernard R. Glick, Jack J. Pasternak, Molecular Biotechnology: Principles and Applications of Recombinant DNA, American Society for Microbiology, Washington, DC, 2003.
38. According to Press [44]: “The first patented form of life produced by genetic engineering was a greatly enhanced oil-eating microbe. The patent [45] was registered to Dr. Ananda Chakrabarty of the General Electric Company in 1981 and was initially welcomed as an answer to the world’s petroleum pollution problem. But anxieties about releasing ‘mutant bacteria’ soon led the U.S. Congress and the Environmental Protection Agency (EPA) to prohibit the use of genetically engineered microbes outside of sealed laboratories. The prohibition set back bioremediation for a few years, until scientists developed improved forms of oil-eating bacteria without using genetic engineering. After large-scale field tests in 1988, the EPA reported that bioremediation eliminated both soil and water-borne oil contamination at about one-fifth the cost of previous methods. Since then, bioremediation has been increasingly used to clean up oil pollution on government sites across the United States.”
39. P. Kotrba, L. Doleckova, V. de Lorenzo, T. Ruml, “Enhanced bioaccumulation of heavy metal ions by bacterial cells due to surface display of short metal binding peptides,” Appl. Environ. Microbiol. 65(March 1999):1092-1098; http://aem.asm.org/cgi/content/full/65/3/1092?view=full&pmid=10049868. W. Bae, R.K. Mehra, A. Mulchandani, W. Chen, “Genetic engineering of Escherichia coli for enhanced uptake and bioaccumulation of mercury,” Appl. Environ. Microbiol. 67(November 2001):5335-5338; http://aem.asm.org/cgi/content/full/67/11/5335?view=full&pmid=11679366. X. Deng, Q.B. Li, Y.H. Lu, D.H. Sun, Y.L. Huang, X.R. Chen, “Bioaccumulation of nickel from aqueous solutions by genetically engineered Escherichia coli,” Water Res. 37(May 2003):2505-2511.
40. I. Suzuki, “Microbial leaching of metals from sulfide minerals,” Biotechnol. Adv. 19(1 April 2001):119-132. D.V. Rao, C.T. Shivannavar, S.M. Gaddad, “Bioleaching of copper from chalcopyrite ore by fungi,” Indian J. Exp. Biol. 40(March 2002):319-324. D.E. Rawlings, D. Dew, C. du Plessis, “Biomineralization of metal-containing ores and concentrates,” Trends Biotechnol. 21(January 2003):38-44. G.J. Olson, J.A. Brierley, C.L. Brierley, “Bioleaching review part B: progress in bioleaching: applications of microbial processes by the minerals industries,” Appl. Microbiol. Biotechnol. 63(December 2003):249-257.
41. Robert A. Freitas Jr., Ralph C. Merkle, Kinematic Self-Replicating Machines, Landes Bioscience, Georgetown TX, 2004, Section 4.11.3.3; http://www.MolecularAssembler.com/KSRM/4.11.3.3.htm.
42. Robert A. Freitas Jr., Ralph C. Merkle, Kinematic Self-Replicating Machines, Landes Bioscience, Georgetown TX, 2004, Section 4.3.7; http://www.MolecularAssembler.com/KSRM/4.3.7.htm.
43. Clive Thompson, “The Virus Underground,” The New York Times, 8 February 2004; http://www.nytimes.com/2004/02/08/magazine/08WORMS.html.
44. Joseph Henry Press, “Chapter 5. Biotechnology and the Environment,” Biotechnology Unzipped: Promises and Realities, National Academy of Sciences, Washington, DC, 2003, pp. 134-160; http://books.nap.edu/books/0309057779/html/134.html.
45. Ananda M. Chakrabarty, “Microorganisms having multiple compatible degradative energy-generating plasmids and preparation thereof,” United States Patent No. 4,259,444, 31 March 1981; Ananda M. Chakrabarty, Scott T. Kellogg, “Bacteria capable of dissimilation of environmentally persistent chemical compounds,” United States Patent No. 4,535,061, 13 August 1985.
© 2006 Robert A. Freitas, Jr.
The Center for Responsible Nanotechnology (CRN) has created a series of new research papers in which industry experts predict profound impacts of nanotechnology on society. The first set of 11 of these original essays by members of CRN’s Global Task Force will appear in the March 27 issue of the journal Nanotechnology Perceptions. KurzweilAI.net will syndicate these essays over that week. In this preview, Chris Phoenix, CRN’s director of research, presents the challenge of how to deal with possible unintended consequences of molecular manufacturing.
Humans are good at pushing limits. We can survive in scorching deserts and in the frozen Arctic. We have flown faster than sound and sent robots to other planets. We have managed, with help from fossil fuels, to feed six billion people. Even before we had motors and technological navigation equipment, some of us were able to find and colonize islands in the middle of the vast Pacific Ocean.
Pushing limits has its darker side as well. Humans are not good at respecting each other’s rights; the ferocity of the Mongol hordes remains legendary, and the 20th century provides multiple examples of state-sponsored mass murder. Natural limits frequently are pushed too far, and whole civilizations have been wiped out by environmental backlash. We are too good at justifying our disrespect of limits, and then we often become increasingly destructive as the problem becomes more acute. More than a century ago, Lord Acton warned that "absolute power corrupts absolutely." This can be restated as, "Complete lack of limits leads to unlimited destruction."
Molecular manufacturing has the potential to remove or bypass many of today’s limits. It is not far wrong to say that the most significant remaining limits will be human, and that we will be trying our hardest to bypass even those. To people with faith in humanity’s good nature and high potential, this will come as welcome news. For many who have studied history, it will be rather frightening. A near-total lack of limits could lead straight to a planet-wide dictatorship, or to any of several forms of irreversible destruction.
Many of the plans that have been proposed to deal with molecular manufacturing, by CRN and others, assume (usually implicitly) that the plan will be implemented within some bigger system, such as the rule of law. This will be problematic if molecular manufacturing is powerful enough that its users can make their own law. We cannot assume that existing world systems will continue to provide a framework in which molecular manufacturing will play out. Those systems that adopt the new technology will be transformed; those that do not will be comparatively impotent. We will have to find ways for multiple actors empowered by molecular manufacturing to coexist constructively, without reliance on the stabilizing forces provided by today’s global institutions.
Any active system without limits will run off the rails. The simplest example is a reproducing population, which will indulge in exponential growth until it exhausts its resources and crashes. Another example can be found in the "excesses" of behavior that are seen in political revolutions. Human systems need limits as much as any other system, for all that we try to overcome them.
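A toy simulation makes the overshoot-and-crash pattern concrete. This is a sketch only; the growth rate, resource pool, and consumption figures below are arbitrary assumptions chosen to show the shape of the curve, not estimates from the essay.

    # Sketch: a replicator population with no limit on its growth draws down a
    # fixed resource pool, overshoots it, and crashes. All numbers are arbitrary
    # illustrative assumptions.
    population = 10.0          # starting replicators
    resources = 1_000_000.0    # fixed, non-renewable resource pool
    growth_rate = 0.5          # 50 percent growth per step while resources last
    cost_per_unit = 1.0        # resources consumed per individual per step

    for step in range(40):
        demand = population * cost_per_unit
        if demand <= resources:
            resources -= demand
            population *= 1.0 + growth_rate   # unconstrained exponential growth
        else:
            resources = 0.0                   # the pool is exhausted
            population *= 0.1                 # and the population crashes
        print(f"step {step:2d}  population {population:14,.0f}  resources {resources:14,.0f}")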
Through all of history, the presence of limits has been a reasonable assumption. Nations were limited by other nations; populations were limited by geography, climate, or disease; and societies would sometimes be stable long enough to develop and agree on a morality that provided additional useful limits. A society that overstepped its bounds could expect to collapse or be out-competed by other societies.
It’s tempting to think that humanity has developed a new worldview, the Enlightenment, that will provide internal moral limits. However, the Enlightenment may be fading. It was supported by, and synergistic with, the brief period when people could be several times more productive using machines than by manual labor. During that period, individual people were quite valuable. However, now that we’re developing automation, people can be many times as productive (not just several times), and we don’t need all that productivity. And indeed, as abundance develops into glut, Enlightenment values and practices may be fading.
It’s tempting to think that, left to themselves, people will be generally good. History, in both microcosm and macrocosm, shows that this doesn’t work any better than Communism did. Without sufficient external limits, some people will start cheating, or choosing to violate the moral code of their society. Not only will this reduce benefits for everyone, but the ingrained human aversion to being taken advantage of will cause others to join the cheaters if they can’t prevent them. This leads to a vicious cycle, and the occasional saint won’t be enough to stop the degeneration.
It’s tempting to think that, now that we have digital computers, everything has changed and the old rules of scarcity and competition needn’t apply. As explored in CRN’s paper "Three Systems of Action," [i] digital data transfer can be unlimited-sum, with benefit unrelated to and far larger than the cost. But digital information does not replace existing systems or issues wholesale. And increasing Internet problems such as spam, phishing, and viruses demonstrate that domains of digital abundance and freedom cannot moderate their own behavior very well.
It’s tempting to think that an ongoing power struggle between human leaders would provide limits. But in an age of molecular manufacturing, this seems unlikely for two reasons. First, such a competition almost certainly would be unstable, winner-take-all, and end up in massive oppression: no better than simply starting out with a dictatorship. Second, the contest probably would shift quickly to computer-assisted design and attack, and that would be even worse than all-out war between mere humans, even humans assisted by molecular manufactured weapons. Civilians would probably be a major liability in such conflicts: easy to kill and requiring major resources (not to mention oppressive lifestyle changes) to defend.
Molecular manufacturing will give its wielders extreme power: certainly enough power to overcome all significant non-human limits (at least within the context of the planet; in space, there will be other limits such as scarcity of materials and speed of light). Even if the problem of cheaters could be overcome, we do not have many internal limits these days; the current trend in capitalism is to deny the desirability of all limits except those that arise from competition. What’s left?
Somehow, we have to establish a most-powerful system that limits itself and provides limits for the rest of our activities. Long ago, Eric Drexler proposed an Active Shield.[ii] Others have proposed building an AI to govern us, though they have not explained how to build internal limits into the AI. I have proposed creating a government of people who have accepted modifications to their biochemistry to limit some of their human impulses. All of these suggestions have problems.
Open communication and accountability may supply part of the answer. David Brin has proposed "reciprocal accountability."[iii] It’s been noted that democracies, which embody transparency and accountability, rarely have famines or go to war with each other. Communication and accountability may be able to overcome the race to the bottom that happens when humans are left to their own devices. But communication and accountability depend on creation and maintenance of the infrastructure; on continued widespread attention; and on forensic ability (being able to connect effect back to cause in order to identify perpetrators). Recent trends in US media and democracy are not encouraging; it seems people would rather see into bedrooms than boardrooms. And it’s not clear whether people’s voices will still matter to those in power once production becomes sufficiently automated that nation-scale productivity can be maintained with near-zero labor.
If we can somehow find meta-limits, then within those limits a variety of administration methods may work to optimize day-to-day life. In other words, the problem with administrative suggestions is not inherent in the suggestions themselves; it is that the suggestions rely on something else to provide limits. Without limits, nothing can be stable; with limits, wise administration will still be needed, and best practices should be researched. But perhaps the biggest problem of all will be how to develop a system of near-absolute power that will not become corrupt.
[i] http://crnano.org/systems.htm
[ii] http://www.foresight.org/EOC/EOC_Chapter_11.html#section04of05
[iii] http://davidbrin.blogspot.com/2005/09/another-pause-this-time-for-soa.html
© 2006 Chris Phoenix. Reprinted with permission.
Founded in December 2002, the Center for Responsible Nanotechnology has a modest goal: to ensure that the planet navigates the emerging nanotech era safely. That’s a lot for a couple of volunteers to shoulder, but Mike Treder and Chris Phoenix have carried their burden well, and done much to raise awareness of the potential risks and benefits of molecular manufacturing, including a major presentation at the US Environmental Protection Agency on the impacts of nanotechnology….
[WorldChanging] conducted this interview as a series of email exchanges over the last few months. This post captures (and organizes) the highlights of that conversation. Mike, Chris: thank you. Your work is one of the reasons we have optimism for the future. Jamais Cascio
WorldChanging: So, to start: what is the Center for Responsible Nanotechnology hoping to make happen?
Center for Responsible Nanotechnology: We want to help create a world in which advanced nanotechnology (molecular manufacturing) is widely used for beneficial purposes, and in which the risks are responsibly managed. The ability to manufacture highly advanced nanotech products at an exponentially accelerating pace will have profound and perilous implications for all of society, and our goal is to lay a foundation for handling them wisely.
WC: So you set up a non-profit. How is that going?
CRN: CRN is a volunteer organization. We have no paid positions. Our co-founders have dedicated time to this cause in lieu of professional paying careers. But the thing is, technical progress toward nanotechnology is really accelerating, and it’s become more urgent than ever for us to examine the global implications of this technology and begin designing wise and effective solutions.
It won’t be easy. CRN needs to grow, quickly, to meet the expanding challenge. We’re asking people who share the belief that our research must keep moving ahead to support us with small or large donations.
WC: One of the unusual aspects of CRN is that you’re neither a nanotech advocacy group nor unmoving nanotech critics. Your focus is on the responsible development and deployment of next-generation nanotechnologies. Tell me a bit about what "responsible nanotechnology" looks like.
CRN: You’re right that we have tried hard to stay in a "middle" place. We sometimes refer to it as between resignation (forsaking attempts to manage the technology) and relinquishment (forsaking the technology altogether). Our view is that advanced nanotechnology (molecular manufacturing) should be developed as fast as it can be done safely and responsibly. We’re promoting responsible rapid development of the technology, not because we believe it is safe, but because we believe it is risky, and because the only realistic alternative to responsible development is irresponsible development.
CRN: So, what does ‘responsible’ mean? First, that we take effective precautions to forestall a new arms race. Second, that we do what is necessary to prevent a monopoly on the technology by one nation, one bloc of nations, or one multinational corporation. Third, that we seek appropriate ways to share the tremendous benefits of the technology as widely as possible; we should not allow a ‘nano-divide.’ Fourth, that we recognize the possibilities for both positive and negative impacts on the environment from molecular manufacturing, and that we adopt sensible global regulations on its use. And fifth, that we understand and take precautions to avert the risk of severe economic disruption, social chaos, and consequent human suffering.
WC: How does the "responsible" approach differ from something like the "Precautionary Principle?" What’s your take on the concept of "precaution" applied to emerging technologies?
CRN: One of our earliest published papers was on that very topic. It’s called "Applying the Precautionary Principle to Nanotechnology." CRN’s analysis shows that there are actually two different forms of the Precautionary Principle, something that many people don’t realize. We call them the ‘strict form’ and the ‘active form.’
The strict form of the Precautionary Principle requires inaction when action might pose a risk. In contrast, the active form calls for choosing less risky alternatives when they are available, and for taking responsibility for potential risks. Because the strict form of the Precautionary Principle does not allow consideration of the risks of inaction, CRN believes that it is not appropriate as a test of molecular manufacturing policy.
The active form of the Precautionary Principle, however, seems quite appropriate as a guide for developing molecular manufacturing policy. Given the extreme risks presented by misuse of nanotechnology, it appears imperative to find and implement the least risky plan that is realistically feasible. Although we cannot agree with the strict form of the Precautionary Principle, we do support the active form.
WC: What is the CRN Task Force, and what do you hope to have it accomplish? [Disclaimer: I am a member of the CRN Task Force.]
CRN: Without mutual understanding and cooperation on a global level, the hazardous potentials of advanced nanotechnology could spiral out of control and deny any hope of realizing the benefits to society. We’re not willing to leave the outcome to chance.
So, last August we announced the formation of a new Task Force, convened to study the societal implications of this rapidly emerging technology. We’ve brought together a diverse group of more than 60 world-class experts from multiple disciplines to assist us in developing comprehensive recommendations for the safe and responsible use of nanotechnology.
Our first project is just nearing completion. Members of the task force have written a series of essays describing their greatest concerns about the potential impacts of molecular manufacturing. We have completed editing approximately 20 excellent articles that range from discussion of economic issues and security issues, to the implications of human enhancement and artificial intelligence. They will be published in the March 2006 issue of Nanotechnology Perceptions, an academic journal maintained by a couple of European universities. We will simultaneously publish the essays at the Wise-Nano.org website, where anyone can read and comment on them.
WC: We’ve discussed the different kinds of nanotechnology on WorldChanging, and you folks posted a very useful follow-up to one of our pieces on that subject. To be clear, when we talk about "nanotechnology" in this context, we’re talking about "nanofactories." So let’s drill down a bit on that particular subject. What kinds of things could an early version of a nanofactory make? Are we just talking desktop printing of simple physical objects (like a cup), items embedding diverse materials & electronics (like a laptop), or organic and biochemical materials (like medicines or food)?
CRN: The first, tiny nanofactory will be built by intricate laboratory techniques; then that nanofactory will have to build a bigger one, and so on, many times over. This means that even the earliest usable nanofactory will necessarily work extremely fast and be capable of making highly functional products with moving parts. So, in addition to laptops and phones, an early nanofactory should be able to make cars, home appliances, and a wide array of other products.
Medicines and food will not be early products. A large number of reactions will be required to make the vast variety of organic molecules. Some molecules will be synthesized more easily than others. It may work better first to build (using a nanofactory) an advanced fluidic system that can do traditional chemistry.
Food will be especially difficult because it contains water. Water is a small molecule that would float around and gum up the factory. Also, food contains a number of large and intricate molecules for taste and smell; furthermore, nourishing food requires mineral elements that would require extra research to handle with nanofactory-type processes.
WC: It seems to me that manufacturing via nanofactories will require some different concepts of the manufacturing process than the automated assembly-line model most of us probably have in mind when we think of "factories." Parallel to early design work on the hardware end, has there been much work done on the software/design end of how nanofactories would work?
CRN: We have thought about how nanofactories would be controlled, and it seems probable that it’s just not a very difficult problem, at least for the kind of nanofactory that can include lots of integrated computers. (This should include almost any diamond-building nanofactory, and a lot of nanofactories based on other technologies as well.)
Until automated design capabilities are developed, products will be limited largely by our product design skills. A simple product-description language, roughly analogous to PostScript, would be able to build an enormous range of products, but would not even require fancy networking in the nanofactory. (Drexler discusses product-description languages in section 14.6 of Nanosystems.)
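To make the PostScript analogy concrete, here is a purely hypothetical toy: a product description as a small list of placement and repetition commands, walked by an interpreter. The command names, block types, and the controller that would consume such a stream are all invented for this sketch; they are not CRN's or Drexler's specification.

    # Hypothetical toy product-description language, loosely in the PostScript
    # mold: a program is a list of commands that an interpreter expands into
    # block-placement operations. Everything here is illustrative, not a spec.
    def interpret(program):
        placements = []
        for command, *args in program:
            if command == "block":            # place one standard building block
                block_type, x, y, z = args
                placements.append((block_type, (x, y, z)))
            elif command == "repeat":         # tile a sub-program along one axis
                count, axis, sub = args
                for i in range(count):
                    for block_type, (x, y, z) in interpret(sub):
                        pos = [x, y, z]
                        pos[axis] += i
                        placements.append((block_type, tuple(pos)))
        return placements

    # A trivial "product": a strut ten blocks long, described in one line.
    strut = [("repeat", 10, 2, [("block", "generic", 0, 0, 0)])]
    print(len(interpret(strut)), "blocks placed")   # prints: 10 blocks placed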
WC: What makes nanofactories so different from traditional production methods?
CRN: It’s important to understand that molecular manufacturing implies exponential manufacturing: the ability to rapidly build as many desktop nanofactories (sometimes called personal fabricators) as you have the resources for. Starting with one nanofactory, someone could build thousands of additional nanofactories in a day or less, at very low cost. This means that projects of almost any size can be accomplished quickly.
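The arithmetic behind "thousands in a day" is ordinary repeated doubling. A sketch, assuming for illustration that a nanofactory can build a copy of itself in about two hours (the interview states no replication time):

    # Exponential manufacturing as repeated doubling. The two-hour replication
    # time is an assumption for illustration only.
    hours_per_doubling = 2
    factories = 1
    for hour in range(hours_per_doubling, 25, hours_per_doubling):
        factories *= 2        # every existing nanofactory builds one more
        print(f"after {hour:2d} hours: {factories:,} nanofactories")
    # Twelve doublings in 24 hours turn one nanofactory into 4,096, and the same
    # doubling applies to whatever those factories build instead of copies.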
Those who have access to the technology could use it to build a surveillance system to track six billion people, weapons systems far more powerful than the world’s combined conventional forces, construction projects on a planetary scale, or spacecraft that make spaceflight as easy as airplane flight is today.
Massive construction isn’t always bad. Rapid construction could allow us to build environmental remediation technologies on a huge scale. Researchers at Los Alamos National Laboratory are suggesting that equipment could be built to remove significant quantities of carbon dioxide directly from the atmosphere. With molecular manufacturing, this could be done far more quickly, easily, and inexpensively.
In addition to being powerful, the technology will also be deft and exquisite. Medical research and treatment will advance rapidly, given access to nearly unlimited numbers of medical robots and sensors that are smaller than a cell.
This only scratches the surface of the implications. Molecular manufacturing has as many implications as electricity, computers, and gasoline engines.
WC: In other words, nanotechnology is both an engineering process and (for lack of a less jargony phrase) an "enabling paradigm": it doesn’t just make it possible to do what we now do better, faster, and cheaper; it also makes it possible (in time) to do some things that we can’t now do.
CRN: Yes, exactly. Another good way to look at it is as a general-purpose technology: enhancing and enabling a wide range of applications. It will be similar in effect to, say, electricity or computers.
WC: Back up a sec. The complexities of surveillance systems, planetary engineering, and cheap & easy space flight come from much more than not being able to make enough or sufficiently-precise gear. There are also questions of design, of power, of scale, and so forth. These seem likely to take substantial effort and time.
CRN: The speed of development will differ for each project. But by today’s standards, almost any project could be done quite quickly. A lot of hardware development time today is spent in compensating for the high cost and large delay associated with building each prototype. If you could build a prototype in a few hours at low cost, a lot of engineering could be bypassed. Of course, this is less true for safety-critical systems. But imagine how quickly space flight could be developed if Elon Musk (SpaceX), John Carmack (Armadillo), and Burt Rutan could each build and fly a new (unmanned) spacecraft every day instead of waiting three months or more.
Power will of course have to be supplied to any project. But one of the first projects may be a massive solar-gathering array that could supply power for planet-scale engineering. A nanofactory-built solar array should be able to repay the energy cost of its construction in just a few days, so scaling up the solar array itself would not take too long.
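The same doubling logic can be applied to the solar example. A sketch, taking the few-day energy payback from the answer above and assuming, purely for illustration, that all of the surplus energy is reinvested in building more collector area:

    # If nanofactory-built collector area repays its construction energy in a few
    # days, and the surplus is reinvested, the array grows exponentially. The
    # 3-day payback echoes the answer above; the rest is assumed for illustration.
    payback_days = 3.0
    area = 1.0                            # arbitrary starting collector area
    for week in range(1, 11):
        doublings = 7.0 / payback_days    # payback periods per week
        area *= 2 ** doublings            # each period builds an equal new area
        print(f"week {week:2d}: collector area grown by a factor of {area:,.0f}")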
A comparable advantage can be seen today in computer chip design. FPGAs and ASICs are two similar kinds of configurable computer chips. They differ in that ASICs are designed before they are built, and FPGAs can have new designs downloaded to them in seconds, even after they are integrated into a circuit. An FPGA can be designed by a person in a week or two. An ASIC requires a team of people working for several months, largely to make absolutely sure that they have not made even a single mistake, which could cost the company millions of dollars and months of delay. The difference between today’s development cycle and nanofactory-enabled product R&D is the difference between ASICs and FPGAs.
WC: The degree to which research is largely corporate, academic or governmental will obviously vary from country to country. Who are some of the organizations doing innovative work in nanotech?
CRN: There are only a few companies that are explicitly working on molecular manufacturing. Many more are doing work that is relevant, but not aiming at that goal, or at least not admitting to it.
Zyvex LLC is working on enabling technologies, with the stated goal of providing "tools, products, and services that enable adaptable, affordable, and molecularly precise manufacturing."
In Japan, individual silicon atoms have been moved and bonded into place since 1994, first by the Aono group and then by Oyabu. Because this used a much larger scanning probe microscope to move the atoms, it is not a large-scale manufacturing technique.
Researchers at Rice University have developed a "nano-car" with single-molecule wheels that roll on molecular bearings, and reportedly are aiming toward "nano-trucks" that could transport molecules in miniature factories.
WC: To what degree is nanotechnology research a province of the big industrial countries, and to what degree is it accessible to forward-looking developing countries (what we term on WorldChanging the "leapfrog nations")?
CRN: In the broad sense of nanoscale technologies, some kinds of nanotech research are quite accessible to leapfrog nations. Molecular manufacturing research may be accessible as well. Atom-level simulations can now be run on desktop PCs. Some of the development pathways, such as biopolymer approaches, require only a small lab’s worth of equipment.
We don’t yet know exactly how difficult it will be to develop a nanofactory. Several approaches are on the table, but there could be a much easier approach waiting to be discovered. It’s probably safe to say that any nation that can support a space program could also engage in substantial research toward molecular manufacturing. Note that several individuals are now supporting space programs, including Elon Musk of SpaceX and Paul Allen, who funded SpaceShipOne.
WC: Do you expect home "hobbyist" designers, perhaps using home-made nanotools, to have any role in the nanotechnology revolution, as "garage hackers" did in the early days of personal computing?
CRN: We have been aware of some of the scanning probe microscope efforts. If advanced molecular manufacturing requires a vacuum scanning-probe system cooled by liquid helium, it’s doubtful you could do that in your garage. On the other hand, if all it requires is an inert-gas environment at liquid nitrogen temperatures, then some work might be doable by a very competent hobbyist.
Design of nanomachines (as opposed to construction) is already accessible to hobbyists. Without the ability to test their designs in the lab, many of the designs will have bugs, of course. However, at least in the early stages, the development of new design approaches and the demonstration that we’ve learned even approximately how to implement mechanisms will be important contributions.
WC: A big concern in a world of easy fabrication is what to do with broken or obsolete stuff. In what ways could a nanofactory-type system use "waste" materials, with an eye towards the "cradle-to-cradle" concept?
CRN: If the stuff is made of light atoms, such as carbon and nitrogen, it should be straightforward to burn it in an enclosed system. The resulting gases could be cooled and then sorted at a molecular level, and the molecules could be stored for re-use.
It seems likely that products will be designed and built using modules that would be somewhat smaller than a human cell. If these modules are standardized and re-usable, then it might be possible to pull apart a product and rearrange the modules into a different product. However, there are practical problems: the modules themselves may be obsolete, and they would need to be carefully cleaned before they could be reassembled. It would probably be easier to reduce them to atoms and start over, since every atom could be contained and re-used.
WC: That seems likely to take a serious amount of energy to accomplish thoroughly, am I right? That is, if I toss my cell phone into an incinerator, different parts will cook at different temperatures, and there are some components that would require some fairly high temps to break down. In addition, the nano-incinerator will need to be able to sort out the various atoms that are emitted by the burning object. Sounds complex.
This becomes an important issue, because a world where it’s really easy to make stuff but much harder to get rid of it starts to accelerate some already-serious problems around garbage, especially hazardous wastes.
CRN: Breaking down a carbon-based product just requires heating it a bit, then exposing it to oxygen or hydrogen, something that can combine with the carbon to produce small gas molecules. This process will likely be exothermic; in other words, being high in carbon, nano-built products would burn very nicely when you wanted them to. (Adding small integrated water tanks that were drained before recycling would prevent premature combustion.)
Constructing a nano-built product requires not only rearranging a lot of molecular bonds, but computing how to do that, and moving around a lot of nanoscale machinery. A nanofactory might require several times the bond energy to accomplish all that. The energy required to break down a nano-built product should be less than the energy it took to make it in the first place. And in terms of product strength per energy invested, nano-built diamond would probably be many times better than aluminum, a cheap, energy-intensive commodity.
WC: We’ve occasionally written on WC about the increasing "digitization" of physical objects, whether through embedded computer chips and sensors or even the introduction of DRM-style use controls. On the flip side, futurists have for a few years talked about the possibility of "napster fabbing": swapping design files, legally or otherwise, and/or the development of an open source culture around next-generation fabrication tools like nanofactories.
What do you see as the key intellectual property issues emerging from the rise of nanomanufacturing?
CRN: Because molecular manufacturing will be a general-purpose technology, we can expect that it will raise many of the issues that exist today in many different domains. Many issues will be the same as for software and entertainment, but the stakes will be far higher. The issues we see in medicine, with controversies over whether affordable pharmaceuticals should be provided to developing nations, will also apply to humanitarian applications of nanofactory products.
WC: To tease that point out for a minute, you’re suggesting that the issue won’t be with the difficulty or expense of making the materials, but the expense of the time necessary to come up with the design in the first place. Big pharma argues that the majority of their work is actually in dead ends, and that the high fees they charge for the drugs that do work are to make up for the time they take with the stuff that doesn’t work. Would the nanofactory world, at least in its early days, parallel this?
CRN: It’s not an exact parallel. Some percentage of pharmaceutical development costs goes to preliminary testing, another percentage to clinical trials, which are hugely expensive due to regulation and liability, and a third percentage to advertising and incentives for doctors to prescribe the new medicine. Of these three, probably only the first will apply to early nanofactory products.
We do expect design time to be a large component of the cost of a product. But the Open Source software movement shows that significant design time can be contributed without adding to product price.
WC: So you see Open Source as an aspect of the nanofactory future?
CRN: Whether or not open source approaches will be allowed to develop nanofactory products is the single biggest intellectual property question. Open source software has been astonishingly creative and innovative, and open source products could be a rich source of innovation as well as humanitarian designs. Even businesses could benefit: since open source usually doesn’t put a final polish on its products, commercial interests can repackage them and sell at a good profit.
However, the business interests that will want a monopoly, and the security institutions that will be uncomfortable with unrestricted fabbing, will probably oppose open source products. It would be easy to criminalize unrestricted fabbing, though far more difficult to prevent it. Prevention of private innovation, through simply not allowing private ownership of nanofactories, would have to be rigorously enforced worldwide: likely impossible, and certainly oppressive. Criminalization without prevention would almost certainly be bad policy, but it will probably be tried.
WC: We see early parallels to this in the issue of open source and "digital rights management" routines. The idea of outlawing Open Source (because it can’t be locked down) even gets kicked around from time to time. It seems likely that open source development that could result in new weapons would be even more likely to trigger this kind of response.
CRN: Historically, Open Source has been a huge source of innovation. Open source applied to molecular manufacturing could result in new weapons, but also in new defenses. Shutting down Open Source might not reduce the weapons much, but it probably would reduce the development of defenses. We should think very carefully before we reduce our capacity to design new defenses. That said, you may well be right that a combination of government and corporate interests would work together to successfully eliminate Open Source-type development.
WC: What would you say are your top concerns about how nanofactory technology might develop?
CRN: Our biggest concern is that molecular manufacturing will be a source of immense military power. A medium-sized or larger nation that was the sole possessor of the technology would be a superpower, with a strong likelihood of becoming the superpower if they were sufficiently ruthless. This implies geopolitical instability in the form of accelerating arms races and preemptive strikes. For several reasons, a nanofactory-based arms race looks less stable than the nuclear arms race was.
Related to the military concern is a tangle of security concerns. If molecular manufacturing proliferates, it will become relatively easy to build a wide range of high-tech automated weaponry. Accountability may decrease even as destructive power increases. The Internet, with its viruses, spam, spyware, and phishing, provides a partial preview of what we might expect. It could be very difficult to police such a society without substantial weakening of civil rights and even human rights.
Economic disruption is a likely consequence of widespread use of molecular manufacturing. On the one hand, we would have an abundance of production capacity able to build high-performance products at minimal expense. On the other hand, this could threaten a lot of today’s jobs, from manufacturing to transportation to mineral extraction.
Environmental damage could result from widespread use of inexpensive products. Although products filling today’s purposes could be made more efficient with molecular manufacturing, future applications such as supersonic and ballistic transport may demand far more energy than we use today.
Another major risk associated with molecular manufacturing comes from not using it for positive purposes. Artificial scarcities (legal restrictions) have been applied to lifesaving medicines. Similar restrictions on molecular manufacturing, whether in the form of military classification, unnecessary safety regulations, or explicit intellectual property regulation, could allow millions of people to die unnecessarily.
WC: We know from the digital restrictions/"piracy" debate that technical limitations on copying, etc., do an adequate job of preventing regular folks from duplicating movies, software and such, whether for illicit reasons (passing a copy to a friend) or otherwise (making a backup or other "fair use"), while doing little to prevent real IP pirates from duping off thousands of copies to sell on the street in Shanghai or the like.
In short, there’s every reason to believe that top-down efforts to stymie the illegal/illicit/irresponsible use of nanofactories will be only marginally-effective, at best, while driving the worst stuff deep underground and preventing regular citizens from using their nanofactories in ways that would be beneficial and not significantly harmful.
CRN: It would be premature to dismiss all top-down regulation as ineffective. At the same time, the reduction in humanitarian and other benefits from excessive regulation is one of CRN’s primary concerns. It is certainly true that regulation will impose a significant cost in lost opportunities. However, because there are so many different types of harm that could be done with a nanofactory, we are not ready to say that all regulation would be undesirable.
It will be difficult to apply "fine-grained relinquishment" (Kurzweil’s term) to a general-purpose technology like nanofactories. However, we will probably have to achieve this, because both blanket permissiveness and blanket restrictions will impose extremely high costs and risks.
As we have said before, there will be no simple solutions. We will need a combination of both top-down and emergent approaches.
WC: I’ve been a pretty vocal advocate of openness as a tool for countering dangerous uses. It’s a bit counter-intuitive, I admit, but there’s real precedent for its value. Most experts see free/open source software, for example, as being more secure than closed, proprietary code. And the treatment for SARS (to cite a non-computer example) emerged directly from open global access to the virus genome.
In both cases, the key is the widespread availability of the underlying "code" to both professional and interested amateurs. The potential increase in possible harmful use of that knowledge is, at least so far, demonstrably outweighed by the preventative use.
What do you think of an open approach to nanotechnology as a means of heading off disasters?
CRN: In a false dichotomy between totally closed and totally open, the open approach would seem to increase the dangers posed by hobbyists and criminals. A totally closed approach, assuming no one in power was insanely stupid, probably would not lead to certain kinds of danger such as hobbyist-built free-range self-replicators, the so-called grey goo.
I don’t think we can count on no one in power being insanely stupid, however. Realistically, even a totally closed, locked-down, planet-wide dictator approach would not be safe.
A partially closed approach, where Open Source was criminalized but bootleg or independent nanofactories were available, would be prone to danger from criminals and rebellious hobbyists; and by the way, the world still needs a lot more research to determine just how extreme that danger is. An open approach probably would not increase the danger much versus a partially closed approach, and would certainly increase our ability to deal with the danger.
Remember Ben Franklin’s adage: Three can keep a secret, if two are dead. There would be a substantial danger of disastrous abuse even with a mere one thousand people or groups having access to the technology (and the rest of the six billion at their mercy). It’s not certain that the danger would be very much worse with a million or even a billion people empowered.
WC: Closing on a more positive note, what would you say are your biggest hopes about how this kind of technology might be applied? In other words, what does a world of responsible nanotechnology look like?
CRN: We would like to see a world in which security and geopolitical concerns are addressed proactively and skillfully, in order to maximize liberty without allowing any devastating uses.
We would like to see a world in which the ubiquity of tradeoffs is recognized, and where consequences are neither dismissed nor exaggerated. Regulation should be appropriate to the extent of the various risks. The drawbacks of inaction should be considered along with the risks and problems of action.
We would like to see a world in which everyone has access to at least a minimal molecular manufacturing capacity. The computer revolution has shown that inventiveness is maximized by a combination of commercial and open source development, and open source is a good generator of free basic products when the cost of production is tiny.
© 2006 Jamais Cascio. Reprinted with permission.
It turns out that information technology is increasingly encompassing everything of value. It’s not just computers and electronic gadgets; it now includes the field of biology. We’re beginning to understand how life processes, disease, and aging are manifested as information processes, and we are gaining the tools to actually manipulate those processes. The same is true of our intellectual and cultural creations: our music and movies are facilitated by information technology, and are distributed and represented as information.
Evolutionary processes work through indirection. Evolution creates a capability, and then it uses that capability to evolve the next stage. That’s why the next stage goes more quickly, and that’s why the fruits of an evolutionary process grow exponentially.
The first paradigm shift in biological evolution was the evolution of cells, and in particular DNA (actually, RNA came first): the evolution of essentially a computer system or information-processing backbone that would allow evolution to record the results of its experiments. That shift took billions of years. Once DNA and RNA were in place, the next stage, the Cambrian explosion, when all the body plans of the animals were evolved, went a hundred times faster. Then those body plans were used by evolution to concentrate on higher cognitive functions. Biological evolution kept accelerating in this manner. Homo sapiens, our species, evolved in only a few hundred thousand years, the blink of an eye in evolutionary terms.
Then, again working through indirection, biological evolution used one of its creations, the first technology-creating species, to usher in the next stage of evolution, which was technology. The enabling factors for technology were a higher cognitive function with an opposable appendage, so we could manipulate and change the environment to reflect our models of what could be. The first stages of technology evolution (fire, the wheel, stone tools) only took a few tens of thousands of years.
Technological evolution also accelerated. Half a millennium ago, the printing press took a century to be adopted; half a century ago, the first computers were designed with pen on paper. Now computers are designed in only a few weeks’ time by computer designers sitting at computers, using advanced computer-assisted design software. When I was at MIT [in the mid-1960s], a computer roughly the size of this room cost millions of dollars, yet was less powerful than the computer in your cell phone today.
One of the profound implications is that we are understanding our biology as information processes. We have 23,000 little software programs inside us called genes. These evolved in a different era. One of those programs, called the fat insulin receptor gene, says, basically, hold onto every calorie because the next hunting season might not work out so well. We’d like to change that program now. We have a new technology that has just emerged in the last couple years called RNA interference, in which we put fragments of RNA inside the cell, as a drug, to inhibit selected genes. It can actually turn genes off by blocking the messenger RNA expressing that gene. When the fat insulin receptor was turned off in mice, the mice ate ravenously and remained slim. They didn’t get diabetes, didn’t get heart disease, lived 20% longer: they got the benefit of caloric restriction without the restriction.
Every major disease, and every major aging process has different genes that are used in the expression of these disease and aging processes. Being able to actually select when we turn them off is one powerful methodology. We also have the ability to turn enzymes off. Torcetrapib, a drug that’s now in FDA Phase 3 trials, turns off a key enzyme that destroys the good cholesterol, HDL, in the blood. If you inhibit that enzyme, HDL levels soar and atherosclerosis slows down or stops.
There are thousands of these developments in the pipeline. The new paradigm of rational drug design involves actually understanding the information processes underlying biology, the exact sequence of steps that leads up to a process like atherosclerosis, which causes heart attacks, or cancer, or insulin resistance, and providing very precise tools to intervene. Our ability to do this is also growing at an escalating rate.
Another exponential process is miniaturization. We’re showing the feasibility of actually constructing things at the molecular level that can perform useful functions. One of the biggest applications of this, again, will be in biology, where we will be able to go inside the human body and go beyond the limitations of biology.
Rob Freitas has designed a nanorobotic red blood cell, which is a relatively simple device; it just stores oxygen and lets it out. A conservative analysis of these robotic respirocytes shows that if you were to replace ten percent of your red blood cells with these robotic versions, you could do an Olympic sprint for 15 minutes without taking a breath, or sit at the bottom of your pool for four hours. It will be interesting to see what we do with these in our Olympic contests. Presumably we’ll ban them, but then we’ll have the specter of high school students routinely outperforming the Olympic athletes.
A robotic white blood cell is also being designed. A little more complicated, it downloads software from the Internet to combat specific pathogens. If it sounds very futuristic to download information to a device inside your body to perform a health function, I’ll point out that we’re already doing that. There are about a dozen neural implants either FDA-approved or approved for human testing. One implant that is FDA-approved for actual clinical use replaces the biological neurons destroyed by Parkinson’s disease. The neurons in the vicinity of this implant then receive signals from the computer that’s inside the patient’s brain. This hybrid of biological and non-biological intelligence works perfectly well. The latest version of this device allows the patient to download new software to the neural implant in his brain from outside his body.
These are devices that today require surgery to be implanted, but when we get to the 2020s, we will ultimately have the “killer app” of nanotechnology, nanobots, which are blood cell-sized devices that can go inside the body and brain to perform therapeutic functions, as well as advance the capabilities of our bodies and brains.
If that sounds futuristic, I’ll point out that we already have blood cell-size devices that are nano-engineered, working to perform therapeutic functions in animals. For example, one scientist cured type I diabetes in rats with this type of nanoengineered device. And some of these are now approaching human trials. The 2020s really will be the “golden era” of nanotechnology.
It is a mainstream view now among informed observers that by the 2020s we will have sufficient computer processing to emulate the human brain. The current controversy, or I would say, the more interesting question is, will we have the software or methods of human intelligence? To achieve the methods, the algorithms of human intelligence, there is underway a grand project to reverse-engineer the brain. And there, not surprisingly, we are also making exponential progress. If you follow the trends in reverse brain engineering it’s a reasonable conclusion that we will have reverse-engineered the several hundred regions of the brain by the 2020s.
By early in the next decade, computers won’t look like today’s notebooks and PDAs; they will disappear, integrated into our clothing and environment. Images will be written to our retinas by our eyeglasses and contact lenses, and we’ll have full-immersion virtual reality. We’ll be interacting with virtual personalities; we can see early harbingers of this already. We’ll have effective language translation.
If we go out to 2029, there will be many turns of the screw in terms of this exponential progression of information technology. There will be about thirty doublings in the next 25 years. That’s a factor of a billion in capacity and price performance over today’s technology, which is already quite formidable.
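The "factor of a billion" is just the compounding of those doublings; a quick check of the arithmetic, as a sketch:

    # Thirty doublings compound to roughly a billionfold improvement.
    doublings = 30
    factor = 2 ** doublings
    print(f"2**{doublings} = {factor:,}")                             # 1,073,741,824
    print(f"doublings per year over 25 years: {doublings / 25:.2f}")  # 1.20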
By 2029, we will have completed reverse engineering of the brain; we will understand how human intelligence works, and that will give us new insight into ourselves. Non-biological intelligence will combine the suppleness and subtlety of our pattern-recognition capabilities with ways in which computers have already demonstrated their superiority. Every time you use Google you can see the power of non-biological intelligence. Machines can remember things very accurately. They can share their knowledge instantly. We can share our knowledge, too, but at the slow bandwidth of language.
This will not be an alien invasion of intelligent machines coming from over the horizon to compete with us, it’s emerging from within our civilization, it’s extending the power of our civilization. Even today we routinely do intellectual feats that would be impossible without our technology. In fact our whole economic infrastructure couldn’t manage without the intelligent software that’s underlying it.
The most interesting application of computerized nanobots will be to interact with our biological neurons. We’ve already shown the feasibility of using electronics and biological neurons to interact non-invasively. We could have billions of nanobots inside the capillaries of our brains, non-invasively, widely distributed, expanding human intelligence, or providing full immersion virtual reality encompassing all of the senses from within the nervous system. Right now we have a hundred trillion connections. Although there’s a certain amount of plasticity, biological intelligence is essentially fixed. Non-biological intelligence is growing exponentially; the crossover point will be in the 2020s. When we get to the 2030s and 2040s, it will be the non-biological portion of our civilization that will be predominant. But it will still be an expression of human civilization.
Every time we have technological gains we make gains in life expectancy. Sanitation was a big one; antibiotics were another. We’re now in the beginning phases of this biotechnology revolution. We’re exploring, understanding, and gaining the tools to reprogram the information processes underlying biology, and that will result in another big gain in life expectancy. So, if you watch your health today, the old-fashioned way, you can actually live to see the remarkable 21st century.
© 2006 Ray Kurzweil. Reprinted with permission.
Ray Kurzweil is a computer scientist, software developer, inventor, entrepreneur, philosopher, and a leading proponent of radical life extension. He is the coauthor (with Terry Grossman, M.D.) of Fantastic Voyage: Live Long Enough to Live Forever, which is one of the most intriguing and exciting books on life extension around. Kurzweil and Grossman's approach to health and longevity combines the most current and practical medical knowledge with a soundly based yet awe-inspiring visionary perspective of what's to come.
Kurzweil's philosophy is built upon the premise that we now have the knowledge to identify and correct the problems caused by most unhealthy genetic predispositions. By taking advantage of the opportunities afforded us by genomic testing, nutritional supplements, and lifestyle adjustments, we can live long enough to reap the benefits of advanced biotechnology and nanotechnology, which will ultimately allow us to conquer aging and live forever. At the heart of Kurzweil's optimistic philosophy is the notion that human knowledge is growing exponentially, not linearly, a fact that is rarely taken into account when people try to predict the rate of technological advance. Kurzweil predicts that at the current rate of knowledge expansion we'll have the technology to completely conquer aging within the next couple of decades.
I spoke with Ray on February 8, 2006. Ray speaks very precisely, and he chooses his words carefully. He presents his ideas with a lot of confidence, and I found his optimism to be contagious. We spoke about the importance of genomic testing, some of the common misleading ideas that people have about health, and how biotechnology and nanotechnology will radically affect our longevity in the future.
David: What inspired your interest in life extension?
Ray: Probably the first incident that got me on this path was my father's illness. This began when I was fifteen, and he died seven years later of heart disease, when I was twenty-two. He was fifty-eight. I'll actually be fifty-eight this Sunday. I sensed a dark cloud over my future, feeling like there was a good chance that I had inherited his disposition to heart disease. When I was thirty-five, I was diagnosed with Type 2 diabetes, and the conventional medical approach made it worse.
So I really approached the situation as an inventor, as a problem to be solved. I immersed myself in the scientific literature, and came up with an approach that allowed me to overcome my diabetes. My levels became totally normal, and in the course of this process I discovered that I did indeed have a disposition, for example, to high cholesterol. My cholesterol was 280 and I also got that down to around 130. That was twenty-two years ago.
I wrote a bestselling health book, which came out in 1993, about that experience and the program that I'd come up with. That's what really got me on this path of realizing that, if you're aggressive enough about reprogramming your biochemistry, you can find the ideas that help you overcome your genetic dispositions, because they're out there. They exist.
About seven years ago, after my book The Age of Spiritual Machines came out in 1999, I was at a Foresight Institute conference. I met Terry Grossman there, and we struck up a conversation about this subject: nutrition and health. I went to see him at his longevity clinic in Denver for an evaluation, and we built a friendship. We started exchanging emails about health issues, and that was 10,000 emails ago. We wrote this book Fantastic Voyage together, which really continues my quest. And he also has his own story about how he developed similar ideas, and how we collaborated.
There's really a lot of knowledge available right now, although previously it has not been packaged in the way that we did it. We have the knowledge to reprogram our biochemistry to overcome disease and aging processes. We can dramatically slow down aging, we can really overcome conditions such as atherosclerosis (which leads to almost all heart attacks and strokes) and diabetes, and we can substantially reduce the risk of cancer with today's knowledge. And, as you saw from the book, all of that is just what we call Bridge One. We're not saying that taking lots of supplements and changing your diet is going to enable you to live five hundred years. But it will enable Baby Boomers, like Dr. Grossman and myself and our contemporaries, to be in good shape ten or fifteen years from now, when we really will have the full flowering of the biotechnology revolution, which is Bridge Two.
Now, this gets into my whole theory of information technology. Biology has become an information technology. It didn't use to be. Biology used to be hit or miss. We'd just find something that happened to work. We didn't really understand why it worked, and, invariably, these tools, these drugs, had side effects. They were very crude tools. Drug development was called drug discovery, because we really weren't able to reprogram biology. That is now changing. Our understanding of biology, and our ability to manipulate it, is becoming an information technology. We're understanding the information processes that underlie disease processes, like atherosclerosis, and we're gaining the tools to reprogram those processes.
Drug development is now entering an era of rational drug design, rather than drug discovery. The important point to realize is that the progress is exponential, not linear. Invariably people, including sophisticated people, do not take that into consideration, and it makes all the difference in the world. The mainstream skeptics declared the fifteen-year genome project a failure after seven and a half years because only one percent of the project was done. The skeptics said, "I told you this wasn't going to work. Here you are halfway through the project and you've hardly done anything." But the progress was exponential, doubling every year, and the last seven doublings go from one percent to a hundred percent. So the project was done on time. It took fifteen years to sequence HIV. We sequenced the SARS virus in thirty-one days.
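To spell out the arithmetic behind that remark (an editorial gloss, not part of the interview): if one percent of the genome is sequenced and the completed fraction doubles every year, then seven further doublings give
\[ 1\% \times 2^{7} = 128\% > 100\%, \]
so the remaining ninety-nine percent of the project is finished within those last seven years.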
There are many other examples of that. We've gone from ten dollars to sequence one base pair in 1990 to a penny today. So ten or fifteen years from now it's going to be a very different landscape. We really will have very powerful interventions, in the form of rationally designed drugs that can precisely reprogram our biochemistry. We can do it to a large extent today with supplements and nutrition, but it takes a more extensive effort. We'll have much more powerful tools fifteen years from now, so I want to be in good shape at that time.
Most of my Baby Boomer contemporaries are completely oblivious of this perspective. They just assume that aging is part of the cycle of human life, and that at 65 or 70 you start slowing down. Then at eighty you're dead. So they're getting ready to retire, and they are really unaware that things are going to be very different ten or fifteen years from now. This insight really should motivate them to be aggressive about using today's knowledge. Of course all of this will lead to Bridge Three about twenty years from now, the nanotechnology revolution, where we can go beyond the limitations of biology. We'll have programmable nanobots that can keep us healthy from inside, and truly provide radical life extension.
So that's the genesis. My interest in life extension stems primarily from my having been diagnosed with Type 2 diabetes. I really consider the diabetes to be a blessing, because it prodded me to overcome it, and, in so doing, I realized that I didn't just have an approach for diabetes but a general attitude and approach to overcoming any health problem: we really can find the ideas and apply them to overcome the genetic dispositions that we have. There's a common wisdom that your genes are eighty percent of your health and longevity and lifestyle is only twenty percent. Well, that's true if you follow the general, watered-down guidelines that our health institutions put out. But if you follow the optimal guidelines that we talk about, you can really overcome almost any genetic disposition. We do have the knowledge to do that.
David: What do you think are some of the common misleading ideas that people have about health and longevity?
Ray: One thing that I just alluded to is the compromised recommendations from our health authorities. I just had a lengthy debate with the Joslin Diabetes Center, which is considered the world's leading diabetes treatment and research center. I'm on the board, and they've just come out with new nutritional guidelines, which are highly compromised. They're far from ideal, and they acknowledge that. They say, well, we have enough trouble getting people to follow these guidelines, let alone the stricter guidelines that you recommend. And my reply is, you have trouble getting people to follow your guidelines because they don't work. If people followed your guidelines very precisely they'd still have Type 2 diabetes. They'd still have to take harsh drugs or insulin.
If they follow my guidelines the situation is quite different. I've counseled many people about Type 2 diabetes, and Dr. Grossman has treated many people with it, and they come back with completely normal levels. Their symptoms are gone, and they don't have to take insulin or harsh drugs. They feel liberated, and that's extremely motivating. In many ways it's easier to make a stricter change. Dramatically reducing your high-glycemic-index carbs is actually easier than moderately reducing them, because if you moderately reduce them you don't get rid of the cravings for carbs. Carbs are addictive, and it's just like trying to cut down a little bit on cigarettes. It's actually easier to cut cigarettes out completely, and it's also easier to largely cut out high-glycemic-index starches and sugars, because the cravings go away and the program is much easier to follow. But, most importantly, it works, along with a few supplements and exercise, to overcome most cases of Type 2 diabetes.
However, this doesn't seem to be the attitude of our health authorities. The nutritional recommendations are consistently compromised. There's almost no understanding of the role of nutritional supplements, which can be very powerful. I take two hundred and fifty supplements a day, and I monitor my body regularly. I'm not just flying without instrumentation. Being an engineer, I like data, so I monitor fifty or sixty different blood levels every few months, and I'm constantly fine-tuning my program. All of my blood levels are ideal. My homocysteine level many years ago was eleven, but now it's five. My C-reactive protein is 0.1. My cholesterol is 130. My LDL is about 60, and my HDL, which was 28, is now close to sixty. And so on and so forth.
I've also taken biological aging tests, which measure things like tactile sensitivity, reaction time, memory, and decision-making speed. There are forty different tests, and you compare your scores to the medians for different populations at different ages. When I was forty I came out at about thirty-eight. Now I'm fifty-seven, at least for a few more days, and I come out at forty. So, according to these tests, I've only aged two years in the last seventeen years. Now, you can dispute the absolute validity of these biological aging tests. It's just a number, but it's evidence that this program is working.
David: Why do you think that genomic testing is important?
Ray: Our program is very much not one-size-fits-all. It's not a one-trick pony. We're not saying that if you lower your carbs, lower your fat, or eat a grapefruit a day then everything will be fine. In fact, our publisher initially had a problem with this, but they actually got behind it enthusiastically, because it fundamentally differs, as you know, from most health books, which really do have just one idea. We earnestly try to provide a comprehensive understanding of your biology and your body, which does have some complexity to it. Then we let people apply these principles to their own lives.
It is important to emphasize the issues that are of particular concern for you. We use an analogy of stepping backwards towards a cliff. It's much easier to change direction before you fall off the cliff. But, generally, medicine doesn't get involved until the eruption of clinical disease. Someone has a heart attack, or they develop clinical cancer, and that's very often akin to falling off a cliff. One third of first heart attacks are fatal, and another third cause permanent damage to the heart muscle.
It's much easier to catch these conditions beforehand. You don't just catch heart disease or cancer walking down the street one day. These conditions are many years or decades in the making, and you can see where you are in their progression. So it's very important to know thyself, to assess your own situation. Genetic testing is important because you can see what dispositions you have. If you have certain genes that dispose you to heart disease, or conversely to cancer or diabetes, then you would give a higher priority to managing those issues, and do more tests to see where you are in the progression of those conditions. Let's say you do a test and it says you have a genetic disposition to Type 2 diabetes. Then you should do a glucose-tolerance test. In fact, we describe a more sophisticated form of that in the book, where you measure insulin as well and can see if you have early stages of insulin resistance.
Perhaps you have metabolic syndrome, which a very substantial fraction of the population has. If you have these early harbingers of insulin resistance, they could lead to Type 2 diabetes, so obviously the priority of that issue will be greatly heightened. If you don't have that vulnerability then you don't have to be as concerned about insulin resistance, and so on. But if you do have insulin resistance, or you have a high level of atherosclerosis, then it really behooves you to take important steps to get these dangerous conditions under control, which you can do. So genomic testing is not something you do by itself. It's part of a comprehensive assessment program to know your own body: not only what you're predisposed to, but what your body has already developed in terms of early versions of these degenerative conditions.
David: What are some of the most important nutritional supplements that you would recommend to help prevent cancer and cardiovascular disease?
Ray: We spell all that out in the book. Coenzyme Q10 is important. It never ceases to amaze me that physicians do not tell their patients to take coenzyme Q10 when they prescribe Statin drugs, because it's well known that Statin drugs deplete the body of coenzyme Q10, and a lot of the side effects that people suffer from Statin drugs, such as muscle weakness, are due to this depletion of coenzyme Q10. In any event, that's an important supplement. It is involved in energy generation within the mitochondria of each cell. Disruption of the mitochondria is an important aging process, and this supplement helps slow that down. Coenzyme Q10 has a number of protective effects, including lowering blood pressure, helping to control free-radical damage, and protecting the heart.
A lot of recent research shows that Curcumin, which is derived from the spice turmeric, has important anti-inflammatory properties and can protect against cancer, heart disease, and even Alzheimer's disease.
Alpha-lipoic acid is an important antioxidant that is both water- and fat-soluble. It can neutralize harmful free radicals, improve insulin sensitivity, and slow down the formation of advanced glycation end-products (AGEs), which is another key aging process.
Each of the vitamins is important and plays a key role. Vitamin C is generally protective as a premier antioxidant. It appears to be particularly effective in preventing the early stages of atherosclerosis, namely the oxidation of LDL cholesterol.
In terms of vitamin E, there's been a lot of negative publicity about it, but if you look carefully at that research you'll see that all of those studies were done with alpha-tocopherol, and vitamin E is really a blend of eight different substances: four tocopherols and four tocotrienols. Alpha-tocopherol actually depletes levels of gamma-tocopherol, and gamma-tocopherol is the form of vitamin E that's found naturally in food, and a particularly important one. So we recommend that people take a blend of the fractions of vitamin E, and that they get enough gamma-tocopherol.
There are a number of others that are important to take in general. If you have high cholesterol, Policosanol is one supplement that is quite effective, and it has an action independent of the Statin drugs. Statin drugs actually are quite good. They appear to be anti-inflammatory, so they not only lower cholesterol but also attack the inflammatory processes that underlie many diseases, including atherosclerosis. But, as I mentioned, it's important to take coenzyme Q10 if you're taking Statin drugs.
There are others. Grape seed proanthocyanidin extract has been found to be another effective antioxidant. Resveratrol is another. We have an extensive discussion of the most important supplements in the book.
David: What sort of suggestions would you make to someone who is looking to improve their memory or cognitive performance?
Ray: Vinpocetine, derived from the periwinkle plant, seems to have the best research behind it. It improves cerebral blood flow, increases brain cell ATP (energy) production, and enables better utilization of glucose and oxygen in the brain.
Other supplements that appear to be important for brain health include Phosphatidylserine, Acetyl-L-Carnitine, Pregnenolone, and EPA/DHA. The research appears a bit mixed on Ginkgo Biloba, but we're not ready to give up on it.
We provide a discussion in the book of a number of smart nutrients that appear to improve brain health. There are also a number of smart drugs being developed, some of which are already in the testing pipeline, that appear to be quite promising.
David: What do you think are the primary causes of aging?
Ray: Aging is not one thing. There are a number of different processes involved, and you can adopt programs that slow down each of them. For example, one process involves the depletion of phosphatidylcholine in the cell membrane. In young people the cell membrane is about sixty or seventy percent phosphatidylcholine, and the cell membrane functions very well then, letting nutrients in and letting toxins out.
The body makes phosphatidylcholine, but very slowly, so over the decades the phosphatidylcholine in the cell membrane depletes, and the membrane gets filled in with inert substances, like hard fats and cholesterol, that basically don't work. This is one reason that cells become brittle with age. The skin of an elderly person loses its suppleness. The organs stop functioning efficiently. So it's actually a very important aging process, and you can reverse it by supplementing with phosphatidylcholine. If you really want to do it effectively you can take phosphatidylcholine intravenously, as I do. Every week I have an IV with phosphatidylcholine. I also take it orally every day. So that's one aging process we can stop today.
Another important aging process involves oxidation through positively-charged oxygen free radicals, which will steal electrons from cells, disrupting normal enzymatic processes. There are a number of different types of antioxidants that you can take to slow down that process, including vitamin C. You could take vitamin C intravenously to boost that process.
Advanced glycation end-products, or AGEs, are involved in another aging process. This is where proteins develop cross-links with each other, thereby disrupting their function. There are supplements that you can take, such as alpha-lipoic acid, that slow that down. There is an experimental drug called ALT-711 (phenacyldimethylthiazolium chloride) that can dissolve the AGE cross-links without damaging the original tissues.
Atherosclerosis is an aging process, and it's not just taking place in the coronary arteries, of course. It can take place in the cerebral arteries, where it ultimately causes strokes, but it also takes place in arteries all throughout the body. It can lead to impotence and claudication of the legs and limbs, and, like most of these processes, it's not linear but exponential, in that it grows by a certain percentage each year.
So that's why the process of atherosclerosis hardly seems to progress for a long time, but then, when it gets to a certain point, it can really explode and develop very quickly. We have an extensive program for reducing atherosclerosis, which is both an aging process and a disease process. We cite a number of important supplements that reduce cholesterol and inflammation, such as the omega-3 fats EPA and DHA, as well as the Statin drugs. Supplements like Curcumin [turmeric] are helpful. Supplements that reduce inflammation will reduce both cancer and the inflammatory processes that lead to atherosclerosis. There are a number of supplements that reduce homocysteine, which appears to encourage atherosclerosis. These include folic acid, vitamins B2, B6, and B12, magnesium, and trimethylglycine (TMG).
So you can attack atherosclerosis five or six different ways, and we recommend that you do them all, so long as there aren't contraindications for combining treatments. But generally these treatments are independent of each other. If you go to war, you don't just send in the helicopters. You send in the helicopters, the tanks, the planes, and the infantry. You use your intelligence resources and attack the enemy every way that you can, with all of your resources. And that's really what you need to do with these conditions, because they represent very threatening processes. If you are sufficiently proactive, you can generally get them under control.
David: What are some of the new anti-aging treatments that you foresee coming along in the near future, like from stem cell research and therapeutic cloning?
Ray: It depends on what you mean by near future, because in ten or fifteen years we foresee a fundamentally transformed landscape.
David: Let's just say prior to nanotechnology, and then that will be the next question.
Ray: The next frontier is biotechnology. We're really now entering an era where we can reprogram biology. We've sequenced the genome, and we are now reverse-engineering it. We're understanding the roles that the genes play, how they express themselves in proteins, and how these proteins then play roles in sequences of biochemical steps that lead to orderly processes as well as to dysfunction (disease processes such as atherosclerosis and cancer), and we are gaining the means to reprogram those processes.
For example, we can now turn genes off with RNA interference. This is a new technique that emerged just a few years ago: a medication made of little pieces of RNA that latch on to the messenger RNA expressing a targeted gene and destroy it, thereby preventing the gene from expressing itself. This effectively turns the gene off. So right away that methodology has lots of applications.
Take the fat insulin receptor gene. That gene basically says hold on to every calorie, because the next hunting season may not work out so well. That was a good strategy, not only for humans but for most species, thousands of years ago. It's still probably a good strategy for animals living in the wild. But we're not animals living in the wild. It was good for humans a thousand years ago, when calories were few and far between. Today it underlies an epidemic of obesity. How about turning that gene off in the fat cells? What would happen?
That was actually tried in mice, and these mice ate ravenously yet remained slim. They got the health benefits of being slim. They didn't get diabetes. They didn't get heart disease. They lived twenty percent longer. They got the benefits of caloric restriction while doing the opposite. So turning off the fat insulin receptor gene in fat cells is the idea. You don't want to turn it off in muscle cells, for example. This is one methodology that could enable us to prevent obesity, and actually maintain an optimal weight no matter what we eat. So that's one application of RNA interference.
There are a number of genes that have been identified that promote atherosclerosis, cancer, diabetes, and many other diseases. We'd like to selectively turn those genes off and slow down or stop these disease processes. There are certain genes that appear to have an influence on the rate of aging. We can similarly amplify the expression of genes, and we can actually add new genetic information; that's gene therapy. Gene therapy has had problems in the past, because we've had difficulty putting the genetic information in the right place on the right chromosome. There are new techniques now that enable us to do that correctly.
For example, you can take a cell out of the body and insert the genetic information in vitro, which is much easier to do in a Petri dish, and examine whether or not the insertion went as intended. If it ended up in the wrong place you discard it. You keep doing this until you get it right. You can examine the cell and make sure that it doesn't have any DNA errors. So then you take this modified cell, which has also been certified as being free of DNA errors, and replicate it in the Petri dish, so that hundreds of millions of copies of it are created. Then you inject these cells back into the patient, and they will work their way into the right tissues. A lung cell is not going to end up in the liver.
In fact, this was tried by a company I'm involved with, United Therapeutics. I advise them and I'm on their board. They tried this with a fatal lung disease called pulmonary hypertension, and these modified cells ended up in the right place, in the lungs, and actually cured pulmonary hypertension in animal tests. It has now been approved for human trials. That's just one example of many of being able to actually add new genes. So we'll be able to subtract genes, over-express certain genes, under-express genes, and add new genes.
Another methodology is cell transdifferentiation, a broader concept than just stem cells. One of the problems with stem cell research, or stem cell approaches, is this. If I want to grow a new heart, or maybe add new heart cells because my heart has been damaged, or if I need new pancreatic islet cells because my pancreatic islet cells are destroyed, or need some other type of cells, I'd like them to have my DNA. The ultimate stem cell promise, the holy grail of these cell therapies, is to take my own skin cells and reprogram them to be a different kind of cell. How do you do that? Actually, all cells have the same DNA. What's the difference between a heart cell and a pancreatic islet cell?
Well, there are certain proteins, short RNA fragments, and peptides that control gene expression. They tell the heart cells that only those genes which should be expressed in a heart cell are expressed. And we're learning how to manipulate which genes are expressed. By adding certain proteins to the cell we can reprogram a skin cell to be a heart cell or a pancreatic islet cell. This has been demonstrated in just the last couple of years. So then we can create in a Petri dish as many heart cells or pancreatic islet cells as I need, with my own DNA, because they're derived from my cells. Then we inject them, and they'll work their way into the right tissues. In the process we can discard cells that have DNA errors, so we can basically replenish our cells with DNA-corrected cells.
While we are at it, we can also extend the telomeres. That's another aging process. As cells replicate, these little repeating codes of DNA called telomeres grow shorter. They're like little beads at the end of the DNA strands. One falls off every time the cell replicates, and there are only about fifty of them. So after a certain number of replications the cell can't replicate anymore. There is actually one enzyme that controls this, telomerase, which is capable of extending the telomeres. Cancer cells actually work by creating telomerase, which enables them to replicate without end. Cancer cells become immortal because they can create telomerase.
As we're rejuvenating our cells, turning a skin cell into the kind of cell that I need and making sure that its DNA is corrected, we can also extend its telomeres by using telomerase in the Petri dish. Then you've got this new cell that's just like my heart cells were when I was twenty. Now you can replicate that, and then inject it, and really rejuvenate all of the body's tissues with young versions of my cells. So that's cell rejuvenation. That's one idea, or one technique, and there are many different variations of it.
Then there's turning enzymes on and off. Enzymes are the workhorses of biology. Genes express themselves as enzymes, and the enzymes actually go and do the work. And we can add enzymes. We can turn enzymes off. One example of that is Torcetrapib, which destroys one enzyme, and that enzyme destroys HDL, the good cholesterol in the blood. So when people take Torcetrapib their HDL (good cholesterol) levels soar, and atherosclerosis dramatically slows down or stops. The phase 2 trials were very encouraging, and Pfizer is spending a record one billion dollars on the phase 3 trials. That's just one example of many of this paradigm: manipulating enzymes. So there are many different ideas for getting in and very precisely reprogramming the information processes that underlie biology, to undercut disease processes and aging processes and move them towards healthy, rejuvenated processes.
David: How do you see robotics, artificial intelligence, and nanotechnology affecting human health and life span in the future?
Ray: I mentioned that we talk about three bridges to radical life extension in Fantastic Voyage. Bridge One is aggressively applying today's knowledge, and that's, of course, a moving frontier, as we learn and gain more and more knowledge. In Chapter 10 of Fantastic Voyage I talk about my program, and at the end I mention that one part of my program is what I call a positive health slope, which means that my program is not fixed.
I spend a certain amount of time every week studying a number of things: new research, new drug developments that are coming out, new information about myself that may come from testing. Just reading the literature I might discover something that's in fact old knowledge, but there's so much information out there that I haven't read everything. So I'm constantly learning more about health and medicine and my own body, and modifying my own program. I probably make some small change every week. That doesn't mean my program is unstable. My program is quite stable, but I'm fine-tuning at the edges quite frequently.
Bridge Two is what we've just been talking about, the biotechnology revolution. A very important insight that really changes one's perspective is to understand that progress is exponential, not linear. So many sophisticated scientists fail to take this into consideration. They just assume that progress is going to continue at the current pace, and they make this mistake over and over again. If you consider the exponential pace of this process, ten or fifteen years from now we will have really dramatic tools, in the form of medications and cell therapies, that can reprogram our health within the domain of biology.
Bridge Three is nanotechnology. The golden era will be about twenty years from now. There'll be some applications earlier, but the real Holy Grail of nanotechnology is nanobots: blood-cell-size devices that can go inside the body and keep us healthy from inside. If that sounds very futuristic, I'd actually point out that we're already doing sophisticated tasks with blood-cell-size devices in animal experiments.
One scientist cured Type 1 diabetes in rats with a nano-engineered capsule that has seven-nanometer pores. It lets insulin out in a controlled fashion and blocks antibodies. And that's what is feasible today. MIT has a project for a nano-engineered device that is actually smaller than a cell and is capable of detecting the antigens that exist only on certain types of cancer cells. When it detects these antigens it latches onto the cell and burrows inside it. Once it detects that it is inside the cell, it releases a toxin which destroys the cancer cell. This has actually worked in the Petri dish, which is quite significant, because there's actually not that much that would be different in vivo versus in vitro.
This is a rather sophisticated device, because it goes through these several different stages and can do all of these different steps. It's a nano-engineered device in that it is created at the molecular level. So that's what is feasible already. If you consider what I call the Law of Accelerating Returns, which is a doubling of the power of these information technologies every year, within twenty-five years these computation and communication technologies, and our understanding of biology, will be a billion times more advanced than they are today. We're shrinking technology, according to our models, by a factor of over a hundred in 3-D volume per decade.
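As a rough consistency check (editorial arithmetic, not part of the interview): a hundredfold reduction in volume per decade, sustained over two and a half decades, compounds to
\[ 100^{2.5} = 10^{5}, \]
which matches the factor of a hundred thousand in size cited in the next sentence.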
So these technologies will be a hundred thousand times smaller than they are today in twenty-five years, and a billion times more powerful. And look at what we can already do experimentally today. Twenty-five years from now these nanobots will be quite sophisticated. They'll have computers in them. They'll have communication devices. They'll have small mechanical systems. They'll really be little robots, and they'll be able to go inside the body and keep us healthy from inside. They will be able to augment the immune system by destroying pathogens. They will repair DNA errors, remove debris, and reverse atherosclerosis. Whatever we don't get around to finishing with biotechnology, we'll be able to finish the job with these nano-engineered, blood-cell-sized robots, or nanobots.
This really will provide radical life extension. The basic metaphor or analogy to keep in mind is to ask the question: how long does a house last? Aubrey de Grey uses this metaphor. The answer is, a house lasts as long as you want it to. If you don't take care of it, the house won't last that long. It will fall apart. The roof will spring a leak and the house will quickly decay. On the other hand, if you're diligent, and something goes wrong in the house, you fix it. Periodically you upgrade the technology. You put in a new HVAC system and so forth. With this approach, the house will go on indefinitely, and we do have houses, in fact, that are thousands of years old. So why doesn't this apply to the human body?
The answer is that we understand how a house works. We understand how to fix a house. We understand all the problems a house can have, because we've designed them. We don't yet have the knowledge and the tools to do a comparable job with our body. We don't understand all the things that could go wrong, and we don't have all the fixes for everything. But we will have this knowledge and these tools. We will have complete models of biology. We'll have reverse-engineered biology within twenty years, and we'll have the means to go in and repair all of the problems we have identified.
We'll be able to indefinitely fix the things that go wrong. We'll have nanobots that can go in and proactively keep us healthy at a cellular level, without waiting until major diseases flare up, as well as stop and reverse aging processes. We'll get to a point where people will not age. So when we talk about radical life extension, we're not talking about people growing old, becoming what we think of today as a 95-year-old, and then staying at a biological age of 95 for hundreds of years.
We're talking about people staying young and not aging. Actually, I'm talking about even more than that, because in addition to radical life extension, we'll also have radical life expansion. The nanobots will be able to go inside the brain and extend our mental functioning by interacting with our biological neurons. Today we already have computers that are placed inside people's brains to replace diseased parts of the brain, like the neural implant for Parkinson's disease. The latest generation of that implant allows you to download new software to the neural implant from outside the patient; and that's not an experiment, that's an FDA-approved therapy.
Today these neural implants require surgery, but ultimately we'll be able to send these brain extenders into the nervous system noninvasively, through the capillaries of the brain, without surgery. And we'll be using them not just to replace diseased tissue but to go beyond our current abilities: to extend our memories, extend our pattern-recognition and cognitive capabilities, and merge intimately with our technology. So we'll have radical life expansion along with radical life extension. That's my vision of what will happen in the next several decades.
David: What are you currently working on?
Ray: I spend maybe forty or fifty percent of my time communicating, in the form of books, articles, interviews, and speeches. I give several speeches a month. Then there's my Web site, KurzweilAI.net. We have a free daily or weekly newsletter; people can sign up by putting in their email address (which is kept in confidence) on the home page.
Then I have several businesses that I'm running, which are in the area of pattern recognition. I've been in the reading machine business for thirty-two years. I developed the first print-to-speech technology for the blind in 1976, and we're introducing a new version that fits in your pocket. A blind person can take it out of their pocket, snap a picture of a handout at a meeting, a sign on a wall, the back of a cereal box, or an electronic display, and the device will read it out loud to them through an earphone or speaker.
We're developing a new medical technology, which is basically a smart undershirt that monitors your health. There will be a smart bra version for women. It takes a complete morphology EKG and monitors your breathing. So, for example, if you're a heart patient it could tell you whether your atrial fibrillation is getting better or worse. When you're exercising it can tell you if you're getting into a problem situation. So it gives you diagnostic information. It can also alert you if you should contact your doctor. Basically your undershirt sends this information by Bluetooth to your cell phone, and your cell phone runs the cardiac evaluation software. So that's another project.
Then we have Ray and Terry's longevity products at RayandTerry.com, which goes along with Fantastic Voyage. We have about 20 products available now, and we'll have about fifty within a few months. Basically all the things we recommend in the book will be available. We also have combinations. So, for example, if you want to lower cholesterol we have a cholesterol-lowering product, so you don't have to buy the eight or nine different supplements separately. We put all of our recommendations together in one combination to make it easy for people to follow. There's a total daily care product that has basic nutritional supplements, like vitamins and minerals, and coenzyme Q10, and so on. We have a meal-replacement shake that is low in carbohydrates and has no sugar but actually tastes good, which is actually quite unusual, because if you've ever tasted a low-carb meal-replacement shake you know that in general the taste is not desirable. This might sound promotional, but that was the objective, and the shake is actually made up of the nutritional supplements that we recommend. So that's another company, and those are the companies that we're running.
© 2006 David Jay Brown. Reprinted with permission.
Goal 7 of the NASA Astrobiology Roadmap states: “Determine how to recognize signatures of life on other worlds and on early Earth. Identify biosignatures that can reveal and characterize past or present life in ancient samples from Earth, extraterrestrial samples measured in situ, samples returned to Earth, remotely measured planetary atmospheres and surfaces, and other cosmic phenomena.” The cryptic reference to “other cosmic phenomena” would appear to be broad enough to include the possible identification of biosignatures embedded in the dimensionless constants of physics. The existence of such a set of biosignatures, a life-friendly suite of physical constants, is a retrodiction of the Selfish Biocosm (SB) hypothesis. This hypothesis offers an alternative to the weak anthropic explanation of our indisputably life-friendly cosmos favored by (1) an emerging alliance of M-theory-inspired cosmologists and advocates of eternal inflation like Linde and Weinberg, and (2) supporters of the quantum theory-inspired sum-over-histories cosmological model offered by Hartle and Hawking. According to the SB hypothesis, the laws and constants of physics function as the cosmic equivalent of DNA, guiding a cosmologically extended evolutionary process and providing a blueprint for the replication of new life-friendly progeny universes.
The notion that we inhabit a universe whose laws and physical constants are fine-tuned in such a way as to make it hospitable to carbon-based life is an old idea (Gardner, 2003). The so-called “anthropic” principle comes in at least four principal versions (Barrow and Tipler, 1988) that represent fundamentally different ontological perspectives. For instance, the “weak anthropic principle” is merely a tautological statement that since we happen to inhabit this particular cosmos it must perforce be life-friendly, or else we would not be here to observe it. As Vilenkin put it recently (Vilenkin, 2004), “the ‘anthropic’ principle, as stated above, hardly deserves to be called a principle: it is trivially true.” By contrast, the “participatory anthropic principle” articulated by Wheeler and dubbed “it from bit” (Wheeler, 1996) is a radical extrapolation from the Copenhagen interpretation of quantum physics and a profoundly counterintuitive assertion that the very act of observing the universe summons it into existence.
All anthropic cosmological interpretations share a common theme: a recognition that key constants of physics (as well as other physical aspects of our cosmos, such as its dimensionality) appear to exhibit a mysterious fine-tuning that optimizes their collective bio-friendliness. Rees noted (Rees, 2000) that virtually every aspect of the evolution of the universe, from the birth of galaxies to the origin of life on Earth, is sensitively dependent on the precise values of seemingly arbitrary constants of nature like the strength of gravity, the number of extended spatial dimensions in our universe (three of the ten posited by M-theory), and the initial expansion speed of the cosmos following the Big Bang. If any of these physical constants had been even slightly different, life as we know it would have been impossible:
The [cosmological] picture that emerges (a map in time as well as in space) is not what most of us expected. It offers a new perspective on how a single “genesis event” created billions of galaxies, black holes, stars and planets, and how atoms have been assembled (here on Earth, and perhaps on other worlds) into living beings intricate enough to ponder their origins. There are deep connections between stars and atoms, between the cosmos and the microworld…. Our emergence and survival depend on very special “tuning” of the cosmos, a cosmos that may be even vaster than the universe that we can actually see.
As stated recently by Smolin (Smolin, 2004), the challenge is to provide a genuinely scientific explanation for what he terms the “anthropic observation”:
The anthropic observation: Our universe is much more complex than most universes with the same laws but different values of the parameters of those laws. In particular, it has a complex astrophysics, including galaxies and long lived stars, and a complex chemistry, including carbon chemistry. These necessary conditions for life are present in our universe as a consequence of the complexity which is made possible by the special values of the parameters.
There is good evidence that the anthropic observation is true. Why it is true is a puzzle that science must solve.
It is a daunting puzzle indeed. The strangely (and apparently arbitrarily) biophilic quality of the physical laws and constants poses, in Greene’s view, the deepest question in all of science (Greene, 2004). In the words of Davies (Gardner, 2003), it represents “the biggest of the Big Questions: why is the universe bio-friendly?”
Modern statements of the cosmological anthropic principle date from the publication of a landmark book by Henderson in 1913 entitled The Fitness of the Environment (Henderson, 1913). Henderson’s book was an extended reflection on the curious fact that there are particular substances present in the environment, preeminently water, whose peculiar qualities rendered the environment almost preternaturally suitable for the origin, maintenance, and evolution of organic life. Indeed, the strangely life-friendly qualities of these materials led Henderson to the view that “we were obliged to regard this collocation of properties in some intelligible sense a preparation for the process of planetary evolution…. Therefore the properties of the elements must for the present be regarded as possessing a teleological character.”
Thoroughly modern in outlook, Henderson dismissed this apparent evidence that inanimate nature exhibited a teleological character as indicative of divine design or purpose. Indeed, he rejected the notion that nature’s seemingly teleological quality was in any way inconsistent with Darwin’s theory of evolution through natural selection. On the contrary, he viewed the bio-friendly character of the inanimate natural environment as essential to the optimal operation of the evolutionary forces in the biosphere. Absent the substrate of a superbly “fit” inanimate environment, Henderson contended, Darwinian evolution could never have achieved what it has in terms of species multiplication and diversification.
The mystery of why the physical qualities of the inanimate universe happened to be so oddly conducive to life and biological evolution remained just that for Henderson: an impenetrable mystery. The best he could do to solve the puzzle was to speculate that the laws of chemistry were somehow fine-tuned in advance by some unknown cosmic evolutionary mechanism to meet the future needs of a living biosphere:
The properties of matter and the course of cosmic evolution are now seen to be intimately related to the structure of the living being and to its activities; they become, therefore, far more important in biology than has previously been suspected. For the whole evolutionary process, both cosmic and organic, is one, and the biologist may now rightly regard the Universe in its very essence as biocentric.
Henderson’s iconoclastic vision was far ahead of its time. His potentially revolutionary book was largely ignored by his contemporaries or dismissed as a mere tautology. Of course there should be a close match-up between the physical requirements of life and the physical world that life inhabits, contemporary skeptics pointed out, since life evolved to survive the very challenges presented by that pre-organic world and to take advantage of the biochemical opportunities it offered.
While lacking broad influence at the time, Henderson’s pioneering vision proved to be the precursor to modern formulations of the cosmological anthropic principle. One of the first such formulations was offered by British astronomer Fred Hoyle. A storied chapter in the history of the principle is the oft-told tale of Hoyle’s prediction of the details of the triple-alpha process (Mitton 2005). This prediction, which seems to qualify as the first falsifiable implication to flow from an anthropic hypothesis, involves the details of the process by which the element carbon (widely viewed as the essential element of abiotic precursor polymers capable of autocatalyzing the emergence of living entities) emerges through stellar nucleosynthesis. As noted by Livio (Livio, 2003):
Carbon features in most anthropic arguments. In particular, it is often argued that the existence of an excited state of the carbon nucleus is a manifestation of fine-tuning of the constants of nature that allowed for the appearance of carbon-based life. Carbon is formed through the triple-alpha process in two steps. In the first, two alpha particles form an unstable (lifetime ~10⁻¹⁶ s) ⁸Be. In the second, a third alpha particle is captured, via ⁸Be(α,γ)¹²C. Hoyle argued that in order for the 3α reaction to proceed at a rate sufficient to produce the observed cosmic carbon, a resonant level must exist in ¹²C, a few hundred keV above the ⁸Be+⁴He threshold. Such a level was indeed found experimentally.
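For readers who want the two steps written out explicitly, the reactions Livio describes are, in standard notation (added here for clarity, not part of the quoted passage):
\[
\alpha + \alpha \;\rightleftharpoons\; {}^{8}\mathrm{Be}, \qquad {}^{8}\mathrm{Be} + \alpha \;\rightarrow\; {}^{12}\mathrm{C}^{*} \;\rightarrow\; {}^{12}\mathrm{C} + \gamma,
\]
where ¹²C* denotes the resonant excited state of carbon whose existence Hoyle predicted.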
Other chapters in the modern history of the anthropic principle are treated comprehensively by Barrow and Tipler (Barrow and Tipler, 1988) and will not be revisited here.
Two recent developments have imparted a renewed sense of urgency to investigations of the anthropic qualities of our cosmos. The first is the discovery that the value of dark energy density is exceedingly small but not quite zero: an apparent happenstance, unpredictable from first principles, with profound implications for the bio-friendly quality of our universe. As noted recently by Goldsmith (Goldsmith, 2004):
A relatively straightforward calculation [based on established principles of theoretical physics] does yield a theoretical value for the cosmological constant, but that value is greater than the measured one by a factor of about 10¹²⁰, probably the largest discrepancy between theory and observation science has ever had to bear.
If the cosmological constant had a smaller value than that suggested by recent observations, it would cause no trouble (just as one would expect, remembering the happy days when the constant was thought to be zero). But if the constant were a few times larger than it is now, the universe would have expanded so rapidly that galaxies could not have endured for the billions of years necessary to bring forth complex forms of life.
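A rough sketch of where that factor comes from (an editorial gloss using standard order-of-magnitude figures, not drawn from Goldsmith's text): naive quantum field theory puts the vacuum energy density near the Planck scale, roughly 10²⁸ eV, while the observed dark energy density corresponds to an energy scale of roughly 10⁻³ eV; since energy density scales as the fourth power of the energy scale, the ratio is about
\[
\left(\frac{10^{28}\ \mathrm{eV}}{10^{-3}\ \mathrm{eV}}\right)^{4} \sim 10^{124},
\]
within a few orders of magnitude of the 10¹²⁰ discrepancy quoted above.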
The second development is the realization that M-theory (arguably the most promising contemporary candidate for a theory capable of yielding a deep synthesis of relativity and quantum physics) permits, in Bjorken’s phrase (Bjorken, 2004), “a variety of string vacua, with different standard-model properties.”
M-theorists had initially hoped that their new paradigm would be “brittle” in the sense of yielding a single mathematically unavoidable solution that uniquely explained the seemingly arbitrary parameters of the Standard Model. As Susskind has put it (Susskind, 2003):
The world-view shared by most physicists is that the laws of nature are uniquely described by some special action principle that completely determines the vacuum, the spectrum of elementary particles, the forces and the symmetries. Experience with quantum electrodynamics and quantum chromodynamics suggests a world with a small number of parameters and a unique ground state. For the most part, string theorists bought into this paradigm. At first it was hoped that string theory would be unique and explain the various parameters that quantum field theory left unexplained.
This hope has been dashed by the recent discovery that the number of different solutions permitted by M-theory (which correspond to different values of Standard Model parameters) is, in Susskind’s words, “astronomical, measured not in millions or billions but in googols or googolplexes.” This development seems to deprive our most promising new theory of fundamental physics of the power to uniquely predict the emergence of anything remotely resembling our universe. As Susskind puts it, the picture of the universe that is emerging from the deep mathematical recesses of M-theory is not an “elegant universe” but rather a Rube Goldberg device, cobbled together by some unknown process in a supremely improbable manner that just happens to render the whole ensemble fit for life. In the words of University of California theoretical physicist Steve Giddings, “No longer can we follow the dream of discovering the unique equations that predict everything we see, and writing them on a single page. Predicting the constants of nature becomes a messy environmental problem. It has the complications of biology.”[1]
There have been two principal approaches to the task of enlisting the weak anthropic principle to explain the mysteriously small (and thus bio-friendly) value of the density of dark energy and the apparent happenstance by which our bio-friendly universe was selected from the enormously large “landscape” of possible solutions permitted by M-theory, only a tiny fraction of which correspond to anything resembling the Standard Model prevalent in our cosmos.
Eternal Inflation Meets M-Theory
The first approach, favored by Susskind (Susskind, 2003), Linde (Linde, 2002), Weinberg (Weinberg, 1999), and Vilenkin (Vilenkin, 2004), among others, overlays the model of eternal inflation with the key assumption that M-theory-permitted solutions (corresponding to different values of Standard Model parameters) and dark energy density values will vary randomly from bubble universe to bubble universe within an eternally expanding ensemble variously termed a multiverse or a meta-universe. Generating a life-friendly cosmos is simply a matter of randomly reshuffling the set of permissible parameters and values a sufficient number of times until a particular Big Bang yields, against odds of perhaps a googolplex-to-one, a permutation that just happens to possess the right mix of Standard Model parameters to be bio-friendly.
Sum-Over-Histories Quantum Cosmological Model
The second approach invokes a quantum theory-derived sum-over-histories cosmological model inspired by Everett’s “many worlds” interpretation of quantum physics. This approach, which has been prominently embraced by Hawking (Hawking and Hertog, 2002), was summarized as follows by Hogan (Hogan, 2004):
In the original formulation of quantum mechanics, it was said that an observation collapsed a wavefunction to one of the eigenstates of the observed quantity. The modern view is that the cosmic wavefunction never collapses, but only appears to collapse from the point of view of observers who are part of the wavefunction. When Schrödinger’s cat lives or dies, the branch of the wavefunction with the dead cat also contains observers who are dealing with a dead cat, and the branch with the live cat also contains observers who are petting a live one.
Although this is sometimes called the “Many Worlds” interpretation of quantum mechanics, it is really about having just one world, one wavefunction, obeying the Schrödinger equation: the wavefunction evolves linearly from one time to the next based on its previous state.
Anthropic selection in this sense is built into physics at the most basic level of quantum mechanics. Selection of a wavefunction branch is what drives us into circumstances in which we thrive. Viewed from a disinterested perspective outside the universe, it looks like living beings swim like salmon up their favorite branches of the wavefunction, chasing their favorite places.
Hawking and Hertog (Hawking and Hertog, 2002) have explicitly characterized this “top down” cosmological model as a restatement of the weak anthropic principle:
We have argued that because our universe has a quantum origin, one must adopt a top down approach to the problem of initial conditions in cosmology, in which histories that contribute to the path integral, depend on the observable being measured. There is an amplitude for empty flat space, but it is not of much significance. Similarly, the other bubbles in an eternally inflating spacetime are irrelevant. They are to the future of our past light cone, so they don’t contribute to the action for observables and should be excised by Ockham’s razor. Therefore, the top down approach is a mathematical formulation of the weak anthropic principle. Instead of starting with a universe and asking what a typical observer would see, one specifies the amplitude of interest.
Apart from the objections on the part of those who oppose in principle any use of the anthropic principle in cosmology, there are at least three reasons why both the Hawking/Hogan and the Susskind/Linde/Weinberg restatements of the weak anthropic principle are objectionable.
First, both approaches appear to be resistant (at the very least) to experimental testing. Universes spawned by Big Bangs other than our own are inaccessible from our own universe, at least with the experimental techniques currently available to science. So too are quantum wavefunction branches that we cannot, in principle, observe. Accordingly, both approaches appear to be untestable, perhaps untestable in principle. For this reason, Smolin recently argued (Smolin, 2004) “not only is the Anthropic Principle not science, its role may be negative. To the extent that the Anthropic Principle is espoused to justify continued interest in unfalsifiable theories, it may play a destructive role in the progress of science.”
Second, both approaches violate the mediocrity principle. The mediocrity principle, a mainstay of scientific theorizing since Copernicus, is a statistically based rule of thumb that, absent contrary evidence, a particular sample (Earth, for instance, or our particular universe) should be assumed to be a typical example of the ensemble of which it is a part. The Susskind/Linde/Weinberg approach, in particular, flouts this principle. Their approach simply takes refuge in a brute, unfathomable mystery, the conjectured lucky roll of the dice in a crap game of eternal inflation, and declines to probe seriously into the possibility of a naturalistic cosmic evolutionary process that has the capacity to yield a life-friendly set of physical laws and constants on a nonrandom basis.
Third, both approaches extravagantly inflate the probabilistic resources required to explain the phenomenon of a life-friendly cosmos. (Think of a googolplex of monkeys typing away randomly until one of them, by pure chance, accidentally composes a set of equations that correspond to the Standard Model.) This should be a hint that something fundamental is being overlooked and that there may exist an unknown natural process, perhaps functionally akin in some manner to terrestrial evolution, capable of effecting the emergence and prolongation of physical states of nature that are, in the abstract, vanishingly improbable.
Hogan (Hogan, 2004) has analogized the quantum theory-inspired sum-over-histories version of the weak anthropic principle to Darwinian theory:
This blending of empirical cosmology and fundamental physics is reminiscent of our Darwinian understanding of the tree of life. The double helix, the four-base codon alphabet and the triplet genetic code for amino acids, any particular gene for a protein in a particular organism, all are frozen accidents of evolutionary history. It is futile to try to understand or explain these aspects of life, or indeed any relationships in biology, without referring to the way the history of life unfolded. In the same way that (in Dobzhansky’s phrase), “nothing in biology makes sense except in the light of evolution,” physics in these models only makes sense in the light of cosmology.
Ironically, Hogan misses the key point that neither the branching wavefunction nor the eternal inflation-plus-M-theory versions of the weak anthropic principle hypothesize the existence of anything corresponding to the main action principle of Darwin’s theory: natural selection. Both restatements of the weak anthropic principle are analogous, not to Darwin’s approach, but rather to a mythical alternative history in which Darwin, contemplating the storied tangled bank (the arresting visual image with which he concludes The Origin of Species), had confessed not a magnificent obsession with gaining an understanding of the mysterious natural processes that had yielded “endless forms most beautiful and most wonderful,” but rather a smug satisfaction that of course the earthly biosphere must have somehow evolved in a just-so manner mysteriously friendly to humans and other currently living species, or else Darwin and other humans would not be around to contemplate it.
Indeed, the situation that confronts cosmologists today is reminiscent of that which faced biologists before Darwin propounded his revolutionary theory of evolution through natural selection. Darwin confronted the seemingly miraculous phenomenon of a fine-tuned natural order in which every creature and plant appeared to occupy a unique and well-designed niche. Refusing to surrender to the brute mystery posed by the appearance of nature’s design, Darwin masterfully deployed the art of metaphor[2] to elucidate a radical hypothesis, the origin of species through natural selection, that explained the apparent miracle as a natural phenomenon.
A lesson drawn from Darwin’s experience is worth noting at this point. Answering the question of why the most eminent geologists and naturalists had, until shortly before publication of The Origin of Species, disbelieved in the mutability of species, Darwin responded that this false conclusion was “almost inevitable as long as the history of the world was thought to be of short duration.” It was geologist Charles Lyell’s speculations on the immense age of Earth that provided the essential conceptual framework for Darwin’s new theory. Lyell’s vastly expanded stretch of geological time provided an ample temporal arena in which the forces of natural selection could sculpt and reshape the species of Earth and achieve nearly limitless variation.
The central point for purposes of this paper is that collateral advances in sciences seemingly far removed from cosmology (complexity theory and evolutionary theory among them) can help dissipate the intellectual limitations imposed by common sense and naïve human intuition. And, in an uncanny reprise of the Lyell/Darwin intellectual synergy, it is a realization of the vastness of time and history that gives rise to the novel theoretical possibility to be discussed subsequently. Only in this instance, it is the vastness of future time and future history that is of crucial importance. In particular, sharp attention must be paid to the key conclusion of Wheeler: most of the time available for life and intelligence to achieve their ultimate capabilities lies in the distant cosmic future, not in the cosmic past. As Tipler (Tipler, 1994) has stated, “Almost all of space and time lies in the future. By focusing attention only on the past and present, science has ignored almost all of reality. Since the domain of scientific study is the whole of reality, it is about time science decided to study the future evolution of the universe.” The next section of this paper describes an attempt to heed these admonitions.
In a paper published in Complexity (Gardner, 2000), I first advanced the hypothesis that the anthropic qualities which our universe exhibits might be explained as incidental consequences of a cosmic replication cycle in which the emergence of a cosmologically extended biosphere could conceivably supply two of the logically essential elements of self-replication identified by von Neumann (von Neumann, 1948): a controller and a duplicating device. The hypothesis proposed in that paper was an attempt to extend and refine Smolin’s conjecture (Smolin, 1997) that the majority of the anthropic qualities of the universe can be explained as incidental consequences of a process of cosmological replication and natural selection (CNS) whose utility function is black hole maximization. Smolin’s conjecture differs crucially from the concept of eternal inflation advanced by Linde (Linde, 1998) in that it proposes a cosmological evolutionary process with a specific and discernible utility function: black hole maximization. It is this aspect of Smolin’s conjecture rather than the specific utility function he advocates that renders his theoretical approach genuinely novel.
As demonstrated previously (Rees, 1997; Baez, 1998), Smolin’s conjecture suffers from two evident defects: (1) the fundamental physical laws and constants do not, in fact, appear to be fine-tuned to favor black hole maximization and (2) no mechanism is proposed corresponding to two logically required elements of any von Neumann self-replicating automaton: a controller and a duplicator.[3] The latter are essential elements of any replicator system capable of Darwinian evolution, as noted by Dawkins (Gardner, 2000) in a critique of Smolin’s conjecture:
Note that any Darwinian theory depends on the prior existence of the strong phenomenon of heredity. There have to be self-replicating entities (in a population of such entities) that spawn daughter entities more like themselves than the general population.
Theories of cosmological eschatology previously articulated (Kurzweil, 1999; Wheeler, 1996; Dyson, 1988) predict that the ongoing process of biological and technological evolution is sufficiently robust and unbounded that, in the far distant future, a cosmologically extended biosphere could conceivably exert a global influence on the physical state of the cosmos. A related set of insights from complexity theory (Gardner, 2000) indicates that the process of emergence resulting from such evolution is essentially unbounded.
A synthesis of these two sets of insights yielded the two key elements of the Selfish Biocosm (SB) hypothesis. The essence of that synthesis is that the ongoing process of biological and technological evolution and emergence could conceivably function as a von Neumann controller and that a cosmologically extended biosphere could, in the very distant future, function as a von Neumann duplicator in a hypothesized process of cosmological replication.
In a paper published in Acta Astronautica (Gardner, 2001) I suggested that a falsifiable implication of the SB hypothesis is that the progression of the cosmos through critical epigenetic thresholds in its life cycle, while perhaps not strictly inevitable, is relatively robust. One such critical threshold is the emergence of human-level and higher intelligence, which is essential to the eventual scaling up of biological and technological processes to the stage at which those processes could conceivably exert a global influence on the state of the cosmos. Four specific tests of the robustness of the emergence of human-level and higher intelligence were proposed.
In a subsequent paper published in the Journal of the British Interplanetary Society (Gardner, 2002) I proposed that an additional falsifiable implication of the SB hypothesis is that there exists a plausible final state of the cosmos that exhibits maximal computational potential. This predicted final state appeared to be consistent with both the modified ekpyrotic cyclic universe scenario (Khoury, Ovrut, Seiberg, Steinhardt, and Turok, 2001; Steinhardt and Turok, 2001) and with Lloyd’s description (Lloyd, 2000) of the physical attributes of the ultimate computational device: a computer as powerful as the laws of physics will allow.
The central assertions of the SB hypothesis are: (1) that highly evolved life and intelligence play an essential role in a hypothesized process of cosmic replication and (2) that the peculiarly life-friendly laws and physical constants that prevail in our universe, an extraordinarily improbable ensemble that Pagels dubbed the cosmic code (Pagels, 1983), play a cosmological role functionally equivalent to that of DNA in an earthly organism: they provide a recipe for cosmic ontogeny and a blueprint for cosmic reproduction. Thus, a key retrodiction of the SB hypothesis is that the suite of physical laws and constants that prevail in our cosmos will, in fact, be life-friendly. Moreover, and alone among the various cosmological scenarios offered to explain the phenomenon of a bio-friendly universe, the SB hypothesis implies that this suite of laws and constants comprises a robust program that will reliably generate life and advanced intelligence, just as the DNA of a particular species constitutes a robust program that will reliably generate individual organisms that are members of that particular species. Indeed, because the hypothesis asserts that sufficiently evolved intelligent life serves as a von Neumann duplicator in a putative process of cosmological replication, the biophilic quality of the suite emerges as a retrodicted biosignature of the putative duplicator and duplication process within the meaning of Goal 7 of the NASA Astrobiology Roadmap, which provides in pertinent part:
Determine how to recognize signatures of life on other worlds and on early Earth. Identify biosignatures that can reveal and characterize past or present life in ancient samples from Earth, extraterrestrial samples measured in situ, samples returned to Earth, remotely measured planetary atmospheres and surfaces, and other cosmic phenomena.
Does this retrodiction qualify as a valid scientific test of the SB hypothesis? I propose that it may, provided two additional qualifying criteria are satisfied: the hypothesis must be consilient with central concepts in adjoining fields of science, and it must be capable of generating falsifiable predictions.
There is a lively literature debating the propriety of employing retrodiction as a tool for testing scientific hypotheses (Cleland, 2002; Cleland, 2001; Gee, 1999; Oldershaw, 1988). Oldershaw (Oldershaw, 1988) has discussed the use of falsifiable retrodiction (as opposed to falsifiable prediction) as a tool of scientific investigation:
A second type of prediction is actually not a prediction at all, but rather a “retrodiction.” For example, the anomalous advance of the perihelion of Mercury had been a tiny thorn in the side of Newtonian gravitation long before general relativity came upon the scene. Einstein found that his theory correctly “predicted,” actually retrodicted, the numerical value of the perihelion advance. The explanation of the unexpected result of the Michelson-Morley experiment (constancy of the velocity of light) in terms of special relativity is another example.
As he went on to note, “Retrodictions usually represent falsification tests; the theory is probably wrong if it fails the test, but should not necessarily be considered right if it passes the test since it does not involve a definitive prediction.” Despite their legitimacy as falsification tests of hypotheses, falsifiable retrodictions are qualitatively inferior to falsifiable predictions, in Oldershaw’s view:
But, in the final analysis, only true definitive predictions can justify the promotion of a theory from being viewed as one of many plausible hypotheses to being recognized as the best available approximation of how nature actually works. A theory that cannot generate definitive predictions, or whose definitive predictions are impossible to test, can be regarded as inherently untestable.
A less sympathetic view concerning the validity of retrodiction as a scientific tool was offered by Gee (Gee, 1999), who dismissed the legitimacy of all historical hypotheses on the ground that “they can never be tested by experiment, and so they are unscientific…. No science can ever be historical.” This viewpoint, in turn, has been challenged by Cleland (Cleland, 2001) who contends that “when it comes to testing hypotheses, historical science is not inferior to classical experimental science” but simply exploits the available evidence in a different way:
There [are] fundamental differences in the methodology used by historical and experimental scientists. Experimental scientists focus on a single (sometimes complex) hypothesis, and the main research activity consists in repeatedly bringing about the test conditions specified by the hypothesis, and controlling for extraneous factors that might produce false positives and false negatives. Historical scientists, in contrast, usually concentrate on formulating multiple competing hypotheses about particular past events. Their main research efforts are directed at searching for a smoking gun, a trace that sets apart one hypothesis as providing a better causal explanation (for the observed traces) than do the others. These differences in methodology do not, however, support the claim that historical science is methodologically inferior, because they reflect an objective difference in the evidential relations at the disposal of historical and experimental researchers for evaluating their hypotheses.
Cleland’s approach has the merit of preserving as “scientific” some of the most important hypotheses advanced in such historical fields of inquiry as geology, evolutionary biology, cosmology, paleontology, and archaeology. As Cleland has noted (Cleland, 2002):
Experimental research is commonly held up as the paradigm of successful (a.k.a. good) science. The role classically attributed to experiment is that of testing hypotheses in controlled laboratory settings. Not all scientific hypotheses can be tested in this manner, however. Historical hypotheses about the remote past provide good examples. Although fields such as paleontology and archaeology provide the familiar examples, historical hypotheses are also common in geology, biology, planetary science, astronomy, and astrophysics. The focus of historical research is on explaining existing natural phenomena in terms of long past causes. Two salient examples are the asteroid-impact hypothesis for the extinction of the dinosaurs, which explains the fossil record of the dinosaurs in terms of the impact of a large asteroid, and the “big-bang” theory of the origin of the universe, which explains the puzzling isotropic three-degree background radiation in terms of a primordial explosion. Such work is significantly different from making a prediction and then artificially creating a phenomenon in a laboratory.
In a paper presented to the 2004 Astrobiology Science Conference (Cleland, 2004), Cleland extended this analytic framework to the consideration of putative biosignatures as evidence of the past or present existence of extraterrestrial life. Acknowledging that “because biosignatures represent indirect traces (effects) of life, much of the research will be historical (vs. experimental) in character even in cases where the traces represent recent effects of putative extant organisms,” Cleland concluded that it was appropriate to employ the methodology that characterizes successful historical research:
Successful historical research is characterized by (1) the proliferation of alternative competing hypotheses in the face of puzzling evidence and (2) the search for more evidence (a “smoking gun”) to discriminate among them.
From the perspective of the evidentiary standards applicable to historical science in general and astrobiology in particular, the key retrodiction of the SB hypothesis, that the fundamental constants of nature that comprise the Standard Model, as well as other physical features of our cosmos (including the number of extended physical dimensions and the extremely low value of dark energy), will be collectively bio-friendly, appears to constitute a legitimate scientific test of the hypothesis. Moreover, within the framework of Goal 7 of the NASA Astrobiology Roadmap, the retrodicted biophilic quality of our universe appears, under the SB hypothesis, to constitute a possible biosignature.
Because the SB hypothesis is radically novel and because the use of falsifiable retrodiction as a tool to test such an hypothesis creates at least the appearance of a “confirmatory argument resembl[ing] just-so stories (Rudyard Kipling’s fanciful stories, e.g., how leopards got their spots)” (Cleland, 2001), it is important (as noted previously) that two additional criteria be satisfied before this retrodiction can be considered a legitimate test of the hypothesis: consilience with central concepts in adjoining fields of science and the capacity to generate falsifiable predictions.
As argued at length elsewhere (Gardner, 2003), the SB hypothesis is both consilient with central concepts in these “adjoining” fields and fully capable of generating falsifiable predictions.
In his book The Fifth Miracle (Davies, 1999), Davies offered this interpretation of NASA’s view that the presence of liquid water on an alien world was a reliable marker of a life-friendly environment:
In claiming that water means life, NASA scientists are… making, tacitly, a huge and profound assumption about the nature of nature. They are saying, in effect, that the laws of the universe are cunningly contrived to coax life into being against the raw odds; that the mathematical principles of physics, in their elegant simplicity, somehow know in advance about life and its vast complexity. If life follows from [primordial] soup with causal dependability, the laws of nature encode a hidden subtext, a cosmic imperative, which tells them: “Make life!” And, through life, its by-products: mind, knowledge, understanding. It means that the laws of the universe have engineered their own comprehension. This is a breathtaking vision of nature, magnificent and uplifting in its majestic sweep. I hope it is correct. It would be wonderful if it were correct. But if it is, it represents a shift in the scientific world-view as profound as that initiated by Copernicus and Darwin put together.
An emerging consensus among mainstream physicists and cosmologists is that the particular universe we inhabit appears to confirm what Smolin calls the “anthropic observation”: the laws and constants of nature seem to be fine-tuned, with extraordinary precision and against enormous odds, to favor the emergence of life and its byproduct, intelligence. As Dyson put it eloquently more than two decades ago (Dyson, 1979):
The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense must have known that we were coming. There are some striking examples in the laws of nuclear physics of numerical accidents that seem to conspire to make the universe habitable.
Why this should be so remains a profound mystery. Indeed, the mystery has deepened considerably with the recent discovery of the inexplicably tiny value of dark energy density and the realization that M-theory encompasses an unfathomably vast landscape of possible solutions, only a minute fraction of which correspond to anything resembling the universe that we inhabit.
Confronted with such a deep mystery, the scientific community ought to be willing to entertain plausible explanatory hypotheses that may appear to be unconventional or even radical. However, such hypotheses, to be taken seriously, must satisfy the criteria discussed above: they must be consilient with central concepts in adjoining fields of science, and they must be capable of generating falsifiable predictions or retrodictions.
The SB hypothesis satisfies these criteria. In particular, it generates a falsifiable retrodiction that the physical laws and constants that prevail in our cosmos will be biophilic, which they are.
Baez, J. 1998 on-line commentary on The Life of the Cosmos (available at http://www.aleph.se/Trans/Global/Omega/smolin.txt).
Barrow, J. and Tipler, F. 1988 The Anthropic Cosmological Principle, Oxford University Press.
Bjorken, J. 2004 “The Classification of Universes,” astro-ph/0404233.
Cleland, C. 2001 “Historical science, experimental science, and the scientific method,” Geology, 29, pp. 978-990.
Cleland, C. 2002 “Methodological and Epistemic Differences Between Historical Science and Experimental Science,” Philosophy of Science, 69, pp. 474-496.
Cleland, C. 2004 “Historical Science and the Use of Biosignatures,” unpublished summary of presentation abstracted in International Journal of Astrobiology, Supplement 2004, p. 119.
Davies, P. 1999 The Fifth Miracle, Simon & Schuster.
Dyson, F. 1979 Disturbing the Universe, Harper & Row.
Dyson, F. 1988 Infinite in All Directions, Harper & Row.
Gardner, J. 2000 “The Selfish Biocosm: Complexity as Cosmology,” Complexity, 5, no. 3, pp. 34-45.
Gardner, J. 2001 “Assessing the Robustness of the Emergence of Intelligence: Testing the Selfish Biocosm Hypothesis,” Acta Astronautica, 48, no. 5-12, pp. 951-955.
Gardner, J. 2002 “Assessing the Computational Potential of the Eschaton: Testing the Selfish Biocosm Hypothesis,” Journal of the British Interplanetary Society 55, no. 7/8, pp. 285-288.
Gardner, J. 2003 Biocosm, Inner Ocean Publishing.
Gee, H. 1999 In Search of Deep Time, The Free Press.
Goldsmith, D. 2004 “The Best of All Possible Worlds,” Natural History, 5, no. 6, pp. 44-49.
Greene, B. 2004 The Fabric of the Cosmos, Knopf.
Hawking, S. and Hertog, T. 2002 “Why Does Inflation Start at the Top of the Hill?” hep-th/0204212.
Henderson, L. 1913 The Fitness of the Environment, Harvard University Press.
Hogan, C. 2004 “Quarks, Electrons, and Atoms in Closely Related Universes,” astro-ph/0407086.
Khoury, J., Ovrut, B. A., Seiberg, N., Steinhardt, P., and Turok, N. 2001 “From Big Crunch to Big Bang,” hep-th/0108187.
Kurzweil, R. 1999 The Age of Spiritual Machines, Viking.
Linde, A. 2002 “Inflation, Quantum Cosmology and the Anthropic Principle,” hep-th/0211048.
Linde, A. 1998 “The Self-Reproducing Inflationary Universe,” Scientific American, 9(20), pp. 98-104.
Livio, M. 2003 “Cosmology and Life,” astro-ph/0301615.
Lloyd, S. 2000 “Ultimate Physical Limits to Computation,” Nature, 406, pp. 1047-1054.
Mitton, S. 2005 Conflict in the Cosmos: Fred Hoyle’s Life in Science, Joseph Henry Press.
Oldershaw, R. 1988 “The new physics: physical or mathematical science?” American Journal of Physics, 56(12).
Pagels, H. 1983 The Cosmic Code, Bantam.
Rees, M. 1997 Before the Beginning, Addison Wesley.
Rees, M. 2000 Just Six Numbers, Basic Books.
Smolin, L. 1997 The Life of the Cosmos, Oxford University Press.
Smolin, L. 2004 “Scientific Alternatives to the Anthropic Principle,” hep-th/0407213.
Steinhardt, P. and Turok, N. 2001 “Cosmic Evolution in a Cyclic Universe,” hep-th/0111098.
Susskind, L. 2003 “The Anthropic Landscape of String Theory,” hep-th/0302219.
Tipler, F. 1994 The Physics of Immortality, Doubleday.
Vilenkin, A. 2004 “Anthropic predictions: The Case of the Cosmological Constant,” astro-ph/0407586.
von Neumann, J. 1948 “On the General and Logical Theory of Automata.”
Weinberg, S. 21 October 1999 “A Designer Universe?” New York Review of Books.
Wheeler, J. 1996 At Home in the Universe, AIP Press.
Wilson, E. O. 1998 “Scientists, Scholars, Knaves and Fools,” American Scientist, 86, pp. 6-7.
[1] http://www.edge.org/discourse/landscape.html.
[2] The metaphor furnished by the familiar process of artificial selection was Darwin’s crucial stepping stone. Indeed, the practice of artificial selection through plant and animal breeding was the primary intellectual model that guided Darwin in his quest to solve the mystery of the origin of species and to demonstrate in principle the plausibility of his theory that variation and natural selection were the prime movers responsible for the phenomenon of speciation.
[3] Both defects were emphasized by Susskind in a recent on-line exchange with Smolin which appears at www.edge.org. Smolin has argued that his CNS hypothesis has not been falsified on the first ground (Smolin, 2004) but conceded that his conjecture lacks any hypothesized mechanism that would endow the putative process of proliferation of black-hole-prone universes with a heredity function:
The hypothesis that the parameters p change, on average, by small random amounts, should be ultimately grounded in fundamental physics. We note that this is compatible with string theory, in the sense that there are a great many string vacua, which likely populate the space of low energy parameters well. It is plausible that when a region of the universe is squeezed to Planck densities and heated to Planck temperatures, phase transitions may occur leading to a transition from one string vacua to another. But there have so far been no detailed studies of these processes which would check the hypothesis that the change in each generation is small.
As Smolin noted in the same paper, it is crucial that such a mechanism exist in order to avoid the conclusion that each new universe’s set of physical laws and constants would constitute a merely random sample of the vast parameter space permitted by the extraordinarily large “landscape” of M-theory-allowed solutions:
It is important to emphasize that the process of natural selection is very different from a random sprinkling of universes on the parameter space P. This would produce only a uniform distribution prandom(p). To achieve a distribution peaked around the local maxima of a fitness function requires the two conditions specified. The change in each generation must be small so that the distribution can “climb the hills” in F(p) rather than jump around randomly, and so it can stay in the small volume of P where F(p) is large, and not diffuse away. This requires many steps to reach local maxima from random starts, which implies that long chains of descendants are needed.
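A toy simulation in Python (purely illustrative; the one-dimensional parameter and the single-peaked fitness function are arbitrary stand-ins for the parameter space P and fitness F(p) discussed in the quotation) makes the contrast concrete:

import random

def F(p):
    return -(p - 0.7) ** 2   # arbitrary single-peaked fitness landscape

def lineage_endpoint(steps=300, sigma=0.01):
    # heredity with small random changes; changes that lower F are discarded
    p = random.random()
    for _ in range(steps):
        child = min(1.0, max(0.0, p + random.gauss(0.0, sigma)))
        if F(child) >= F(p):
            p = child
    return p

uniform = [random.random() for _ in range(200)]      # "random sprinkling" of universes
evolved = [lineage_endpoint() for _ in range(200)]   # small-step heredity plus selection

def mean_distance_from_peak(xs):
    return sum(abs(x - 0.7) for x in xs) / len(xs)

print("uniform sprinkling :", round(mean_distance_from_peak(uniform), 3))
print("hereditary lineages:", round(mean_distance_from_peak(evolved), 3))

The uniform draws stay, on average, far from the peak, while the hereditary lineages converge on it; this is the difference between a mere landscape of possibilities and a process equipped with heredity and selection.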
[4] Wilson has identified consilience as one of the “diagnostic features of science that distinguishes it from pseudoscience” (Wilson, 1998):
The explanations of different phenomena most likely to survive are those that can be connected and proved consistent with one another.
© 2005 James N. Gardner. Reprinted with permission.
There are two epochs in computer history: Before ENIAC and After ENIAC. The first practical, all-electronic computer was unveiled on February 13, 1946 at the University of Pennsylvania’s Moore School of Electrical Engineering. While there are controversies over who invented what, there is universal agreement that the ENIAC was the watershed project that showed electronic computing was possible. It was a masterpiece of electrical engineering, with unprecedented reliability and speed. And the two men most responsible for its success were J. Presper Eckert and John W. Mauchly.
I recorded two days of interviews with J. Presper Eckert in 1989. He was 70 years old. My father was Pres’ best friend from childhood, and I’d spent my own childhood playing with his children. I visited him regularly as an adult. We spoke in his living room in Gladwyne, Pennsylvania, most of the time sitting on the floor. We stopped talking about computers only to fiddle with his Nova Chord electronic organ, which predated ENIAC, and with his stereo speakers. On a second occasion, I recorded a conversation at his daughter’s home in western Massachusetts.
Randall: How did the calculating machines before ENIAC work?
Eckert: Well, a person with paper and pencil can add two 10-digit numbers in about 10 seconds. With a hand calculator, the time is down to 4 seconds. The Harvard Mark 1 was the last of the electromechanical computers; it could add two 10-digit numbers in 0.3 seconds, about 30 times faster than paper and pencil. When I was a graduate student, the Moore School had two analyzers that were essentially copies of Vannevar Bush’s machine from MIT.
Randall: What could that machine do?
Eckert: It could solve differential equations, but only linear ones. It had a long framework divided into sections, with a couple dozen shafts running through it. You could put different gears on the shafts using screwdrivers and hammers, and it had "integrators" that gave the product of two shafts coming in on a third shaft coming out. By picking the right gear ratios, you could get the right constants in the equation. We used published tables to pick the gear ratios to get whatever number you wanted. The limit on the accuracy of this machine was the slippage of the mechanical wheels on the integrators. That made me say, "Let’s build electronic integrators and stick them into this machine instead of those wheel things." We added several dozen motors and amplifiers and circuits using over 400 vacuum tubes, which, as electronic things go, is not trivial. A radio has only five or six tubes, and television sets have up to 30. The Nova Chord organ was built prior to this, and it has about 170 tubes. The Bush Analyzer was still essentially a mechanical device.
ENIAC, which debuted 60 years ago, had 18,000 vacuum tubes.
That led me to examine whether I could find some way to multiply pulse numbers together so I wouldn’t need gears; then I could do the whole thing electrically. There’s a theorem in calculus where you can use two integrators to do a multiplication. I talked with John Mauchly about it. Just who put in which part is hard to tell, but the idea of doing the integrations by counters was mine.
"The first real use was Edward Teller using ENIAC to do calculations for the Hydrogen bomb."
The ENIAC (Electronic Numerical Integrator And Calculator) was the first electronic digital computer and could add those two 10-digit numbers in 0.0002 seconds; that’s 50,000 times faster than a human, 20,000 times faster than a calculator and 1,500 times faster than the Mark 1. For specialized scientific calculations it was even faster.
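For reference, those speed ratios follow directly from the addition times given earlier in the interview; a quick check in Python (times in seconds, all taken from the text):

# addition times for two 10-digit numbers, as quoted above
human = 10.0        # pencil and paper
calculator = 4.0    # hand calculator
mark1 = 0.3         # Harvard Mark 1, electromechanical
eniac = 0.0002      # ENIAC

print(round(human / eniac))       # 50000 -> "50,000 times faster than a human"
print(round(calculator / eniac))  # 20000 -> "20,000 times faster than a calculator"
print(round(mark1 / eniac))       # 1500  -> "1,500 times faster than the Mark 1"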
Randall: So it’s a myth that ENIAC could only add, subtract, multiply and divide….
Eckert: No, that’s a calculator. ENIAC could do three-dimensional second-order differential equations. We were calculating trajectory tables for the war effort. In those days the trajectory tables were calculated by hundreds of people operating desk calculators, people who were called "computers." So the machine that did that work was called a "computer."
Randall: So what did they give you? Did they say, "Here’s a room, here are some tools, here are some guys; go make it"?
Eckert: Uh-huh. Pretty much.
Randall: What did ENIAC’s room look like?
Eckert: We built ENIAC in a room that was 30 feet by 50 feet, on the first floor of the Moore School in West Philadelphia.
Randall: There is a story that ENIAC dimmed the lights in Philadelphia when it was in use.
Eckert: That story is total fiction, dreamed up by some journalist. We took power off of the grid. We had voltage regulators to provide 150 kilowatts of regulated supply.
Randall: Did the military guys working on ENIAC salute the machine?
Eckert: Another ENIAC myth.
Randall: You said the largest tube gadget in 1943 was the Nova Chord with 170 tubes… what did ENIAC use?
Eckert: ENIAC had 18,000 vacuum tubes. The tubes were off the shelf; we got whatever the distributor could supply in lots of a thousand. We used 10 tube types, but could have done it with 4 tube types; we just couldn’t get enough of them. We decided that our tube filaments would last a lot longer if we kept them below their proper voltage, not too high or too low. A lot of the circuits were off the shelf, but I invented a lot of the circuits as well. Registers were a new idea. So were integrator circuits.
The function of the machine was split into eight basic circuit components: the accumulator, initiator, master programmer, multiplier, divider/square-root, gate, buffer, and the function tables. The accumulator was the basic arithmetic unit of the ENIAC. It consisted of twenty registers, each ten digits wide, which performed addition, subtraction, and temporary storage. The accumulator can be compared to the registers in today’s central processing units.
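As a rough conceptual analogy only (a sketch in Python, not a description of ENIAC's actual decade-counter hardware or its handling of negative numbers), each of those ten-digit registers can be pictured as a decimal cell that receives amounts, accumulates them, and holds the result:

class Register:
    """Toy model of one ten-digit register of the accumulator: it receives a
    number, adds it to (or subtracts it from) the running total, and holds the
    result for later use. Arithmetic is kept modulo 10**10 as a crude stand-in
    for the machine's fixed ten-digit width."""
    MOD = 10 ** 10

    def __init__(self):
        self.value = 0

    def receive(self, amount):
        # addition; subtraction is modeled as receiving a negative amount
        self.value = (self.value + amount) % Register.MOD

    def read(self):
        return self.value

bank = [Register() for _ in range(20)]   # the twenty ten-digit registers described above
bank[0].receive(1234567890)
bank[0].receive(-234567890)
print(bank[0].read())                    # 1000000000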
Randall: Are there any of your circuits still in use in today’s personal computers…
Eckert: No, but that’s true of any first invention. Edison’s original light bulb bears no resemblance to a modern bulb. They do the same thing but with totally different components. Same with the computer. What did survive were the concepts, not the hardware. The idea of a subroutine was original with ENIAC. Mauchly had this idea based on his knowledge of the inner workings of desk calculators and introduced me to his idea for a subroutine in the machine. On Mark 1, if they wanted to do a calculation over and over, they had to feed the same tape in over and over. We invented ways to run the same subroutine without any mechanical input. The idea of using internal memory was also original with ENIAC.
Randall: There’s a story that some guy was running around with a box of tubes and had to change one every few minutes.
Eckert: Another myth. We had a tube fail about every two days, and we could locate the problem within 15 minutes. We invented a scheme to build the computer on removable chassis, plug-in components, so when tubes failed we could swap them out in seconds. We carried out a very radical idea in a very conservative fashion.
Randall: You are talking about many simultaneous innovations. How many inventions went into ENIAC?
Eckert: Hard to say, maybe 100. Some are just good engineering or wrinkles on ideas. We made a memory device where bits were stored as sound waves that propagated down a meter-long tube of mercury. You could input about 1000 pulses at one end before they started to come out the other end, where we re-amplified them and sent them back in again. Sound is so much slower than electricity that we could store 1000 pulses as acoustic waves in short-term memory.
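Conceptually, such a delay line behaves like a fixed-length queue whose output is re-amplified and fed back to its input; a minimal sketch in Python (the 1000-pulse capacity comes from the interview, everything else is schematic):

from collections import deque

# Conceptual sketch only: pulses travel down the tube, emerge after the
# acoustic transit time, are re-amplified, and are fed back in, so the same
# ~1000 pulses circulate indefinitely as short-term memory.

class DelayLine:
    def __init__(self, capacity=1000):
        self.line = deque([0] * capacity, maxlen=capacity)  # pulses in transit

    def tick(self, write=None):
        out = self.line[0]                     # pulse arriving at the far end
        bit = out if write is None else write  # re-amplify, or overwrite to store new data
        self.line.append(bit)                  # feed it back into the near end
        return out

memory = DelayLine()
for bit in [1, 0, 1, 1]:                  # store a few pulses
    memory.tick(write=bit)
for _ in range(996):                      # let the rest of the line circulate
    memory.tick()
print([memory.tick() for _ in range(4)])  # the stored pulses come back around: [1, 0, 1, 1]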
Randall: How many people were working on ENIAC?
Eckert: The total count was about 50 people, 12 of us engineers or technical people. Mauchly was teaching part-time, and others had part-time jobs. I was on it full-time as Chief Engineer.
Randall: How old were you?
Eckert: We signed the contract on my 24th birthday, May 9, 1943.
Randall: Was ENIAC programmable?
Eckert: Yes and no. We programmed the machine by plugging wires in from place to place. That’s not hard-wired, it is not software, it is not memory. It’s pluggable programming. And we had switches to set the functions.
Randall: What was the first thing you did with ENIAC?
Eckert: It was designed to calculate trajectory tables, but it came too late to really help with the war effort. The first real use was Edward Teller using ENIAC to do calculations for the Hydrogen bomb.
Randall: What’s the zaniest thing you did while developing ENIAC?
Eckert: The mouse cage was pretty funny. We knew mice would eat the insulation off the wires, so we got samples of all the wires that were available and put them in a cage with a bunch of mice to see which insulation they did not like. We only used wire that passed the mouse test.
Randall: What prepared you for building an electronic computer?
Eckert: Remember, in that era, Philadelphia was Vacuum Tube Valley. Radios and televisions were predominantly made in Philadelphia. I worked on primitive television at Farnsworth as a teenager, and at Penn I had been working on various radar problems, trying to measure the time for a pulse to go out and come back. We needed an accuracy of one part in a hundred thousand, which is more accurate than anything we could do at that time. I figured that out with counters. All this was a good lead-in to building an electronic computer.
Randall: Was it you or was it the times?
Eckert: Well, I may have been uniquely prepared. I was very good in math and I was fascinated with all electronics. I was designing electronic gadgets as a kid, and I not only did academic math, I studied business math. Maybe I had the right fusion of interests. But every inventor stands on the pedestals built by other people. If I hadn’t done it, someone else would have. All that any inventor does is accelerate the process. The main thing was we made a machine that didn’t fail the first time. If it had failed, we might have discouraged this line of work for a long time. People usually build prototypes, see their errors and try again. We couldn’t do that. We had to make it work the first time out.
Randall: You have dozens of patents for your inventions. What motivates you?
Eckert: I am happiest when I am working on the edge of something. Where there are not many people who have done it. When nobody has done it, it is pretty tough. That gets me excited.
Randall: If you were a young engineer today, what would you be working on?
Eckert: I want to develop three-dimensional processors, like a sugar cube instead of wafers. I want to make a computer that is specialized to simulate a wind tunnel. I have an idea for a keyboard that is shaped like a basketball on a joystick; your hands would be comfortable… I’ve been thinking a lot about a machine to gang up a few thousand really cheap processors with a commander, like "Simon Says." They’d all do the same procedure in synchrony. I have a lot of acoustic projects that are still not solved. Maybe I’d work on a robot that could clear dishes off the table or mow the lawn. The next wave is all about recognition. Making systems that recognize patterns.
Randall: When you were working on ENIAC did you have any inkling these things would be laptop sized and everyone would own one?
"It is shocking to have your life work reduced to a tenth of a square inch of silicon."
Eckert: Mauchly thought the world would need maybe 6 computers. No one had any idea the transistor and chip technologies would come along so quickly. It is shocking to have your life work reduced to a tenth of a square inch of silicon. Jules Verne predicted we’d go to the moon, but he never had any idea we’d all sit home and watch it on TV. In every technology, there are inventions that go off at a right angle and change the path; there are new ideas that you can’t see coming.
Randall: A lot of people have claimed they invented the first computer. What about John Atanasoff?
Eckert: In the course of a patent fight, the other side brought up Atanasoff and tried to show that he built an electronic computer ahead of us. It’s true he had a lab-bench tabletop kind of thing, and John went out to look at it and wrote a memo, but we never used any of it. His thing didn’t really work. He didn’t have a whole system. That’s a big thing with an invention; you have to have a whole system that works.
John and I not only built ENIAC; it worked. And it worked for a decade doing what it was designed to do. We went on to build BINAC and UNIVAC and hundreds of other computers. And the company we started is still in operation, after many name changes, as Unisys, and I am still working for that company. Atanasoff may have won a point in court, but he went back to teaching, and we went on to build the first real electronic programmable computers, the first commercial computers. We made a lot of computers, and we still are.
Randall: And John von Neumann?
Eckert: He came and looked at our stuff and went back to Princeton and wrote a long document about the principles. He gets a lot of credit, but the inventions were ours. Someday I’ll write a book on who really invented the computer. It wasn’t Atanasoff or von Neumann… we did it.
© 2005 Alexander Randall 5th.
An exciting revolution in health care and medical technology looms large on the horizon. Yet the agents of change will be microscopically small, future products of a new discipline known as nanotechnology. Nanotechnology is the engineering of molecularly precise structures, typically 0.1 microns or smaller, and, ultimately, molecular machines. Nanomedicine [1-4] is the application of nanotechnology to medicine. It is the preservation and improvement of human health, using molecular tools and molecular knowledge of the human body. Present-day nanomedicine exploits carefully structured nanoparticles such as dendrimers [5], carbon fullerenes (buckyballs) [6] and nanoshells [7] to target specific tissues and organs. These nanoparticles may serve as diagnostic and therapeutic antiviral, antitumor or anticancer agents. But as this technology matures in the years ahead, complex nanodevices and even nanorobots will be fabricated, first of biological materials but later using more durable materials such as diamond to achieve the most powerful results.
Can it be that someday nanorobots will be able to travel through the body searching out and clearing up diseases, such as an arterial atheromatous plaque? [8] The first and most famous scientist to voice this possibility was the late Nobel physicist Richard P. Feynman. In his remarkably prescient 1959 talk “There’s Plenty of Room at the Bottom,” Feynman proposed employing machine tools to make smaller machine tools, these to be used in turn to make still smaller machine tools, and so on all the way down to the atomic level, noting that this is “a development which I think cannot be avoided.” [9]
Feynman was clearly aware of the potential medical applications of this new technology. He offered the first known proposal for a nanorobotic surgical procedure to cure heart disease: “A friend of mine (Albert R. Hibbs) suggests a very interesting possibility for relatively small machines. He says that, although it is a very wild idea, it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and looks around. (Of course the information has to be fed out.) It finds out which valve is the faulty one and takes a little knife and slices it out. …[Imagine] that we can manufacture an object that maneuvers at that level!… Other small machines might be permanently incorporated in the body to assist some inadequately functioning organ.” [9]
There are ongoing attempts to build microrobots for in vivo medical use. In 2002, Ishiyama et al. at Tohoku University developed tiny magnetically-driven spinning screws intended to swim along veins and carry drugs to infected tissues or even to burrow into tumors and kill them with heat. [10] In 2003, the “MR-Sub” project of Martel’s group at the NanoRobotics Laboratory of Ecole Polytechnique in Montreal tested using variable MRI magnetic fields to generate forces on an untethered microrobot containing ferromagnetic particles, developing sufficient propulsive power to direct the small device through the human body. [11] Brad Nelson’s team at the Swiss Federal Institute of Technology in Zurich continued this approach. In 2005 they reported the fabrication of a microscopic robot small enough (~200 microns) to be injected into the body through a syringe. They hope this device or its descendants might someday be used to deliver drugs or perform minimally invasive eye surgery. [12] Nelson’s simple microrobot has successfully maneuvered through a watery maze using external energy from magnetic fields, with different frequencies able to vibrate different mechanical parts on the device to maintain selective control of different functions. Gordon’s group at the University of Manitoba has also proposed magnetically-controlled “cytobots” and “karyobots” for performing wireless intracellular and intranuclear surgery. [13]
The greatest power of nanomedicine will emerge, perhaps in the 2020s, when we can design and construct complete artificial nanorobots using rigid diamondoid nanometer-scale parts like molecular gears and bearings. [14] These nanorobots will possess a full panoply of autonomous subsystems including onboard sensors, motors, manipulators, power supplies, and molecular computers. But getting all these nanoscale components to spontaneously self-assemble in the right sequence will prove increasingly difficult as machine structures become more complex. Making complex nanorobotic systems requires manufacturing techniques that can build a molecular structure by what is called positional assembly. This will involve picking and placing molecular parts one by one, moving them along controlled trajectories much like the robot arms that manufacture cars on automobile assembly lines. The procedure is then repeated over and over with all the different parts until the final product, such as a medical nanorobot, is fully assembled.
The positional assembly of diamondoid structures, some almost atom by atom, using molecular feedstock has been examined theoretically [14,15] via computational models of diamond mechanosynthesis (DMS). DMS is the controlled addition of carbon atoms to the growth surface of a diamond crystal lattice in a vacuum manufacturing environment. Covalent chemical bonds are formed one by one as the result of positionally constrained mechanical forces applied at the tip of a scanning probe microscope apparatus, following a programmed sequence. Mechanosynthesis using silicon atoms was first achieved experimentally in 2003. [16] Carbon atoms should not be far behind. [17]
To be practical, molecular manufacturing must also be able to assemble very large numbers of medical nanorobots very quickly. Approaches under consideration include using replicative manufacturing systems or massively parallel fabrication, employing large arrays of scanning probe tips all building similar diamondoid product structures in unison. [18]
For example, simple mechanical ciliary arrays consisting of 10,000 independent microactuators on a 1 cm^2 chip have been made at the Cornell National Nanofabrication Laboratory for microscale parts transport applications, and similarly at IBM for mechanical data storage applications. [19] Active probe arrays of 10,000 independently-actuated microscope tips have been developed by Mirkin’s group at Northwestern University for dip-pen nanolithography [20] using DNA-based “ink”. Almost any desired 2D shape can be drawn using 10 tips in concert. Another microcantilever array manufactured by Protiveris Corp. has millions of interdigitated cantilevers on a single chip. Martel’s group has investigated using fleets of independently mobile wireless instrumented microrobot manipulators called NanoWalkers to collectively form a nanofactory system that might be used for positional manufacturing operations. [21] Zyvex Corp. of Richardson, TX has a $25 million, five-year, National Institute of Standards and Technology (NIST) contract to develop prototype microscale assemblers using microelectromechanical systems. This research may eventually lead to prototype nanoscale assemblers using nanoelectromechanical systems.
The ability to build complex diamondoid medical nanorobots to molecular precision, and then to build them cheaply enough in sufficiently large numbers to be useful therapeutically, will revolutionize the practice of medicine and surgery. [1] The first theoretical design study of a complete medical nanorobot ever published in a peer-reviewed journal (in 1998) described a hypothetical artificial mechanical red blood cell or “respirocyte” made of 18 billion precisely arranged structural atoms. [22] The respirocyte is a bloodborne spherical 1-micron diamondoid 1000-atmosphere pressure vessel with reversible molecule-selective surface pumps powered by endogenous serum glucose. This nanorobot would deliver 236 times more oxygen to body tissues per unit volume than natural red cells and would manage carbonic acidity.
Surgical nanorobots could be introduced into the body through the vascular system or at the ends of catheters into various vessels and other cavities in the human body. A surgical nanorobot, programmed or guided by a human surgeon, could act as a semi-autonomous on-site surgeon inside the human body. Such a device could perform various functions such as searching for pathology and then diagnosing and correcting lesions by nanomanipulation, coordinated by an on-board computer while maintaining contact with the supervising surgeon via coded ultrasound signals. The earliest forms of cellular nanosurgery are already being explored today. For example, a rapidly vibrating (100 Hz) micropipette with a <1 micron tip diameter has been used to completely cut dendrites from single neurons without damaging cell viability. [24] Axotomy of roundworm neurons was performed by femtosecond laser surgery, after which the axons functionally regenerated. [25] A femtolaser acts like a pair of “nano-scissors” by vaporizing tissue locally while leaving adjacent tissue unharmed. Femtolaser surgery has been used to perform: (1) localized nanosurgical ablation of focal adhesions adjoining live mammalian epithelial cells, [26] (2) microtubule dissection inside yeast cells, [27] (3) noninvasive intratissue nanodissection of plant cell walls and selective destruction of intracellular single plastids or selected parts of them, [28] and even (4) the nanosurgery of individual chromosomes (selectively knocking out genomic nanometer-sized regions within the nucleus of living Chinese hamster ovary cells [29]). These procedures do not kill the cells on which the nanosurgery is performed. Atomic force microscopes have also been used for bacterium cell wall dissection in situ in aqueous solution, with 26 nm thick twisted strands revealed inside the cell wall after mechanically peeling back large patches of the outer cell wall. [30]
1. Freitas RA Jr. Nanomedicine, Vol. I: Basic Capabilities. Georgetown (TX): Landes Bioscience; 1999. Also available at: http://www.nanomedicine.com/NMI.htm
2. Freitas RA Jr. Nanodentistry. J Amer Dent Assoc 2000; 131:1559-66.
3. Freitas RA Jr. Current status of nanomedicine and medical nanorobotics (invited survey). J Comput Theor Nanosci 2005; 2:1-25. Also available at: http://www.nanomedicine.com/Papers/NMRevMar05.pdf.
4. Freitas RA Jr. What is nanomedicine? Nanomedicine: Nanotech Biol Med 2005; 1:2-9. Also available at: http://www.nanomedicine.com/Papers/WhatIsNMMar05.pdf.
5. Borges AR, Schengrund CL. Dendrimers and antivirals: a review. Curr Drug Targets Infect Disord 2005; 5:247-54.
6. Mashino T, Shimotohno K, Ikegami N, Nishikawa D, Okuda K, Takahashi K, Nakamura S, Mochizuki M. Human immunodeficiency virus-reverse transcriptase inhibition and hepatitis C virus RNA-dependent RNA polymerase inhibition activities of fullerene derivatives. Bioorg Med Chem Lett 2005; 15:1107-9.
7. O’Neal DP, Hirsch LR, Halas NJ, Payne JD, West JL. Photo-thermal tumor ablation in mice using near infrared-absorbing nanoparticles. Cancer Lett 2004; 209:171-6.
8. Dewdney AK. Nanotechnology: wherein molecular computers control tiny circulatory submarines. Sci Am 1988 Jan; 258:100-3.
9. Feynman RP. There’s plenty of room at the bottom. Eng Sci 1960 Feb; 23:22-36. Also available at: http://www.zyvex.com/nanotech/feynman.html.
10. Ishiyama K, Sendoh M, Arai KI. Magnetic micromachines for medical applications. J Magnetism Magnetic Mater 2002; 242-245:1163-5.
11. Mathieu JB, Martel S, Yahia L, Soulez G, Beaudoin G. MRI systems as a mean of propulsion for a microdevice in blood vessels. Proc. 25th Ann. Intl. Conf., IEEE Engineering in Medicine and Biology; 2003 Sep 17-21; Cancun, Mexico; 2003. Also available at: http://www.nano.polymtl.ca/Articles/2003/MRI%20Syst%20Mean%20Prop%20Microdev%20Blood%20Vess%20proceedings%20P3419.pdf
12. Nelson B, Rajamani R. Biomedical micro-robotic system. 8th Intl. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI 2005/ www.miccai2005.org), Palm Springs CA, 26-29 October 2005.
13. Chrusch DD, Podaima BW, Gordon R. Cytobots: intracellular robotic micromanipulators. In: Kinsner W, Sebak A, eds. Conf. Proceedings, 2002 IEEE Canadian Conference on Electrical and Computer Engineering; 2002 May 12-15; Winnipeg, Canada. Winnipeg: IEEE; 2002.
14. Drexler KE. Nanosystems: Molecular Machinery, Manufacturing, and Computation. New York: John Wiley & Sons; 1992.
15. Merkle RC, Freitas RA Jr. Theoretical analysis of a carbon-carbon dimer placement tool for diamond mechanosynthesis. J Nanosci Nanotechnol 2003; 3:319-24. Also available at: http://www.rfreitas.com/Nano/JNNDimerTool.pdf.
16. Oyabu N, Custance O, Yi I, Sugawara Y, Morita S. Mechanical vertical manipulation of selected single atoms by soft nanoindentation using near contact atomic force microscopy. Phys Rev Lett 2003; 90:176102.
17. Freitas RA Jr. A Simple Tool for Positional Diamond Mechanosynthesis, and its Method of Manufacture. U.S. Provisional Patent Application No. 60/543,802, filed 11 February 2004; U.S. Patent Pending, 11 February 2005. Also available at: http://www.MolecularAssembler.com/Papers/DMSToolbuildProvPat.htm.
18. Freitas RA Jr., Merkle RC. Kinematic Self-Replicating Machines. Georgetown (TX): Landes Bioscience; 2004. Also available at: http://www.molecularassembler.com/KSRM.htm.
19. Vettiger P, Cross G, Despont M, Drechsler U, Duerig U, Gotsmann B, Haeberle W, Lantz M, Rothuizen H, Stutz R, Binnig G. The Millipede: nanotechnology entering data storage. IEEE Trans Nanotechnol 2002 Mar; 1:39-55.
20. Bullen D, Chung S, Wang X, Zou J, Liu C, Mirkin C. Development of parallel dip pen nanolithography probe arrays for high throughput nanolithography. (Invited) Symposium LL: Rapid Prototyping Technologies, Materials Research Society Fall Meeting; 2-6 Dec 2002; Boston, MA. Proc. MRS, Vol. 758, 2002. Also available at: http://mass.micro.uiuc.edu/publications/papers/84.pdf.
21. Martel S, Hunter I. Nanofactories based on a fleet of scientific instruments configured as miniature autonomous robots. Proc. of the 3rd Intl. Workshop on Microfactories; 16-18 Sep 2002; Minneapolis, MN, USA; 2002, pp. 97-100.
22. Freitas RA Jr. Exploratory design in medical nanotechnology: a mechanical artificial red cell. Artif Cells Blood Subst Immobil Biotech 1998; 26:411-30. Also available at: http://www.foresight.org/Nanomedicine/Respirocytes.html.
23. Freitas RA Jr. Microbivores: artificial mechanical phagocytes using digest and discharge protocol. J Evol Technol 2005 Apr; 14:1-52. http://jetpress.org/volume14/Microbivores.pdf.
24. Kirson ED, Yaari Y. A novel technique for micro-dissection of neuronal processes. J Neurosci Methods 2000; 98:119-22.
25. Yanik MF, Cinar H, Cinar HN, Chisholm AD, Jin Y, Ben-Yakar A. Neurosurgery: functional regeneration after laser axotomy. Nature 2004; 432:822.
26. Kohli V, Elezzabi AY, Acker JP. Cell nanosurgery using ultrashort (femtosecond) laser pulses: Applications to membrane surgery and cell isolation. Lasers Surg Med 2005; 37:227-30.
27. Sacconi L, Tolic-Norrelykke IM, Antolini R, Pavone FS. Combined intracellular three-dimensional imaging and selective nanosurgery by a nonlinear microscope. J Biomed Opt 2005; 10:14002.
28. Tirlapur UK, Konig K. Femtosecond near-infrared laser pulses as a versatile non-invasive tool for intra-tissue nanoprocessing in plants without compromising viability. Plant J 2002; 31:365-74.
29. Konig K, Riemann I, Fischer P, Halbhuber KJ. Intracellular nanosurgery with near infrared femtosecond laser pulses. Cell Mol Biol 1999; 45:195-201.
30. Firtel M, Henderson G, Sokolov I. Nanosurgery: observation of peptidoglycan strands in Lactobacillus helveticus cell walls. Ultramicroscopy 2004 Nov;101:105-9.
31. Freitas RA Jr. Nanomedicine, Vol. IIA: Biocompatibility. Georgetown (TX): Landes Bioscience; 2003. Also available at: http://www.nanomedicine.com/NMIIA.htm.
Question 1: Tell us about your background. When did you first work with a computer? When did you first begin studying computer/technological trends?
I had the idea that I wanted to be an inventor since I was five. I first got involved with computers when I was twelve, programming some early computers, such as the 1401 and the 1620. I also built computers out of telephone relays.
I began seriously modeling technology trends around 1980. I quickly realized that timing is the critical factor in the success of inventions. Most technology projects fail not because the technology doesn't work, but because the timing is wrong: not all of the enabling factors are in place when they are needed. So I began to study these trends in order to anticipate what the world would be like in three, five, or ten years and to make realistic assessments. That has continued to be the primary application of this work. I used these methodologies to guide the development plans of my projects, in particular to decide when to launch a project, so that the software would be ready when the underlying hardware was available, when the market was ready, and so on.
These methodologies had the side benefit of allowing us to project development 20 or 30 years into the future. There is a strong common wisdom that you can't predict the future, but that wisdom is incorrect. Some key measures of information technology (price-performance, capacity, bandwidth) follow very smooth exponential trends. I have been making predictions going back to the 1980s, when I wrote The Age of Intelligent Machines. That book had hundreds of predictions about the 1990s and the 21st century based on these models, which have turned out to be quite accurate. If we know how much a million instructions per second (MIPS) of computing will cost at future points in time, or how much it will cost to sequence a base pair of DNA or to model a protein, or any other measure of information technology at different points in time, we can build scenarios of what will be feasible. The capability of these technologies grows exponentially, essentially doubling every year (depending on what you measure). There is even a slow second level of exponential growth: the doubling time itself gradually shrinks.
We will increase the price-performance of computing, which is already formidable and deeply influential, by a factor of a billion in 25 years, and we will also shrink the technology at a predictable pace: a factor of more than one hundred in 3D volume per decade. So these technologies will be very small and widely distributed, inexpensive, and extremely powerful. Look at what we can do already, and multiply that by a billion.
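As a rough check on the arithmetic behind these figures, the short sketch below uses only the numbers quoted above (annual doubling, and a billionfold gain over 25 years); the roughly ten-month doubling time it reports is simply what the stated billionfold figure implies, illustrating the second level of exponential growth just mentioned.

    import math

    # Figures quoted above, treated as given:
    years = 25
    claimed_factor = 1e9                  # "a factor of a billion in 25 years"

    # If price-performance doubled exactly once per year:
    factor_if_annual_doubling = 2 ** years                # ~3.4e7

    # Doubling time actually implied by a billionfold gain in 25 years:
    doublings_needed = math.log2(claimed_factor)          # ~29.9 doublings
    implied_doubling_time_months = years * 12 / doublings_needed

    print(f"Annual doubling over {years} years: a factor of {factor_if_annual_doubling:.2e}")
    print(f"A billionfold gain in {years} years implies doubling roughly every "
          f"{implied_doubling_time_months:.1f} months")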
Question 2: When did you first become aware of the term “singularity?” Did you use that term in your first book, The Age of Intelligent Machines?
No. I first became familiar with it probably around the late 1990s. In my latest book, The Singularity is Near, I have really focused on the point in time when these technologies become quite explosive and profoundly transformative. In my earlier book, The Age of Spiritual Machines, I touched on that, and wrote about computers achieving human levels of intelligence and what that would mean. My main focus in this new book is on the merger of biological humanity with the technology that we are creating. Once nonbiological intelligence gets a foothold in our bodies and brains, which we have arguably already done in some people, but will do significantly in the 2020s, it will grow exponentially. We have about 10^26 calculations per second (cps), and at most 10^29, in biological humanity, and that figure won't change much in the next fifty years. Our brains use a form of electrochemical signaling that travels a few hundred feet per second, which is a million times slower than electronics. The interneuronal connections in our brains compute at about 200 calculations per second, which is also about a million times slower than electronics. We communicate our knowledge and skills using language, which is similarly a million times slower than the rate at which computers can transmit information.
So biological intelligence, while it could be better educated and better organized, is not going to change significantly. Nonbiological intelligence, however, is multiplying in capability by a factor of more than 1,000 per decade. So once we achieve the software of intelligence, which we will achieve through reverse-engineering the human brain, nonbiological intelligence will soar past biological intelligence. But this isn't an alien invasion; it is something that will literally be deeply integrated into our bodies and brains. By the 2040s, the nonbiological intelligence that we create in a single year will be a billion times more powerful than the 10^26 cps that all of biological humanity represents. The word "singularity" is a metaphor, and the metaphor we are using isn't really infinity, because these exponentials are finite. The real meaning of "singularity" is closer to the concept of the "event horizon" in physics. A black hole, as physicists envision it, has an event horizon around it, and you can't easily see past it. Similarly, it is difficult to see beyond this technological event horizon, because what lies beyond is so profoundly transformative.
Question 3: Has there been one writer or researcher, such as Marvin Minsky or Vernor Vinge, who has had a predominant influence on your thinking?
Both of those individuals have been influential. Vernor Vinge had some really key insights into the singularity very early on. There were others, such as John von Neumann, who talked about a singular event occurring; he had the idea of technological acceleration and a singularity half a century ago. But for von Neumann it was simply a casual comment, and Vinge worked out some of the key ideas.
Marvin Minsky was actually my mentor, and I corresponded with him and visited with him when I was in high school. We remain close friends and colleagues, and many of his writings on artificial intelligence, such as Society of Mind and some of his more technical work, have deeply influenced me.
Question 4: Many semiconductor analysts are predicting that the field of robotics will become the next major growth industry. When do you predict that the robotics industry will become a major, thriving industry?
In the GNR (genetics, nanotechnology, robotics) revolutions I write about, R nominally stands for robotics, but the real reference is to strong AI. By strong AI, I mean artificial intelligence at human levels, some of which will be manifested in robots, and some of which will be manifested in virtual bodies and virtual reality. We will go into virtual reality environments and have nanobots in our brains that shut down the signals coming from our real nerves and sense organs and replace them with the signals we would be receiving if we were in the virtual environment. We can be actors in this virtual environment and have a virtual body, but this virtual body doesn't need to be the same as our real body. We will encounter other people in similar situations in this virtual reality. There will also be forms of AI that perform specific tasks, as narrow AI programs do today in our economic infrastructure. Our economic infrastructure would collapse if all of these current narrow AI programs stopped functioning, but that wasn't true 25 years ago. These task-specific AI programs will become very intelligent in the coming decades.
So strong AI won't just be robots; that is only one manifestation. The R revolution is really the strong AI revolution. Billions of dollars of financial transactions are handled every day by intelligent algorithms, which also automatically detect credit card fraud, and so forth. Every time you send an email or make a telephone call, intelligent algorithms route the information. Algorithms automatically diagnose electrocardiograms and blood cell images, fly airplanes, guide "smart" weapons, and so forth. I give dozens of examples in the book. These applications will become increasingly intelligent in the decades ahead. Machines are already performing tasks that previously could only be done by humans, and the range of such tasks will increase in the coming years.
In order to achieve strong AI, we need to understand how the human brain works, and there are two fundamental requirements. One is the hardware requirement, which you mentioned. It is relatively uncontroversial today that we will achieve computer hardware equivalent to the human brain's computing capacity; just look at the semiconductor industry's own roadmap, into which the industry has put enormous effort. By 2020, a single chip will provide 10^16 instructions per second, sufficient to emulate a single human brain. We will go to the third dimension, effectively superseding the limits of Moore's law, which deals only with two-dimensional integrated circuits. These were controversial notions when my last book, The Age of Spiritual Machines, was published in 1999, but they are relatively uncontroversial today.
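The per-chip figure above can be cross-checked against the total quoted earlier for biological humanity. In the sketch below, the per-brain number comes from the interview, while the world-population figure of roughly ten billion is an assumption introduced only for this illustration.

    # Cross-check of the figures quoted above (per-brain figure from the interview;
    # the world-population figure is an assumption of this sketch):
    cps_per_brain = 1e16        # "10^16 instructions per second ... to emulate a single human brain"
    world_population = 1e10     # assumed order of magnitude, roughly ten billion people

    total_biological_cps = cps_per_brain * world_population
    print(f"Estimated total biological computation: {total_biological_cps:.0e} cps")
    # -> 1e+26 cps, matching the "about 10^26 cps in biological humanity" figure above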
The more controversial issue is whether we will have the software, because it is not sufficient simply to have powerful computers; we need to actually understand how human intelligence works. That doesn't necessarily mean copying every single pattern of every dendrite and ion channel. It really means understanding the basic principles of how the human brain performs certain tasks, such as remembering, reasoning, and recognizing patterns. That is a grand project, which I refer to as reverse-engineering the human brain, and it is far further along than many people realize. We see exponential growth in every aspect of it. For instance, the spatial resolution of brain scanning, measured in 3D volume, is doubling every year. For the first time we can actually see individual interneuronal connections in living brains, and see them signaling in real time, which was not feasible a few years ago. The amount of data that we are obtaining on the brain is doubling every year, and we are showing that we can turn this data into working models; in the book I highlight a couple of dozen simulations of different regions of the brain. For example, there is now a simulation of the cerebellum, an important region of the brain devoted to skill formation, which comprises over half of the neurons in the brain.
I make the case that we will have the principles of operation understood well within twenty years. By the end of the 2020s, we will have both the hardware and the software to create human levels of intelligence in a machine, including emotional intelligence, which is really the cutting edge of intelligence. Given that machines are already superior to humans in certain respects, the combination of human-level intelligence with those existing advantages will be quite formidable, and it will continue to grow exponentially. Nonbiological intelligence will be able to examine its own source code and improve it in an iterative design cycle. We are doing something like that now with biotechnology, by reading our genes. So in the GNR revolutions I write about, R really stands for intelligence, which is the most powerful force in the universe. It is therefore the most influential of the revolutions.
Question 5: Nanotechnology plays a key role in your forecasts. What advice would you give to someone wanting to invest today in nanotechnology corporations?
Nanotechnology developments are currently in their formative stages. There are early applications of nanotechnology, but these do not represent the full vision of nanotechnology, the vision that Eric Drexler articulated in 1986 and later developed in his doctoral thesis (no one was willing to supervise that radical, interdisciplinary thesis except my own mentor, Marvin Minsky). We have shown the feasibility of manipulating matter at the molecular level, which is what biology does. One of the ways to create nanotechnology is to start with biological mechanisms and modify them to extend the biological paradigm, to go beyond proteins. That vision of molecular nanotechnology assembly (using massively parallel, fully programmable processes to grow objects with remarkable properties) is about twenty years away. There will be a smooth progression, with early-adopter applications along the way, many of which I discuss in the book.
There are early applications in terms of nanoparticles. These nanoparticles have unique features due to their nanoscale components, but this is a slightly different concept: we are using the special properties of nanoscale objects, but we are not actually building objects molecule by molecule. So the truly revolutionary aspect of nanotechnology is a couple of decades away, and it is too early to say which companies will lead it. Intel sees that the future of electronics is nanotechnology, and by some definitions today's electronics are already nanotechnology. Undoubtedly, some corporations that are small today will come to dominate. When search engines were in their formative stage, it would have been difficult to foresee that two Stanford graduate students would come to dominate that field. Nanotechnology is already a multi-billion-dollar industry, and it will continue growing as we get closer to molecular manufacturing. When we actually have molecular manufacturing, it will be transformative: we will be able to inexpensively manufacture almost anything we need from feedstock materials and these information processes.
Question 6: You write in The Singularity is Near of feeling somewhat alone in your beliefs. How has the mainstream scientific community responded to your prognostications?
Actually quite well. The book has been very well received; it has gotten very positive reviews in mainstream publications such as the New York Times and the Wall Street Journal. It has done very well commercially: it reached #1 on Amazon's science list and ended up the fourth best-selling science book of 2005 despite coming out at the end of the year. The New York Times cited it as the 13th most blogged-about book of 2005. In terms of the broader intellectual debate, I believe it has gotten a lot of respect and has been well received. There are individuals who don't read the arguments and just read the conclusions. For some of these individuals, the conclusions are so distant from the conventional wisdom on these topics that they reject them out of hand. But for those who carefully read the arguments, the response is generally positive. This is not to say that everyone agrees with everything, but the book has gotten a lot of serious response and respect. I do believe that these ideas are becoming more widely distributed and accepted; I am obviously not the only person articulating these concepts. Nevertheless, the common wisdom is quite strong: even among friends and associates, conventional assumptions about the human life cycle, and the notion that life won't be much different in the future than it is today, still permeate people's thinking. Thoughts and statements regarding life's brevity and senescence are still quite influential. The deathist meme (that death gives meaning to life) is alive and well.
The biggest issue, which I lay out at the beginning of The Singularity is Near, is linear versus exponential thinking. It is remarkable how many thoughtful people, including leading scientists, think linearly. This is simply wrong, and I make that case with dozens of examples. The fact that someone is an expert in one aspect of technology or science doesn't mean that they have studied technology forecasting; relatively few futurists really have well-grounded methodologies. The common wisdom is to think linearly, to assume that the current pace of change will continue indefinitely. But this attitude is gradually changing as more and more people understand the exponential perspective and how explosive an exponential can be. That is the true nature of these technology trends.
Question 7: What about other technologies and industries, such as the textile, aerospace, or automotive industries? Are all technology fields experiencing exponential growth?
The key issue is that information technology and information processes progress at an exponential pace. Biological evolution itself was an information process: its backbone is the genetic code, which is a digital code. I show in my book how that process accelerated very smoothly in terms of the growth of complexity. The same is true of technological evolution when it has to do with information. If we can measure the information content, which we can readily do with things like computation and communication, then we can discern that it progresses in this exponential fashion and is subject to the law of accelerating returns.
Information technology needs to reach a point where it is capable of transforming an industry, and biology is a good example. Biology was not an information technology until recently; it was basically hit or miss. Drug development was called drug discovery, which meant that we didn't know why a drug worked and we had no theory of its operation. These drugs and tools were relatively crude and had many negative side effects. Some 99.9 percent of the drugs on the market were designed in this haphazard, pre-information-era fashion.
The new paradigm in biology is to understand these processes as information processes and to develop the tools to reprogram them, to actually change our genes. We still carry genetic programs that are obsolete. The fat insulin receptor gene, for example, tells the body to hold on to every calorie, since it is programmed to anticipate that the next hunting season may be a failure. That was a good program 10,000 years ago, but it is not a good program today. We have shown in experimental studies with mice that we can change those programs. There are many genes we would like to turn off, and there is also new genetic information we would like to insert. New gene therapy techniques are now beginning to work. We can turn enzymes, the workhorses of biology, on and off, and there are many examples of that. Most current drug development proceeds through this rational drug design. So biology is becoming an information technology, and we can see the clear exponential growth: the amount of genetic data we sequence is doubling every year, the speed with which we can sequence DNA is doubling every year, and the cost has come down by half every year. It took 15 years to sequence the HIV virus, but we sequenced the SARS virus in 31 days. AIDS drugs cost $30,000 per patient per year fifteen years ago, and they didn't work very well. Now they're down to $100 per patient per year in poor countries and work much better.
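To see how quickly an annual halving compounds, here is a minimal illustration; the one-year halving time is the figure quoted above, while the 5-, 10-, and 20-year horizons are arbitrary choices made only for the example.

    # How an annual halving compounds over time (the one-year halving time is the
    # figure quoted above; the horizons below are arbitrary choices for illustration):
    def reduction_factor(years: float, halving_time_years: float = 1.0) -> float:
        """Total cost reduction after `years` of steady exponential halving."""
        return 2 ** (years / halving_time_years)

    for horizon in (5, 10, 20):
        print(f"After {horizon:2d} years: costs fall by a factor of about {reduction_factor(horizon):,.0f}")
    # 5 years -> ~32x, 10 years -> ~1,024x, 20 years -> ~1,048,576x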
Fields such as energy are not yet information technologies, but that is going to change as well. For instance, in The Singularity is Near I describe how we could meet 100 percent of our energy needs with renewable energy within twenty years, using nanoengineered solar panels and fuel cells, by capturing only 3 percent of 1 percent of the sunlight that hits the Earth. That development will be driven by information technology, since nanoengineered devices will be able to meet our energy needs in a highly distributed, renewable, clean fashion. We will ultimately transform transportation in a similar way, with nanoengineered devices that can provide personal flying vehicles at very low cost. The transportation and energy industries are currently pre-information fields. Ultimately, however, information technologies will comprise almost everything of value, because we will be able to build anything at extremely low cost using nanoengineered materials and processes. We will have new ways of doing things like flying and generating energy.
Question 8: You have emphasized the superior mechanical and electronic properties of carbon nanotubes. When do you anticipate nanotubes being embedded in materials? When will we see the first computers with nanotube components?
There is actually a nanotube-based memory that may hit the market next year. It is a dense, two-dimensional device with attractive properties, but three-dimensional devices are still about a decade and a half away. There are alternatives to nanotubes, such as DNA itself, which has potential uses outside of biology because of its affinity for linking to itself; DNA could also be used structurally. But realizing the full potential of three-dimensional structures based on either carbon nanotubes or DNA is a circa-2020 technology.
Question 9: Most predictions of future technological developments have been inaccurate. What techniques do you use to improve the accuracy of your prognostications?
I have a team of people who gather data on many different industries and phenomena, and we build mathematical models. More and more areas of science and technology are now measurable in information terms. I use a data-driven approach, and I endeavor to build theoretical models of why these technologies progress as they do. I have this theory of the law of accelerating returns, which is a theory of evolution, and I then try to build mathematical models of how it applies to different phenomena and industries. Most futurists don't use this type of methodology, and some just make guesses. Many futurists are simply unaware of these trends, so they make linear models. It is often said that we overestimate what can be done in the short term, because developing a technology turns out to be more difficult than we expect, but dramatically underestimate what can be achieved in the long term, because people think linearly.
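A minimal sketch of this kind of trend extrapolation, in the spirit of the data-driven approach described above, is shown below: fit a straight line to the logarithm of some capability measure and project it forward. The data points are purely hypothetical placeholders, and the fitting choice (an ordinary least-squares log-linear fit) is an assumption of this sketch rather than a description of the actual models used.

    import numpy as np

    # Hypothetical placeholder data: a capability measure observed every two years.
    years = np.array([2000, 2002, 2004, 2006, 2008, 2010])
    capability = np.array([1.0, 4.1, 15.8, 66.0, 250.0, 1010.0])   # arbitrary units

    # Fit a straight line to the logarithm of the measure (a log-linear, i.e. exponential, fit).
    slope, intercept = np.polyfit(years, np.log10(capability), 1)
    doubling_time = np.log10(2) / slope          # years per doubling implied by the fit

    def project(year):
        """Extrapolate the fitted exponential trend to a future year."""
        return 10 ** (intercept + slope * year)

    print(f"Fitted doubling time: {doubling_time:.2f} years")
    print(f"Projected capability in 2020: {project(2020):.3g} (same arbitrary units)")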
Question 10: The Government has traditionally played a pivotal role in developing new technologies. Is the U.S. Government doing enough to support the nascent nanotechnology or the AI industries? Do these industries require Government support at this point?
These industries will both be propelled forward by enormous economic incentives. Nanotechnology will be able to create almost any physical product we need at very low cost, and these products will be quite powerful because they will have electronics and communications embedded throughout. So there is tremendous economic incentive to develop nanotechnology, and the same is true of artificial intelligence. Basic research has an important role to play; the Internet, for instance, came out of the ARPANET. The new worldwide mesh concept (having every device not simply connected to the net but actually becoming a node on the net, sending and receiving both its own and other people's messages) arose out of a Department of Defense concept and is now being adopted by civilian, commercial corporations. DARPA is playing a forward-looking role in such technologies as speech recognition and other AI fields.
In terms of national competitiveness, the key issue is that we are not graduating enough scientists and engineers. The numbers of individuals receiving advanced technical degrees are growing dramatically in China, Japan, Korea, and India; these figures actually resemble exponential curves. China in particular is greatly outpacing the U.S. in producing scientists and engineers, at both the undergraduate and doctoral levels, in every scientific field. Although this is a real concern, there is now one integrated world economy, so we shouldn't see this problem as simply the U.S. versus China. I am glad to see China and India economically engaged, and this isn't a zero-sum game: Chinese engineers are creating value. But to the extent that we care about national competitiveness, it is a concern. In the end, however, this comes down to what fields teenagers choose to enter.
The U.S. does lead in the application of these technologies. I speak at many conferences each year, including music conferences, graphic arts conferences, library conferences, and so on. Yet every conference I attend reads like a computer conference, because each of these fields is so heavily engaged with computer technology. The level of computer technology in use across such a diversity of fields is quite impressive.
Question 11: How do you envision the world in 2015? What economic and technological predictions would you make for that year?
By 2015, computers will be largely invisible, and they will be very small. We will be dealing with a mesh of computing and communications embedded in the environment and in our clothing. People in 2005 face a dilemma: on the one hand, they want large, high-resolution displays, which they can obtain by buying expensive 72-inch flat-panel plasma monitors; on the other hand, they want portable devices, which have limited display capabilities. By 2015, we will have images projected directly onto our retinas. This allows for a very high-resolution display that encompasses the entire visual field of view yet is physically tiny. Such devices exist in 2005 and are used in high-performance applications, such as putting a soldier or a surgeon into a virtual reality environment. So in 2015, if we want a large, high-resolution computer image, it will simply appear virtually in the air. We will have augmented reality, including pop-up displays explaining what is happening in the real world, and we will be able to go into full-immersion, visual-auditory virtual reality environments.
We will have usable language technologies. These are beginning to emerge, and by 2015 they will be quite effective. In this visual field of view, we will have virtual personalities with which we can interact. Computers will have virtual assistants with sufficient command of speech recognition that you can discuss subjects with them. Search engines won't wait to be asked; they will track your conversation and attempt to anticipate your needs and help you with routine transactions. These virtual assistants won't be at the human level (that won't happen until we have strong AI), but they will be useful, and many transactions will be mediated by them. Computing will be very powerful, and it will be a mesh: individuals who need the power of a million computers for 25 milliseconds will be able to obtain that on demand.
By 2015, we will have real traction with nanotechnology. I believe that we will be well on the way to overcoming major diseases, such as cancer, heart disease, and diabetes, through the biotechnology revolution that we talked about above. We will also make progress in learning how to stop and even reverse the aging process.
This interview was conducted by Sander Olson. The opinions expressed do not necessarily represent those of CRN. Reprinted with permission.
It is, in the view of Columbia physicist Brian Greene, the deepest question in all of science. Renowned cosmologist Paul Davies agrees, calling it the biggest of the Big Questions.
And just what is this momentous question?
Not the mystery of life's origin, though the profundity of that particular puzzle prompted Charles Darwin to remark that it was probably forever beyond the reach of human comprehension. A dog, Darwin famously commented, might as easily contemplate the mind of Newton.
Not the inscrutable manner in which consciousness emerges from the interaction and interconnection of neurons in the human skull, though a cascade of Nobel prizes will undoubtedly reward the teams of neuroscientists who achieve progress in understanding this phenomenon.
And not even the future course of biological and cultural evolution on planet Earth, though the great Darwinian river is surely carving a course that today's most visionary evolutionary theorists will have difficulty even imagining.
No, the question is more profound, more fundamental, less tractable than any of these. It is this: why is the universe life-friendly?
Life-friendly, you might ask incredulously? The universe is life-friendly? The heck it is!
We have been taught since childhood that the universe is a horrifyingly hostile place. Violent black holes, planets and moons searing with unbearable heat or deep-frozen at temperatures that make Antarctica look tropical, and the vastness of interstellar space dooming us to perpetual physical isolation from our nearest starry neighbors: this is the depressing picture of the cosmos beyond Earth that dominates the popular imagination.
This vision is profoundly wrong at a fundamental level. As scientists are now beginning to realize, to their astonishment, the truly amazing thing about our universe is how strangely and improbably life-friendly, or anthropic, it is. As Cambridge evolutionary biologist Simon Conway Morris puts it in his new book Life's Solution, "On a cosmic scale, it is now widely appreciated that even trivial differences in the starting conditions [of the cosmos] would lead to an unrecognizable and uninhabitable universe."
Simply put, if the Big Bang had detonated with slightly greater force, the cosmos would be essentially empty by now. If the primordial explosion had propelled the initial payload of cosmic raw materials outward with slightly lesser force, the universe would long ago have recollapsed in a Big Crunch. In neither case would human beings or other life forms have had time to evolve.
As Stephen Hawking asks, "Why is the universe so close to the dividing line between collapsing again and expanding indefinitely? In order to be as close as we are now, the rate of expansion early on had to be chosen fantastically accurately."
It is not only the rate of cosmic expansion that appears to have been selected, with phenomenal precision, in order to render our universe fit for carbon-based life and the emergence of intelligence. A multitude of other factors are fine-tuned with fantastic exactitude, to a degree that renders the cosmos almost spookily bio-friendly. One of the universe's life-friendly attributes is the odd proclivity of stellar nucleosynthesis (the process by which simple elements like hydrogen and helium are transmuted into heavier elements in the hearts of giant stars and in supernovae) to yield copious quantities of carbon, the chemical epicenter of life as we know it.
As British astronomer Fred Hoyle pointed out, in order for carbon to exist in the abundant quantities that we observe throughout the cosmos, the mechanism of stellar nucleosynthesis must be exquisitely fine-tuned in a very special way.
Yet another bio-friendly feature of the cosmos is the physical dimensionality of our universe: why are there just three extended dimensions of space rather than one or two, or even the ten spatial dimensions contemplated by M-theory? As has been known for more than a century, in any other dimensional setup stable planetary orbits would be impossible, and life would not have time to get started before planets skittered off into deep space or plunged into their suns.
For centuries, it seemed that the dimensionality of the universe (three dimensions of space plus one dimension of time) was a matter of axiomatic truth, rather like the propositions of geometry. In fact, precisely like the propositions of geometry. That was before the birth of superstring theory and its successor, M-theory. I am going to get into M-theory more deeply in a moment, but for now I want to highlight its insistence that there are actually ten dimensions of space and one dimension of time. The mystery is why only three of the spatial dimensions were inflated to cosmic proportions by the Big Bang while the remaining seven stayed inconceivably minuscule. If anything else had happened (if only two spatial dimensions had been inflated, or if four had been), then the universe would not have been set up to allow the emergence of life and mind as we know them.
Collectively, this stunning set of coincidences renders the universe eerily fit for life and intelligence. And the coincidences are built into the fundamental fabric of our reality. As British Astronomer Royal Sir Martin Rees says, "There are deep connections between stars and atoms, between the cosmos and the microworld . . . . Our emergence and survival depend on very special tuning of the cosmos." Or, as the eminent Princeton physicist John Wheeler put it, "It is not only that man is adapted to the universe. The universe is adapted to man. Imagine a universe in which one or another of the fundamental dimensionless constants of physics is altered by a few percent one way or the other? Man could never come into being in such a universe."
Scientists have been aware of this set of puzzles for decades and have given it a name (the anthropic cosmological principle), but there is a new urgency to the quest for a plausible explanation because of two very recent discoveries: the first at nature's largest scale and the second at its tiniest.
The first was the discovery of dark energy, which resulted from the observations of supernovae at extreme distances. Contrary to all expectations, the evidence showed that the expansion of the universe was speeding up, not slowing down. No one knows what is causing this phenomenon, although speculative explanations like leakage of gravity into extra unseen dimensions are beginning to show up in the scientific literature.
But for our purposes, what is particularly puzzling is why the strength of dark energy (which the Wilkinson Microwave Anisotropy Probe has revealed to be the predominant constituent of our cosmos) is so vanishingly small, yet not quite zero. If it were even a tad stronger, you see, the universe would have been emptied long ago, scrubbed clean of stars and galaxies well before life and intelligence could evolve.
The second discovery occurred in the realm of M-theory, whose previous incarnation was known as superstring theory. Those of you who have read Brian Greene's terrific book The Elegant Universe or watched the Nova series based on it will know that M-theory posits that subatomic particles like quarks, electrons, and neutrinos are really just different modes of vibration of tiny one-dimensional strings of energy. But what is truly strange about M-theory is that it allows a vast landscape of possible vibration modes of superstrings, only a tiny fraction of which correspond to anything like the subatomic particle world we observe and that is described by what is known as the Standard Model of particle physics.
Just how big is this landscape of possible alternative models of particle physics allowed by M-theory? According to Stanford physicist and superstring pioneer Leonard Susskind, the mathematical landscape is horrifyingly gigantic, permitting some 10^500 different and distinct environments, none of which appears to be mathematically favored, let alone foreordained by the theory. And in virtually none of those other mathematically permissible environments would matter and energy have possessed the qualities necessary for stars, galaxies, and carbon-based living creatures to emerge from the primordial chaos.
This is, as Susskind says, an intellectual cataclysm of the first magnitude, because it seems to deprive our most promising new theory of fundamental physics, M-theory, of the power to uniquely predict the emergence of anything remotely resembling our universe. As Susskind puts it, the picture of the universe emerging from the deep mathematical recesses of M-theory is not an elegant universe at all! It's a Rube Goldberg device, cobbled together by some unknown process in a supremely improbable manner that just happens to render the whole ensemble miraculously fit for life. In the words of University of California theoretical physicist Steve Giddings, "No longer can we follow the dream of discovering the unique equations that predict everything we see, and writing them on a single page. Or a tee-shirt! Predicting the constants of nature becomes a messy environmental problem. It has the complications of biology." Note the key word Giddings uses, biology, because we will be coming back to it shortly.
This really is, as Brian Greene says, the deepest problem in all of science. It really is, as Paul Davies says, the biggest of the Big Questions: why is the universe life-friendly?
If we put to one side theological approaches to this ultimate issue, what rational pathways forward are on offer from the scientific community? I suggest that three basic approaches are available. Two are familiar while the third is radically novel.
The first approach is to continue searching patiently for a unique final theory (something that you really could write on your tee-shirt, like E = mc^2), which might yet, against the odds, emerge from M-theory or one of its competitors (like loop quantum gravity) aspiring to the status of a so-called theory of everything. This is the fond hope of virtually every professional theoretical physicist, including those who have been driven to desperation by the horrendously messy and complex landscape of theoretically possible M-theory-allowed universes that distresses Susskind and other superstring theorists. Perhaps the laws and constants of nature (an ensemble the late New York Academy of Sciences president and physicist Heinz Pagels dubbed the "cosmic code") will, in the end, turn out to be uniquely specified by mathematics and thus subject to no conceivable variation. Perhaps the ultimate equations will someday slide out of the mind of a new colossus of physics as slickly and beautifully as E = mc^2 emerged from Einstein's brain. Perhaps, but that appears to be an increasingly unlikely prospect.
A second approach, born of desperation on the part of Susskind and others, is to overlay a refinement of Big Bang inflation theory called eternal chaotic inflation with an explanatory approach, traditionally reviled by most scientists, known as the weak anthropic principle. The weak anthropic principle merely states, in tautological fashion, that since human observers inhabit this particular universe, it must perforce be life-friendly or it would not contain any observers resembling ourselves. Eternal chaotic inflation, invented by Russian-born physicist Andrei Linde, asserts that instead of just one Big Bang there are, always have been, and always will be zillions of Big Bangs going off in inaccessible regions all the time. These Big Bangs constantly create zillions of new universes, and the whole ensemble constitutes a multiverse.
Now here's what happens when these two ideas (eternal chaotic inflation and the weak anthropic principle) are joined together. In each Big Bang, the laws, constants, and physical dimensionality of nature come out differently. In some, dark energy is stronger. In some, dark energy is weaker. In some, gravity is stronger. In some, gravity is weaker. This happens, according to M-theory-based cosmology, because the compact geometric shapes in which superstrings vibrate (known as Calabi-Yau shapes) evolve randomly and chaotically at the moment of each new Big Bang. The laws and constants of nature are constantly reshuffled by this process, like a cosmic deck of cards.
And here's the crucial part. Once in a blue moon, this random process of eternal chaotic inflation will yield a winning hand, as judged from the perspective of whether a particular new universe is life-friendly. That outcome will be pure chance: one lucky roll of the dice in an unimaginably vast cosmic crap shoot with 10^500 unfavorable outcomes for every winning turn.
Our universe was a big winner, of course, in the cosmic lottery. Our cosmos was dealt a royal flush. Here is how the eminent Nobel laureate Steven Weinberg explained this scenario in a New York Review of Books essay a couple of years ago: "The expanding cloud of billions of galaxies that we call the big bang may be just one fragment of a much larger universe in which big bangs go off all the time, each one with different values for the fundamental constants." It is no more a mystery that our particular branch of the multiverse exhibits life-friendly characteristics, according to Weinberg, than that life evolved on the hospitable Earth rather than some horrid place, like Mercury or Pluto.
If you find this scenario unsatisfactory (the weak anthropic principle superimposed on Andrei Linde's theory of eternal chaotic inflation), I can assure you that you are not alone. To most scientists, offering the tautological explanation that since human observers inhabit this particular universe, it must necessarily be life-friendly or else it would not contain any observers resembling ourselves is anathema. It just sounds like giving up.
In my view, there are two primary problems with the Weinberg/Susskind approach. First, universes spawned by Big Bangs other than our own are inaccessible from our own universe, at least with the experimental techniques currently available to scientists. So the approach appears to be untestable, perhaps untestable in principle. And testability is the hallmark of genuine science, distinguishing it from fields of inquiry like metaphysics and theology.
Second, the Weinberg/Susskind approach extravagantly violates the mediocrity principle. The mediocrity principle, a mainstay of scientific theorizing since Copernicus, is a statistically based rule of thumb that, absent contrary evidence, a particular sample (Earth, for instance, or our particular universe) should be assumed to be a typical example of the ensemble of which it is a part. The Weinberg/Susskind approach flagrantly flouts this principle. Instead, it takes refuge in a brute, unfathomable mystery (the conjectured lucky roll of the dice in a crap game of eternal chaotic inflation) and declines to probe seriously into the possibility of a naturalistic cosmic evolutionary process that could yield a life-friendly set of physical laws and constants on a nonrandom basis. It is as if Charles Darwin, contemplating the famous tangled bank (the arresting visual image with which he concludes The Origin of Species), had confessed not a magnificent obsession with understanding the mysterious natural processes that had yielded "endless forms most beautiful and most wonderful," but rather a smug satisfaction that the earthly biosphere must of course have somehow evolved in a just-so manner mysteriously friendly to humans and other currently living species, or else Darwin and other humans would not be around to contemplate it!
Indeed, the situation that confronts cosmologists today is eerily reminiscent of that which faced biologists before Charles Darwin propounded his revolutionary theory of evolution. Darwin confronted the seemingly miraculous phenomenon of a fine-tuned natural order in which every creature and plant appeared to occupy a unique and well-designed niche. Refusing to surrender to the brute mystery posed by the appearance of nature's design, Darwin masterfully deployed the art of metaphor to elucidate a radical hypothesis (the origin of species through natural selection) that explained the apparent miracle as a natural phenomenon.
The metaphor furnished by the familiar process of artificial selection was Darwin's crucial stepping stone. Indeed, the practice of artificial selection through plant and animal breeding was the primary intellectual model that guided Darwin in his quest to solve the mystery of the origin of species and to demonstrate in principle the plausibility of his theory that variation and natural selection were the prime movers responsible for the phenomenon of speciation. So, too, today a few venturesome cosmologists have begun to use the same poetic tool utilized by Darwin (the art of metaphorical thinking) to develop novel intellectual models that might offer a logical explanation for what appears to be an unfathomable mystery: the apparent fine-tuning of the cosmos.
The cosmological metaphor chosen by these iconoclastic theorists is life itself. What if life, they ask in the spirit of the great Belgian biologist and Nobel laureate Christian de Duve, were not a cosmic accident but the essential reality at the very heart of the elegant machinery of the universe? What if Darwin's principle of natural selection were merely a tiny fractal embodiment of a universal life-giving principle that drives the evolution of stars, galaxies, and the cosmos itself?
This, as you may have guessed, is the headline summarizing the third (and radically novel) approach to answering the biggest of the Big Questions: why is the universe life-friendly? It is the approach outlined at length in my new book BIOCOSM.
Before I get into this third approach in more detail, I want to say something upfront about scientific speculation. The approach I am about to outline for you is intentionally and forthrightly speculative. Following the example of Darwin, I have attempted to crudely frame a radically new explanatory paradigm well before all of the required building materials and construction tools are at hand. Darwin had not the slightest clue, for instance, that DNA is the molecular device used by all life-forms on Earth to accomplish the feat of what he called inheritance. Indeed, as cell biologist Kenneth R. Miller noted in Finding Darwin's God, Charles Darwin worked in almost total ignorance of the fields we now call genetics, cell biology, molecular biology, and biochemistry. Nonetheless, Darwin managed to put forward a plausible theoretical framework that succeeded magnificently despite the fact that it was utterly dependent on hypothesized but completely unknown mechanisms of genetic transmission.
As Darwins example shows, plausible and deliberate speculation plays an essential role in the advancement of science. Speculation is the means by which new scientific paradigms are initially constructed, to be either abandoned later as wrong-headed detours or vindicated as the seeds of scientific revolutions.
Another lesson drawn from Darwin's experience is important to note at the outset. Answering the question of why the most eminent geologists and naturalists had, until shortly before publication of The Origin of Species, disbelieved in the mutability of species, Darwin responded that this false conclusion was almost inevitable as long as the history of the world was thought to be of short duration. It was geologist Charles Lyell's speculations on the immense age of Earth that provided the essential conceptual framework for Darwin's new theory. Lyell's vastly expanded stretch of geological time provided an ample temporal arena in which the forces of natural selection could sculpt and reshape the species of Earth and achieve nearly limitless variation.
The central point is that collateral advances in sciences seemingly far removed from cosmology can help dissipate the intellectual limitations imposed by common sense and naive human intuition. And, in an uncanny reprise of the Lyell/Darwin intellectual synergy, it is a realization of the vastness of time and history that gives rise to the crucial insight. Only in this instance, the vastness of which I speak is the vastness of future time and future history. In particular, sharp attention must be paid to the key conclusion of Princeton physicist John Wheeler: most of the time available for life and intelligence to achieve their ultimate capabilities lies in the distant cosmic future, not in the cosmic past. As cosmologist Frank Tipler bluntly stated, "Almost all of space and time lies in the future. By focusing attention only on the past and present, science has ignored almost all of reality. Since the domain of scientific study is the whole of reality, it is about time science decided to study the future evolution of the universe."
That is exactly what I have attempted to do in BIOCOSM in order to explore, in a tentative way, a possible third pathway to an answer to the biggest of the Big Questions. I call that third pathway the Selfish Biocosm hypothesis.
Originally presented in peer-reviewed scientific papers published in Complexity, Acta Astronautica, and the Journal of the British Interplanetary Society, my Selfish Biocosm hypothesis suggests that in attempting to explain the linkage between life, intelligence, and the anthropic qualities of the cosmos, most mainstream scientists have, in essence, been peering through the wrong end of the telescope. The hypothesis asserts that life and intelligence are, in fact, the primary cosmological phenomena and that everything else (the constants of nature, the dimensionality of the universe, the origin of carbon and the other elements in the hearts of stars and supernovas, the pathway traced by biological evolution) is secondary and derivative. In the words of Martin Rees, my approach is based on the proposition that what we call the fundamental constants, the numbers that matter to physicists, may be "secondary consequences of the final theory, rather than direct manifestations of its deepest and most fundamental level."
I began developing the Selfish Biocosm hypothesis as an attempt to supply two essential elements missing from a novel model of cosmological evolution put forward by astrophysicist Lee Smolin. Smolin had come up with the intriguing suggestion that black holes are gateways to new baby universes and that a kind of Darwinian population dynamic rewards those universes most adept at producing black holes with the greatest number of progeny. Proliferating populations of baby universes emerging from the loins (metaphorically speaking) of black hole-rich mother universes thus come to dominate the total population of the multiverse, a theoretical ensemble of all mother and baby universes. Black hole-prone universes also happen to coincidentally exhibit anthropic qualities, according to Smolin, thus accounting for the bio-friendly nature of the average cosmos in the ensemble, more or less as an incidental side effect.
This was a thrilling conjecture because for the first time it posited a cosmic evolutionary process endowed with what economists call a utility function (i.e., a value that was maximized by the hypothesized evolutionary process, which in the case of Smolin's conjecture was black hole maximization).
However, Smolin's approach was seriously flawed. As the computer genius John von Neumann demonstrated in a famous 1948 Caltech lecture, "The General and Logical Theory of Automata," any self-reproducing object (mouse, bacterium, human, or baby universe) must, as a matter of inexorable logic, possess four essential elements (sketched schematically just after this list):
1. A blueprint, providing the plan for construction of offspring;
2. A factory, to carry out the construction;
3. A controller, to ensure that the factory follows the plan; and
4. A duplicating machine, to transmit a copy of the blueprint to the offspring.
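To make the mapping in the following paragraphs easier to track, here is a minimal schematic sketch of these four elements expressed as a data structure. The class and function names, and the toy "self-copying" example, are illustrative assumptions of this sketch, not von Neumann's own formalism or any part of the hypothesis itself.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Replicator:
        blueprint: str                                   # 1. plan for constructing offspring
        factory: Callable[[str], "Replicator"]           # 2. builds an offspring from a blueprint
        controller: Callable[["Replicator", str], bool]  # 3. checks that the factory followed the plan
        duplicator: Callable[[str], str]                 # 4. copies the blueprint into the offspring

        def reproduce(self) -> "Replicator":
            child = self.factory(self.blueprint)               # factory builds from the plan
            assert self.controller(child, self.blueprint)      # controller verifies the build
            child.blueprint = self.duplicator(self.blueprint)  # duplicator hands the plan to the offspring
            return child

    # Toy usage: a trivial "self-copying" configuration.
    def toy_factory(plan: str) -> "Replicator":
        return Replicator("", toy_factory, toy_controller, str)

    def toy_controller(child: "Replicator", plan: str) -> bool:
        return child is not None   # a real controller would verify the build against the plan

    parent = Replicator("cosmic code", toy_factory, toy_controller, str)
    print(parent.reproduce().blueprint)   # -> "cosmic code"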
In the case of Smolin's hypothesis, one could logically equate the collection of physical laws and constants that prevail in our universe with a von Neumann blueprint and the universe at large with a kind of enormous von Neumann factory. But what could possibly serve as a von Neumann controller or a von Neumann duplicating machine? My goal was to rescue Smolin's basic innovation (a cosmic evolutionary model that incorporated a discernible utility function) by proposing scientifically plausible candidates for the two missing von Neumann elements.
The hypothesis I developed was based on a set of conjectures put forward by Martin Rees, John Wheeler, Freeman Dyson, John Barrow, Frank Tipler, and Ray Kurzweil. Their futuristic visions suggested collectively that the ongoing process of biological and technological evolution was sufficiently robust, powerful, and open-ended that, in the very distant future, a cosmologically extended biosphere could conceivably exert a global influence on the physical state of the entire cosmos. Think of this idea as the Gaia principle extended universe-wide.
A synthesis of these insights led me directly to the central claim of the Selfish Biocosm hypothesis: that the ongoing process of biological and technological emergence, governed by still largely unknown laws of complexity, could function as a von Neumann controller, and that a cosmologically extended biosphere could serve as a von Neumann duplicating machine in a conjectured process of cosmological replication.
I went on to speculate that the means by which the hypothesized cosmological replication process could occur was through the fabrication of baby universes by highly evolved intelligent life forms. These hypothesized baby universes would themselves be endowed with a cosmic code (an ensemble of physical laws and constants) that would be life-friendly, so as to enable life and ever more competent intelligence to emerge and eventually to repeat the cosmic reproduction cycle. Under this scenario, the physical laws and constants serve a cosmic function precisely analogous to that of DNA in earthly creatures: they furnish a recipe for the birth and evolution of intelligent life and a blueprint, which provides the plan for construction of offspring.
I should add that if the fabrication of baby universes, which is the key step in the hypothesized cosmic reproductive cycle that I just outlined, sounds to you like outrageous science fiction ("an X-file too far," in the words of one of my critics), you should be aware that the topic has begun to be rigorously explored by such eminent physicists as Andrei Linde of Stanford, Alan Guth of MIT (the father of inflation theory), Martin Rees of Cambridge, the eminent astronomer Edward Harrison, and physicists Lawrence Krauss and Glenn Starkman.
This central claim of the Selfish Biocosm hypothesis offered a radically new and quite parsimonious explanation for the apparent mystery of an anthropic or bio-friendly universe. If highly evolved intelligent life is the von Neumann duplicating machine that the cosmos employs to reproduce itself, if intelligent life is, in effect, the reproductive organ of the universe, then it is entirely logical and predictable that the laws and constants of nature should be rigged in favor of the emergence of life and the evolution of ever more capable intelligence. Indeed, the existence of such a propensity is a falsifiable prediction of the hypothesis.
Now, at this point you are probably saying to yourself, "Wow, with a theory that crazy and radical, this Gardner fellow must have been shunned by the scientific establishment." And indeed some mainstream scientists have commented that the ideas advanced in my book BIOCOSM are impermissibly speculative or impossible to verify. A few have hurled what scientists view as the ultimate epithet: that my theory constitutes metaphysics instead of genuine science.
On the other hand, some of the brightest and most far-sighted scientists have been extremely encouraging. John Barrow and Freeman Dyson have offered favorable comments and reviews. In particular, BIOCOSM has received outspoken endorsements from Sir Martin Rees (the UK Astronomer Royal and winner of the top scientific prize in the world for cosmology) and Paul Davies (the prominent astrophysicist and author and winner of the Templeton Prize).
As I continue to explore this hypothesis in the future, what will be of utmost interest to me and my sympathizers is whether it can generate what scientists call falsifiable implications. Falsifiability, or testability of claims, remember, is the hallmark of genuine science, distinguishing it from metaphysics and faith-based belief systems.
I believe that the Selfish Biocosm hypothesis does qualify as a genuine scientific conjecture on this ground. A key implication of the hypothesis is that the progression of the cosmos through critical thresholds in its life cycle, while perhaps not strictly inevitable, is relatively robust. One such critical threshold is the emergence of human-level and higher intelligence, which is essential to the scaling up of biological and technological processes to the stage at which those processes could conceivably exert an influence on the global state of the cosmos.
The conventional wisdom among evolutionary theorists, typified by the thinking of the late Stephen Jay Gould, is that the abstract probability of the emergence of anything like human intelligence through the natural process of biological evolution was vanishingly small. According to this viewpoint, the emergence of human-level intelligence was a staggeringly improbable contingent event. A few distinguished contrarians like Simon Conway Morris, Robert Wright, E. O. Wilson, and Christian de Duve take an opposing position, arguing on the basis of the pervasive phenomenon of convergent evolution and other evidence that the appearance of human-level intelligence was highly probable, if not virtually inevitable. The latter position is consistent with the Selfish Biocosm hypothesis while the Gould position is not.
In my book BIOCOSM and in a preceding scientific paper delivered at the International Astronautical Congress, I suggest that the issue of the robustness of the emergence of human-level and higher intelligence is potentially subject to experimental resolution by means of at least three realistic tests: SETI research, artificial life evolution, and the emergence of transhuman computer intelligence predicted by computer science theorist Ray Kurzweil and others. The discovery of extraterrestrial intelligence; the discovery that artificial life forms existing and evolving in software environments can acquire autonomy and intelligence; and the emergence of a capacity on the part of advanced self-programming computers to attain and then exceed human levels of intelligence: all are falsifiable implications of the Selfish Biocosm hypothesis, because each is consistent with the notion that the emergence of ever more competent intelligence is a robust natural phenomenon. These tests don't, of course, conclusively answer the question of whether the hypothesis correctly describes ultimate reality. But such a level of certainty is not demanded of any scientific hypothesis in order to qualify it as genuine science.
Let me conclude by asking whether the Selfish Biocosm hypothesis promotes or demotes the cosmic role of humanity. Have I introduced a new anthropocentrism into the science of cosmology? If so, then you should be suspicious of my new approach on this basis alone because, as Sigmund Freud pointed out long ago, new scientific paradigms must meet two distinct criteria to be taken seriously: they must reformulate our vision of physical reality in a novel and plausible way and, equally important, they must advance the Copernican project of demoting human beings from the centerpiece of the universe to a product of natural processes.
At first blush, the Selfish Biocosm hypothesis may appear to be hopelessly anthropocentric. Freeman Dyson once famously proclaimed that the seemingly miraculous coincidences exhibited by the physical laws and constants of inanimate nature (factors that render the universe so strangely life-friendly) indicated to him that "the more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense knew we were coming." This strong anthropic perspective may seem uplifting and inspiring, but a careful assessment of the new vision of a bio-friendly universe revealed by the Selfish Biocosm hypothesis yields a far more sobering conclusion.
To regard the pageant of life’s origin and evolution on Earth as a minor subroutine in an inconceivably vast ontogenetic process through which the universe prepares itself for replication is scarcely to place humankind at the epicenter of creation. Far from offering an anthropocentric view of the cosmos, the new perspective relegates humanity and its probable progeny species (biological or mechanical) to the functional equivalents of mitochondria: formerly free-living bacteria whose special talents were harnessed in the distant past when they were ingested and then pressed into service as organelles inside eukaryotic cells.
The essence of the Selfish Biocosm hypothesis is that the universe we inhabit is in the process of becoming pervaded with increasingly intelligent life, but not necessarily human or even human-successor life. Under the theory, the emergence of life and increasingly competent intelligence are not meaningless accidents in a hostile, largely lifeless cosmos but at the very heart of the vast machinery of creation, cosmological evolution, and cosmic replication. However, the theory does not require or even suggest that the life and intelligence that emerge be human or human-successor in nature.
The hypothesis simply asserts that the peculiarly life-friendly laws and constants that prevail in our universe serve a function precisely equivalent to that of DNA in living creatures on Earth, providing a recipe for development and a blueprint for the construction of offspring.
Finally, the hypothesis implies that the capacity for the universe to generate life and to evolve ever more capable intelligence is encoded as a hidden subtext to the basic laws and constants of nature, stitched like the finest embroidery into the very fabric of our universe. A corollary, and a key falsifiable implication of the Selfish Biocosm theory, is that we are likely not alone in the universe but are probably part of a vast, yet undiscovered transterrestrial community of lives and intelligences spread across billions of galaxies and countless parsecs. Under the theory, we share a possible common fate with that hypothesized community: to help shape the future of the universe and transform it from a collection of lifeless atoms into a vast, transcendent mind.
The inescapable implication of the Selfish Biocosm hypothesis is that the immense saga of biological evolution on Earth is one tiny chapter in an ageless tale of the struggle of the creative force of life against the disintegrative acid of entropy, of emergent order against encroaching chaos, and ultimately of the heroic power of mind against the brute intransigence of lifeless matter.
In taking full measure of the seeming miracle of a bio-friendly universe we should obviously be skeptical of wishful thinking and just-so stories. But we should not be so dismissive of new approaches that we fail to relish the sense of wonder at the almost miraculous ability of science to fathom mysteries that once seemed impenetrable: a sense perfectly captured by the great British innovator Michael Faraday when he summarily dismissed skepticism about his almost magical ability to summon up the genie of electricity simply by moving a magnet in a coil of wire.
As Faraday said, "Nothing is too wonderful to be true if it be consistent with the laws of nature."
This article is a response to Richard Eckersley’s comments on Kurzweil’s article, Reinventing Humanity. You can also read other responses to Kurzweil’s article by Terry Grossman, John Smart, J. Storrs Hall, and Damien Broderick.
Richard Eckersley’s idyllic notion of human life hundreds of years ago belies our scientific knowledge of history. Two hundred years ago, there was no understanding of sanitation, so bacterial infections were rampant. There were no antibiotics and no social safety nets, so an infectious disease was a disaster, plunging a family into desperation. Thomas Hobbes’s characterization in 1651 of human life as "solitary, poor, nasty, brutish, and short" was on the mark. Even ignoring infant mortality, life expectancy was in the 30s only a couple of hundred years ago. Schubert’s and Mozart’s deaths at 31 and 35, respectively, were typical.
Eckersley bases his romanticized idea of ancient life on communication and the relationships fostered by communication. But much of modern technology is directed at just this basic human need. The telephone allowed people to be together even if far apart geographically. The Internet is the quintessential communication technology. Social networks and the panoply of new ways to make connection are creating communities based on genuine common interests rather than the accident of geography. This decentralized electronic communication is also highly democratizing. In a book I wrote in the mid-1980s, I predicted the demise of the Soviet Union from the impact of the then-emerging communication networks, and that is indeed what happened in the early 1990s. The democracy movement we have seen in the 1990s and since is similarly fueled by our unprecedented abilities to stay in touch.
If Eckersley really sticks to his own philosophy, he won’t be around for very long to influence the debate. I suspect, however, that he will take advantage of the life extension (and enhancement) technologies that will emerge in the decades ahead. And I hope that he does, so that we can continue this dialogue through this century and beyond.
© 2006 Ray Kurzweil. Reprinted with permission.
This article is a response to Ray Kurzweil’s feature in The Futurist, Reinventing Humanity. You can also read other responses to Kurzweil’s article by Terry Grossman, John Smart, J. Storrs Hall, and Damien Broderick. Ray Kurzweil’s response to this article can be found here.
I have sometimes asked audiences if they are inspired or excited by the sort of techno-utopian vision represented by the Singularity; almost no one is. In my surveys over the past decade, I found dwindling minorities of young people (one-fifth to one-quarter) believed in the sort of technical fixes to human problems that Ray Kurzweil champions, while an increased majority (about three-quarters) believe science and technology are alienating people from each other and from nature.
The question I ask is, why? Why pursue this future? I don’t pose this question dismissively, or derogatorily, but out of genuine curiosity and a desire for an open, honest conversation. I’m skeptical of arguments that say pre-technological humans led short, nasty, and brutish lives. Yes, life expectancy was lower (mainly because of high rates of infant mortality), but those who survived often lived socially and spiritually rich lives. It doesn’t make evolutionary sense to believe humans lived in misery until we discovered technological progress. Animals in the wild don’t live that way, and humans have been, for most of their history, animals in the wild.
The future world that Ray Kurzweil describes bears almost no relationship to human well-being that I am aware of. In essence, human health and happiness come from being connected and engaged, from being suspended in a web of relationships and interests (personal, social, and spiritual) that give meaning to our lives. The intimacy and support provided by close personal relationships seem to matter most; isolation exacts the highest price. The need to belong is more important than the need to be rich. Meaning matters more than money and what it buys.
We are left with the matter of destiny: it is our preordained fate, Kurzweil suggests, to advance technologically until the entire universe is at our fingertips. The question then becomes, preordained by whom or what? Biological evolution has not set this course for us. Is technology itself the planner? Perhaps it will eventually be, but not yet. Is God the entity doing the ordaining? A lot of religious people would have something to say about that, and they are likely to strenuously, even violently, oppose what the Singularity promises, as I have argued before (The Futurist, November-December 2001).
We are left to conclude that we will do this because it is we who have decided it is our destiny. But we have made no such decision, really, as the observations with which I began this commentary show.
On February 2, 2006, Richard wrote to KurzweilAI.net with this follow-up:
A key issue is this (taken from a 1997 paper of mine in Futures):
… Young people are not so much against science and technology: they acknowledge their importance in achieving a preferred future, and almost 70% said science and technology offered the best hope for meeting the challenges ahead. But they are astute enough to realise science and technology are tools, and their impacts depend on who controls them and whose interests they serve.
They expect to see new technologies used further to entrench and concentrate wealth, power and privilege: for example, they were almost twice as likely to believe that governments would use new technologies to watch and regulate people more as they were to believe that these technologies would empower people and strengthen democracy. They want to see new technologies used to help create closer-knit communities of people living a sustainable lifestyle: for example, they recognised the potential for advances in information and communication technologies to facilitate the creation of overlapping communities (virtual and real, global and local) and the possibility of a sustainable way of life through greater use of alternative energy technologies and renewable resources….
© 2006 Richard Eckersley. Reprinted with permission.
This article is a response to Ray Kurzweil’s feature in The Futurist, Reinventing Humanity. You can also read other responses to Kurzweil’s article by Terry Grossman, John Smart, J. Storrs Hall, and Richard Eckersley. Ray Kurzweil’s response to Eckersley’s comments can be found here.
A quarter century ago, we’d have laughed at the prospect of "Dick Tracy" cell-phones with cameras; now they’re everywhere, and nobody noticed after the first few days. So the jump to the idea of a Singularity is not really extraordinary. But, should we really expect ever more substantial changes to follow the same accelerating, headlong pace?
It’s reasonable to expect affordable computers to be smaller and more powerful, 1,000 times improved in a decade, one million times in 20 years, one billion in 30. By then, some machines might have capabilities to rival the human mind. A new intelligent species might share the planet with us.
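As a rough sanity check on those multiples, here is a minimal sketch (mine, not Broderick’s) that simply compounds an assumed price-performance doubling time of about one year; the doubling time is an illustrative assumption, not a figure from the article.

```python
# Hypothetical back-of-envelope check: how capability compounds if
# price-performance doubles roughly once a year (an assumed rate that
# reproduces the ~1,000x-per-decade figure cited above).
doubling_time_years = 1.0  # assumption, not a measured constant

for years in (10, 20, 30):
    improvement = 2 ** (years / doubling_time_years)
    print(f"after {years} years: ~{improvement:,.0f}x")

# Prints roughly:
#   after 10 years: ~1,024x             (the "1,000 times" in a decade)
#   after 20 years: ~1,048,576x         (about one million)
#   after 30 years: ~1,073,741,824x     (about one billion)
```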
In addition, developing technologies such as molecular manufacture (nanotechnology) will allow the very engines of productivity to be copied cheaply and distributed widely. If that happens, the gap between rich and "poor" might diminish. However, it will only occur if we find ways to prevent portable nano-factories from making lethal weapons available to any child or psychopath. We’ll be able to solve most of the problems that currently vex us: global warming (to the extent that it’s caused by humans), water and food shortages, provision of clean, cheap power, and so on.
There is a scary downside that I discussed nearly a decade ago in my book The Spike: Dirt-cheap molecular manufacture may end poverty and strife, but there exists a risk that a world of lotus-eaters will degenerate into gang wars among those for whom life retains no discipline or meaning outside of arbitrary local status and violence. People (young men especially) with full bellies gained effortlessly, but lacking meaning in their lives, often find purpose in ganging up on each other in fits of murderous primate chest-pounding. Making Huxleian soma, or "feelies," the opiate of the people might help, but that, too, is a sickening prospect.
On the other hand, those strictly unforeseeable and mysterious changes captured in the word "Singularity" are likely to overwhelm and surpass such predictable downsides of any technological utopia or dystopia. The eeriest aspect of accelerating change is that we ourselves, and our children, will be the ones soaking in it. The sooner we start thinking seriously about the prospect, the better prepared we’ll be.
© 2006 Damien Broderick. Reprinted with permission.
This article is a response to Ray Kurzweil’s feature in The Futurist, Reinventing Humanity. You can also read other responses to Kurzweil’s article by Terry Grossman, John Smart, Damien Broderick, and Richard Eckersley. Ray Kurzweil’s response to Eckersley’s comments can be found here.
Some years ago, I reviewed Kurzweil’s earlier book, The Age of Spiritual Machines, for the Foresight Nanotech Institute’s newsletter. Shortly thereafter I met him in person at a Foresight event, and he had read the review. He told me, "Of all the people who reviewed my book, you were the only one who said I was too conservative!"
The Singularity is Near is very well researched, and I think that in general, Kurzweil’s predictions are about as good as it’s possible to get for things that far in advance. I still think he’s too conservative in one specific area: Synthetic computer-based artificial intelligence will become available well before nanotechnology makes neuron-level brain scans possible in the 2020s.
What’s happening is that existing technologies like functional MRI are beginning to give us a high-level functional block diagram of the brain’s processes. At the same time, the hardware capable of running a strong, artificially intelligent computer, by most estimates, is here now, though it’s still pricey.
Existing AI software techniques can build programs that are experts in any well-defined field. The breakthroughs necessary for such programs to learn for themselves could happen easily in the next decade, one or two decades before Kurzweil predicts.
Kurzweil finesses the issue of runaway AI by proposing a pathway where machine intelligence is patterned after human brains, so that they would have our morals and values built in. Indeed, this would clearly be the wise and prudent course. Unfortunately, it seems all too likely that a shortcut exists without that kind of safeguard. Corporations already use huge computer systems for data mining and decision support that employ sophisticated algorithms no human manager understands. It’s a very short step to having such a system make better decisions than the managers do, as far as the corporation’s bottom line is concerned.
The Singularity may mean different things to different people. To me, it is that point where intelligences significantly greater than our own control so many of the essential processes that figure in our lives that mere humans can’t predict what happens next. This future may be even nearer than Ray Kurzweil has predicted.
© 2006 J. Storrs Hall. Reprinted with permission.
This article is a response to Ray Kurzweil’s feature in The Futurist, Reinventing Humanity. You can also read other responses to Kurzweil’s article by Terry Grossman, J. Storrs Hall, Damien Broderick, and Richard Eckersley. Ray Kurzweil’s response to Eckersley’s comments can be found here.
I have a few differences of opinion with Kurzweil about the coming Singularity.
I think he is being overly optimistic about biotechnology’s ability to create substantially better biological human beings. While we’ll certainly learn to push human capacities to their natural limits in coming decades, I see nothing on the horizon that would allow us to exceed those limits. Biology seems far too frail, slow, complex, and well defended (both at the molecular level and with regard to social custom) for that to be plausible within any reasonable time frame. Furthermore, by the time we are able to substantially improve our biology, we probably won’t want to, as there will be far more interesting and powerful technological environments available to us instead. This points to the importance of understanding the relative accelerations of various technologies (in this case, biological vs. technological).
Kurzweil makes a major contribution to the literature on acceleration studies by clearly explaining technological acceleration curves. These acceleration curves show that the longer we use a technology, the more we get out of it: We use less energy, space, and time, and we get more capacity for less cost. Technological acceleration curves are a little-understood area, but thanks to pioneers like Kurzweil, interest and research in the field are advancing.
The notion that "the future can’t be predicted" is demonstrably false with regard to a wide number of accelerating physical-computational trends, even though we do not yet know specifically how those technologies will be implemented. We can no longer ignore the profound technological changes occurring all around us.
It’s also time we acknowledged the slowness of human biology compared to our technological progeny. Our machines are increasingly exceeding us in the performance of more and more tasks, from guiding objects like missiles or satellites to assembling other machines. They are merging with us ever more intimately, and are learning how to reconfigure our biology in new and significantly faster technological domains.
Something very interesting is happening, and human beings are selective catalysts, not absolute controllers, of this process. Let us face this openly, and investigate it actively, so that we may guide these developments as wisely as possible.
© 2006 John Smart. Reprinted with permission.
This article is a response to Ray Kurzweil’s feature in The Futurist, Reinventing Humanity. You can also read other responses to Kurzweil’s article by John Smart, J. Storrs Hall, Damien Broderick, and Richard Eckersley. Ray Kurzweil’s response to Eckersley’s comments can be found here.
I first met Ray Kurzweil in 1999 at a Foresight Institute meeting in Palo Alto. I was there to get some background information on nanotechnology for a new book I was writing. As I stood in the lunch line, a healthy-appearing man in front of me was engaged in animated conversation with a second, not nearly so healthy-looking man. Their topic of conversation was vitamins and nutritional supplementation, a topic of great interest to me, a nutritionally oriented M.D.
I joined the conversation, and the healthy-looking man introduced himself as Ray Kurzweil. Ray and I continued our dialog via email after the conference ended, and a few months later, he flew from his home in Boston to Frontier Medical Institute, my longevity clinic in Denver, for a comprehensive longevity medical evaluation. In Denver we performed an extensive battery of tests designed to uncover any health risks he might still have so that together we could better optimize Ray’s already very sophisticated program for health and longevity.
From the beginning, it was obvious that Ray would be a unique patient. I have many engineer patients in my practice (and Ray is an engineer by training), so I am not surprised when a patient comes to see me with a notebook of spreadsheets detailing various data extracted from their daily lives: blood pressure, weight, cholesterol, blood sugar levels, amount of exercise, etc., carefully tabulated for several years. But all previous data collections I had seen, even those organized into Excel and meticulously graphed, paled in comparison to Ray’s. His data collection was so thorough and meticulous that he could tell me what he ate for lunch on June 23, 1989 (as well as for every other day for several years before that date or since). And not only what he ate, but the number of grams of each serving and calories consumed, as well as the number of calories he burned through exercise, every day for decades!
As a result, it came as less of a surprise for me to learn that Ray was taking over 200 supplement pills a day. Ray’s approach had been to accurately assess his personal health risks and then quite simply to reprogram his biochemistry. Ongoing testing indicates that he is doing a remarkable job, as measurement of his biological age in my clinic shows that he is now almost two decades younger than his chronological age, and all of his health risks appear under optimal control.
Ray was already working on his new book, The Singularity Is Near, at that time, and I had just completed my first book, The Baby Boomers’ Guide to Living Forever. It was natural that our email dialog moved into discussion of the prospects for truly radical life extension for people of all ages, including older boomers like ourselves. As our emails multiplied into the many thousands, we decided to organize the information and see if we had the makings of a new book that we would coauthor. I created a preliminary table of contents, Ray organized the information from our emails, and another 10,000 emails or so later, our joint book, Fantastic Voyage: Live Long Enough to Live Forever, was written in the midst of Ray’s writing of The Singularity Is Near.
Ray felt that he was writing these books together as a unit and that there was synergy between them. The Singularity Is Near details Ray’s vision of the astounding possibilities of the world of the near future as the singularity unfolds sometime within the next few decades. In Fantastic Voyage we provide readers with the information they need to live long enough and remain healthy enough to fully experience the wonders of life in the post-singularity world. In writing these two books, Ray has painted a clear picture of the future and provided a blueprint for how to get there.
© 2006 Terry Grossman. Reprinted with permission.
Continued from Interview with Robert Freitas: Part 1.
Robert A. Freitas Jr., J.D., published the first detailed technical design study of a mechanical nanorobot to appear in a peer-reviewed mainstream biomedical journal and is the author of Nanomedicine, the first book-length technical discussion of the medical applications of nanotechnology and medical nanorobotics.
Yes, of course. Genetic engineering is a very powerful technology. Pre-nanotechnology treatments for some forms of cancer already exist. The emerging discipline of tissue engineering is already heading in the direction of building tissues and organs using special scaffolds that are impregnated with appropriate cells which grow into the matrix to form cohesive new tissues. Single-organ cloning is also on the horizon. But all of these treatments and organ substitutions could be accomplished with greater reliability, executed with greater speed, and completed in a side-effect free manner, using the tools of nanorobotic medicine. There are also many kinds of treatments, particularly those related to physical trauma, that can only be dealt with efficiently using advanced nanorobotic medicine.
The way I like to think about all this is to recognize that “nanomedicine” is most simply and generally defined as the preservation and improvement of human health, using molecular tools and molecular knowledge of the human body. Nanomedicine involves the use of three conceptual classes of molecularly precise structures: nonbiological nanomaterials and nanoparticles, biotechnology-based materials and devices, and nonbiological devices including nanorobotics.
In the near term, say, the next 5 years, the molecular tools of nanomedicine will include biologically active materials with well-defined nanoscale structures, including those produced by the methods of genetic engineering. For example, one of the first uses of “nanotechnology” in treating cancer employs engineered nanoparticles of various kinds to attempt a general cure while staying within the usual drug-treatment paradigm. Kopelman’s group at the University of Michigan has developed dye-tagged nanoparticles that can be inserted into living cells as biosensors. This quickly led to nanomaterials incorporating a variety of plug-in modules, creating molecular nanodevices for the early detection and therapy of brain cancer. One type of particle is attached to a cancer cell antibody that adheres to cancer cells, and is also affixed with a contrast agent to make the particle highly visible during MRI, while also enhancing the selective cancer-killing effect during subsequent laser irradiation of the treated brain tissue.
Another example from the University of Michigan is the dendrimers, tree-shaped synthetic molecules with a regular branching structure emanating outward from a core. The outermost layer can be functionalized with other useful molecules such as genetic therapy agents, decoys for viruses, or anti-HIV agents. The next step is to create dendrimer cluster agents, multi-component nanodevices called tecto-dendrimers built up from a number of single-dendrimer modules. These modules perform specialized functions such as diseased cell recognition, diagnosis of disease state, therapeutic drug delivery, location reporting, and therapy outcome reporting. The framework can be customized to fight a particular cancer simply by substituting any one of many possible distinct cancer recognition or “targeting” dendrimers. The larger trend in medical nanomaterials is to migrate from single-function molecules to multi-module entities that can do many things, but only at certain times or under certain conditions, exemplifying a continuing, and, in my view, inevitable, technological evolution toward a device-oriented nanomedicine.
In the mid-term, the next 5 or 10 years or so, knowledge gained from genomics and proteomics will make possible new treatments tailored to specific individuals, new drugs targeting pathogens whose genomes have now been decoded, and stem cell treatments to repair damaged tissue, replace missing function, or slow aging. We will see genetic therapies and tissue engineering, and many other offshoots of biotechnology, becoming more common in medical practice. We should also see artificial organic devices that incorporate biological motors or self-assembled DNA-based structures for a variety of useful medical purposes. And we’ll also see biological robots, derived from bacteria or other motile cells, that have had their genomes re-engineered and re-programmed.
So yes, there is a lot that pre-nanotechnology, or, more properly, pre-nanorobotic medicine can do to improve human health. But the advent of medical nanorobotics will represent a huge leap forward.
If we combine the benefits of a human physiology maintained at the level of effectiveness possessed by our bodies when we were children (e.g., dechronification), along with the ability to deal with almost any form of severe trauma (via nanosurgery), then there are very few diseases or conditions that cannot be cured using nanomedicine. The only major class of incurable illness which nanorobots can’t handle is the case of brain damage in which portions of your brain have been physically destroyed. This condition might not be reversible if unique information has been irrevocably lost (say, because you neglected to make a backup copy of this information). There are several other minor “incurable” conditions, but all of these similarly relate to the loss of unique information.
As noted in the previous interview, my view is that this change of emphasis is unlikely to affect the conduct of research in the field, or the activities of those few of us who are actually doing the research involved, because the distinction between “molecular assemblers” and “nanofactories” is largely cosmetic and because both approaches require almost exactly the same set of enabling technologies. At present we’re concentrating our efforts mostly on developing these component enabling technologies, not on integration of these technologies into larger systems. Systems analysis will come next.
Medical nanorobots small enough to go into the human bloodstream will be very complex machines. We don’t know exactly how to build them yet, but the overall pathway from here to there is slowly starting to come into focus. Building and deploying nanorobotic systems will require first the ability to build diamondoid structures to molecular precision, using atomic force microscopy or similar means along with the techniques of diamond mechanosynthesis. My early work on diamond mechanosynthesis is described in a lecture I gave at the 2004 Foresight Conference in Washington DC, the text of which (plus many images) is available online. I’m currently involved in 6 collaborations with university groups in the U.S., U.K., and Russia (including both theoretical and experimental efforts) to push forward the technology in this area, and I have several new papers on this work nearing completion for journal submission.
This must be followed by developing the ability to design and manufacture rigid machine parts and then to assemble them into larger machine systems, up to and including nanorobots. My forthcoming book with Josh Hall (Fundamentals of Nanomechanical Engineering) and the development of the NanoEngineer software by Nanorex should advance our ability to design nanomechanical components, and further simulations and experiments will be required to learn how to build these systems and then assemble them into larger structures.
Once diamond mechanosynthesis and the fabrication of nanoparts becomes feasible, we will also need a massively parallel manufacturing capability to assemble nanorobots cheaply, precisely, and in vast quantities. My recently published technical book, co-authored with Merkle and titled Kinematic Self-Replicating Machines (Landes Bioscience, 2004), surveys all known current work in the field of self-replication and replicative manufacturing, including concepts of molecular assemblers and nanofactories. (This book is freely available online at the Molecular Assembler website.)
Finally, the reliable mass-production of medical nanorobots must be followed by a period of testing and approval for biocompatibility and safety by the FDA or its equivalent in other countries. I would not be surprised if the first deployment of such systems occurred during the 2020s. But until we can build these devices experimentally, we are limited to theoretical analyses and computational chemistry simulations (some of which are now so good that their accuracy rivals the results of actual experiments).
So we can take two approaches, both of which I’m pursuing. First, we can use our knowledge of the laws of physics and the principles of good engineering to create exemplar designs of nanorobots, and to analyze potential capabilities and uses of these devices, and determine which applications are likely to be possible and which seem not to be feasible. This helps to establish a clear long-term goal. Second, we can examine the implementation pathways that could lead from where we are today to the future time when we may be able to build nanorobotic devices. As noted above, this may require diamond mechanosynthesis and massively parallel nanofabrication capabilities. Earlier this year I submitted the first-ever U.S. patent on diamond mechanosynthesis that describes one possible specific experimental process for achieving molecularly precise diamond structures in a practical way.
Nanorobots constructed of diamondoid materials cannot be destroyed by our immune system. They can be made to be essentially impervious to chemical attack. However, the body may react to their presence in a way that may interfere with their function. This raises the issue of nanorobot biocompatibility.
The biocompatibility of medical nanorobots is a complex and important issue. That’s why I expanded my original discussion in the Nanomedicine book series from a single chapter (Chapter 15, Nanomedicine Vol. II) to an entire book-length treatment (Nanomedicine, Vol. IIA) (NMIIA). My exploration of the particular problem you raise, nanorobot immunoreactivity, spans 16 pages in NMIIA. There is not enough space here to go into details, so interested readers should refer to that extended discussion. The short answer to your question is that the immune system invokes several different responses to foreign objects placed within the body, including complement activation and antibody response. Phagocytosis and foreign-body granulomatous reaction are additional major immune system issues for medical nanorobots intended to remain in the body for extended durations. The NMIIA book discusses all of these issues and suggests numerous methods by which antigenic reactions to nanorobots can be prevented or avoided, including (but not limited to) camouflage, chemical inhibition, decoys, active neutralization, tolerization, and clonal deletion. NMIIA also has an extensive discussion of nanorobotic phagocytosis, including details of all steps in the phagocytic process and possible techniques for phagocyte avoidance and escape by medical nanorobots. To summarize: the problems appear arduous but surmountable with good design.
Yes, of course. I first described the foundational concepts necessary for this in Nanomedicine, Vol. I (1999), including noninvasive neuroelectric monitoring (i.e., nanorobots monitoring neuroelectric signal traffic without being resident inside the neuron cell body, using >5 different methods), neural macrosensing (i.e., nanorobots eavesdropping on the body’s sensory traffic, including auditory and optic nerve taps), modification of natural cellular message traffic by nanorobots stationed nearby (including signal amplification, suppression, replacement, and linkage of previously disparate neural signal sources), inmessaging from neurons (nanorobots receiving signals from the neural traffic), outmessaging to neurons (nanorobots inserting signals into the neural traffic), direct stimulation of somesthetic, kinesthetic, auditory, gustatory, and ocular sensory nerves (including ganglionic stimulation and direct photoreceptor stimulation) by nanorobots, and the many neuron biocompatibility issues related to nanorobots in the brain, with special attention to the blood-brain barrier.
The key issue for enabling full-immersion reality is obtaining the necessary bandwidth inside the body, which should be available using the in vivo fiber network I first proposed in Nanomedicine, Vol. I (1999). Such a network can handle 10^18 bits/sec of data traffic, capacious enough for real-time brain-state monitoring. The fiber network has a 30 cm^3 volume and generates 4-6 watts of waste heat, both small enough for safe installation in a 1,400 cm^3, 25-watt human brain. Signals travel at most a few meters at nearly the speed of light, so transit time from signal origination at neuron sites inside the brain to the external computer system mediating the upload is ~0.00001 millisec, which is considerably less than the minimum ~5 millisec neuron discharge cycle time. Neuron-monitoring chemical sensors located on average ~2 microns apart can capture relevant chemical events occurring within a ~5 millisec time window, since this is the approximate diffusion time for, say, a small neuropeptide across a 2-micron distance. Thus human brain state monitoring can probably be “instantaneous,” at least on the timescale of human neural response, in the sense of “nothing of significance was missed.”
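A minimal worked check of those timing figures (my own back-of-envelope sketch, not Freitas’s calculation; the path length, signal speed, and diffusion coefficient are assumptions chosen to be consistent with the numbers quoted above):

```python
# Back-of-envelope check of the timing claims above. The path length (3 m),
# signal speed (~speed of light), neuron discharge cycle (~5 ms), sensor
# spacing (~2 microns), and diffusion coefficient (~1e-10 m^2/s for a small
# neuropeptide in cytoplasm) are assumptions, not measured values.

c = 3.0e8                 # m/s, assumed signal speed (~speed of light)
path_length = 3.0         # m, neuron site to external computer (assumed)
transit_ms = path_length / c * 1e3
print(f"signal transit time ~ {transit_ms:.5f} ms")    # ~0.00001 ms

neuron_cycle_ms = 5.0     # ms, minimum neuron discharge cycle (from text)
print(f"transit / discharge cycle ~ {transit_ms / neuron_cycle_ms:.1e}")

# Diffusion time over the ~2 micron sensor spacing, t ~ x^2 / (6 D)
D = 1.0e-10               # m^2/s, assumed diffusion coefficient
x = 2.0e-6                # m, sensor spacing (from text)
diffusion_ms = x**2 / (6 * D) * 1e3
print(f"neuropeptide diffusion time ~ {diffusion_ms:.1f} ms")  # a few ms
```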
I believe Ray was relying upon these earlier analyses, among others, when making his proposals.
The availability of practical molecular manufacturing is an obvious and necessary precursor to the widespread use of medical nanorobotics. I would not be surprised if the 2020s are eventually dubbed the “Decade of Medical Nanorobots.”
It will probably not be possible to eradicate all infectious disease. The current bacterial population of Earth may be ~10^31 organisms, and so the chances are good that most of them are going to survive in some host reservoir, somewhere on the planet, for as long as life exists here, despite our best efforts to eradicate them. However, it should be possible to eliminate all harmful effects, and all harmful natural disease organisms, from the human body, allowing us to lead lives that are free of pathogen-mediated illness (at least most of the time). A simple antimicrobial nanorobot like the microbivore should be able to eliminate even the most severe bloodborne infections in treatment times on the order of an hour; more sophisticated devices could be used to tackle more difficult infection scenarios.
Regarding microbial adaptability, it makes no difference if a bacterium has acquired multiple drug resistance to antibiotics or to any other traditional treatment: the microbivore will eat it anyway, achieving complete clearance of even the most severe septicemic infections in minutes to hours, as compared to weeks or even months for antibiotic-assisted natural phagocytic defenses, without increasing the risk of sepsis or septic shock. Hence microbivores, each 2-3 microns in size, appear to be up to ~1,000 times faster-acting than either unaided natural or antibiotic-assisted biological phagocytic defenses, and can extend the doctor’s reach to the entire range of potential bacterial threats, including locally dense infections.
The greatest power of nanomedicine will emerge in a decade or two as we learn to design and construct complete artificial nanorobots using diamondoid nanometer-scale parts and subsystems including sensors, motors, manipulators, power plants, and molecular computers. The development pathway will be lengthy and difficult. First, theoretical scaling studies must be used to assess basic concept feasibility. These initial studies would then be followed by more detailed computational simulations of specific nanorobot components and assemblies, and ultimately full systems simulations, all thoroughly integrated with additional simulations of massively parallel manufacturing processes from start to finish consistent with a design-for-assembly engineering philosophy. Once molecular manufacturing capabilities become available, experimental efforts may progress from component fabrication and testing, to component assembly, and finally to prototypes and mass manufacture, ultimately leading to clinical trials.
As of 2005, progress in medical nanorobotics remains largely at the concept feasibility stage. Since 1998, the author has published four theoretical nanorobot scaling studies, including the respirocytes (artificial red cells), microbivores (artificial white cells), clottocytes (artificial platelets), and the vasculoid (an artificial vascular system). These studies have not been intended to yield an actual engineering design for a future nanomedical product. Rather, the purpose was merely to examine a set of appropriate design constraints, scaling issues, and reference designs to assess whether or not the core idea might be feasible, and to determine key limitations of such designs.
The basic diamondoid structure of the respirocyte, the simplest nanorobot designed to date, includes 18 billion atoms. Molecular mechanics simulations of systems including 10-40 billion atoms have recently been reported using cluster supercomputers. So it is now possible, at least in principle, to attempt a basic simulation of an entire working medical nanorobot. The problems with actually doing this are many, and include the lack of a detailed atomic-level description of the respirocyte, a lack of reliable nanopart designs for components comprising the respirocyte, the difficulties of preparing input files and running massive simulations, and access to the personnel and computer time necessary to run the simulation. Such a simulation might well be attempted sometime in the next 5-10 years. Meanwhile we must content ourselves with molecular mechanics simulations of molecularly precise nanocomponents, starting with structures of up to 100,000 atoms using, for instance, the new NanoEngineer software produced by Nanorex.
I think the biggest impact so far has been in solidifying the long-term vision of where the technology can go. Typically articles describing future medicine, especially nanotechnology-based medicine, will lead off with a mention of “nanorobots in the bloodstream” as an idea that lies out there somewhere in the distant future, before moving on to a more substantive discussion of the latest news in medical nanoparticle research. This is entirely understandable and logical. Doctors are faced with the immediacy of sick or dying patients, and can only employ the instruments at their command today. Realistically, there will only be some small fraction of the traditional medical community that “gets it” right off the bat. The intended audience of my Nanomedicine book series is technical and professional people who are seriously interested in the future of medical technology. Many practicing physicians do not and quite correctly should not fit this description. But I know I’m having an impact. I’ve received dozens of emails from students and young researchers thanking me for inspiring them to consider new career directions. (I’ve also been told, only partly tongue-in-cheek, that my Nanomedicine books are often used by postdocs to help prepare their grant proposals because of all the relevant literature references collected in each volume.)
As medical nanorobotics proceeds along the development pathway I’ve outlined above (moving from drawing board, to computer simulation, to laboratory demonstration of mechanosynthesis, to component design and fabrication, to parts assembly and integration, and finally to device performance and safety testing), members of the mainstream medical community will naturally pay increasing attention to it, because it will become more directly relevant to them. By mid-century, medical nanorobotics will completely dominate medical practice. By writing the Nanomedicine book series, KSRM, and the rest, I hope to accelerate the process of technological development and adoption of nanorobotics in modern medicine. To this end, the Nanomedicine book series and my other books are being made freely available online, with the generous consent of my publisher, Landes Bioscience. Such generosity is still almost unheard of among conventional book publishers. The main reason we’re doing this is to promote a broader discussion of the technical issues and a rapid assessment of the possibilities by the worldwide biomedical and engineering community.
I’ve been writing the Nanomedicine book series since 1994. It was originally conceived as a single book, then became a trilogy until I realized I needed an entire volume devoted solely to biocompatibility, whereupon it became a tetralogy. Volume I was published by Landes Bioscience in 1999 and Volume IIA came out in 2003, also published by Landes Bioscience. I’m still writing the last 2 volumes (NMIIB, NMIII) of this book series, an ongoing effort that will continue during 2005-2010. Earlier this year I published two reviews on the current status of nanomedicine, available online at http://www.nanomedicine.com/Papers/WhatIsNMMar05.pdf and http://www.nanomedicine.com/Papers/NMRevMar05.pdf. The first of these papers was the leadoff article for the premier issue of the new journal Nanomedicine (the first journal exclusively devoted to this field, published by Elsevier), on whose Editorial Board I also serve.
In a recent major collaborative effort, artist Gina Miller has finished work on a 3-minute long animation that nicely illustrates the workings of my proposed programmable dermal display (essentially, a video-touchscreen nano-tattoo that reports real-time medical information to the user, as reported back by numerous nanorobots stationed in various locations inside the body). I think this is a very cool animation. And of course you can always visit my Nanomedicine Art Gallery (hosted for me by Foresight Institute) with all the nice nanorobot images, where I continue on as curator.
©2006 Sander Olson. Reprinted with permission.
Robert A. Freitas Jr. has written pioneering books on nanomedicine, nanorobots, and molecular manufacturing. What’s next? In this interview, he reveals his plans for the last two books in the Nanomedicine series and a book on the fundamentals of nanomechanical engineering that extends Eric Drexler’s classic Nanosystems.
Originally published on Nanotech.biz November 4, 2005. Reprinted on KurzweilAI.net February 2, 2006.
Robert A. Freitas Jr., J.D., published the first detailed technical design study of a mechanical nanorobot to appear in a peer-reviewed mainstream biomedical journal and is the author of Nanomedicine, the first book-length technical discussion of the medical applications of nanotechnology and medical nanorobotics.
I received an undergraduate B.S. degree from Harvey Mudd College (dual major, physics and psychology) in 1974 and a Juris Doctor (J.D.) graduate degree from University of Santa Clara School of Law in 1978. In the late 1970s and early 1980s I published numerous editions of Lobbying for Space, the first space program political advocacy handbook ever published, and conducted three separate observational SETA/SETI programs with a colleague, using both optical and radio telescopes. I co-edited the 1980 NASA feasibility analysis of self-replicating space factories and in 1996 authored the first detailed technical design study of a medical nanorobot ever published in a peer-reviewed mainstream biomedical journal. After a stint as Research Scientist at Zyvex Corp. from 2000-2004, I’m now back with the Institute for Molecular Manufacturing, my previous and current primary affiliation, as their Senior Research Fellow.
The first time I ever thought about atomic-scale engineered objects was probably in 1977-78, when I was working on my first treatise-length book project (Xenology). In Chapter 16 of that book, I hypothesized that “using molecular electronics with components on the order of 10 Å in size, 10^10 microneurons could be packed into a space of a few microns” which would be “small enough to hide inside a bacterium.” During my NASA work on self-replicating machines in the summer of 1980, I wondered how small machine replicators might be made. I briefly studied the emerging micromachine technology, but by the time Engines of Creation came out in 1986 I had temporarily left the field in pursuit of more pragmatic opportunities. In early 1994 I happened to pick up and read a copy of Unbounding the Future. This was my first exposure to what has come to be known as molecular nanotechnology (MNT). I studied the detailed technical arguments presented in Nanosystems, which confirmed what I had already suspected based on my own knowledge—namely, that the technical case for molecular nanotechnology was very solid.
Having fully absorbed the MNT paradigm, I immediately realized that medicine would be the single most important application area of this new technology. In particular, nanomedicine offered a chance for significant healthspan (healthy lifespan) extension. It also appeared that this objective could possibly be achieved within the several decades of life actuarially remaining to me and others of my generation. But was anyone pushing it forward? I contacted the Foresight Institute and learned that nobody had yet written any systematic treatment of this area, nor was anyone planning to do so in the near future. So I took up the challenge of writing Nanomedicine, the first book-length technical discussion of the potential medical applications of molecular nanotechnology and medical nanorobotics.
I’ve been writing the Nanomedicine book series since 1994. This technical book is my attempt to rationally assess various possible nanorobotic capabilities and medical systems to determine which ones might be plausible (and which ones not) if we could build nanorobots at some point in the future. The first volume (I) was published by Landes Bioscience in 1999 and is freely available online at http://www.nanomedicine.com/NMI.htm. The second volume (IIA) was also published by Landes Bioscience, in 2003, and is also freely available online at http://www.nanomedicine.com/NMIIA.htm. I’m still writing the last 2 volumes (IIB, III) of this book series, an ongoing effort that will continue during 2005-2010.
While I’m not involved in the decisions of the Foresight Institute, I believe the shift occurred primarily as an attempt to redirect the often rancorous scientific debate away from the growing fears of runaway motile free-range replicators, and away from the seeming impossibility of building self-replicating machines (a prejudice common to many ill-informed scientists), and towards a more rational consideration of the underlying technologies and their benefits. The civility of the public discourse may improve as a result, and to the extent that the mainstream scientific community begins to pay attention, it is possible that the research funding situation might also improve. I’m all for it.
However, the change probably won’t much affect the actual research in the field, nor the achievement of useful results per dollar spent, because in truth the distinction between “molecular assemblers” and “nanofactories” is largely cosmetic. That is, if you possess either one, you can use it either to replicate itself or to build the other in very short order. Either one can be used equally well to build life-saving medical nanorobots or life-denying nanoweapons, including everyone’s favorite bugaboo, the marauding ecophages. Both assemblers and nanofactories are examples of molecular manufacturing, which depends at its core on some form of replicative or massively-parallel fabrication and assembly capability in order to be able to economically generate macroscale quantities of useful end products. The two approaches differ mainly in their technical design/performance tradeoffs. Each approach has different strengths and weaknesses (as manufacturing systems) that can be readily enumerated. I’ve been writing about both approaches since the 1980 NASA replicating factory study – wherein I was actually the main proponent for the factory approach. The key thing is that molecular assemblers and nanofactories are both molecular manufacturing systems. Each requires almost exactly the same set of enabling technologies. Developing those enabling technologies as soon as possible should be our primary focus right now.
I would not be surprised if the fabrication of medical nanorobots (and other useful nanorobotic systems) via molecular manufacturing – whether via molecular assemblers or nanofactories – arrives during the decade of the 2020s.
As noted earlier, I undertook the Nanomedicine book series in an attempt to establish a solid foundation for the single most important future application of MNT. The book introduces a long-term vision for nanorobotic medicine and articulates the technical underpinnings of that vision, so that when the day arrives that we have the technology to build such devices, we’ll have a clearer idea what can be done with them, and how.
More recently, to answer those who remain skeptical of the entire MNT enterprise, including the possibility of medical nanorobotics, I’ve turned my attention to figuring out how to build the nanorobots – the issue of implementation of the long-term vision. My early work on diamond mechanosynthesis is described in a lecture I gave at the 2004 Foresight Conference in Washington DC, the text of which (plus many images) is available online. I’m now involved in 6 research collaborations with various university and corporate groups in the U.S., U.K., and Russia in an effort to push forward the technology in this area as fast as possible. These collaborations include a variety of computational chemistry simulations of plausible mechanosynthetic tooltips and reaction sequences, coupled with a nascent experimental effort that is just starting up. I have several new papers on diamond mechanosynthesis nearing completion for journal submission, for publication in 2006. Earlier this year I also filed the first-ever U.S. patent on diamond mechanosynthesis that describes a specific process for achieving molecularly precise diamond structures in a practical way.
Ralph Merkle and I are also writing an entire book-length discussion of diamond mechanosynthesis, entitled Diamond Surfaces and Diamond Mechanosynthesis (DSDM), to be published in 2006 or 2007. The first half of this book is an extensive review of all that is presently known about diamond surfaces, and has been mostly written for several years. The second half describes specific tools and reaction pathways for building those surfaces using positionally controlled mechanosynthetic tools, and methods for building those tools. This part has been about 50% written for several years. But finishing this part has been put on hold because our many current research collaborations involving ab initio and DFT-based quantum chemistry simulations are providing so much new information that we think it’s better to wait and incorporate this new material into the book. (Otherwise we could’ve published DSDM in 2005.) Until then, I’ve put together a brief technical bibliography of research on positional mechanosynthesis (including diamond). Watch the Molecular Assembler website for updates and further news about DSDM.
Yes it was, though of course the subject of self-replicating machines has been a long-standing professional interest of mine, across 3 decades. For instance, I published the first quantitative closure analysis for a self-replicating machine system in 1979-1980 and participated in (and edited) the first comprehensive technical analysis of a self-replicating lunar factory for NASA in 1980.
But focusing again on molecular manufacturing: Once diamond mechanosynthesis and the fabrication of nanoparts becomes feasible, we will also need a massively parallel molecular manufacturing capability in order to assemble nanorobots cheaply, precisely, and in vast quantities. Kinematic Self-Replicating Machines (KSRM) (Landes Bioscience, 2004, and freely available online), co-authored with Ralph Merkle, surveys all known current work in the field of self-replication and replicative manufacturing, including all known concepts of molecular assemblers and nanofactories. It is intended as a general introduction to the systems-level analysis of self-replicative manufacturing machinery. With 200+ illustrations and 3200+ literature references, KSRM describes all proposed and experimentally realized self-replicating systems that were publicly disclosed as of 2004, ranging from nanoscale to macroscale systems. The book extensively describes the historical development of the field. It presents for the first time a detailed 137-dimensional map of the entire kinematic replicator design space to assist future engineering efforts. It includes a primer on the mathematics of self-replication, and has an extensive discussion of safety issues and implementation issues related to molecular assemblers and nanofactories. KSRM has been cited in two articles appearing in Nature this year (Zykov et al, Nature 435, 163 (12 May 2005) and Griffith et al, Nature 437, 636 (29 September 2005)) and appears well on its way to becoming the classic reference in this field.
Perhaps the most salutary effect of KSRM is that it provides a number of physical examples of self-replicating systems (beyond the relatively simple autocatalytic-type replicators from the 1950s by Penrose, Jacobson and Morowitz and the more recent related examples by Lohn and Griffith) that have already been built and operated in a laboratory environment. This provides a ready answer to the tedious and recurring objection by the ill-informed that such things are “impossible”: The machines have actually been built. Interestingly, one of these experimental replicators is a fully autonomous machine that runs around on a table and procures its own parts, which it then assembles into a working copy of itself, a crude analog of the molecular assembler approach. Another of these experimental replicators is a computer-controlled manipulator arm anchored to a surface, that grabs its parts from a “warehouse” area and assembles these parts into a working copy of itself, a crude analog of the nanofactory approach (where the nanofactories are being used to make more nanofactories, rather than nonfactory product).
Fundamentals of Nanomechanical Engineering will be sharply focused on nanomechanical design, with a concentration on diamondoid molecular machine systems, intended for use in a mechanical engineering curriculum at the 2nd- or 3rd-year undergraduate level. We hope the book will be widely used in nanotechnology courses and will help to train the first generation of nanomechanical design engineers.
This is the primary purpose of the book. A second purpose is to extend and complement the analyses already published in Drexler’s Nanosystems, providing more design details and engineering analysis of how to build structures with molecular precision, and understanding how molecular machines might function (and the limitations on them). Once an experimental ability to build diamondoid gears, bearings, rods, and the like has been demonstrated in the laboratory, I think the development of nanorobotics will move very rapidly from that point, because the potential payoff to human welfare is so large and the design space will have become accessible to active experimentation.
If you mean college degrees in MNT, in the sense of diamondoid molecular machine systems, I think this will begin to occur as soon as the field gains scientific credibility – i.e., as soon as it becomes clear that there is a “there,” there. The key here, in my opinion, will be the first experimental demonstration of positionally controlled diamond mechanosynthesis in the laboratory. Once this has been done, it will no longer be possible for critics to deny that such a thing is possible, though they still may claim that perhaps such a thing is not very useful for anything important. But with the newfound ability to tinker with real atoms, I expect legions of graduate students to rise up and prove the critics wrong on that score too, and the results of this revolution will rapidly trickle down to the undergraduate level as well.
How long until the first simple experimental demonstration of positionally controlled diamond mechanosynthesis in the laboratory? Perhaps not as far off as you might think. I’d be shocked if it was longer than 10 years, and 5 years would not surprise me. It depends on how fast we’re able to push it forward.
Nanorex is creating an incredibly cool piece of software called NanoEngineer that allows the user to quickly and easily design molecular machine systems of up to perhaps 100,000 atoms in size, then perform various computational simulations on the system such as energy minimization (geometry optimization) or a quantitative analysis of applied forces and torques. It’s a CAD system for molecules, with a special competence in the area of diamondoid structures. Once this software is released, users anywhere in the world will be able to begin creating designs for relatively complex nanomachine components. We’d expect the library of designed machine systems to rapidly expand from the current 1-2 dozen items (including mostly just a few bearings, gears, and joints) into the hundreds or thousands in just a few years. The existence of this expanded library of nanoparts will then make it easier to begin thinking about designs for more complex systems that may be built from thousands or more of these parts, containing millions or even billions of atoms. It’s a big step along the molecular machine design and development pathway.
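To give a rough sense of what "energy minimization (geometry optimization)" means in practice, here is a minimal Python sketch that relaxes two atoms interacting through a toy Lennard-Jones pair potential using simple gradient descent. The well depth, radius, step size, and method are illustrative assumptions only; they bear no relation to NanoEngineer's actual force fields or algorithms.

```python
# Toy "energy minimization": relax the separation of two atoms interacting
# through a Lennard-Jones pair potential using simple steepest descent.
# EPS, SIGMA, the step size, and the iteration count are illustrative values.
EPS, SIGMA = 0.01, 0.34          # assumed well depth (eV) and radius (nm)

def lj_force(r):
    # F = -dE/dr for E(r) = 4*EPS*((SIGMA/r)**12 - (SIGMA/r)**6)
    return 4 * EPS * (12 * SIGMA**12 / r**13 - 6 * SIGMA**6 / r**7)

r = 0.50                         # starting separation, nm
for _ in range(5000):            # crude steepest-descent relaxation
    r += 0.02 * lj_force(r)

print(f"relaxed separation: {r:.3f} nm")
print(f"analytic minimum:   {SIGMA * 2**(1/6):.3f} nm")
```

A real molecular CAD package performs the same kind of downhill search, but over thousands of coupled atomic coordinates and with far more sophisticated force fields and optimizers.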
I’m a member of the Scientific Advisory Board of Nanorex. The Board provides feedback in the development of the NanoEngineer software, especially including our respective “wish lists” of what the ideal molecular machine design package should include – most of which features were then incorporated into the software. Thus the new software reflects the collective experience of those few of us in the world who have ever actually designed a molecular machine component the hard way – laboriously, atom by atom, using some previously existing (inadequate) software package.
Nanorex is also directly supporting the writing of Fundamentals of Nanomechanical Engineering, which we hope will be used to train the first generation of serious nanomechanical design engineers.
I think this feasibility assessment may be slowly changing, but this change is probably being driven mainly by published experimental results, especially in the field of STM/AFM(scanning probe)-moderated single-atom chemistry. For example, the first experimental demonstration of pure mechanosynthesis of any kind was reported in 2003 and is just now becoming more widely known. Publication of credible high-accuracy theoretical results, demonstrating the feasibility of diamond mechanosynthesis, will also help change this assessment. (For instance, in 2006 Merkle and I will be publishing a key theoretical paper in diamond mechanosynthesis representing ~10,000 CPU-hours of quantum chemistry simulations on 2600+ molecular structures to elucidate possible reaction pathways for building diamond. Watch for it.)
Smalley has marginalized himself in this area by taking such an extreme and indefensible position, using fallacious arguments. One by one, his arguments are slowly melting away under the hot glare of brilliant (but hard-won) experimental results.
Actually, I don’t know anything about their roadmap. They haven’t consulted me at all on this, and I have no idea what they’re up to except what I “read in the newspaper”. I believe it is an attempt to involve mainstream players in an assessment of possible development pathways leading toward some flavor(s) of molecular manufacturing. Whether these flavor(s) will include some or all of biological systems, protein systems, polymer systems, MEMS systems, metal systems, diamondoid systems, or something else, I cannot say.
Meanwhile, over the last two years Ralph Merkle and I have worked hard to establish a small independent network of research (both theoretical and experimental) collaborators with a sharp focus on the implementation of diamondoid molecular machine systems. Last June we put together a simple draft implementation flowchart that has about 100 boxes and lots of arrows, that starts from where we are today and ends with the manufacturing of complex molecular machine systems, including simple nanorobots. The plan includes specific theoretical and experimental milestones in a particular sequence. So we’re in the process of simplifying (and implementing) our own “roadmap”. Perhaps this (or something like it) will eventually be incorporated in the broader Foresight nanotechnology roadmap. Watch the Molecular Assembler website for more details in the months ahead.
Continued in Interview with Robert Freitas: Part 2.
©2006 Sander Olson. Reprinted with permission.
This is the complete original document describing the “Freitas process” to the level of detail that was known on 12 January 2004, following its initial conception on 1 November 2003. The actual Provisional Patent Application, prepared subsequently with the assistance of legal counsel, was abstracted from (and thus differs in some particulars from) this complete original document. A full utility patent on this process (containing numerous claims and some additional material, running a total of 133 pages in length) was subsequently filed on 11 February 2005. This patent is now pending before the USPTO. It is the first known patent ever filed on positional mechanosynthesis, and the first known patent ever filed on positional diamond mechanosynthesis.
Note: Philip Moriarty at the University of Nottingham (U.K.) has posted online several technical objections to one of the two proposed toolbuilding pathways, which Freitas says he is currently working through, point by point, with Moriarty via private correspondence in the manner of a friendly collaboration.
Abstract. A method is described for building a mechanosynthesis tool intended to be used for the molecularly precise fabrication of physical structures, such as diamond structures. The exemplar tool consists of a bulk-synthesized dimer-capped triadamantane tooltip molecule which is initially attached to a deposition surface in tip-down orientation, whereupon CVD or equivalent bulk diamond deposition processes are used to grow a large crystalline handle structure around the tooltip molecule. The large handle with its attached tooltip can then be mechanically separated from the deposition surface, yielding an integral finished tool that can subsequently be used to perform diamond mechanosynthesis in vacuo. The present disclosure is the first description of a complete tool for positional diamond mechanosynthesis, along with its method of manufacture. The same toolbuilding process may be extended to other classes of tooltip molecules, other handle materials, and to mechanosynthetic processes and structures other than those involving diamond.
1.1 Conventional Diamond Manufacturing
1.2 Diamond Manufacturing via Positional Diamond Mechanosynthesis
2. Description of the Invention
2.1 STEP 1: Synthesis of Capped Tooltip Molecule
2.2 STEP 2: Attach Tooltip Molecule to Deposition Surface in Preferred Orientation
2.2.1 Surface Nucleation and Choice of Deposition Substrate
2.2.2 Tooltip Attachment Method A: Ion Bombardment in Vacuo
2.2.3 Tooltip Attachment Method B: Surface Decapping in Vacuo
2.2.4 Tooltip Attachment Method C: Solution Chemistry
2.3 STEP 3: Attach Handle Structure to Tooltip Molecule
2.3.1 Handle Attachment Method A: Nanocrystal Growth
2.3.2 Handle Attachment Method B: Direct Handle Bonding
2.4 STEP 4: Separate Finished Tool from Deposition Surface
The properties of diamond, such as its extraordinary hardness, coefficient of friction, tensile strength and low compressibility, electrical resistivity, electrical carrier (electron and hole) mobility, high energy bandgap and saturation velocity, dielectric breakdown strength, low neutron cross-section (radiation-hardness), thermal conductivity, thermal expansion resistance, optical transmittance and refractive index, and chemical inertness allow this material to serve a vital role in a wide variety of industrial and technical applications.
The present invention relates generally to methods for the manufacture of synthetic diamond. More particularly, the invention is concerned with the physical structure and method of manufacture of a tool, which can itself subsequently be employed in the mechanosynthetic manufacture of other molecularly precise diamond structures. However, the same toolbuilding process is readily extended to other classes of tooltip molecules, handle materials, and mechanosynthetic processes and structures other than diamond.
All prior art methods of manufacturing diamond are bulk processes in which the diamond crystal structure is manufactured by statistical processes. In such processes, new atoms of carbon arrive at the growing diamond crystal structure having random positions, energies, and timing. Growth extends outward from initial nucleation centers having uncontrolled size, shape, orientation and location. Existing bulk processes can be divided into three principal methods – high pressure, low pressure hydrogenic, and low pressure nonhydrogenic.
(A) In the first or high pressure bulk method of producing diamond artificially, powders of graphite, diamond, or other carbon-containing substances are subjected to high temperature and high pressure to form crystalline diamond. High pressure processes are of several types [1]:
(1) Impact Process. The starting powder is instantaneously brought under high pressure by applying impact generated by, for example, the detonation of explosives or the collision of a body accelerated to high speed. This produces granular diamond by directly converting the starting powder material having a graphite structure into a powder composed of grains having a diamond structure. This process has the advantage that no press is required, as in the two other processes, but there is difficulty in controlling the size of the resulting diamond products. Nongraphite organic compounds can also be shock-compressed to produce diamond [2].
(2) Direct Conversion Process. The starting powder is held under a high static pressure of 13-16 GPa and a high temperature of 3,000-4,000 °C in a sealed high pressure vessel. This establishes stability conditions for diamond, so the powder material undergoes a direct phase transition from graphite into diamond, through graphite decomposition and structural reorganization. In both direct conversion and flux processes, a press is widely used and enables single crystal diamonds to be grown as large as several millimeters in size.
(3) Flux Process. As in direct conversion, a static pressure and high temperature are applied to the starting material, but here fluxes such as Ni and Fe are added to allow the reaction to occur under lower pressure and temperature conditions, accelerating the atomic rearrangement which occurs during the conversion process. For example, high-purity graphite powder is heated to 1500-2000 °C under 4-6 GPa of pressure in the presence of iron catalyst, and under this extreme, but equilibrium, condition of pressure and temperature, graphite is converted to diamond: The flux becomes a saturated solution of solvated graphite, and because the pressure inside the high pressure vessel is maintained in the stability range for diamond, the solubility for graphite far exceeds that for diamond, leading to diamond precipitation and dissolution of graphite into the flux. Every year about 75 tons of diamond are produced industrially this way [14].
(B) In the second or low pressure hydrogenic bulk method of producing diamond artificially, widely known as CVD or Chemical Vapor Deposition, hydrogen (H2) gas mixed with a few percent of methane (CH4) is passed over a hot filament or through a microwave discharge, dissociating the methane molecule to form the methyl radical (CH3) and dissociating the hydrogen molecule into atomic hydrogen (H). Acetylene (C2H2) can also be used in a similar manner as a carbon source in CVD. Diamond or diamond-like carbon films can be grown by CVD epitaxially on diamond nuclei, but such films invariably contain small contaminating amounts (0.1-1%) of hydrogen which give rise to a variety of structural, electronic and chemical defects relative to pure bulk diamond. Currently, diamond synthesis from CVD is routinely achieved by more than 10 different methods [163].
As noted by McCune and Baird [3], a diamond particle is a special cubic lattice grown from a single nucleus of four-coordinated carbon atoms. The diamond-cubic lattice consists of two interpenetrating face-centered cubic lattices, displaced by one quarter of the cube diagonal. Each carbon atom is tetrahedrally coordinated, making strong, directed sp3 bonds to its neighbors using hybrid atomic orbitals. The lattice can also be visualized as planes of six-membered saturated carbon rings stacked in an ABC ABC ABC sequence along <111> directions. Each ring is in the “chair” conformation and all carbon-carbon bonds are staggered. A lattice with hexagonal symmetry, lonsdaleite, can be constructed with the same tetrahedral nearest neighbor configuration. In lonsdaleite, however, the planes of chairs are stacked in an AB AB AB sequence, and the carbon-carbon bonds normal to these planes are eclipsed. In simple organic molecules, the eclipsed conformation is usually less stable than the staggered because steric interactions are greater. Thermodynamically, diamond is slightly unstable with respect to crystalline graphite. At 298 K and 1 atm the free energy difference is 0.026 eV per atom, only slightly greater than kBT, where kB is the Boltzmann constant and T is the absolute temperature in degrees Kelvin.
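That last statement can be checked quickly with a short Python sketch; the only inputs are the Boltzmann constant and the free energy difference quoted above.

```python
# Compare the quoted diamond-graphite free energy difference with kB*T at 298 K.
kB = 8.617e-5          # Boltzmann constant, eV/K
T = 298.0              # temperature, K
dG = 0.026             # free energy difference, eV per atom (from the text)

print(f"kB*T        = {kB * T:.4f} eV")       # ~0.0257 eV
print(f"dG / (kB*T) = {dG / (kB * T):.2f}")   # just over 1: only slightly greater
```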
The basic obstacle to crystallization of diamond at low pressures is the difficulty in avoiding co-deposition of graphite and/or amorphous carbon when operating in the thermodynamically stable region of graphite [3]. In general, the possibility of forming different bonding networks of carbon atoms is understandable from their ability to form different electronic configurations of the valence electrons. These bond types are classified as sp3 (tetrahedral), sp2 (planar), and sp1 (linear), and are related to the various carbon allotropes including cubic diamond and hexagonal diamond or lonsdaleite (sp3), graphite (sp2), and carbenes (sp1), respectively.
Hydrogen is generally regarded as an essential part of the reaction steps in forming diamond film during CVD, and atomic hydrogen must be present during low pressure diamond growth to: (1) stabilize the diamond surface, (2) reduce the size of the critical nucleus, (3) “dissolve” the carbon in the feedstock gas, (4) produce carbon solubility minimum, (5) generate condensable carbon radicals in the feedstock gas, (6) abstract hydrogen from hydrocarbons attached to the surface, (7) produce vacant surface sites, (8) etch (regasify) graphite, hence suppressing unwanted graphite formation, and (9) terminate carbon dangling bonds [4, 6]. Both diamond and graphite are etched by atomic hydrogen, but for diamond, the deposition rate exceeds the etch rate during CVD, leading to diamond (tetrahedral sp3 bonding) growth and the suppression of graphite (planar sp2 bonding) formation. (Note that most potential atomic hydrogen substitutes such as atomic halogens etch graphite at much higher rates than atomic hydrogen [4].)
Low pressure or CVD hydrogenic metastable diamond growth processes are of several types [3–5]:
(1) Hot Filament Chemical Vapor Deposition (HFCVD). Filament deposition involves the use of a dilute (0.1-2.5%) mixture of hydrocarbon gas (typically methane) and hydrogen gas (H2) at 50-1000 torr which is introduced via a quartz tube located just above a hot tungsten filament or foil which is electrically heated to a temperature ranging from 1750-2800 °C. The gas mixture dissociates at the filament surface, yielding dissociation products consisting mainly of radicals including CH3, CH2, C2H, and CH, acetylene, and atomic hydrogen, as well as unreacted CH4 and H2. A heated deposition substrate placed just below the hot tungsten filament is held in a resistance heated boat (often molybdenum) and maintained at a temperature of 500-1100 °C, whereupon diamonds are condensed onto the heated substrate. Filaments of W, Ta, and Mo have been used to produce diamond. The filament is typically placed within 1 cm of the substrate surface to minimize thermalization and radical recombination, but radiation heating can produce excessive substrate temperatures leading to nonuniformity and even graphitic deposits. Withdrawing the filament slightly and biasing it negatively to pass an electron current to the substrate assists in preventing excessive radiation heating.
(2) High Frequency Plasma-Assisted Chemical Vapor Deposition (PACVD). Plasma deposition involves the addition of a plasma discharge to the foregoing filament process. The plasma discharge increases the nucleation density and growth rate, and is believed to enhance diamond film formation as opposed to discrete diamond particles. There are three basic plasma systems in common use: a microwave plasma system, a radio frequency or RF (inductively or capacitively coupled) plasma system, and a direct current or DC plasma system. The RF and microwave plasma systems use relatively complex and expensive equipment which usually requires complex tuning or matching networks to electrically couple electrical energy to the generated plasma. The diamond growth rate offered by these two systems can be quite modest, on the order of ~1 micron/hour. Diamonds can also be grown in microwave discharges in a magnetic field, under conditions where electron cyclotron resonance is considerably modified by collisions. These “magneto-microwave” plasmas can have significantly higher densities and electron energies than isotropic plasmas and can be used to deposit diamond over large areas.
(3) Oxyacetylene Flame-Assisted Chemical Vapor Deposition. Flame deposition of diamond occurs via direct deposition from a hydrocarbon-rich oxyacetylene flame. In this technique, conducted at atmospheric pressure, a specific part of the flame (in which both atomic hydrogen (H) and carbon dimers (C2) are present [19]) is played on a substrate, on which diamond grows at rates exceeding 100 microns/hour [7].
(C) In the third or low pressure nonhydrogenic bulk method of producing diamond artificially [8–17], a nonhydrogenic fullerene (e.g., C60) vapor suspended in a noble gas stream or a vapor of mixed fullerenes (e.g., C60, C70) is passed into a microwave chamber, forming a plasma in the chamber and breaking down the fullerenes into smaller fragments including isolated carbon dimer radicals (C2) [6]. (Often a small amount of H2, e.g., ~1%, is added to the feedstock gas.) These fragments deposit onto a single-crystal silicon wafer substrate, forming films of good-quality smooth nanocrystalline diamond (15 nm average grain size, range 10-30 nm crystallites [8–10]) or ultrananocrystalline diamond (UNCD) with intergranular boundaries free from graphitic contamination [9], even when examined by high resolution TEM [16] at atomic resolution [10]. Fullerenes are allotropes of carbon, containing no hydrogen, so diamonds produced from fullerene precursors are hydrogen-defect free [11] – indeed, the Ar/C60 film is close in both smoothness and hardness to a cleaved single crystal diamond sample [10]. The growth rate of diamond film is ~1.2 microns/hour, comparable to the deposition rate observed using 1% methane in hydrogen under similar system deposition conditions [9, 10]. Diamond films can, using this process, be grown at relatively low temperatures (<500 °C) [10] as opposed to conventional diamond growth processes which require substrate temperatures of 800-1000 °C.
Ab initio calculations indicate that C2 insertion into carbon-hydrogen bonds is energetically favorable with small activation barriers, and that C2 insertion into carbon-carbon bonds is also energetically favorable with low activation barriers [15]. A mechanism for growth on the diamond C(100) (2×1):H reconstructed surface with C2 has been proposed [16]. A C2 molecule impinges on the surface and inserts into a surface carbon-carbon dimer bond, after which the C2 then inserts into an adjacent carbon-carbon bond to form a new surface carbon dimer. By the same process, a second C2 molecule forms a new surface dimer on an adjacent row. Then a third C2 molecule inserts into the trough between the two new surface dimers, so that the three C2 molecules incorporated into the diamond surface form a new surface dimer row running perpendicular to the previous dimer row. This C2 growth mechanism requires no hydrogen abstraction reactions from the surface and in principle should proceed in the absence of gas phase atomic hydrogen.
The UNCD films were grown on silicon (Si) substrates polished with 100 nm diamond grit particles to enhance nucleation [16]. Deposition of UNCD on a sacrificial release layer of SiO2 substrate is very difficult because the nucleation density is 6 orders of magnitude smaller on SiO2 than on Si [18]. However, the carbon dimer growth species in the UNCD process can insert directly into either the Si or SiO2 surface, and the lack of atomic hydrogen in the UNCD fabrication process permits both a higher nucleation density and a higher renucleation rate than the conventional H2/CH4 plasma chemistry [18], so it is therefore possible to grow UNCD directly on SiO2.
Besides fullerenes, it has been proposed that “diamondoids” or polymantanes, small hydrocarbons made of one or more fused cages of adamantane (C10H16, the smallest unit cell of hydrogen-terminated crystalline diamond) could be used as the carbon source in nonhydrogenic diamond CVD [20–22]. Dahl, Carlson and Liu [22] suggest that the injection of diamondoids could facilitate growth of CVD-grown diamond film by allowing carbon atoms to be deposited at a rate of about 10-100 or more at a time, unlike conventional plasma CVD in which carbons are added to the growing film one atom at a time, possibly increasing diamond growth rates by an order of magnitude or better. However, Plaisted and Sinnott [23] used atomistic simulations to study thin-film growth via the deposition of very hot (119-204 eV/molecule; 13-17 km/sec) beams of adamantane molecules on hydrogen-terminated diamond (111) surfaces, with forces on the atoms in the simulations calculated using a many-body reactive empirical potential for hydrocarbons. During the deposition process the adamantane molecules react with one another and the surface to form hydrocarbon thin films that are primarily polymeric with the amount of adhesion depending strongly on incident energy. Despite the fact that the carbon atoms in the adamantane molecules are fully sp3 hybridized, the films contain primarily sp2 hybridized carbon with the percentage of sp2 hybridization increasing as the incident velocity goes up. However, cooler beams might allow more consistent sp3 diamond deposition, and other techniques [24] have deposited diamond-like carbon (DLC) films with a higher percentage of sp3 hybridization from adamantane.
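As a small aside for readers who want to examine the adamantane building block computationally, the following sketch (assuming the open-source RDKit cheminformatics library is available) parses adamantane from its SMILES string and confirms the C10H16 formula quoted above; it is only an illustration, not part of any synthesis pathway discussed here.

```python
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

# Adamantane: the smallest unit cage of hydrogen-terminated cubic diamond.
adamantane = Chem.MolFromSmiles("C1C2CC3CC1CC(C2)C3")

print(rdMolDescriptors.CalcMolFormula(adamantane))   # expected: C10H16
print("heavy atoms:", adamantane.GetNumAtoms())      # 10 carbon atoms
```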
A new non-bulk non-statistical method of manufacturing diamond, called positional diamond mechanosynthesis, was proposed theoretically by Drexler in 1992 [32]. In this method, positionally controlled carbon deposition tools are manipulated to sub-Angstrom tolerances via SPM (Scanning Probe Microscopy) or similar atomic-resolution manipulator mechanisms to build diamond in vacuo. Each carbon deposition tool includes a tooltip molecule attached to a larger handle structure which is grasped by the atomic-resolution manipulator mechanism. One or more carbon atoms having one or more dangling bonds are relatively loosely bound to the tip of the tooltip molecule. When the tip is brought into contact with the substrate surface at a specific location and sufficient mechanical forces (compression, torsion, etc.) are applied, a stronger covalent bond is formed between the tip-bound carbon atom(s) and the surface, via the dangling bonds, than previously existed between the tip-bound carbon atom(s) and the tooltip structure. As a result, the tool may subsequently be retracted from the substrate and the tip-bound carbon atom(s) will be left behind on the substrate surface at the specific location and orientation desired. By repeating this process of positionally-constrained chemistry or mechanosynthesis, using a succession of similar tools, a large variety of molecularly precise diamond structures can be fabricated, placing one or a few atoms at a time on the growing workpiece.
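Reduced to its essentials, each deposition step in this scheme is a short positional control cycle. The Python sketch below is purely schematic: the spm controller object, its methods, and the force and distance values are hypothetical placeholders, not the interface of any real instrument or the specific procedure proposed in the cited work.

```python
# Schematic of one positional carbon-deposition cycle as described above:
# hover over the chosen lattice site, bring the tooltip into contact, press
# so the tip-bound carbon bonds to the surface, then retract, leaving the
# carbon behind.  Every name and number here is a hypothetical placeholder.
def deposit_carbon(spm, tool, site, hover_nm=0.5, press_force_nN=2.0):
    spm.move_to(site.x, site.y, site.z + hover_nm)   # position to sub-Angstrom tolerance
    spm.approach(site.z)                             # bring tip into contact with surface
    spm.apply_force(press_force_nN)                  # mechanical load forms the stronger
                                                     # carbon-to-surface covalent bond
    spm.retract(hover_nm)                            # weaker tip bond breaks; carbon stays
    tool.charged = False                             # tool must be recharged before reuse
```

In a real system each such cycle would be interleaved with imaging and verification scans, and the trajectories and applied forces would be taken from simulations of the kind discussed in the following paragraph.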
Several analyses using the increasingly accurate methods of computational chemistry have confirmed the theoretical validity of the proposed process of positional diamond mechanosynthesis for hydrogen abstraction [25–33] and hydrogen donation [32, 33], in respect to the surface passivating hydrogen atoms, and carbon deposition [32–38], in respect to diamond surfaces and the body of diamond nanostructures. While positional diamond mechanosynthesis has not yet been demonstrated experimentally, early experiments [39] have demonstrated single-molecule positional covalent bond formation on surfaces via SPM, though in these cases bond formation was not purely mechanochemical but included electrochemical or other means. Mechanosynthesis of the Si(111) lattice has been studied theoretically [40, 41] and the first laboratory demonstration of nonelectrical, purely mechanical positional covalent bond formation on a silicon surface using a simple SPM tip was reported in 2003 [42]. In this demonstration, Osaka University researchers lowered a silicon AFM tip toward the silicon Si(111)-(7×7) surface and pushed down on a single atom. The focused pressure forced the atom free of its bonds to neighboring atoms, which allowed it to bind to the AFM tip. After lifting the tip and imaging the material, there was a hole where the atom had been (Figure 1). Pressing the tip back into the vacancy redeposited the tip-bound selected single atom, this time using the pressure to break the bond with the tip. These manipulation processes were purely mechanical since neither bias voltage nor voltage pulse was applied between probe and sample [42].
Figure 1. Mechanosynthesis of a single silicon atom on the silicon Si(111)-(7×7) surface
Phys. Rev. Lett. 90, 176102 (2003)
Existing mechanosynthetic tools can be used only at ultralow temperatures near absolute zero, hold the atom or molecule to be deposited only very weakly, and can be employed only very slowly (minutes or hours per mechanosynthetic operation). These tools include the simple diamond stylus [43] and other crude tools such as nanocrystalline diamond grown (a) on standard silicon [44, 48] AFM tips with a 30 nm radius [48], (b) on silicon cantilever tips [46, 47], (c) on tungsten STM tips [45], or (d) on 12 nm radius doped-diamond STM tips [49], using CVD [44–49] including HFCVD [44, 46] or PACVD [45] diamond deposition processes. There is a need for improved mechanosynthetic tools with a molecularly precise <0.3 nm tip radius that can operate at liquid nitrogen or even room temperatures, can perform mechanosynthetic operations in seconds or even faster cycle times, and can conveniently be precisely manipulated to sub-Angstrom positional accuracy using conventional SPM instruments.
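The importance of the cycle-time requirement is easy to see with a little arithmetic; the one-million-operation workpiece in this sketch is an arbitrary illustrative size, not a figure from the text.

```python
# Total build time for a hypothetical workpiece requiring one million
# mechanosynthetic operations, at different per-operation cycle times.
n_operations = 1_000_000

for label, seconds_per_op in [("hours per operation (current tools)", 3600),
                              ("minutes per operation", 60),
                              ("seconds per operation (target)", 1)]:
    days = n_operations * seconds_per_op / 86_400
    print(f"{label}: about {days:,.0f} days")
```

At hours per operation the hypothetical workpiece would take on the order of a century to build; at seconds per operation it would take under two weeks.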
In 2002, Merkle and Freitas [36] proposed the first design for a class of precision tooltip molecules intended to positionally deposit individual carbon dimers on a growing diamond substrate via diamond mechanosynthesis (Figure 2), and subsequent theoretical analysis [37, 38, 235] has verified that this class of tooltip molecules should be useful for depositing carbon dimers on a dehydrogenated diamond C(110) crystal surface, for the purpose of building additional C(110) surface or other molecularly precise structures at liquid nitrogen or room temperatures.
Figure 2. DCB6-Si dimer placement tooltip molecule [36]
(A) Wire frame view of tooltip molecule
(B) Overlapping spheres view of (A)
(C) Iceane
No specific proposals for attaching tooltip molecules such as the one illustrated in Figure 2 A/B to larger tool handles, or complete tools for positional diamond mechanosynthesis, have previously been reported in the scientific, engineering or patent literature. While others have previously noted the need for a handle structure to manipulate the active mechanosynthetic tooltip [32, 33, 36, 38], this invention is the first practical description of how to manufacture and to attach tooltips to such a handle structure, and thus to manufacture a complete mechanosynthetic tool.
The present invention is not limited to a method for the manufacture of a complete tool which can be used for diamond mechanosynthesis. The same toolbuilding process is readily extended to other classes of tooltip molecules, handle materials, and mechanosynthetic processes and structures other than diamond. As examples, which in no way limit or exhaust the possible applications of this invention, the same method as described herein can be used to build complete mechanosynthetic tools and attach handles to: (1) other possible C2 dimer deposition tooltips proposed by Drexler [32] and Merkle [33, 34] for the building of molecularly precise diamond structures; (2) other possible carbon deposition tooltips, including but not limited to carbene tooltips as proposed by Drexler [32] and Merkle [33, 34] and monoradical methylene tooltips as proposed by Freitas [234], for the deposition of carbon or hydrocarbon moieties during the building of molecularly precise diamond structures, or other tooltips that may be used for the removal of individual carbon atoms, C2 dimers [38], or other hydrocarbon moieties from a growing diamond surface; (3) tooltips for the abstraction [25–33] and donation [32, 33] of hydrogen atoms, for the purpose of positional surface passivation or depassivation during the building of molecularly precise diamond structures, or during the building of molecularly precise structures other than diamond, or of other atoms similarly employed for passivation purposes; or (4) tooltips for the deposition or abstraction of atoms, dimers, or other moieties, to or from materials including, but not limited to, covalent solids other than diamond, silicon, germanium or other semiconductors, intermetallics, ceramics, and metals.
The present invention is concerned with the physical structure and method of manufacture of a complete tool for positional diamond mechanosynthesis, which can subsequently be employed in the mechanosynthetic manufacture of other molecularly precise diamond structures, including other tools for positional diamond mechanosynthesis.
The present invention is the first description of a complete tool for positional diamond mechanosynthesis, along with its method of manufacture. The subject mechanosynthetic tool is constructed using only bulk chemical and mechanical processes, and yet, once fabricated, is capable of molecularly precise carbon dimer deposition to produce molecularly precise diamond structures. The present invention provides a tool by which the trajectory and timing of each new carbon atom added to a growing diamond nanostructure can be precisely controlled, thus allowing the manufacture of molecularly precise three-dimensional diamond structures of specified size, shape, orientation, location, and chemical composition, a significant improvement over all known bulk methods for fabricating synthetic diamond and a significant improvement over all existing mechanosynthetic SPM tips or styluses.
The positional diamond mechanosynthesis tool described herein enables the convenient manufacture of large numbers and varieties of diamond mechanosynthesis tools of similar or improved types, and also enables the convenient manufacture of a wide variety of molecularly precise nanoscale, microscale, and other diamond structures that cannot be fabricated by any known bulk process, including, but not limited to, molecularly-sharp scanning probe tips, shaped nanopores and custom binding sites, complex nanosensors, interleaved nanomechanical structures, compact mechanical nanocomputer components, nanoelectronic and quantum computational devices, aperiodically nanostructured optical materials, and many other complex nanodevices, nanomachines, and nanorobots. The tool can also be used in the fabrication of additional tools for the positional mechanosynthetic manufacture of molecularly precise structures made of materials other than diamond, employing either carbon (e.g., nanotubes and other graphene sheet structures) or carbon together with elements other than carbon, such as nanostructured nondiamond hydrocarbons, nanostructured fluorocarbons, nanostructured sapphire/alumina, and even DNA and other organic polymeric materials.
The positional diamond mechanosynthesis tool consists of two distinct parts which are covalently joined.
The first part of the positional diamond mechanosynthesis tool is the tooltip molecule (Figure 2). In the preferred embodiment the tooltip molecule consists of one or more adamantane molecules arranged in a polymantane or lonsdaleite (iceane; Figure 2C) configuration making a triadamantane base molecule. One or more dimerholder atoms (most preferably the Group IV elements Si, Ge, Sn, and Pb with three bonds into the base, but Group V elements N, P, As, Sb and Bi and Group III elements B, Al, Ga, In, and Tl with two bonds into the base may also be used [36]) are substituted into each of the adamantane molecules composing the triadamantane base molecule. A single carbon dimer (C2) molecule is bonded to two dimerholder atoms integral to the triadamantane base molecule; the carbon dimer is held by the tooltip but is later mechanically released during a mechanosynthetic dimer placement operation. Finally, a capping group is temporarily bonded to the two dangling bonds of the carbon dimer, passivating the dangling bonds and chemically stabilizing the tooltip molecule for a solution-phase environment. The capping group must be removed from the tooltip, exposing the dimer dangling bonds and activating the tooltip molecule, prior to use in a diamond mechanosynthesis operation.
The second part of the positional diamond mechanosynthesis tool is the handle structure (e.g., Figure 17). The handle structure may be a large rigid molecule, consisting in the preferred embodiment of a regular crystal, or a rod, or a cone, of pure hydrogen-terminated diamond, thus providing the greatest possible mechanical rigidity and thermal stability. At the base of the handle, the handle structure is sufficiently wide (0.1-10 microns in diameter) to be securely grasped by, or bonded to, a conventional SPM tip, a MEMS robotic end-effector, or other similarly rigid and well-controlled microscale manipulator device. Near the apex of the handle structure, the tooltip molecule is covalently bonded to the handle structure, forming an intimate and permanent connection thereto. The tooltip molecule is oriented coaxially with the handle structure, with the carbon dimer (whether capped or uncapped) of the tooltip molecule occupying the location most distal from the base of the handle structure, just as the writing tip of a sharpened pencil is most distal from the pencil eraser end.
The manufacture of the complete positional diamond mechanosynthesis tool requires four distinct steps, including (1) synthesis of capped tooltip molecule (Section 2.1), (2) attachment of tooltip molecule to deposition surface in a preferred orientation (Section 2.2), (3) attaching handle structures onto the tooltip molecules (Section 2.3), and finally (4) separating the finished tools from the deposition surface (Section 2.4). The concept of seeded growth of a useful nanoscale tool has previously been employed in the CVD growth of carbon nanotube tips for AFM [50–52].
STEP 1. Synthesize the triadamantane tooltip molecule, with its active C2 dimer tip appropriately capped, using methods of bulk chemical synthesis derived from known synthesis pathways for functionalized polyadamantanes as found in the existing chemical literature.
While an explicit synthesis of the exact DCB6-X (X = Si, Ge, Sn, Pb) capped tooltip molecule has not yet been located in the chemical literature, the sila-adamantanes have been investigated since at least the early 1970s [53–55] and multiply-substituted adamantanes such as 1,3,5,7-tetramethyl-tetrasilaadamantane [53, 56] and other 1,3,5,7-tetrasilaadamantanes [57] have been synthesized. Adamantanes are readily functionalized with alkene C=C bonds, e.g., 2,2-divinyladamantane, a colorless liquid at room temperature [161]. Polymantanes as a class of molecules can be functionalized [58, 60] and assembled to a limited extent, including biadamantanes [63], diadamantanes [64–66] and diamantanes [67], triamantanes [68, 69], and tetramantanes [70, 71]. The Beilstein database lists over 20,000 adamantane variants and there are several excellent literature reviews of adamantane chemistry [59–63]. The molecular geometries of diamantane, triamantane, and isotetramantane have been investigated theoretically using molecular mechanics, semiempirical and ab initio approaches [72]. The core of the DCB6-X (X = Si, Ge, Sn, Pb) class of adamantane-based tooltip molecules is a single iceane molecule (Figure 2C), the smallest unit cage of lonsdaleite or hexagonal diamond (the counterpart to adamantane which is the unit cage for the more common cubic diamond lattice). The iceane molecule was first synthesized experimentally in 1974 [73–75] and more recently has been studied using the customary methods of computational chemistry [77–80]; commercial sources for hexagonal diamond (lonsdaleite) powder already exist [76].
A crucial decision to be made in a particular application of this invention is the choice of capping group to be used to passivate the two dangling bonds of the C2 dimer that is held by the tooltip molecule. The presence of the capping group converts the otherwise highly reactive C2 dimer radical into a chemically stable moiety in solution phase for the duration of the synthesis process. Only when the capping group is later removed (Section 2.2), in vacuo, does the C2 dimer resume its status as a chemically active radical. Note that for some choices of capping group it may be simpler to synthesize the capped tooltip molecule in the configuration of a double-capped single-bonded C-C dimer, then employ a subsequent process to alkenate the dimer bond to C=C which would include removing half of the capping groups.
Many possible capping groups could in principle provide electronic closed-shell termination of the C2 dangling bonds, thus maximizing tooltip molecule chemical stability during conventional solution synthesis in Step 1 and during tooltip molecule attachment in Step 2 (Section 2.2). In some procedures, attachment is facilitated if the chemical structure of the capping group is highly dissimilar to the adamantane structure of the tooltip molecule, so that the capping group may be conveniently removed, e.g., by selective bond resonance excitation, during the tooltip attachment process. (Thus purely hydrocarbon-based and some other organic radicals may be problematic as capping groups.) For simplicity of analysis, ease of tooltip molecule synthesis, and ease of capping group removal, the capping group should have as few atoms as possible, all else equal. An enumeration of 400 potentially useful capping groups fulfilling the above requirements is given in Table 1, though the present invention is not limited to this partial list of illustrative exemplar moieties. As the number of atoms in the capping group increases, the combinatoric possibilities expand enormously. Some of the groups listed in Table 1 may yield tooltip molecules that are stable only at very low temperatures or only in particular chemical environments, and a few may not yet have been verified as experimentally available or even chemically stable.
The precise choice of capping group is determined by the desired interactions of tooltip molecules with the selected deposition surface (as described in Step 2 (Section 2.2) and Step 4 (Section 2.4)), but also by the desired interactions of tooltip molecules with themselves, e.g., during synthesis. There are at least four relevant factors which must be considered.
First, from the standpoint of basic utility the ideal capping group: (1) should be loosely bound to the dimer, thus easily released in order to uncap (and activate) the tooltip; (2) should form only a single bond with carbon; and (3) should be very simple, hence relatively easy to synthesize in a polymantane system. A few capping atoms that meet these criteria are given in Table 2.
Table 2. Possible tooltip molecule capping atoms and their bond energies

| Possible Tooltip Molecule Capping Atoms | Bond Energy to Carbon (kcal/mole) | Bond Energy to Diamond* (kcal/mole) |
| Iodine (I) | 52 | 49.5 |
| Sulfur (S) | 65 | — |
| Bromine (Br) | 68 | 63 |
| Silicon (Si) | 72 | — |
| Nitrogen (N) | 73 | — |
| Methoxy (OCH3) | — | 78 |
| Chlorine (Cl) | 81 | 78.5 |
| Carbon (C) | 83 | 80 |
| Oxygen (O) | 86 | — |
| Hydroxyl (OH) | — | 90.5 |
| Hydrogen (H) | 99 | 91 |
| Fluorine (F) | 116 | 103 |

* Values given are the binding energies of tertiary carbon atoms to the capping atoms, i.e., the bonding energy between capping atoms and a carbon atom which is bound to three other carbon atoms.
For ease of release alone, Table 2 implies that a preferred embodiment is to use two iodine atoms as the C2 dimer capping group of the tooltip molecule, as shown in Figure 3 below, right, though other capping groups may also serve in this capacity.
Figure 3. DCB6-Ge tooltip molecule, uncapped (left), and capped (right) with iodine atoms
(A) uncapped
(B) capped with iodine atoms
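The "ease of release" argument can be restated directly in terms of the Table 2 numbers: the cap bound most weakly to carbon is the easiest to remove. A minimal sketch (values transcribed from Table 2; entries without a listed carbon bond energy are omitted):

```python
# Bond energies of candidate capping atoms to carbon, in kcal/mole (Table 2).
bond_energy_to_carbon = {
    "I": 52, "S": 65, "Br": 68, "Si": 72, "N": 73,
    "Cl": 81, "C": 83, "O": 86, "H": 99, "F": 116,
}

# Rank caps from most weakly to most strongly bound; the weakest-bound cap
# is the easiest to release when uncapping (activating) the tooltip.
ranked = sorted(bond_energy_to_carbon, key=bond_energy_to_carbon.get)
print("easiest to release:", ranked[0])      # iodine, as the text concludes
print("hardest to release:", ranked[-1])     # fluorine
```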
Second, during bulk chemical synthesis using conventional techniques in solution phase, the capped tooltip molecule should not spontaneously dimerize across the C2 working tips. Dimerization can occur between two tooltip molecules across one bond or two bonds, as shown in Figure 4. Table 3 shows the results of geometry optimization (energy minimization) calculations, using the semi-empirical AM1 method, for the DCB6-Ge capped tooltip molecule [235] in various stages of “tip-on-tip” dimerization, for a variety of capping groups, in vacuo.
With no protective capping group in place, tip-to-tip dimerization is very energetically favorable. Tooltip molecule dimerization is energetically unfavorable to varying degrees for 1-atom capping groups consisting of, for example, -I, -Cl, -F, -Na, and -Li, and also for several 2-atom capping groups including hydroxyl (-OH), amine (-NH2), oxylithyl (-OLi), oxyiodinyl (-OI), and sulfiodinyl (-SI). In the case of some 2-atom oxyl (-OF), sulfyl (-SS-, -SH, -SF), and selenyl (-SeH) capping groups, dimerization is energetically unfavorable for direct =C-C= bonds linking the two tooltip molecules but appears likely to occur if dimerization occurs through an oxygen, sulfur (e.g., =C-S-C= or =C-S-S-C=) or selenium atom in the dimerization bond(s) linking the two tooltip molecules. Single-bond dimerization of an H-capped tooltip molecule with release of H2 is also energetically favorable, though double-bond dimerization for H-capped tooltips with the release of 2H2 appears unfavorable.
These analyses should be repeated using ab initio techniques, and should be extended to include a calculation of activation energy barriers (which could be substantial), weak ionic forces that could lead to crystallization (in the case of capping groups containing metal or semi-metal atoms), and solvent effects, all of which could affect the results. As a limited example of one such study, Mann et al [38] found that the dimerization reaction enthalpies of uncapped DCB6-Si and DCB6-Ge tooltip molecules are -1.64 eV and -1.84 eV, but that the energy barriers to the dimerization reaction were 1.93 eV and 1.86 eV, respectively. Therefore the dimerization of uncapped DCB6-Si and DCB6-Ge tooltip molecules “is thermodynamically favored but not kinetically favored. Due to the electron correlation errors in DFT these barrier heights may be considerably overestimated, therefore both reactions may be kinetically accessible at room temperature.” Subsequent work [235] appears to have confirmed that both tooltips work well as expected on the diamond C(110) surface, with the DCB6-Ge structure emerging as the preferred dimer placement tooltip molecule [235].
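To make "thermodynamically favored but not kinetically favored" concrete, the quoted barrier heights can be turned into rough Arrhenius rate estimates; the 10^13 s^-1 attempt frequency below is a conventional order-of-magnitude assumption, not a value taken from Mann et al.

```python
import math

kB = 8.617e-5                 # Boltzmann constant, eV/K
attempt_frequency = 1e13      # assumed prefactor, s^-1 (typical order of magnitude)

def arrhenius_rate(barrier_eV, T):
    # Simple Arrhenius-style rate estimate: nu * exp(-Ea / (kB*T)).
    return attempt_frequency * math.exp(-barrier_eV / (kB * T))

# DFT barrier heights for dimerization of uncapped tooltips (Mann et al).
for label, Ea in [("DCB6-Si", 1.93), ("DCB6-Ge", 1.86)]:
    print(f"{label}: ~{arrhenius_rate(Ea, 298):.1e} events/s at 298 K")
```

At face value these rates are negligible on laboratory timescales, which is the sense in which the reaction is kinetically inaccessible; if the barriers are overestimated, as the authors caution, the true rates could be many orders of magnitude higher.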
Figure 4. Progressive stages of possible “tip-on-tip” dimerization of capped tooltip molecules
(A) undimerized
(B) dimerized (1-bond)
(C) dimerized (2-bond)
[Table 3. Relative energies (eV) of the DCB6-Ge capped tooltip molecule in the undimerized, 1-bond dimerized, and 2-bond dimerized configurations, computed in vacuo by AM1 energy minimization for roughly forty candidate capping groups, with energies reported separately for dimerization through the capping atom and through a direct carbon-carbon bond. Dioxyl (=C-O-O-C=) caps form unstable cyclic peroxides (ozonides). Energy minimizations for Se- and Mg-containing caps were computed using PM3, and for sodium using MNDO/d, instead of AM1.]
In the case of bromine, and to a lesser extent in several other cases, the undimerized and 1-bond dimerized forms appear energetically almost equivalent, although 2-bond dimerization is energetically unlikely. Application of the process described in Step 2 using a capping group having this characteristic could result in a mixture of undimerized and 1-bond dimerized tooltips attached to the deposition surface. In the event that some 1-bond dimerizations occur and that a few dimerized tooltip molecules are subsequently inserted into the deposition surface during Step 2, the distinctive two-lobed geometric signature of these dimerized nucleation seeds can be detected and mapped via SPM scan prior to Step 3, and subsequently avoided during tool detachment in Step 4. Surface editing is another approach. Due to the low surface nucleation density (Section 2.2.1), after the aforementioned mapping procedure it may be possible to selectively detach and remove from the surface all attached dimerized tooltip molecules that are detected, e.g., using focused ion beam, electron beam, or NSOM photoionization, subtractively editing the deposition surface prior to commencing CVD in Step 3. An alternative to subtractive editing is additive editing, wherein FIB deposition of new substrate atoms on and around the dimerized tooltip molecule can effectively bury it under a smooth mound of fresh substrate, again preventing nucleation of diamond at that site during Step 3.
Third, the capped-C2 tip of the capped tooltip molecule should not spontaneously recombine into the side or the bottom of the adamantane base of neighboring tooltip molecules, during synthesis or storage, as illustrated in Figure 5 for a side-bonding event. Recombination can occur between two tooltip molecules across one bond or two bonds. Table 4 shows the results of semi-empirical energy calculations using AM1 for the DCB6-Ge capped tooltip molecule in two particular cases of “tip-on-base” side-bonding recombination, for a variety of capping groups, in vacuo.
With no protective capping group, tip-on-base recombination is very energetically preferred, with 1-bond recombination preferred over 2-bond when the H atom released from the adamantane base during formation of the 1-bond link becomes bonded with the remaining dangling bond of the tip-held C2 dimer. Mann et al [38] showed that intermolecular dehydrogenation from the bottom of the adamantane base by a neighboring uncapped tooltip molecule is exothermic and kinetically accessible (against a 0.48 eV reaction energy barrier) at room temperature. However, with an appropriate cap in place, tooltip molecule recombination is energetically unfavorable to varying degrees, e.g., for 1-atom capping groups consisting of -I, -Br, -Na, and -Li, and also for several 2-atom capping groups including hydroxyl (-OH), amine (-NH2), oxylithyl (-OLi), seleniodinyl (-SeI), several sulfyl groups including sulfhydryl (-SH), sulfiodinyl (-SI), and sulfalithyl (-SLi), and dimagnesyl (-MgMg-). There may be some tip-to-tip ionic bonding for beryllium (-Be-), lithium, oxylithyl, seleniodinyl, selenobromyl (-SeBr), berylfluoryl (-BeF) and berylchloryl (-BeCl) capping groups, and the imide (-NH-) cap appears to twist the tooltip dimer out of horizontal alignment. In the case of some 2-atom sulfyl (-SF, -SBr), and selenyl (-SeH) capping groups, recombination is energetically unfavorable for direct =C-C= bonds linking the two tooltip molecules but appears likely to occur if recombination occurs through a sulfur (e.g., =C-S-C= or =C-S-S-C=) or selenium atom in the recombination bond(s) linking the two tooltip molecules. Single-bond recombination of an H-capped tooltip molecule with release of H2 is slightly energetically favorable, though double-bond dimerization for H-capped tooltips with release of 2H2 appears very unfavorable energetically. These analyses should be repeated using ab initio techniques, and should be extended to include a calculation of activation energy barriers (which could be substantial), weak ionic forces that could lead to crystallization (in the case of capping groups containing metal atoms), and solvent effects, all of which could affect the results.
Figure 5. Progressive stages of possible “tip-on-base” recombination of capped tooltip molecules
(A) unrecombined
(B) 1-bond recombination
(C) 2-bond recombination
[Table 4. Relative energies (eV) of the DCB6-Ge capped tooltip molecule in the unrecombined, 1-bond recombined, and 2-bond recombined (“tip-on-base”) configurations, computed in vacuo by AM1 energy minimization for the candidate capping groups, with energies reported separately for recombination through the capping atom and through a direct carbon-carbon bond. Energy minimizations for Se- and Mg-containing caps were computed using PM3, and for sodium using MNDO/d, instead of AM1.]
In the case of chlorine, and to a lesser extent in several other cases, the unrecombined and 1-bond recombined forms appear energetically almost equivalent, although 2-bond recombination is energetically unlikely. Application of the process described in Step 2 using a capping group having this characteristic could result in a mixture of unrecombined and 1-bond recombined tooltips attached to the deposition surface. In the event that some 1-bond recombinations occur and that a few recombined tooltip molecules are subsequently inserted into the deposition surface during Step 2, the distinctive two-lobed geometric signature of these recombined nucleation seeds can be detected and mapped via SPM scan prior to Step 3, and subsequently avoided during tool detachment in Step 4. Surface editing is another approach. Due to the low surface nucleation density (Section 2.2.1), after the aforementioned mapping procedure it may be possible to selectively detach and remove from the surface all attached recombined tooltip molecules that are detected, e.g., using focused ion beam, electron beam, or NSOM photoionization, subtractively editing the deposition surface prior to commencing CVD in Step 3. An alternative to subtractive editing is additive editing, wherein FIB deposition of new substrate atoms on and around the recombined tooltip molecule can effectively bury it under a smooth mound of fresh substrate, again preventing nucleation of diamond at that site during Step 3.
Fourth, the capped-C2 tip of the capped tooltip molecule should not spontaneously react with solvent, feedstock, or catalyst molecules that are employed during conventional techniques for the bulk chemical synthesis of functionalized adamantanes in solution phase. A definitive result regarding this capping-group selection factor depends critically upon the exact synthesis pathways required.
As a proxy for these many pathways, it has been shown that even straight-chain hydrocarbons, upon exposure to the customary aluminum halide catalysts at high temperature, readily produce mixtures of various polymethyladamantanes [81]. The simplest-case recombination event illustrated in Figure 6 was analyzed via semi-empirical energy calculations using AM1 for the DCB6-Ge iodine-capped tooltip molecule in the specific instances of 1-bond and 2-bond side-bonding recombination with a simple straight-chain hydrocarbon molecule (n-octane). The 2-bond analysis includes one event in which the second bond occurs adjacent to the first, producing a 4-carbon ring with the octane molecule, and a second alternative event in which the second bond occurs with an octane chain carbon atom three positions down the chain, producing a more stable 6-carbon ring with the octane molecule. Since solvent effects, temperature, reverse reaction rates, and so forth will determine whether the reaction can occur, and will also determine the relative yields of various products and reactants, the thermodynamics results indicate primarily the relative ease or difficulty of maintaining the given capped tooltip molecule stably in solution with liquid n-octane. The data in Table 5 show that iodine (-I), hydrogen (-H), amine (-NH2), and perhaps bromine (-Br) capped tooltip molecules should be the most stable in hydrocarbon media, as should seleniodinyl (-SeI) and several sulfyl-capped molecules including sulfhydryl (-SH), sulfiodinyl (-SI), and sulfobromyl (-SBr). Fluorine- and oxygen-containing capping groups may be (relatively) less stable.
Figure 6. Progressive stages of possible side-bonding recombination reaction between an iodine-capped DCB6-Ge tooltip molecule (above) and a molecule of n-octane (below)
(A) unrecombined; (B) 1-bond recombination; (C) 2-bond recombination (4-carbon ring); (D) 2-bond recombination (6-carbon ring)
Table 5. Side-bonding recombination energies between capped DCB6-Ge tooltip molecules and n-octane (eV; semi-empirical AM1 unless otherwise noted):

Tooltip Molecule Capping Group | Not Recombined (eV) | Recombined (1 bond) (eV) | Recombined (2 bonds, 4-carbon ring) (eV) | Recombined (2 bonds, 6-carbon ring) (eV)
Imide (-NH-) | +4.075 | 0 | +2.148 | +0.200
Sulfur (=C-S-C=) | +3.397 | 0 | +2.391 | +0.446
NO CAP | +3.347 | -- | +1.935 | 0
Diamine (-NHHN-) | +2.838 | +2.949 | +1.939 | 0
Fluorine (-F) | +1.989 | +1.029 | +1.999 | 0
Lithium (-Li) | +1.744 | +2.439 | +1.806 | 0
Oxylithyl (-OLi) | +1.194 | +1.189 | +2.379 | 0
Selenobromyl (-SeBr)* | +1.099 | +1.612 | +2.465 | 0
Oxybromyl (-OBr) | +0.979 | +0.503 | +1.963 | 0
Oxyiodinyl (-OI) | +0.967 | +0.575 | +1.968 | 0
Hydroxyl (-OH) | +0.948 | +0.472 | +1.987 | 0
Nitrodifluoryl (-NF2) | +0.885 | +0.421 | +1.961 | 0
Disulfyl (=C-S-S-C=) | +0.841 | 0 | +2.137 | +0.380
Chlorine (-Cl) | +0.765 | +0.429 | +2.044 | 0
Borohydryl (-BH2) | +0.690 | +1.370 | +4.003 | 0
Sulfalithyl (-SLi) | +0.484 | +1.276 | +1.859 | 0
Bromine (-Br) | +0.346 | +0.214 | +1.946 | 0
Hydrogen (-H) | +0.081 | +0.069 | +1.939 | 0
Phosphohydryl (-PH2) | +0.043 | +0.072 | +1.906 | 0
Iodine (-I) | 0 | +0.147 | +2.041 | +0.120
Amine (-NH2) | 0 | +0.148 | +2.263 | +0.301
Nitrodiiodinyl (-NI2) | 0 | +0.239 | +2.261 | +0.346
Sulfhydryl (-SH) | 0 | +0.465 | +2.346 | +0.759
Sulfiodinyl (-SI) | 0 | +0.478 | +2.579 | +0.832
Sulfobromyl (-SBr) | 0 | +0.526 | +1.678 | +1.082
Berylfluoryl (-BeF) | 0 | +0.562 | +2.263 | +0.876
Berylchloryl (-BeCl) | 0 | +0.725 | +3.114 | +1.191
Dimagnesyl (-Mg2-)* | 0 | +0.956 | +2.399 | +0.802
Seleniodinyl (-SeI)* | 0 | +1.474 | +0.834 | +1.498
* energy minimization computed using PM3 instead of AM1
STEP 2. Attach a small number of tooltip molecules to an appropriate deposition surface in tip-down orientation, so that the tooltip-bound dimer is bonded to the deposition surface.
The appropriate deposition surface material (Section 2.2.1) is determined by choosing a surface which is not readily amenable to bulk diamond deposition, under the thermal and chemical conditions that will prevail during the diamond deposition processes described in Step 3. In Attachment Method A (Section 2.2.2), tooltip molecules may be bonded to the deposition surface in the desired orientation via low-energy ion bombardment of the deposition surface in vacuo, creating a low density of preferred diamond nucleation sites. In Attachment Method B (Section 2.2.3), tooltip molecules may be bonded to the deposition surface in the desired orientation by non-impact dispersal and weak physisorption on the deposition surface, followed by tooltip molecule decapping via targeted energy input producing dangling bonds at the C2 dimer which can then bond into the deposition surface in vacuo, also creating a low density of preferred diamond nucleation sites. In Attachment Method C (Section 2.2.4), the techniques of conventional solution-phase chemical synthesis are used to attach tooltip molecules to a deposition surface in the preferred orientation, again creating diamond nucleation sites.
The intention of this invention is to grow a handle molecule as a single crystal of bulk diamond large enough to permit convenient physical manipulation of the attached C2 dimer-bearing tooltip. Since this single crystal will be in the size range of 0.1-10 microns, and since sufficient room must be allowed around each single crystal to afford access to a MEMS-scale gripping mechanism, the maximum surface nucleation density appropriate for this process in the preferred embodiment will be ~10^5 cm^-2, giving a mean separation between handle molecule crystals of ~32 microns on the deposition surface. In other embodiments in which much smaller 100 nm handle molecule crystals can be employed with narrower attachment clearances for the external gripping mechanism, the maximum surface nucleation density could be as high as ~10^9 cm^-2, giving a mean separation between surface-grown handle molecule crystals of ~320 nm.
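The quoted mean separations follow directly from the assumed nucleation densities; as a rough cross-check, a minimal Python sketch (the function name is illustrative, and the spacing is approximated as the inverse square root of the areal density):

```python
import math

def mean_separation_um(density_per_cm2: float) -> float:
    """Approximate mean center-to-center spacing (microns) of nucleation
    sites at a given areal density (sites per cm^2), taken as 1/sqrt(density)."""
    separation_cm = 1.0 / math.sqrt(density_per_cm2)
    return separation_cm * 1.0e4  # convert cm to microns

print(mean_separation_um(1e5))          # ~31.6 microns (preferred embodiment)
print(mean_separation_um(1e9) * 1000)   # ~316 nm (100 nm handle crystals)
```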
Conventional diamond films grown by CVD on smooth nondiamond substrates are characterized by very low nucleation densities, typically <10^4 cm^-2 when diamond is deposited on a polished silicon wafer surface, which is many orders of magnitude less than that exhibited by most materials [127]. (Interestingly, the CVD nucleation density of diamond nanocrystals on an SiO2 substrate is 6 orders of magnitude smaller than on pure silicon [18].) The commercial preparation of continuous diamond films requires separately nucleated diamond crystals eventually to grow together to form a single sheet, and hence is maximally efficient under conditions of high nucleation density. Therefore diamond film growth procedures often include preliminary substrate preparation techniques which attempt to increase the nucleation density to a practicable level. Such techniques typically involve the introduction of surface discontinuities by scratching or abrading the substrate surface with a fine diamond grit powder or paste. These surface discontinuities may create preferential geometrical sites for diamond crystal nucleation, but more probably the embedded residues left behind by the diamond abrading powder serve as nucleation sites from which diamond growth can occur by accumulation. The presence of carbon particles on the surface of a substrate can provide a high density of nucleation sites for subsequent diamond growth [82]. As shown in Table 6, despite abrasive surface preparation the nucleation densities for diamond films prepared by such techniques remain relatively low, on the order of ~10^8 cm^-2 (~1 µm^-2) (vs. ~10^15 cm^-2 available atomic sites), and the surface structure of such films is unpredictable and typically exhibits very disordered surface patterns [127]. Nucleation has also been enhanced by coating substrate surfaces with a thin (10-20 nm) layer of hydrocarbon oil [83].
Table 6. Typical diamond nucleation densities obtained with various substrate pretreatment methods.

Pretreatment Method | Typical Nucleation Density (nuclei/cm^2)
No pretreatment | 10^3 - 10^5
Covering/coating with Fe film | 5 x 10^5
As+ ion implantation on Si | 10^5 - 10^6
Covering/coating with graphite film | 10^6
Manual scratching with diamond grit | 10^6 - 10^10
Seeding | 10^6 - 10^10
Ultrasonic scratching with diamond grit | 10^7 - 10^11
Biasing (voltage) | 10^8 - 10^11
Covering/coating with graphite fiber | >10^9
C70 clusters + biasing | 3 x 10^10
Since the purpose of this invention is to grow isolated micron-scale diamond single crystals over tooltip molecule nucleation sites, rather than a continuous diamond film, the deposition surface ideally is chosen so as to minimize the number of natural (non-tooltip-molecule) nucleation sites. If tooltip molecules are attached at a number density of ~10^5 cm^-2 to a surface of polished silicon otherwise having no pretreatment, the number density of naturally occurring nucleation sites can be held to at most 10^3-10^5 cm^-2. This implies that from 50% to 99% of the isolated micron-scale diamond single crystals that are grown during Step 3 (Section 2.3) will be correctly nucleated by surface-bound undimerized tooltip molecules. An SPM scan of the deposition surface, following the completion of Step 2 but prior to the commencement of Step 3, can identify and map the positions of all of the undimerized surface-bound tooltip molecules, so that the isolated micron-scale diamond single crystals that are later grown and properly nucleated by surface-bound tooltip molecules can be identified prior to selection and detachment in Step 4 (Section 2.4).
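The 50% to 99% range quoted above follows from treating tooltip-seeded and naturally occurring nucleation sites as equally likely to yield crystals; a minimal sketch of that arithmetic (illustrative Python, not part of any cited work):

```python
def correctly_nucleated_fraction(tooltip_density: float, natural_density: float) -> float:
    """Fraction of grown crystals nucleated by surface-bound tooltip molecules,
    assuming every nucleation site (tooltip-seeded or natural) is equally
    likely to grow into an isolated diamond crystal."""
    return tooltip_density / (tooltip_density + natural_density)

# Tooltip molecules at ~1e5 cm^-2 on polished, non-pretreated silicon:
print(correctly_nucleated_fraction(1e5, 1e3))   # ~0.99 (99%)
print(correctly_nucleated_fraction(1e5, 1e5))   # 0.50  (50%)
```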
As noted by May [85], most of the CVD diamond films reported to date have been grown on single-crystal Si wafers, mainly due to the availability, low cost, and favorable properties of Si wafers. But this is not the only possible substrate material. Candidate substrates for diamond handle molecule crystal growth must satisfy five important basic criteria [85], the first four of which are summarized quantitatively in Table 7.
First, the substrate must have a melting point (at the process pressure) higher than the temperature required for diamond growth (at least 300-500 °C, but normally greater than 700 °C). This precludes the use of low-melting-point materials such as plastics, aluminum, certain glasses, and some electronic materials such as GaAs as a deposition substrate when hydrogenic diamond CVD techniques are employed in Step 3 (Section 2.3).
Second, for growing diamond films the substrate material should have a thermal expansion coefficient comparable with that of diamond, since at the high growth temperatures currently used, a substrate will tend to expand, and thus the diamond coating will be grown upon and bonded directly to an expanded substrate. Upon cooling, the substrate will contract back to its room temperature size, whereas the diamond coating, with its very small expansion coefficient, will be relatively unaffected by the temperature change, causing the diamond film to experience significant compressive stresses from the shrinking substrate, leading to bowing of the sample, and/or cracking, flaking or even delamination of the entire film [85]. However, a nondiamond deposition surface for growing diamond tool handle molecules, starting from surface-bound tooltip molecule nuclei, should incorporate the maximum possible thermal expansion mismatch between the substrate and diamond, producing thermal stresses upon cooling that can facilitate tool separation from the nondiamond deposition surface in Step 4 (Section 2.4).
Third, a mismatch in the crystal lattice constant [86, 87] between the diamond comprising the tool handle molecule and the nondiamond substrate greatly reduces the bonding opportunities between handle molecule and substrate, during handle molecule growth (Section 2.3). An extensive interfacial misfit also facilitates tool separation from the nondiamond deposition surface in Step 4 (Section 2.4).
Fourth, in order to form adherent diamond films it is a customary requirement that the substrate material should be capable of forming a carbide layer to a certain extent, since diamond CVD on nondiamond substrates usually involves the formation of a thin carbide interfacial layer upon which the diamond then grows. The carbide layer is viewed as a “glue” which promotes diamond growth and aids its adhesion by (partial) relief of interfacial stresses caused by lattice mismatch and substrate contraction [85]. However, the ideal nondiamond deposition surface for growing diamond tool handle molecules, starting from surface-bound tooltip molecule nuclei, is a substrate that resists or prohibits carbide formation. The absence of carbide on the nondiamond deposition surface (a) discourages downgrowth of the tool handle molecule into the substrate, (b) helps maintain the isolation of the finished tooltip apex, and (c) facilitates tool separation from the nondiamond deposition surface in Step 4 (Section 2.4). On the basis of carbide exclusion, potential substrate materials including metals, alloys and pure elements can be subdivided into three broad classes [85, 88], in descending order of preference for the present invention:
(1) Carbide Exclusion. Metals such as Cu, Sn, Pb, Ag and Au, as well as non-metals such as Ge and sapphire/alumina (Al2O3), have little or no solubility or reaction with C. These materials do not form a carbide layer, and so any diamond layer that might try to form will not adhere well to the surface (which is known as a way to make free-standing diamond films, as the films will often readily delaminate after deposition). These are the best materials for a deposition surface upon which to grow detachable diamond tool handle molecules nucleated by surface-bound tooltip molecules. Unwanted natural nucleation centers are unlikely to arise on polished non-pretreated surfaces and downgrowth from the tooltip molecule seed or the growing tool handle structure, towards the substrate, will be resisted by these surfaces.
(2) Carbon Solvation. Metals such as Pt, Pd, Rh, Ni, Ti and Fe exhibit substantial mutual solubility or reaction with C (all industrially important ferrous materials such as iron and stainless steel cannot be diamond coated using simple CVD methods) [85]. During CVD, a substrate composed of these metals acts as a carbon sink whereupon deposited carbon dissolves into the surface, forming a solid solution. This dissolution transports large quantities of C into the bulk, rather than remaining at the surface where it can promote diamond nucleation [85]. Often diamond growth on the surface only begins after the substrate is completely saturated with carbon, with carbide finally appearing on the surface, by which time the tool handle molecule may already have grown sufficiently large as a single diamond crystal atop a surface-bound tooltip molecule.
(3) Carbide Formation. Metals such as Ti, Zr, Hf, V, Nb, Ta, Cr, Mo, W, Co, Ni, Fe, Y, Al, and certain other rare-earth metals can form carbide during CVD. In some metals, such as Ti, the interfacial carbide layer continues growing during diamond deposition and can become hundreds of microns thick. Non-metals such as B and Si, and Si-containing compounds such as SiO2, quartz and Si3N4, also form carbide layers, and substrates composed of carbides themselves, such as SiC, WC and TiC, are particularly amenable to diamond deposition [85]. Surface nucleation rates (cm^-2 hr^-1) on stable carbide-forming substrates (Si, Mo, W) are 10-100 times higher than on carbide-resistant substrates [89], and surface nucleation density (cm^-2) on Mo is about 10 times higher than on other carbide-forming substrates (Si, Ni, Ti, Al) under similar deposition conditions [90]. If used as polished non-pretreated deposition surfaces for diamond tool handle growth, these materials should only sparsely produce competing diamond crystal nucleation centers during hydrogenic CVD processes. (Diamond cannot be epitaxially grown directly on silicon or GaAs substrates [91].) However, carbon dimers that are present in the feedstock gases during low-temperature nonhydrogenic CVD can insert into Si and SiO2 surfaces, readily producing silicon carbide [18]. Additionally, as the CVD process continues, carbide-forming materials may permit some unwanted downgrowth from the surface-bound tooltip molecule or growing tool handle structure towards the substrate. Note that bombardment of surfaces, particularly refractory metal surfaces such as tungsten, with fullerene ions having energies from about 0.0025-250 MeV results in implantation of carbon and the formation of surface or subsurface carbides [11].
Table 7. Candidate substrate materials: melting point, linear thermal expansion coefficient, and lattice constant.

Substrate Material | Melting Point at 1 atm (°C) | Linear Thermal Expansion Coefficient (K^-1) | Lattice Constant at ~300 K (Å)
Diamond (cubic) | 3057 [92] | -- | 3.566986 [95]
Lonsdaleite (hexagonal), a-axis | -- | -- | 2.52 [94]
Lonsdaleite (hexagonal), c-axis | | | 1.42 [94]
Graphite (hexagonal), a-axis | 3797 [92] | <0 [94] | 2.464 [95]
Graphite (hexagonal), c-axis | | 25 x 10^-6 [94] | 6.711 [95]
Carbide Exclusion: | | |
Ge | 937 [96] | 6 x 10^-6 [98] | 5.64613 [100]
Sn | 232 [96] | 22 x 10^-6 [98] | 6.48920 [100]
Pb | 328 [96] | 28.9 x 10^-6 [98] | 4.95 [95]
Sapphire/Alumina (Al2O3), normal to c-axis | 2045 [96] | 5.0 x 10^-6 [99] | 4.76 [99]
Sapphire/Alumina (Al2O3), parallel to c-axis | | 6.66 x 10^-6 [99] | 13.00 [99]
Au | 1063 [96] | 14.2 x 10^-6 [98] | 4.08 [95]
Ag | 961 [96] | 18.9 x 10^-6 [98] | 4.09 [95]
Cu (fcc) | 1084 [97] | 17 x 10^-6 [97] | 3.61 [95]
Carbide Solvation: | | |
Pt | 1769 [96] | 8.8 x 10^-6 [98] | 3.92 [95]
Pd | 1552 [96] | 11.8 x 10^-6 [98] | 3.89 [95]
Rh | 1966 [96] | 8.2 x 10^-6 [98] | 3.80 [95]
Carbide Formation: | | |
Si (cubic) | 1412 [97] | 7.6 x 10^-6 [97] | 5.43095 [100]
SiO2 (quartz) | 1710 [101] | 13.3 x 10^-6 [101] | 4.91 (a), 5.41 (c) [101]
Si3N4 | 1900 [96] | 3.3 x 10^-6 [103] | 5.38 [105]
B (fcc) | 2300 [96] | 6 x 10^-6 [98] | 5.37 [106]
Ti | 1675 [96] | 8.6 x 10^-6 [98] | 2.95 (a), 4.68 (c) [95]
Zr | 1852 [96] | 5.7 x 10^-6 [98] | 3.23 (a), 5.15 (c) [95]
Hf | 2150 [96] | 5.9 x 10^-6 [98] | 3.19 (a), 5.05 (c) [95]
V | 1890 [96] | 8.4 x 10^-6 [98] | 3.03 [95]
Nb | 2468 [96] | 7.3 x 10^-6 [98] | 3.30 [95]
Ta | 2996 [96] | 6.3 x 10^-6 [98] | 3.30 [95]
Cr | 1890 [96] | 4.9 x 10^-6 [98] | 2.51 (a), 4.07 (c) [95]
Mo | 2610 [96] | 4.8 x 10^-6 [98] | 3.15 [95]
W | 3410 [96] | 4.5 x 10^-6 [98] | 3.16 [95]
Co (>390 °C) (fcc) | 1494 [97] | 12.5 x 10^-6 [97] | 3.54 [97]
Ni (fcc) | 1455 [97] | 13.3 x 10^-6 [97] | 3.52 [97]
Fe (<912 °C) (bcc) | -- | 12.1 x 10^-6 [97] | 2.86 [97]
Fe (912-1400 °C) (fcc) | 1536 [97] | >14.6 x 10^-6 [97] | 3.56 [97]
Y | 1495 [96] | 10.6 x 10^-6 [98] | 3.65 (a), 5.73 (c) [95]
Y-ZrO2 (cubic) | 2850 [102] | 4.0 x 10^-6 [102] | 5.07 [107]
Al | 660 [96] | 23.1 x 10^-6 [98] | 4.05 [95]
SiC (cubic) | 2697 [102] | 4.63 x 10^-6 [102] | 4.248 [108]
WC (fcc) | 2870 [96] | 4-7 x 10^-6 [104] | ~8.1 [109]
TiC | 3140 [96] | 7 x 10^-6 [104] | --
Easy Nucleation: | | |
BN (cubic) | 2727 [102] | 0.59 x 10^-6 [102] | 3.615 [102]
Dimer Release Criterion. In addition to these four basic factors, a fifth criterion in the choice of deposition substrate material is that the tooltip molecule should bind the C2 dimer more strongly than the deposition surface, so that when the finished tool is pulled away from the deposition surface in Step 4 (Section 2.4), the dimer will stay attached to the tool and not remain on the deposition surface. If the dimer stays with the tool, then the result is a tool with an active tip ready to perform diamond mechanosynthesis. If the dimer remains on the deposition surface, the result is a dimerless “discharged” tool which must be recharged with C2 dimer by some additional process [38] before the tool can be used for diamond mechanosynthesis.
A full computational simulation of the interaction between complete modeled deposition surfaces and the DCB6-Ge tooltip has not yet been done. However, a preliminary evaluation has examined the energy minima of a tooltip that is first joined to a deposition surface through the dimer (EJ) and is then pulled away from the deposition surface, for Dimer-on-Tooltip (EDoT) and Dimer-on-Surface (EDoS) configurations, where the “surfaces” are crudely modeled as follows: the C (diamond), Si, Ge, Sn, and Pb surfaces as a single nonterminated 10-atom adamantane-like cage, with the tooltip dimer bonded to 2 adjacent cage atoms; the Cu surface as 4 Cu atoms arranged in a square, with the tooltip dimer bonded to 2 adjacent Cu atoms; Al2O3 as a single 5-atom chain of alternating Al and O atoms, with the tooltip dimer bonded to the two Al atoms; and C (graphite) as a 3×3 (unit cells) flat single-plane sheet with all perimeter C atoms immobilized. The quantity (EDoS – EDoT), tabulated in the rightmost column of Table 8 for each surface, is negative if the dimer prefers to stick to the surface after the tooltip has been pulled away from the surface, and is positive if the dimer prefers to stick to the tooltip after the tooltip has been pulled away from the surface, the desired result. (This is only a crude analysis because the quantity (EDoS – EDoT) really informs us only as to whether the total process of charged tooltip deposition plus discharged tooltip retraction is endo- or exothermic, not the reaction direction or preference.) Since surfaces composed of the larger-radius Ag and Au atoms should bind the dimer less strongly than Cu, it appears that all “carbide exclusion” deposition surface materials listed in Table 7 (with the possible exception of Cu, whose (EDoS – EDoT) is slightly negative; Table 8), and graphite, at least tentatively satisfy this additional dimer-release criterion. Note that a release energy (EJ – EDoT) < 0 for all deposition surfaces in Table 8 suggests a thermodynamic preference for a decapped tooltip molecule to bind to the deposition surface.
Table 8. Tooltip/deposition-surface energy differences for the crude surface models described above.

Deposition Surface Material | (EJ – EDoT) in eV | (EDoS – EDoT) in eV
C (diamond) | –5.772 | –3.864
Si | –5.007 | –0.192
Cu | –5.090 | –0.115
Ge | –4.700 | +1.067
Sn | –2.802 | +2.247
Pb | –1.463 | +2.743
Al2O3 | –0.995 | +2.753
C (graphite) | –0.560 | +5.180
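The dimer-release criterion can be read directly off the rightmost column of Table 8; a small illustrative Python filter over the tabulated values (transcribed from the table above, with the sign convention described in the text) makes the screening explicit:

```python
# (EJ - EDoT, EDoS - EDoT) in eV, transcribed from Table 8
table8 = {
    "C (diamond)":  (-5.772, -3.864),
    "Si":           (-5.007, -0.192),
    "Cu":           (-5.090, -0.115),
    "Ge":           (-4.700, +1.067),
    "Sn":           (-2.802, +2.247),
    "Pb":           (-1.463, +2.743),
    "Al2O3":        (-0.995, +2.753),
    "C (graphite)": (-0.560, +5.180),
}

# Surfaces for which the dimer prefers to stay on the retracted tooltip:
passing = [material for material, (_, edos_minus_edot) in table8.items()
           if edos_minus_edot > 0]
print(passing)   # ['Ge', 'Sn', 'Pb', 'Al2O3', 'C (graphite)']
```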
Taking all five factors into account (Tables 7 and 8), “carbide exclusion” materials are the optimal substrate for diamond handle molecule crystal growth, and thus constitute the preferred embodiment of this invention. Graphene sheets (e.g., graphite, carbon nanotubes) may also be used with nonhydrogenic CVD processes, since atomic hydrogen etches graphene, although there exists a preferential epitaxial lattice registry relationship between the diamond C(111) and graphite (0001) surfaces, and similarly between the diamond C(110) and graphite (1120) surfaces [84], which might encourage non-tooltip-molecule nucleation. Furthermore, any conventional substrate material suitable for the deposition of CVD diamond thereon may be employed as the substrate utilized in the present invention, though perhaps with decreased efficiency or convenience. Thus the substrate material could be a metal, a metal carbide, a metal nitride, or a ceramic – e.g., silicon carbide, tungsten carbide, molybdenum, boron, boron nitride, niobium, graphite, copper, aluminum nitride, silver, iron, steel, nickel, silicon, alumina, or silica [5], or combinations thereof including cermets such as Al2O3-Fe, TiC-Ni, TiC-Co, TiC-TiN, or B4C-Fe systems [110]. Finally, specialized surface treatments may be applied to the deposition surface in order to suppress natural nucleation – for example, ion implantation of Ar+ ions (3 x 10^15 ions/cm^2 at 100 keV) on a silicon substrate is known to decrease nucleation density [111].
Tooltip molecules may be bonded to the deposition surface in the desired orientation via low-energy ion bombardment of the deposition surface in vacuo, creating a low density of preferred diamond nucleation sites. This is similar to the recognized pretreatment method of (for example) As+ ion implantation (10^14 ions/cm^2 at 100 keV) on silicon substrate [112, 113], which yields a typical nucleation density of 10^5-10^6 nuclei/cm^2, up from 10^4 in the absence of such ion implantation treatment [84]. Ion-beam implantation of C+ ions to form diamond-like carbon (DLC) films on various atomically clean substrates in chambers maintained at <10^-9 torr is well known [114–118, 137], including gold [118] and copper [119] surfaces, and halogen atoms have been partially substituted for hydrogen in DLC deposited on metal substrates in photosensor applications [120].
The specifics of Attachment Method A in the present invention are as follows. First, capped tooltip molecules (Section 2.1) are supplied to an ionization source. A vapor of capped tooltip molecules is created by heating in a vacuum chamber (e.g., C60 has a vapor pressure of 0.001 torr at 500 °C [17]). The vaporized capped tooltip molecules are next ionized by at least one of the procedures of laser ablation, electron bombardment, electron attachment, or photoionization. The capped tooltip molecule ions are then electrostatically accelerated to form a low-energy, highly dilute tooltip molecule ion beam, a well-known technology [121]. The ion beam is then directed in a scanning pattern across the deposition surface in vacuo. Upon striking the surface, the tooltip molecule ions (Figure 7A) may partially fragment with the release of the capping group, producing dangling bonds at the C2 dimer which can then insert into the substrate surface (Figure 7B). The beam energy transferred to the tooltip molecule upon impact should not significantly exceed 7.802 eV, the minimum energy required to entirely remove the C2 dimer from an uncapped DCB6-Ge tooltip molecule [36]. (This is considerably lower than the 10-80 eV ions studied by Sinnott et al [145] to functionalize carbon nanotubes (CNTs) by similar means, the 10-300 eV C+ ion beams used to grow diamond-like carbon films on various substrates [118], and the >250 eV needed to fragment fullerene ions into free C2 dimer radicals [11].) Another possible outcome is that only one capping group is released, bonding the tooltip molecule to the surface with only one bond through the C2 dimer (Figure 7C). Table 9 shows that this 1-bond outcome is energetically comparable to the 2-bond outcome in the case of an iodine cap and a graphite surface. Yet another possible outcome is that the tooltip molecule bonds to the surface at its base through either one (Figure 7D) or two (Figure 7E) bonds, releasing an H atom or an H2 molecule, respectively, though neither base-bonding outcome is energetically preferred compared to the desired dimer-bonding outcomes.
Figure 7. Schematic of iodine-capped DCB6-Ge tooltip molecule (A) impacting 3×3 unit-cell graphite surface in desired orientation, (B) bonding to surface and releasing capping group as an I2 molecule, or alternatively, (C) bonding to surface with only one bond through the C2 dimer with release of one I atom, (D) one bond to surface through tooltip molecule base with release of one H atom, or (E) two bonds to surface through tooltip molecule base with release of one H2 molecule
Table 9. Energies of alternative bonding outcomes for an iodine-capped DCB6-Ge tooltip molecule impacting a 3×3 unit-cell graphite surface.

(Tooltip + Surface) Configuration | Illustrated in | Energy (eV)
Tooltip over surface (no bonding) | Figure 7A | 0
2 bonds to surface at C2 dimer + I2 | Figure 7B | +2.649
1 bond to surface at C2 dimer + I | Figure 7C | +2.056
1 bond to surface at tooltip base + H | Figure 7D | +5.414
2 bonds to surface at tooltip base + H2 | Figure 7E | +4.382
Capping group removal energies from an isolated DCB6-Ge tooltip molecule for a variety of capping groups are estimated computationally (using semi-empirical AM1) as ranging from 1.9-7.4 eV (Table 10), as, for example, 3.554 eV for two iodine capping atoms, 4.728 eV for two amine capping groups, or 7.453 eV for two hydroxyl capping groups. These required energies would be halved when only one capping group is removed during tooltip molecule ion impact with the surface.
Table 10. Capping group removal energies from an isolated DCB6-Ge tooltip molecule (semi-empirical AM1 unless otherwise noted).

Capping Group | Removal Energy (eV)
Magnesium (-Mg-)* | 1.989
Phosphohydryl (-PH2 -PH2) | 2.495
Seleniodinyl (-SeI -SeI)* | 2.650
Dimagnesyl (-MgMg-)* | 2.731
Beryllium (-Be-) | 2.936
Sodium (-Na -Na)** | 3.171
Selenobromyl (-SeBr -SeBr)* | 3.265
Hydrogen (-H -H) | 3.308
Bromine (-Br -Br) | 3.521
Berylfluoryl (-BeF -BeF) | 3.528
Iodine (-I -I) | 3.554
Sulfobromyl (-SBr -SBr) | 3.680
Selenohydryl (-SeH -SeH)* | 3.745
Berylchloryl (-BeCl -BeCl) | 3.829
Sulfochloryl (-SCl -SCl) | 3.859
Chlorine (-Cl -Cl) | 3.961
Borohydryl (-BH2 -BH2) | 3.979
Diamine (-NHHN-) | 4.019
Sulfur (-S-) | 4.116
Sulfhydryl (-SH -SH) | 4.141
Sulfiodinyl (-SI -SI) | 4.231
Lithium (-Li -Li) | 4.323
Fluorosulfyl (-SF -SF) | 4.374
Nitrodiiodinyl (-NI2 -NI2) | 4.624
Sulfalithyl (-SLi -SLi) | 4.702
Amine (-NH2 -NH2) | 4.728
Nitrodifluoryl (-NF2 -NF2) | 4.896
Imide (-NH-) | 5.012
Disulfyl (-SS-) | 5.058
Oxygen (-O-) | 5.339
Oxyfluoryl (-OF -OF) | 5.474
Diberyl (-BeBe-) | 5.761
Fluorine (-F -F) | 6.782
Oxybromyl (-OBr -OBr) | 7.063
Oxylithyl (-OLi -OLi) | 7.104
Oxyiodinyl (-OI -OI) | 7.215
Hydroxyl (-OH -OH) | 7.453
* energy minimization computed using PM3 instead of AM1
** energy minimization computed using MNDO/d instead of AM1
However, the removal energy for a single passivating hydrogen atom from the bottom of the tooltip molecule base is 3.519 eV, comparable to many of the capping group removal energies listed in Table 10. Given the random orientation of tooltip molecules upon their arrival at (and impact with) the deposition surface, the sweep of a dilute beam of tooltip molecule ions across the surface will result in a thin scattering of tooltip molecules attached to the surface in a variety of orientations: some bound by two bonds to the uncapped dimer (as desired), others bound by only one bond to a partially uncapped dimer, and others bound directly to the tooltip molecule base in various orientations. Simple inspection of potential impact geometries suggests that energy transfer primarily into the dimer capping group upon impact is most probable if the tooltip molecule arrives at the deposition surface within (conservatively) ±20° of vertical, in tip-down orientation. Therefore the probability of such arrival (assuming a random distribution of tooltip molecule ion orientations in the beam), and hence the probability of a dimer-bonded tooltip molecule (having either 1 or 2 bonds to the surface through the C2 dimer), is roughly (40°/360°)^2 ≈ 1%, among all tooltip molecules that become bonded to the deposition surface.
Given a ~1% success rate, after the bombardment process and prior to the commencement of Step 3 the surface should be scanned by SPM to find and record the positions of those few tooltip molecules that are bound to the surface in the desired orientation. Depending upon the number density achieved, undesired tooltip molecule nucleation sites might simply be avoided during tool detachment in Step 4. Surface editing is another approach. Due to the low surface nucleation density (Section 2.2.1), after the aforementioned mapping procedure it may be possible to selectively detach and remove from the surface all attached misoriented tooltip molecules that are detected, e.g., using focused ion beam, electron beam, or NSOM photoionization, subtractively editing the deposition surface prior to commencing CVD in Step 3. A second alternative to subtractive editing is additive editing, wherein FIB deposition of new substrate atoms on and around the misoriented tooltip molecule can effectively bury it under a smooth mound of fresh substrate, again preventing nucleation of diamond at that site during Step 3. A third corrective procedure is reparative editing, wherein the methods described in Attachment Method B (Section 2.2.3) are employed to fully uncap the only partially uncapped tooltip molecule which has become bonded to the deposition surface (through only one carbon atom of the C2 dimer) during the ion bombardment process of Attachment Method A. The result of this editing is that in Step 3, diamond handle structures will grow only on properly-oriented surface-bound tooltip molecules.
The ability of a chemisorbed (covalently bonded) tooltip molecule to migrate across a deposition surface in vacuo depends strongly upon the chemical structures of both the tooltip molecule and the deposition surface material, and upon temperature. For example, spontaneous surface migration of gold atoms on gold surfaces is well known, though this mobility is greatly reduced at low temperatures and possibly also by alloying with silver or in combination with other carbide-resistant substrate materials. On the other hand, Larsson [122] estimates that during conventional diamond CVD on diamond substrate the acetylide radical (C2H) has an energy barrier to migration of 3.6 eV across a clean diamond C(111) surface and the methyl radical (CH3) has an even higher energy barrier to migration of 3.7 eV; on C(100), estimates for migration barriers range from 1.3-1.9 eV for methylene (CH2) radicals [123, 124], 1.1-2.7 eV for methyl radicals [123, 125], and 1.7 eV for ethylene (C=CH2) radicals [124]. Taking the migration time from the Arrhenius equation as t_migrate^-1 ~ (k_B T / h) exp(-E_mig / k_B T), where h = 6.63 x 10^-34 J-sec (Planck's constant) and k_B = 1.381 x 10^-23 J/K (Boltzmann's constant), then at T = 300 K and E_mig = 1.1-2.7 eV, t_migrate ~ 5 x 10^5 sec to 3 x 10^32 sec on diamond substrate, which is very slow. Tooltip molecules have ten times as many atoms per molecule as the aforementioned radicals, and hence should exhibit much slower surface migration at any given temperature.
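For readers who wish to reproduce the migration-time estimate, a minimal Python sketch of the Arrhenius expression above (constant and function names are illustrative):

```python
import math

K_B = 1.381e-23       # Boltzmann's constant, J/K
H_PLANCK = 6.63e-34   # Planck's constant, J-sec
J_PER_EV = 1.602e-19  # joules per electron-volt

def migration_time_sec(e_mig_ev: float, temp_k: float = 300.0) -> float:
    """Characteristic surface-migration time from t^-1 ~ (kB*T/h)*exp(-E_mig/kB*T)."""
    attempt_rate = K_B * temp_k / H_PLANCK
    hop_rate = attempt_rate * math.exp(-e_mig_ev * J_PER_EV / (K_B * temp_k))
    return 1.0 / hop_rate

print(f"{migration_time_sec(1.1):.0e} sec")   # ~5e+05 sec
print(f"{migration_time_sec(2.7):.0e} sec")   # ~3e+32 sec
```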
Tooltip molecules may be bonded to the deposition surface in the desired orientation by non-impact dispersal and weak physisorption on the deposition surface, followed by tooltip molecule decapping via targeted energy input producing dangling bonds at the C2 dimer which can then bond into the deposition surface in vacuo, again creating a low density of preferred diamond nucleation sites (Figure 8).
Figure 8. Schematic of iodine-capped DCB6-Ge tooltip molecule (A) dispersed on 3×3 unit-cell graphite surface in desired orientation, (B) absorbing targeted energy sufficient to decap the tooltip molecule in vacuo, releasing the capping group as two iodine ions or as an I2 molecule, and (C) bonding to the deposition surface
The specifics of Attachment Method B in the present invention are as follows.
First, capped tooltip molecules are dispersed and physisorbed onto the deposition surface by any of several methods. These methods may include (but are not limited to): (1) spin coating, in which a suspension of capped tooltip molecules is applied to the center of a spinning wafer of smooth deposition surface material, and subsequently dispersed across the wafer surface; (2) dip coating, in which a wafer of smooth deposition surface material is dipped into a suspension of capped tooltip molecules and slowly withdrawn; or (3) spray coating, in which a suspension of capped tooltip molecules is applied to the wafer of smooth deposition surface material as a fine spray. All three methods have been successfully employed commercially to apply onto a smooth silicon wafer a dilute coating of 100-200 nm diamond particles to a number density of ~1 µm^-2 (~10^8 cm^-2), starting with a suspension of 1 g of diamond particles in 1 liter of isopropanol [126–128], ethanol [82], or methanol [129]. In another analogous application [130], a layer of hydrocarbon molecules is applied to a substrate by the Langmuir-Blodgett technique, whereupon the surface is irradiated with a laser to decompose the layer of molecules at the surface without influencing the substrate; after decomposition the carbon atoms rearrange on the substrate surface to form a DLC film.
It is well-known that simple adamantane (C10H16), though having one of the highest melting points (542 K) of any hydrocarbon, "sublimes readily at atmospheric pressure and room temperature." [60] The enthalpy of sublimation for adamantane is ΔHsubl = 58,810 J/mole (~0.61 eV/molecule) [131] and the triple point for adamantane is Ttriple = 733 K at Ptriple = 2.7 GPa [132, 133], hence from the Clausius-Clapeyron equation the partial pressure over solid adamantane (Padam, expressed in Pa) may be estimated as: ln(Padam) = ln(Ptriple) + (ΔHsubl / R) (Ttriple^-1 – Tadam^-1) = 31.37 – (7077 / Tadam), where R = 8.31 J/mole-K (universal gas constant). At Tadam = 77 K (LN2 temperature), the partial pressure of adamantane is only 5 x 10^-32 atm, or ~1 sublimed adamantane molecule per 200,000 m^3 of volume at equilibrium, entirely negligible. However, at 300 K, Padam = 0.024 atm, or ~1 sublimed adamantane molecule per 1700 nm^3 of volume at equilibrium, a substantial sublimation rate.
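A minimal Python sketch of the Clausius-Clapeyron estimate above (assuming, as the fitted constant implies, that the pressure is expressed in Pa, and using the ideal gas law for the number density):

```python
import math

K_B = 1.381e-23   # Boltzmann's constant, J/K

def adamantane_vapor_pressure_pa(temp_k: float) -> float:
    """Equilibrium vapor pressure over solid adamantane, in Pa, from the
    Clausius-Clapeyron fit quoted in the text: ln(P) = 31.37 - 7077/T."""
    return math.exp(31.37 - 7077.0 / temp_k)

for temp_k in (77.0, 300.0):
    p_pa = adamantane_vapor_pressure_pa(temp_k)
    number_density = p_pa / (K_B * temp_k)   # molecules per m^3 (ideal gas)
    print(temp_k, p_pa / 101325.0, 1.0 / number_density)
    # 77 K:  ~5e-32 atm, ~2e5 m^3 per molecule
    # 300 K: ~0.024 atm, ~1.7e-24 m^3 (~1700 nm^3) per molecule
```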
The capped triadamantane tooltip molecule, being a larger molecule and containing two or more heavy atoms, should be less easily sublimed under ambient conditions. However, these molecules have not yet been synthesized nor are their precise thermodynamic properties known. Taking adamantane as the worst-case scenario, the surface dispersal conditions most certain to work consist of a suspension of capped tooltip molecules in a liquid nitrogen (LN2) carrier fluid, dispersed onto a smooth deposition surface which is maintained at or slightly below 77 K, the boiling point of LN2. After applying the suspension to the deposition surface, the surface temperature may be temporarily elevated to slightly above 77 K to drive off the chemically inert LN2 carrier fluid, leaving only capped tooltip molecules dispersed in vacuo on the cold deposition substrate surface in the energetically preferred equilibrium position shown in Figure 8A. If the selected capped tooltip molecules have a low or negligible sublimation rate at room temperature, then other higher-temperature suspension fluids may be used which are easily evaporatable and compatible with the underlying substrate, i.e., chemically nonreactive with the underlying substrate material(s). For example, fullerenes including C60 and C70 have been dispersed onto silicon, silica, and copper surfaces at room temperature using an evaporatable carrier fluid (e.g., toluene), then employed as growth nuclei for microwave plasma diamond film CVD [82].
Second, the capping group must be induced to debond from the C2 dimer in the tooltip molecule via excitation of the =C-cap bond. Some crude methods will not work. For example, if the capping atom is iodine, this atom has a large mass and hence a low frequency of vibration in a C-I bond (e.g., ~5.0 x 10^12 Hz at 350 K), so the absorption of a single IR photon of this frequency would add only ~0.02 eV to the bond, which is insufficient to break it. From Table 10, ~1.777 eV is required to break each of the two C-I bonds constituting the capping group of a DCB6-Ge tooltip molecule. This energy corresponds to the absorption of a single 430 THz (~7000 Å) visible red photon. Laser photoexcitation, photodissociation or photofragmentation [11] is commonly used in atom-selective bond breaking to selectively control a chemical reaction, e.g., the photodissociation of iodine atoms from iodopropane ions [134]. The requisite bond-breaking energy can be provided by a beam of electrons, noble element ions, or other energetic neutrals [135–137] directed towards the cooled deposition surface where the capped tooltip molecules reside. Viewed from above in its preferred orientation relative to the deposition surface, the iodine-capped tooltip molecule has a cross-sectional area of ~44.42 Å^2, of which ~5.05 Å^2 represents the cross-sectional area of the iodine capping group, hence the beam of photons or ions carrying the debonding energy will strike the capping group, on average, ~10% of the time that they strike a tooltip molecule at all. Much more selectively, an STM tip can be scanned over the cold deposition surface specifically to break the C-I bond via ~1.5 eV single tunneling electrons [138–140]. For instance, the STM-mediated positionally-controlled single-molecule dissociation of an iodine atom from individual molecules of copper surface-physisorbed iodobenzene (C6H5I) and diiodobenzene (C6H4I2) has been demonstrated experimentally by Hla et al [140]; in the inelastic tunneling regime, lower-energy electrons can also be injected via a resonance state between tip/substrate and the target molecule, breaking the weak C-I bond in iodobenzene without breaking the stronger C-C or C-H bonds [140].
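The photon-energy figures above are easily verified; a short illustrative Python check (physical constants only, nothing specific to the cited experiments):

```python
H_PLANCK = 6.626e-34   # Planck's constant, J-sec
C_LIGHT = 2.998e8      # speed of light, m/s
J_PER_EV = 1.602e-19   # joules per electron-volt

bond_energy_ev = 3.554 / 2          # ~1.777 eV per C-I capping bond (Table 10)
photon_freq_hz = bond_energy_ev * J_PER_EV / H_PLANCK
wavelength_angstrom = (C_LIGHT / photon_freq_hz) * 1.0e10

print(f"{photon_freq_hz:.2e} Hz")          # ~4.30e+14 Hz (430 THz)
print(f"{wavelength_angstrom:.0f} A")      # ~6980 Angstroms (~7000 A, visible red)

# Geometric chance that a beam hit on the tooltip molecule lands on the cap:
print(5.05 / 44.42)                        # ~0.11, i.e., roughly 10%
```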
Third, once the capping group has been removed and the dangling bonds have been exposed from the C2 dimer, these bonds can form strong attachments with the deposition substrate surface, thus affixing the tooltip molecule to the deposition surface in the desired tip-down orientation. The energetics of the bond-by-bond decapping procedure for an iodine-capped DCB6-Ge tooltip molecule on a 3×3 unit-cell graphite surface is estimated in Figure 9 using semi-empirical AM1 simulations which included four unattached atoms (2H, 2I) to permit total atom count to remain constant throughout all substitutions. After each iodine capping atom is removed, the conversion of the dangling C2 dimer bond to a new covalent bond between dimer and deposition surface appears to be energetically favored by 1.574 eV for the first bond and by 1.284 eV for the second bond. However, the presence of stray H or I ions can poison this reaction. For example, the dangling dimer bonds will bond to any H ions that are present, in preference to bonding with the deposition surface, so hydrogen must be excluded from the vicinity of the tooltip molecules during this stage of the process. It would be helpful to include a hydrogen getter in the vacuum chamber to absorb any hydrogens that become separated from the tooltip base. Stray iodine ions have a similar effect so it is helpful to include an intermittent positive-voltage getter plate inside the chamber to periodically attract and collect negative iodine ions as they are released from the tooltip caps. However, if the number of purposely decapped iodine atoms or accidentally debonded hydrogen atoms is on the order of ~10^5 cm^-2 (Section 2.2.1 and Table 6) in a relatively large vacuum chamber, then an encounter between such stray atoms and a surface-bound tooltip molecule, even in the absence of any countermeasures, should be an exceedingly rare event.
Figure 9. Estimated energetics of the iodine-capped DCB6-Ge tooltip molecule decapping process on 3×3 unit-cell graphite surface, using semi-empirical AM1
The process of energy transfer to the tooltip molecule for the purpose of releasing the capping iodine atoms might also accidentally debond a hydrogen atom from the adamantane base of the tooltip molecule. The energetics of this dehydrogenation during various phases of the bond-by-bond decapping procedure for an iodine-capped DCB6-Ge tooltip molecule on a 3×3 unit-cell graphite surface is estimated in Figures 10, 11, and 12 using semi-empirical AM1 and including four unattached atoms (2H, 2I) to permit atom count to remain constant during all substitutions.
In the case of a tooltip molecule having no bonds to the surface through the C2 dimer (Figure 10), that loses one hydrogen atom in the side position of the base, the tooltip molecule has a large energy barrier of 1.319 eV against bonding to the deposition surface through the dangling bond. Unless a stray H or I atom impinges at high velocity and recombines, the dehydrogenated tooltip molecule will remain on the deposition surface in the unreacted state and can later be sublimated off the deposition surface by gentle heating.
Figure 10. Estimated energetics of a dehydrogenation of the base of the iodine-capped DCB6-Ge tooltip molecule during the decapping process on 3×3 unit-cell graphite surface, using semi-empirical AM1 (0 eV = lowest-energy configuration), for a tooltip molecule having no bonds to the surface (at bottom left)
In the case of a tooltip molecule having one bond to the surface through the C2 dimer (Figure 11), that loses one hydrogen atom in the side position of the base, the tooltip molecule has only a small energy barrier (0.063 eV) against bonding to the deposition surface through the dangling bond, so this unwanted double bonding is likely to occur even at LN2 temperatures and cannot later be reversed via gentle heating. Since the barrier is of order ~kBT, the configuration change will occur about equally in both directions, producing approximately equal populations of 1-bonded and 2-bonded configurations of tooltip molecules that have lost a single H atom in the side position of the base. These unwanted configurations can be observed by SPM and edited out as previously described. In the unlikely event that a stray H atom impinges and recombines, before the new bond to the deposition surface is established, the original hydrogenated tooltip molecule will be restored.
Figure 11. Estimated energetics of a dehydrogenation of the base of the iodine-capped DCB6-Ge tooltip molecule during the decapping process on 3×3 unit-cell graphite surface, using semi-empirical AM1, for a tooltip molecule with one bond to the surface (at bottom left)
In the case of a tooltip molecule having two bonds to the surface through the C2 dimer (Figure 12), that loses one hydrogen atom in the side position of the base, the tooltip molecule has a strong energy preference (2.277 eV) to bond again to the deposition surface through the dangling bond, making a total of 3 bonds to the surface, a configuration that must be removed by post-process editing, or mapped and avoided. As before, the unlikely prior recombination of a stray H atom restores the original hydrogenated tooltip molecule, but impingement of a stray H or I atom before dehydrogenating the base can partially debond the properly 2-bonded tooltip molecule from the deposition surface. While the activation energy barrier to this reaction may be large, even preventative, the existence of such pathways emphasizes the need to minimize the number of stray H and I atoms that are present in the vacuum chamber during the tooltip molecule attachment process.
Figure 12. Estimated energetics of a dehydrogenation of the base of the iodine-capped DCB6-Ge tooltip molecule during the decapping process on 3×3 unit-cell graphite surface, using semi-empirical AM1, for a tooltip molecule with two bonds to the surface (at left)
Once a tooltip molecule has established at least one strong bond to the deposition surface, its surface mobility should be extremely low (Section 2.2.2). However, prior to such bonding these molecules are only physisorbed to the surface. Isolated pairs of iodine-capped DCB6-Ge tooltip molecules placed in tip-to-tip, tip-to-base, tip-to-side, and base-to-base orientations show weak energy barriers (calculated using semi-empirical AM1) between these configurations of only 0.05-0.09 eV (vs. 0.04 eV for (300 K) room temperature, 0.007 eV for (77 K) LN2 temperature), with just a slight preference for the base-to-base orientation. Tooltip molecules placed near each other and tooltip molecules placed several molecule widths apart in the same orientation show almost no energetic preference with separation distance, so tooltip molecules should be distributed randomly across the cold deposition surface. By varying the choices of tooltip molecule, capping group, deposition surface materials, and deposition surface temperature, the speed of tooltip molecule migration across the deposition surface can be made almost arbitrarily slow.
The enthalpy of sublimation for molecular iodine (I2) is ΔHsubl = 60,800 J/mole (~0.63 eV/molecule) and the vapor pressure over the solid is 6060 Pa at 100 °C [141], hence from the Clausius-Clapeyron equation the partial pressure over solid iodine (Piodine, expressed in Pa) may be estimated as: ln(Piodine) ~ 28.32 – (7316 / Tiodine). At Tiodine = 77 K (LN2 temperature), the partial pressure of iodine is only 1 x 10^-34 atm, but at room temperature (Tiodine = 300 K) the partial pressure is Piodine = 0.0005 atm, hence any stray iodine that remains physisorbed to the deposition surface after the completion of the decapping procedure may be driven off by gentle heating and sublimation.
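The same style of estimate applies to the iodine vapor pressure; a short self-contained Python check (again assuming the fitted constant corresponds to pressure in Pa):

```python
import math

def iodine_vapor_pressure_atm(temp_k: float) -> float:
    """Vapor pressure over solid I2, in atm, from the fit quoted in the text:
    ln(P) = 28.32 - 7316/T, with P in Pa."""
    return math.exp(28.32 - 7316.0 / temp_k) / 101325.0

print(iodine_vapor_pressure_atm(77.0))    # ~1e-34 atm at LN2 temperature
print(iodine_vapor_pressure_atm(300.0))   # ~5e-4 atm at room temperature
```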
Tooltip molecules may be bonded to the deposition surface in the preferred orientation using the techniques of conventional solution-phase chemical synthesis, creating a low density of preferred diamond nucleation sites (Figure 13).
The specifics of Attachment Method C in the present invention are as follows.
First, the deposition surface is functionalized with an appropriate functionalization group. For illustrative purposes, Figure 13A shows a section of (10,0) single-walled carbon nanotube (CNT) with a functional group “X” attached at the para- isomer positions (1 and 4) in one of the 6-carbon rings in the graphene surface. A capped tooltip is shown above this surface. For this invention, the functionalized deposition surface could be a flat graphene surface (i.e., graphite), or could be a functionalized non-graphene surface such as silicon, germanium, gold, and so forth (see Table 7). Graphite is attacked by strong oxidizing agents (such as sulfuric + nitric acid, or chromic acid) [142], allowing the random surface functionalization of graphene; also, the chemical functionalization of fullerenes is well-studied [143–148]. Since site-specific functionalization may not be strictly required in all cases, e-beam irradiation of dilutely surface-dispersed moieties, ion-beam implantation of functional-group ions, electrochemical functionalization [149, 150], or other related techniques could be employed in some cases to attach functional groups on the deposition surface at very high dilution, e.g., at 1 micron separations. However, direct chemical modification of surfaces via SPM tip [39, 140] enables the functionalization of the deposition surface at specific atomic sites, in cases where this is necessary.
Figure 13. Attachment of tooltip molecule to graphene deposition surface via solution phase combination of capping group and surface functionalization group
Second, conventional techniques of chemical synthesis are employed to establish conditions in solution phase whereby the tooltip molecule capping group, illustrated in Figure 13A by iodine, combines with the deposition surface functionalization group, here illustrated as "X", resulting in the removal of both I and X and leaving the tooltip molecule chemically bound to the deposition surface across two bonds at the carbon C2 dimer as shown in Figure 13B – much like a standard condensation reaction such as esterification, wherein an alcohol molecule having a terminal –OH group combines with an organic acid molecule having a terminal –COOH group, creating a new covalent linkage between the two molecules (an ester) with the release of an H2O molecule in the process. It is possible that a specific convenient alkenation reaction can be found in the standard chemical synthesis literature, perhaps as an analog to the synthesis pathways for bicyclooctene (Figure 13C) or more directly as an analog to methods that may already be known for the alkenation (ethenation) of graphite, CNTs, or other deposition surfaces such as Si, Ge, or Au. The attachment reaction could be enhanced in the case of a nanotube deposition surface by using a kinked CNT, with the tooltip expected to attach preferentially at the kink site where CNTs are most reactive [151].
Density functional theory (DFT) analysis [152] has considered cycloadditions of dipolar molecules to the C(100)-(2×1) diamond surface. Experiments [153] have demonstrated the [2+4] cycloaddition of benzyne (C6H4) to polycyclic aromatics such as anthracene, forming triptycene (Figure 13D). DFT studies [154, 155] of the possible cycloaddition reaction of ortho-benzyne molecules to the graphene walls of carbon nanotubes have been done (Figures 13E and 13F). There have also been experimental investigations of solution-phase cycloaddition of organic molecules to semiconductor surfaces [156] and studies of diamondlike carbon films grown in organic solution [157] or grown via the electrolysis of acetates in solution phase [158]. Hoke et al [159] and others [160] have examined the reaction path for ortho-benzyne with C60 and C70 that leads to the [2+2] cycloaddition product in which benzyne adds across one of the interpentagonal bonds, forming a cyclobutene ring.
Most directly on point as prior art, Giraud et al [161–163] have synthesized 2,2-divinyladamantane (DVA), a single-cage adamantane molecule with two vinyl (-CH=CH2) groups bonded to the same carbon atom in the cage, then dispersed this molecule onto a polished hydrogen-terminated Si(111) surface. Upon exposure to UV irradiation, photochemical double hydrosilylation occurs, fixing the adamantane molecule through two -C-C- tethers to two adjacent silicon atoms on the Si(111) surface with minimal steric strain. A rinse with ethanol, deionized water, and a 10 minute sonication with dichloromethane removed all ungrafted or physisorbed DVA. All adamantane molecules that become tethered to the surface via two bonds adopt the identical geometric orientation relative to the surface. Giraud et al [162] note that formation of the C-Si bond between the adamantane molecule and the silicon surface can be achieved by adapting any one of several commonly known techniques, including radical mediated hydrosilylation of olefins with molecular silanes [165–167], photochemical hydrosilylation of olefins with trichlorosilane [168], or hydrosilylation of olefins catalyzed by transition metal complexes [169–173].
STEP 3. Attach a large handle molecule or other handle structure to the deposition surface-bound tooltip molecule created in Step 2. There are two general methods that may be used to accomplish this: nanocrystal growth (Section 2.3.1) and direct handle bonding (Section 2.3.2).
In Method A, a bulk diamond deposition process (see below) is applied simultaneously to the entire tooltip-containing deposition surface (e.g., ~1 cm2) created in Step 2. The adamantane (diamond nanocrystal) base of each bound tooltip molecule serves as a nucleation seed from which a large diamond crystal will grow outward, in preference to growth on areas of the deposition surface where tooltip nucleation seed molecules are absent (Figure 14). Deposition should proceed until a sufficient quantity of bulk diamond crystal has grown outward and around the tooltip seed molecule such that the tooltip and its newly grown handle can be securely grasped by a MEMS-scale manipulator mechanism. The deposition process should be halted before adjacent growing crystals merge into a single film. As noted in Section 2.2, the number density of tools on the surface is controlled by limiting the number density of tooltip seed molecules attached to the deposition surface during Step 2. As distinguished from the more complex ex post strategy of chemically attaching a capped tooltip molecule to a larger prefabricated handle molecule, in the process described here the handle is grown directly onto the surface-bound tooltip, creating an optimally rigid and durable unitary mechanosynthetic tool structure. Alternatively and less preferred, the growing diamond crystal handle structure can be covalently bonded to some other appropriate large rigid structure such as a CNT, tungsten, or diamond-shard AFM tip, or an EBID/FIB-deposited metal or carbon column, e.g., by growing a vertical column of DLC atop the properly oriented tooltip molecule using a focused beam of hydrocarbon or C+ ions [114–118].
Figure 14. Multiply twinned diamond crystal growth during hot-filament assisted CVD. Photos courtesy of John C. Angus, Case Western Reserve University [174]
The most useful bulk deposition process is conventional diamond CVD, for which deposition rates of a micron per hour or faster are routinely demonstrated experimentally. The initial deposition rate onto the starting seed may be slow, but this rate should rapidly increase as more of the diamond handle structure is laid down during the deposition process, which will require times on the order of hours. Traditional high-temperature CVD uses a large excess of atomic hydrogen, which will etch a graphite or graphene surface, but CVD diamond can be deposited slowly at temperatures as low as 280-350 °C if necessary using the nonhydrogenic Argonne Lab C60/C2-dimer approach [175, 176] (Section 1.1(C)), which uses very little atomic H, in which case graphene etching would no longer be a serious problem. (Thermal suppression of nucleation at 1000 °C has been discussed by McCune [3].)
Will the CVD process deposit sp3-bonded diamond, rather than sp2-bonded graphite, onto such a tiny nucleation seed as the triadamantane base structure of the tooltip molecule? Conditions in vapor deposition of thin films require a critical nucleus size only on the order of a few atoms [177]. Under these conditions the free energy of formation of a critical nucleus may be negative [177] and the surface energy contribution may reverse the graphite-diamond phase stability [178, 179], a situation termed nonclassical nucleation [177]. Simple thermodynamic calculations by Badziag et al [180] and others [178, 179] have confirmed that hydrogen-terminated diamond nuclei <3 nm in diameter should have a lower energy than hydrogen-terminated graphite nuclei with the same number of carbon atoms, and that for surface bonds terminated with H atoms, diamonds smaller than ~3 nm are energetically favored over polycyclic aromatics (the precursors to graphite).
In 1983, Matsumoto and Matsui [19], and later in 1990, Sato [20] and Olah [21], suggested that hydrocarbon cage molecules such as adamantane, bicyclooctane, tetracyclododecane, hexacyclopentadecane, and dodecahedrane could possibly serve as embryos for the homogeneous nucleation of diamond in the gas phase. The adamantane molecule (C10H16) is the smallest combination of carbon atoms possessing the diamond unit cage structure, i.e., three six-membered rings in a chair conformation. The tetracyclododecane and hexacyclopentadecane molecules represent twinned diamond embryos that were proposed as precursors to the fivefold twinned diamond microcrystals prevalent in CVD diamond films – from simple atomic structure comparisons, the diamond lattice is easily generated from these cage compounds by simple hydrogen abstraction followed by carbon addition [7]. However, in one experiment adamantane placed on a molybdenum deposition surface during acetylene-oxygen combustion CVD failed to nucleate diamond growth [181], possibly due to “a fast transformation of adamantane on molybdenum to molybdenum carbide under diamond growth conditions.”
The first successful demonstration of the ability of surface-bound single-cage adamantane molecules to serve as nucleation seeds for diamond CVD was achieved experimentally by the Giraud group [161–164] during 1998-2001. In this process, a special seed molecule – 2,2-divinyladamantane (DVA), a single-cage adamantane with two vinyl (-CH=CH2) groups bonded to the same carbon atom in the cage – is synthesized using conventional solution phase techniques [161], then dispersed onto a polished hydrogen-terminated Si(111) surface. When a surface prepared in this way is subjected to microwave plasma CVD using an H2-rich 1% CH4 feedstock gas at 40 mbar and 850 °C for 2 hours, only a few diamond grains are observed during subsequent SEM inspection, with a nucleation density below ~10^4 cm^-2 [163]. However, when the surface is additionally exposed to UV irradiation from a xenon arc lamp for 24 hours prior to CVD, photochemical double hydrosilylation occurs, fixing the seed molecule via two -C-C- tethers to two adjacent silicon atoms on the Si(111) surface with minimal steric strain. With the seed molecule thus tethered to the silicon surface, the CVD process is then run again as previously described, this time resulting in a diamond nucleation density that rises to ~10^9 cm^-2 and producing a very homogeneous diamond size of ~2 microns [163] (indicating essentially all adamantane-based nucleations), as shown in Figure 15.
Figure 15. SEM photograph of uniform 2-micron diamond crystals grown by MPCVD using surface-tethered single-cage adamantane molecules as nucleation seeds on a Si(111) surface; image courtesy of Luc Giraud [163]
Giraud et al [163] note that although the treatment should densely cover the surface with covalently bound adamantane seed molecules, “the subsequent CVD plasma conditions will remove all the singly and presumably a few doubly attached molecules. The fact that nucleated diamonds were effectively obtained…shows the stability of grafted DVA in the nucleation conditions. All the samples treated without…UV…suffered no nucleation. This nucleation method therefore offers, on top of the advantage of flexibility and mildness, the possibility of photolithographic nucleation: diamonds adopt a homogeneous spatial repartition in the center of the irradiated region, with a well-faceted shape due to their cubic structure, while nucleation density sharply decreases to ~5 x 10^6 cm^-2 on the brink of the irradiated region without even using a light mask.” In sum, doubly bonded adamantane seed molecules nucleate the growth of diamond “handle” crystals, whereas singly bonded or unbonded seed molecules are removed by the hot CVD process and thus produce no crystal growth.
Even though the core of the tooltip molecule is iceane (the unit cell of hexagonal diamond or lonsdaleite) and not pure adamantane as in conventional cubic diamond crystal, lonsdaleite can also be grown experimentally [73–76]. The Raman spectrum of lonsdaleite has been reported [182], and lonsdaleite has been detected in localized stacking defect domains in textured CVD films [183]. Crystals of hexagonal diamond have been prepared in both static and shock high-pressure laboratory experiments [184, 185], and directly from cubic diamond [186]. Lonsdaleite can also be reliably synthesized [187] using rf-assisted plasma CVD and pure acetylene gas as the carbon source with no hydrogen – Roul et al [188] report that crystallites grown on Si(100) substrates consisted mainly of polytypes of hexagonal diamond with a little cubic diamond and a few higher-order hydrocarbon phases, and others have found diamond polytypes in CVD diamond films [189]. Both cis and trans boat-boat bicyclodecane and related multiply-twinned compounds have been suggested as possible lonsdaleite nucleators based on the presence of both boat and chair hexagonal carbon rings [190, 191]. Twinning – the stacking of alternating (as in lonsdaleite) or arbitrarily-ordered re-entrant and intersecting chair and boat planes – is commonly seen in CVD diamond [191–195]. A semi-empirical theoretical analysis of the lonsdaleite structure by Burgos et al [196] gives results in reasonable accord with the limited experimental data. Zhigilei et al [197] note that intermediate states during the reconstruction of the C(111) surface of cubic diamond can lead to growth processes which result in the formation of a stacking fault, or twin plane [198–200], which could in turn produce lonsdaleite [201], and other transition mechanisms have been proposed [202].
As noted by Battaile et al [203], experimentally grown CVD diamond crystallites can exhibit C(100) and C(111) facets [204–206]. The C(110) surfaces are not usually observed (except in (110)-oriented homoepitaxy [207, 208]) because they grow much faster than the C(111) and C(100) faces [204, 210], hence crystallites are normally terminated by (100) and (111) facets. Diamond deposition rates in a hot-filament CVD reactor at 1200 K from methyl radical are typically 1.3-2.0 µm/hr for C(110) [209, 210] but only 0.5 µm/hr for C(111) and just 0.4-0.5 µm/hr for C(100) [209–212]. With the tooltip molecule bound to the deposition surface in the preferred orientation (i.e., inverted), the C(110) plane is angled at 45° from vertical, leaning away from the vertical centerline; the C(100) plane is also angled at 45° from vertical, but leans toward the vertical centerline; the C(111) plane goes straight up along the centerline. So under CVD deposition, the tool handle structure will grow fastest outward at 45°. The C(100) plane will be buried inside the tool, and the tool handle crystal will exhibit C(110) facets on the sides and a C(111) facet on the top. (Plasma CVD diamond crystallites grown on Si(100) wafers also display a combination of C(111) and C(110) facets [6].) Note that while lonsdaleite has a repeating structure, here we should expect only a single twinning fault at the centerplane, not a series of repeating twinnings. However, geometry dictates that the detached tool cannot be concave on its active face, and would at worst be flat, hence even at a minimum it can serve as a primitive tool to experimentally demonstrate positionally controlled diamond mechanosynthesis.
Diamond films have been formed by immersing a substrate in a fluid medium comprising a carbon-containing precursor and irradiating the substrate with a laser to pyrolyze the precursor, a technique that could also be adapted to grow diamond handle structures onto isolated surface-bound tooltip molecules. For example, Hacker et al [213] describe a process in which gas containing an aliphatic acid or an aromatic carboxylic anhydride that vaporizes without decomposition is passed over a substrate and irradiated with a focused high-powered pulsed laser, depositing a diamond film. In the process disclosed by Neifeld [214], the substrate is immersed in a liquid containing carbon and hydrogen, e.g., methanol, and a laser pulse is then directed through the liquid coating to heat the substrate. The liquid is pyrolyzed and carbon material from the pyrolyzed liquid grows on the substrate to form a diamond coating on the substrate. Yu [130] applies a hydrocarbon layer to a substrate by the Langmuir-Blodgett technique, then irradiates the surface with a laser (or e-beam, x-rays, etc.) to decompose the layer of molecules at the surface without influencing the substrate; after decomposition, the carbon atoms rearrange on the surface of the substrate to form a DLC film. Bovenkerk et al [4] propose using an unusual dual gas approach to CVD in which, for example, a hydrogen (H2) or methane (CH4) feedstock gas is alternated with a carbon tetraiodide (CI4) feedstock gas, with each exposure resulting in the deposition of a new diamond monolayer on an existing diamond substrate, and alternative lower-temperature CVD gas chemistries are being investigated such as the use of CO2-based [215] or halogen-containing [216] gas mixtures. Finally, laser heating of solid CO2 at 30-80 GPa pressure causes the molecule to decompose into oxygen and diamond, revealing a new region of the CO2 phase diagram with a boundary having a negative P-T slope [217].
There are several other lesser-known alternatives to CVD, ion beam deposition, and laser pyrolysis which might also be adapted for growing the handle structure onto the surface-bound tooltip molecule. Diamond film prepared by physical vapor deposition has been described by Namba et al [218]. Liquid-phase diamond synthesis in boiling benzene or in molten lead was reported as early as 1905 [219]; more recently, researchers have reported a 2% yield of diamond from carbon tetrachloride in liquid sodium at 700 °C [220], the electrochemical growth of diamond films below 50 °C in liquid ethanol [157] and in solutions of ammonium acetate in liquid acetic acid [158], and the hydrothermal synthesis of diamond [221].
A final consideration is the overall temperature stability of the bound tooltip molecule under the conditions of CVD growth and related processes. One concern is that the tooltip molecule might destabilize if heated to CVD temperatures. Pure adamantane graphitizes at >480 °C [60], and early thermodynamic equilibrium calculations [222, 223] showed that these and similar low molecular weight hydrocarbons are not stable at high temperatures (>600 °C) in the harsh CVD environment. Another concern is that at elevated temperatures, the tooltip molecule might debond from the deposition surface. However, the work of the Giraud group [161–164] with the 2,2-divinyladamantane nucleation molecule for diamond CVD confirms experimentally that adamantane molecules having two tethers to a silicon deposition surface can survive at least 2 hours of CVD conditions at 850 °C without destabilizing or detaching from the surface, although adamantanes with only one or no bonds to the surface evidently may be detached or destroyed at these temperatures. Table 8 gives the release energy (EJ – EDoT) for a decapped tooltip molecule bound to a Ge deposition surface as ~4.7 eV. If the activation energy (reaction barrier) is of similar magnitude, then from the Arrhenius equation (Section 2.2.2) the mean detachment time for a decapped tooltip molecule bound to a Ge deposition surface at 850 °C is ~5 x 10^7 sec (>1 year). For some deposition surface materials the tooltip release energy (and reaction barrier) can be considerably lower, so it may be necessary to employ a lower-temperature CVD process to obtain an acceptably long thermal detachment time for some substrates. Successful low-temperature CVD of diamond crystallites or DLC films has been reported at temperatures as low as 250-750 °C [224], 280-350 °C [175, 176], 300-500 °C [116], 350-600 °C [128], >400 °C [110], and <500 °C [10].
In Method B, an SPM-manipulated dehydrogenated diamond shard having a flat or convex tip is brought down vertically onto a surface upon which tooltip molecules are attached. Retraction of the tip pulls the tooltip molecule off the surface, yielding a finished tool for diamond mechanosynthesis consisting of a tooltip molecule mounted on the diamond shard with an active C2 dimer exposed at the tip, as illustrated in Figure 16.
Figure 16. Extraction of surface-bound tooltip molecule via bonding to vertically inserted and retracted dehydrogenated diamond C(110) probe manipulated via SPM
(A) Lower; (B) Bind; (C) Retract
The specific sequence of events is as follows:
(1) Prepare tooltip molecules. Bond tooltip molecules to the deposition surface in the preferred orientation, as described in Step 2 (Section 2.2).
(2) Mount diamond AFM tip. Mount a diamond shard as the working tip of an AFM. The apex of the shard should be flat or convex in cross-section, and the apical tip surface of the shard should expose the diamond C(110) crystal face.
(3) Depassivate AFM tip. The AFM tip is baked in vacuo at >1300 K to completely dehydrogenate the entire diamond shard, including most importantly its C(110) apical tip surface. The C(110) surface does not reconstruct during thermal depassivation [225].
(4) Lower tip onto surface. The depassivated diamond shard tip is positioned perpendicular to the deposition surface upon which the tooltip molecules are affixed in the preferred orientation. The shard tip is then lowered toward the deposition surface (Figure 16A), in vacuo at room temperature.
(5) Bind shard to tooltip molecule. As the apical tip surface of the diamond shard reaches and contacts the deposition surface, the many dangling bonds at the C(110) crystal face of the apical tip surface bond with several carbon atoms in the base of a tooltip molecule, displacing several passivating hydrogen atoms which migrate to nearby dangling bonds on the diamond shard apical tip surface (Figure 16B).
(6) Retract tip from surface. The diamond shard is retracted from the deposition surface in the vertical direction. The tooltip molecule is more strongly bonded to the shard, so the vertical retraction of the shard causes the two bonds to the deposition surface through the C2 dimer to break (Figure 16C), creating an active C2 dimer radical exposed at the apical tip surface of the shard. The diamond shard is now an active tool that can be employed in diamond mechanosynthesis.
The process for manufacturing a mechanosynthetic tool via Method B is much inferior to the Method A process for a number of reasons. First, in Method B, after contacting the surface it will be uncertain how many, if any, tooltip molecules have become bonded to the apical tip surface of the diamond shard probe. Second, after the bonds to the deposition surface through the C2 dimer have been broken, the tooltip molecule is free to rotate and may form additional bonds between the tooltip molecule base and the depassivated apical tip surface, most likely carrying the tooltip molecule out of its vertical orientation and placing it in some unknown, possibly useless, orientation. Third, if the tooltip molecule is bonded to the diamond shard probe through only a minimal number of bonds then the tool may be far less rigid than the solid crystalline tool created by Method A, and thus may be incapable of transmitting the full range of magnitudes and directions of forces that may be required in mechanosynthetic operations. Finally, if the tooltip molecule is bonded to the diamond shard probe through bonds in various numbers and different crystallographic positions, then the position, vibrations, and other important characteristics of the tool will be far less predictable than the tool created by Method A, and the positional uncertainty of dimer placement may be much greater, possibly unacceptably high for many applications, even if the tool is operated at LN2 or lower temperatures. Nevertheless, Method B is a considerably easier process from an experimental standpoint and so it may be possible to manufacture early, though less capable, mechanosynthetic tools in this manner.
STEP 4. Mechanically grasp and break away the diamond crystal-handled tool from the deposition surface, in vacuo. The covalent bond between the tooltip (through the C2 dimer) and the surface will mechanically break (Table 8), yielding either a tool with a naked carbon dimer attached (i.e., a charged, active mechanosynthetic tool; Figure 17A) or a tool with no dimer attached (i.e., a “discharged” tool needing recharge, e.g., with acetylene; Figure 17B). Ideally, handle diamond near the tooltip forms only weak van der Waals bonds to the deposition surface, so tool breakaway produces few or no unwanted dangling bonds near the active tip. If deemed necessary, each tool can be further machined or shaped via laser-, e-beam-, or ion-beam-ablation to provide any desired aspect ratio for the finished tool, or to provide any necessary larger-scale features on the handle surface such as slots, grooves, or ridges, prior to separation of the tool from the surface. This toolbuilding process should work for any carbon dimer deposition tooltip of similar type, as long as the capping group and the deposition surface are judiciously chosen for each case. Note also that the discharged dimer deposition tool can often be employed as a dimer removal tool [38], at least in the case of isolated dimers on a mechanosynthetic workpiece, permitting limited rework capability during subsequent mechanosynthetic operations using the tools produced by the present invention.
Figure 17. Idealized mechanosynthetic tool handle structure (passivating hydrogen atoms not shown)
(A) active C2 dimer bound on tip; (B) C2 dimer discharged from tip
Following the completion of Step 3 but prior to the commencement of Step 4, the mechanosynthetic tools grown on the deposition surface in Step 3 may be stably stored indefinitely at room temperature under an inert atmosphere. Prior to the commencement of Step 4, the deposition surface containing the bound tools should be baked in vacuo at a temperature high enough to drive off any physisorbed impurities that may have accumulated on the surface or handle structure during storage, but at a temperature low enough to avoid significant dehydrogenation of the diamond handle crystal. Hydrogen desorption becomes measurable at 800-1100 K for the C(111) diamond surface [226], 1400 K for the C(110) surface [227], and possibly as low as 623 K for the C(100) surface [228]. Taking Tbake = 600 K and the dimer-to-surface C-C bond energy Ebond = 556 zJ [32], the minimum thermal detachment time is given by the Arrhenius equation as tdetach ~ [(kB Tbake / h) exp(-Ebond / kB Tbake)]^-1 = 1.1 x 10^16 sec, where h = 6.63 x 10^-34 J-sec (Planck’s constant) and kB = 1.381 x 10^-23 J/K (Boltzmann’s constant).
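As a cross-check on the two Arrhenius estimates quoted in this section (the ~1 year detachment time at 850 °C given earlier and the 1.1 x 10^16 sec bake-out figure above), the following Python sketch simply re-evaluates the same formula with the constants stated in the text; nothing beyond those numbers is assumed.

```python
import math

K_B = 1.381e-23      # Boltzmann's constant, J/K
H   = 6.63e-34       # Planck's constant, J-sec

def mean_detachment_time(bond_energy_J, temperature_K):
    """Arrhenius estimate: t ~ [ (kB*T/h) * exp(-E/(kB*T)) ]^-1, in seconds."""
    attempt_rate = K_B * temperature_K / H
    return 1.0 / (attempt_rate * math.exp(-bond_energy_J / (K_B * temperature_K)))

# Bake-out case above: Ebond = 556 zJ at Tbake = 600 K  ->  ~1.1e16 sec
print(mean_detachment_time(556e-21, 600.0))
# Earlier CVD-survival estimate: ~4.7 eV barrier at 850 C (1123 K)  ->  ~5e7 sec
print(mean_detachment_time(4.7 * 1.602e-19, 1123.0))
```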
The minimum force required to break a C-C bond in a characteristic bond cleavage time of ~0.1 ns at 300 K is estimated as ~4.4 nN (~4.0 nN for a C-Si bond), and the threshold stress for breaking two C-C bonds “mechanically constrained to cleave in a concerted process” is ~6 nN per bond [32]. Hence the force required to simultaneously break both of the bonds between the two tooltip dimer carbon atoms and the two deposition surface atoms to which they are attached, during tool separation in Step 4, is likely on the order of 8-12 nN. However, a much larger van der Waals attraction may exist between the diamond tool handle crystal and the deposition surface. For example, two opposed hydrogenated diamond C(111) surfaces equilibrate at ~2.3 Å separation, according to a simple molecular mechanics (MM+) simulation. Assuming no additional covalent bonds have formed between tool and deposition surface except through the C2 dimer at the tooltip, two planar surfaces of area A ~ 1 µm^2 with Hamaker constant H ~ 300 zJ (i.e., diamond, Si, Ge, graphite, metal surfaces) separated by a distance s ~ 2.3 Å experience an attractive force [32, 93] of F ~ HA/(12πs^3) ~ 650,000 nN. Even if the contact interface is only 100 nm^2 the attractive force is still F ~ 65 nN, an order of magnitude larger than the force required to break each of the two covalent bonds between deposition surface and C2 dimer. The separation force required to snap the finished tool free from the deposition surface, assuming no rogue covalent bonds, is therefore on the order of 10^2-10^6 nN. For comparison, the force of gravity on a 1 µm^3 diamond crystal is ~0.00003 nN and the force from a 10,000-g shock impact acceleration (e.g., dropping object on concrete floor) produces a lateral accelerative force of only 0.3 nN.
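The plate-plate van der Waals estimate above can be reproduced by a direct evaluation of F ~ HA/(12πs^3) with the constants quoted in the text; the short Python sketch below does only that and assumes nothing further.

```python
import math

def vdw_force_nN(hamaker_J, area_m2, separation_m):
    """Nonretarded van der Waals attraction between parallel plates, F ~ H*A/(12*pi*s^3)."""
    force_N = hamaker_J * area_m2 / (12.0 * math.pi * separation_m**3)
    return force_N * 1e9                      # convert newtons to nanonewtons

H_HAMAKER = 300e-21    # ~300 zJ (diamond, Si, Ge, graphite, metal surfaces)
S_EQUIL   = 2.3e-10    # ~2.3 Angstrom equilibrium separation

print(vdw_force_nN(H_HAMAKER, 1e-12, S_EQUIL))   # 1 um^2 contact   -> ~650,000 nN
print(vdw_force_nN(H_HAMAKER, 1e-16, S_EQUIL))   # 100 nm^2 contact -> ~65 nN
```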
Additionally, the flexural strength of diamond is 23 times greater than that of silicon, permitting much greater forces to be applied to the tool handle element without breakage; if the diamond handle crystal should contact the substrate which it overhangs, its low coefficient of static friction ensures that the diamond crystal will not adhere to the substrate [18]. Note that in one combustion CVD experiment with adamantane-seeded diamond growth on Mo (a carbide-forming surface; Table 7) [181], it was observed that “the diamond crystals show a low adhesion on the molybdenum substrate.” Differential thermal expansion during post-CVD cooling causes the built tool and the deposition surface to shrink differently, creating stresses and possibly prematurely breaking off the tool; a similar technique allows a grown diamond film to separate as an integral diamond sheet on cooling.
The need to securely grip and apply forces against mechanical resistance during the tool separation process, while retaining precise positional knowledge in all coordinate and rotational axes, imposes specific operational requirements for the gripper and manipulator system. Since the bondlength between C2 dimer and deposition surface is ~1.5 Å, and since these bonds cannot tolerate excessive stretching before breaking, the manipulator system should have a repeatable positioning resolution of at least ΔRmin ~ 2 Å. Subsequent mechanosynthetic operations on diamond surfaces will likely require repeatable positional accuracies of at least 0.5 Å, and in some cases as little as 0.2 Å [38, 235], or about tenfold better than for mere tool separation alone. Since handle crystals are of slightly different size, shape, and orientation, it is also important to avoid excessively rotating the handle as it is being grasped in preparation for tool separation from the deposition surface. A handle crystal of radius Rhandle = 1 µm and a minimum allowable displacement of ΔRmin = 2 Å imply a minimum allowable rotation of Δθmin = sin^-1(ΔRmin/Rhandle) ~ 200 µrad, or 20 µrad for mechanosynthesis operations where ΔR = 0.2 Å. A further requirement is the ability of the manipulator to apply incremental forces along various translational or rotational vectors of ΔFmin = 10^2-10^6 nN.
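The rotational tolerance Δθmin = sin^-1(ΔRmin/Rhandle) quoted above can be checked in a few lines of Python; this is only a numerical restatement of the text's figures, assuming the ~1 µm handle radius implied by the 200 µrad result.

```python
import math

def max_rotation_urad(dR_min_m, handle_radius_m):
    """Largest handle rotation keeping the tooltip within dR_min of its nominal position."""
    return math.asin(dR_min_m / handle_radius_m) * 1e6   # radians -> microradians

R_HANDLE = 1.0e-6   # ~1 um handle crystal radius
print(max_rotation_urad(2.0e-10, R_HANDLE))   # 2.0 A tolerance (tool separation)   -> ~200 urad
print(max_rotation_urad(0.2e-10, R_HANDLE))   # 0.2 A tolerance (mechanosynthesis)  -> ~20 urad
```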
The Zyvex S100 Nanomanipulator [229] achieves a rotational accuracy of ΔθS100 = 2 µrad << Δθmin = 20-200 µrad, as required. The S100 grippers provide a maximum gripping force of 550,000 nN ~ ΔFmin, which should be adequate in most cases. However, the repeatable positional accuracy of the S100 is only 50 Å, or 25 times coarser than the ~2 Å required for controlled tool separation and ~100-250 times coarser than the 0.2-0.5 Å required for accurate mechanosynthesis [235]. The Klocke Nanotechnik Nanomanipulator claims 20 Å step sizes and 10 Å positional accuracy without backlash [230], still not quite good enough. Nevertheless, in a somewhat different context, scanning with AFM tips may be undertaken with the ~0.1 Å accuracy that would be required during room temperature mechanosynthesis operations. By premeasuring the exact positions of all viable tooltip molecules attached to the deposition surface, and then carefully tracking all positional and rotational motions that are subsequently applied to the tool, the exact 3D spatial position of the active tool dimer may be continuously estimated with sufficient accuracy.
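The bookkeeping suggested in the last sentence – premeasure each tooltip's position, then accumulate every commanded translation and rotation of the handle to keep a running estimate of the active dimer's coordinates – can be sketched as follows. This is an illustrative sketch of ours, not a description of the S100 or Klocke control software; the class name, the worst-case error model (per-move resolution plus lever-arm error), and the example motion sequence are all assumptions, while the 50 Å and 2 µrad per-move accuracies are the S100 figures quoted above.

```python
import numpy as np

class DimerTracker:
    """Track the estimated 3D position of the active C2 dimer as the handle is moved."""
    def __init__(self, dimer_xyz_m, handle_radius_m=1.0e-6,
                 step_error_m=5.0e-9, angle_error_rad=2.0e-6):
        self.p = np.asarray(dimer_xyz_m, dtype=float)   # premeasured dimer position (m)
        self.R_handle = handle_radius_m
        self.step_err = step_error_m                    # repeatable positional accuracy per move
        self.ang_err = angle_error_rad                  # repeatable angular accuracy per move
        self.worst_case_error = 0.0                     # accumulated worst-case uncertainty (m)

    def translate(self, dxyz_m):
        self.p = self.p + np.asarray(dxyz_m, dtype=float)
        self.worst_case_error += self.step_err

    def rotate_about_z(self, angle_rad, pivot_xyz_m):
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        pivot = np.asarray(pivot_xyz_m, dtype=float)
        self.p = Rz @ (self.p - pivot) + pivot
        self.worst_case_error += self.R_handle * self.ang_err   # lever-arm error of the rotation

# Hypothetical example: lift the tool 10 um, rotate 1 mrad about its own axis, step sideways 5 um.
trk = DimerTracker(dimer_xyz_m=[0.0, 0.0, 0.0])
trk.translate([0.0, 0.0, 10e-6])
trk.rotate_about_z(1e-3, pivot_xyz_m=trk.p)
trk.translate([5e-6, 0.0, 0.0])
print(trk.p, trk.worst_case_error)
```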
Once the completed mechanosynthetic tool has been detached from the deposition surface, the exposed C2 dimer radical is extremely chemically active. According to an AM1 simulation, an activated DCB6-Ge tooltip energetically prefers to combine with incident O2 molecules (by 6.7 eV) and with incident N2 molecules (by 2.8 eV) – the principal constituents of air and the most likely environmental contaminants. Since any laboratory vacuum is imperfect, stray atoms, ions, and molecules will populate the vacuum chamber at some low concentration and will eventually impinge upon an unused active tooltip, reacting with it and rendering it useless for further mechanosynthetic work.
Using the standard formula for molecular incident rate [231], the mean lifetime ttool of an active DCB6-Ge tooltip exposed to vacuum with a partial pressure Patm of contaminant molecules having molar mass Mmolar (kg/mole) at temperature T is given by: ttool = (Nhits Vmolar / Atarget Patm NA) (π Mmolar / 2 kB T NA)^1/2 (seconds), where the number of encounters between an active tooltip and a contaminant molecule that are required to deactivate the tooltip is taken as Nhits = 1, the molar gas volume Vmolar = 22.4141 x 10^-3 m^3-atm/mole, Atarget ~ 2 Å^2 is the cross-sectional area of the exposed C2 dimer impact target (analogous to the room temperature dimer atom positional uncertainty footprint described in [38]), T = 77 K (LN2 temperatures), NA = 6.023 x 10^23 molecules/mole (Avogadro’s number), and kB = 1.381 x 10^-23 J/K (Boltzmann’s constant). Expressing pressure as Ptorr = 760 Patm in torr and rearranging terms, then Ptorr = (2.2 x 10^-6) / ttool (torr) for hydrogen atoms (H) having molar mass Mmolar = 1 x 10^-3 kg/mole; Ptorr = (1.2 x 10^-5) / ttool (torr) for nitrogen molecules (N2) having molar mass Mmolar = 28 x 10^-3 kg/mole; and Ptorr = (1.3 x 10^-5) / ttool (torr) for oxygen molecules (O2) having molar mass Mmolar = 32 x 10^-3 kg/mole, the two most likely contaminant molecules from the ambient environment. To ensure a mean tooltip lifetime of ttool = 1000 sec requires maintaining a partial pressure Ptorr = 2.2 x 10^-9 torr for H atoms, Ptorr = 1.2 x 10^-8 torr for N2, and Ptorr = 1.3 x 10^-8 torr for O2. Ultrahigh vacuums (UHV) of 10^-7 to 10^-10 torr have been commonly accessible experimentally for many decades [232], and vacuums as high as 10^-15 torr have been created in the laboratory [233]. Note that a vacuum of 10^-9 torr inside an enclosed 10,000 cubic micron box contains, on average, far less than one contaminant molecule – usually making, in effect, a perfect vacuum and allowing, in principle, an unrestricted tooltip lifetime.
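For convenience, the rearranged relation between the required partial pressure and the desired tooltip lifetime can be evaluated directly; the Python sketch below simply re-expresses the formula and constants given in the text and reproduces the pressure limits quoted above.

```python
import math

K_B = 1.381e-23        # Boltzmann's constant, J/K
N_A = 6.023e23         # Avogadro's number, molecules/mole
V_M = 22.4141e-3       # molar gas volume, m^3-atm/mole
A_T = 2.0e-20          # ~2 A^2 cross-section of the exposed C2 dimer target, m^2

def max_pressure_torr(lifetime_s, molar_mass_kg, T=77.0, n_hits=1):
    """Partial pressure (torr) giving mean time `lifetime_s` between contaminant impacts."""
    p_atm = (n_hits * V_M / (A_T * lifetime_s * N_A)) * math.sqrt(
        math.pi * molar_mass_kg / (2.0 * K_B * T * N_A))
    return 760.0 * p_atm

for name, M in (("H", 1e-3), ("N2", 28e-3), ("O2", 32e-3)):
    print(name, max_pressure_torr(1000.0, M))   # ~2.2e-9, ~1.2e-8, ~1.3e-8 torr
```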
1. Tohru Inoue, Masaya Kadono, Akiharu Miyanaga, “Method for forming diamond and apparatus for forming the same,” U.S. Patent 5,360,477, 1 November 1994.
2. Nobuo Setaka, "Process for producing diamond powder by shock compression," U.S. Patent 4,377,565, 22 March 1983.
3. Robert C. McCune, Ronald J. Baird, "Making diamond composite coated cutting tools," U.S. Patent 4,919,974, 24 April 1990.
4. Harold P. Bovenkerk, Thomas R. Anthony, James F. Fleischer, William F. Banholzer, "CVD diamond by alternating chemical reactions," U.S. Patent 5,302,231, 12 April 1994.
5. Thomas R. Anthony, James F. Fleischer, "Smooth surface CVD diamond films and method for producing same," U.S. Patent 5,523,121, 4 June 1996.
6. Dieter M. Gruen, Thomas G. McCauley, Dan Zhou, Alan R. Krauss, "Tailoring nanocrystalline diamond film properties," U.S. Patent 6,592,839, 15 July 2003.
7. Y. Matsui, A. Yuuki, M. Sahara, Y. Hirose, "Flame structure and diamond growth mechanism of acetylene torch," Jpn. J. Appl. Phys. Part 1 28(1989):1718-1724.
8. D. M. Gruen, S. Liu, A. R. Krauss, J. Luo, X. Pan, "Fullerenes as Precursors for Diamond Film Growth Without Hydrogen or Oxygen Additions," Appl. Phys. Lett. 64(1994):1502-1504.
9. Dieter M. Gruen, Shengzhong Liu, Alan R. Krauss, Xianzheng Pan, "Diamond film growth from fullerene precursors," U.S. Patent 5,620,512, 15 April 1997.
10. Dieter M. Gruen, Alan R. Krauss, "Method for the preparation of nanocrystalline diamond thin films," U.S. Patent 5,772,760, 30 June 1998.
11. Dieter M. Gruen, "Conversion of fullerenes to diamond," U.S. Patent 5,209,916, 11 May 1993.
12. Dieter M. Gruen, "Conversion of fullerenes to diamond," U.S. Patent 5,328,676, 12 July 1994.
13. Dieter M. Gruen, "Conversion of fullerenes to diamond," U.S. Patent 5,370,855, 6 December 1994.
14. Dieter M. Gruen, "Conversion of fullerenes to diamonds," U.S. Patent 5,462,776, 31 October 1995.
15. Dieter M. Gruen, Alan R. Krauss, Shengzhong Liu, Xianzheng Pan, Christopher D. Zuiker, "Diamond film growth argon-carbon plasmas," U.S. Patent 5,849,079, 15 December 1998.
16. Dieter M. Gruen, Alan R. Krauss, Ali Erdemir, Cuma Bindal, Christopher D. Zuiker, "Smooth diamond films as low friction, long wear surfaces," U.S. Patent 5,989,511, 23 November 1999.
17. "Directed Energy Interactions with Surfaces: Fullerenes As Precursors for Diamond Film Growth," Chemistry Division, Argonne National Laboratory, accessed 29 December 2003; http://chemistry.anl.gov/surfaces/fullerenes.html
18. Alan R. Krauss, Dieter M. Gruen, Michael J. Pellin, Orlando Auciello, "Ultrananocrystalline diamond cantilever wide dynamic range acceleration/vibration/pressure sensor," U.S. Patent 6,422,077, 23 July 2002.
19. R.J.H. Klein-Douwel, J.J. ter Meulen, "Spatial distributions of atomic hydrogen and C2 in an oxyacetylene flame in relation to diamond growth," J. Appl. Phys. 83(1 May 1998):4734-4745; http://www.mlf.sci.kun.nl/publ/1998/H_C2.pdf
20. S. Matsumoto, Y. Matsui, "Electron microscopic observation of diamond particles grown from the vapour phase," J. Mater. Sci. 18(1983):1785-1793.
21. Y. Sato, Japan Review in New Diamond (English version), Japan New Diamond Forum, 1990, p. 5.
22. Jeremy E. Dahl, Robert M. Carlson, Shenggao Liu, "Diamondoid-containing materials in microelectronics," U.S. Patent Application 20020130407, 19 September 2002.
23. Thomas A. Plaisted, Susan B. Sinnott, "Hydrocarbon thin films produced from adamantane–diamond surface deposition: Molecular dynamics simulations," J. Vac. Sci. Technol. A 19(January/February 2001):262-266; http://dx.doi.org/10.1116/1.1335683
24. M. Matsuura, K. Murakami, Y. Inaki, T. Yamamoto, "Diamond-like-carbon thin films deposited from adamantane," Bull. Res. Inst. Electronics Shizuoka Univ. 23(1988):47-56. In Japanese.
25. Charles B. Musgrave, Jason K. Perry, Ralph C. Merkle, William A. Goddard III, "Theoretical studies of a hydrogen abstraction tool for nanotechnology," Nanotechnology 2(1991):187-195; http://www.zyvex.com/nanotech/Habs/Habs.html
26. Michael Page, Donald W. Brenner, "Hydrogen abstraction from a diamond surface: Ab initio quantum chemical study using constrained isobutane as a model," J. Am. Chem. Soc. 113(1991):3270-3274.
27. Michael Page, Donald W. Brenner, "Ab initio quantum chemical study of hydrogen abstraction from isobutane constrained to model a diamond surface," in Russell Messier, Jeffrey T. Glass, James E. Butler, Rustum Roy, eds., Proceedings of the Second International Conference, New Diamond Science and Technology, Materials Research Society, Pittsburgh, PA, 1991, pp. 45-50.
28. Xiao Yan Chang, Martin Perry, James Peploski, Donald L. Thompson, Lionel M. Raff, "Theoretical studies of hydrogen-abstraction reactions from diamond and diamond-like surfaces," J. Chem. Phys. 99(15 September 1993):4748-4758.
29. Susan B. Sinnott, Richard J. Colton, Carter T. White, Donald W. Brenner, "Surface patterning by atomically-controlled chemical forces: molecular dynamics simulations," Surf. Sci. 316(1994):L1055-L1060.
30. D.W. Brenner, S.B. Sinnott, J.A. Harrison, O.A. Shenderova, "Simulated engineering of nanostructures," Nanotechnology 7(1996):161-167; http://www.zyvex.com/nanotech/nano4/brennerAbstract.html and http://www.zyvex.com/nanotech/nano4/brennerPaper.pdf
31. A. Ricca, C.W. Bauschlicher Jr., J.K. Kang, C.B. Musgrave, "Hydrogen abstraction from a diamond (111) surface in a uniform electric field," Surf. Sci. 429(1999):199-205.
32. K. Eric Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, John Wiley & Sons, New York, 1992; http://www.zyvex.com/nanotech/nanosystems.html
33. Ralph C. Merkle, “A proposed ‘metabolism’ for a hydrocarbon assembler,” Nanotechnology 8(1997):149-162; http://www.zyvex.com/nanotech/hydroCarbonMetabolism.html
34. Stephen P. Walch, Ralph C. Merkle, "Theoretical studies of diamond mechanosynthesis reactions," Nanotechnology 9(September 1998):285-296;
35. Fedor N. Dzegilenko, Deepak Srivastava, Subhash Saini, "Simulations of carbon nanotube tip assisted mechano-chemical reactions on a diamond surface," Nanotechnology 9(December 1998):325-330.
36. Ralph C. Merkle, Robert A. Freitas Jr., "Theoretical analysis of a carbon-carbon dimer placement tool for diamond mechanosynthesis," J. Nanosci. Nanotechnol. 3(August 2003):319-324; http://www.rfreitas.com/Nano/DimerTool.htm and http://www.rfreitas.com/Nano/JNNDimerTool.pdf
37. Jingping Peng, Robert A. Freitas Jr., Ralph C. Merkle, "Theoretical Analysis of Diamond Mechanosynthesis. Part I. Stability of C2 Mediated Growth of Nanocrystalline Diamond C(110) Surface," J. Comput. Theor. Nanosci. 1(March 2004):62-70. http://www.MolecularAssembler.com/JCTNPengMar04.pdf
38. David J. Mann, Jingping Peng, Robert A. Freitas Jr., Ralph C. Merkle, "Theoretical Analysis of Diamond Mechanosynthesis. Part II. C2 Mediated Growth of Diamond C(110) Surface via Si/Ge-Triadamantane Dimer Placement Tools," J. Comput. Theor. Nanosci. 1(March 2004):71-80. http://www.MolecularAssembler.com/JCTNMannMar04.pdf
39. Wilson Ho, Hyojune Lee, "Single bond formation and characterization with a scanning tunneling microscope," Science 286(26 November 1999):1719-1722; http://www.physics.uci.edu/~wilsonho/stm-iets.html
40. A. Herman, "Towards mechanosynthesis of diamondoid structures: I. Quantum-chemical molecular dynamics simulations of sila-adamantane synthesis on hydrogenated Si(111) surface with the STM," Nanotechnology 8(September 1997):132-144.
41. A. Herman, "Towards mechanosynthesis of diamondoid structures. II. Quantum-chemical molecular dynamics simulations of mechanosynthesis on an hydrogenated Si(111) surface with STM," Modelling Simul. Mater. Sci. Eng. 7(January 1999):43-58; A. Herman, “Computational nanotechnology of silicon structures: a challenge far beyond 2000,” TASK Quarterly 1(July 1997):9-20.
42. Noriaki Oyabu, Oscar Custance, Insook Yi, Yasuhiro Sugawara, Seizo Morita, "Mechanical vertical manipulation of selected single atoms by soft nanoindentation using near contact Atomic Force Microscopy," Phys. Rev. Lett. 90(2 May 2003):176102; http://link.aps.org/abstract/PRL/v90/e176102
43. O. Marti, B. Drake, and P. K. Hansma, "Atomic force microscopy of liquid-covered surfaces: Atomic resolution images," Appl. Phys. Lett. 51(17 August 1987):484-486; http://content.aip.org/APPLAB/v51/i7/484_1.html
44. G. Tanasa, O. Kurnosikov, C.F.J. Flipse, J.G. Buijnsters, W.J.P. van Enckevort, "Diamond deposition on modified silicon substrates: Making diamond atomic force microscopy tips for nanofriction experiments," J. Appl. Phys. 94(1 August 2003):1699-1704; http://content.aip.org/JAPIAU/v94/i3/1699_1.html
45. Sacharia Albin, Jianli Zheng, John B. Cooper, Weihai Fu, Arnel C. Lavarias, "Microwave plasma chemical vapor deposited diamond tips for scanning tunneling microscopy," Appl. Phys. Lett. 71(10 November 1997):2848-2850.
46. E. Oesterschulze, W. Scholz, Ch. Mihalcea, D. Albert, B. Sobisch, W. Kulisch, "Fabrication of small diamond tips for scanning probe microscopy application," Appl. Phys. Lett. 70(27 January 1997):435-437; http://content.aip.org/APPLAB/v70/i4/435_1.html
47. G.J. Germann, G.M. McClelland, Y. Mitsuda, M. Buck, H. Seki, "Diamond force microscope tips fabricated by chemical vapor deposition," Rev. Sci. Instrum. 63(1 September 1992):4053-4055; http://content.aip.org/RSINAK/v63/i9/4053_1.html
48. Geoffrey J. Germann, Sidney R. Cohen, Gabi Neubauer, Gary M. McClelland, Hajime Seki, D. Coulman, "Atomic scale friction of a diamond tip on diamond (100) and (111) surfaces," J. Appl. Phys. 73(1 January 1993):163-167; http://content.aip.org/JAPIAU/v73/i1/163_1.html
49. Eric P. Visser, Jan W. Gerritsen, Willem J. P. van Enckevort, Herman van Kempen, "Tip for scanning tunneling microscopy made of monocrystalline, semiconducting, chemical vapor deposited diamond," Appl. Phys. Lett. 60(29 June 1992):3232-3234; http://content.aip.org/APPLAB/v60/i26/3232_1.html
50. J.H. Hafner, C.L. Chueng, C.M. Lieber, "Growth of nanotubes for probe microscopy tips," Nature 398(1999):761-762.
51. C.L. Cheung, J.H. Hafner, C.M. Lieber, "Carbon nanotube atomic force microscopy tips: Direct growth by chemical vapor deposition and application to high-resolution imaging," Proc. Natl. Acad. Sci. (USA) 97(2000):3809-3813.
52. Natalya Shcherbinina, "Carbon Nanotube Tips for Atomic Force Microscopy," 28 June 2001; http://cmliris.harvard.edu/html_natalya/research/probes/tip.html
53. E.W. Krahe, R. Mattes, K.-F. Tebbe, H.G. v. Schnering, G. Fritz, "Formation of organosilicon compounds. 47. The crystal and molecular structure of 1,3,5,7-tetramethyl-tetrasila-adamantane," Z. Anorg. Allg. Chem. 393(1972):74-80. In German.
54. R. Mattes, "Structure of organosilicon compounds. III. Vibrational spectra of 1,3,5,7-tetraethyl-silaadamantane," J. Molec. Structure 16(April 1973):53-58. In German.
55. G. Fritz, G. Marquardt, "Formation of organosilicon compounds. LIII. Novel carbosilanes by pyrolysis of Si(CH3)4 and their isolation," Z. Anorg. Allg. Chem. 404(March 1974):1-37. In German.
56. Stephan Pawlenko, Organosilicon Chemistry, Walter de Gruyter, New York, 1986.
57. Cecil L. Frye, Jerome M. Kosowski, Donald R. Weyenberg, "1,3,5,7-tetrasilaadamantanes. A facile synthesis via catalyzed ligand redistribution," J. Am. Chem. Soc. 92(21 October 1970):6379-6380.
58. M. Anthony McKervey, John J. Rooney, "Catalytic routes to adamantane and its homologues," in George A. Olah, ed., Cage Hydrocarbons, John Wiley & Sons, New York, 1990, pp. 39-64.
59. R.C. Bingham, P.v.R. Schleyer, Chemistry of Adamantanes: Recent Developments in the Chemistry of Adamantane and Related Polycyclic Hydrocarbons, Springer-Verlag, New York, 1971.
60. Raymond C. Fort, Jr., Adamantane: The Chemistry of Diamond Molecules, Marcel Dekker, New York, 1976.
61. Evgenii Ignatevich Bagrii, Adamantany: Poluchenie, Svoistva, Primenenie (Adamantanes: Preparation, Properties, and Application), Nauka Press, Moscow, 1989.
62. George A. Olah, ed., Cage Hydrocarbons, John Wiley & Sons, New York, 1990.
63. Paul von Rague Schleyer, "My thirty years in hydrocarbon cages: From adamantane to dodecahedrane," in George A. Olah, ed., Cage Hydrocarbons, John Wiley & Sons, New York, 1990, pp. 1-38.
64. W.D. Graham, P. von R. Schleyer, "Diamond lattice hydrocarbons: spiro[adamantane-2,2’-adamantane]," Tetrahedron Lett. 12(1972):1179-1180.
65. E. Boelema, J. Strating, Hans Wynberg, "Spiro[adamantane-2,2’-adamantane]," Tetrahedron Lett. (1972):1175-1177.
66. W. David Graham, Paul von R. Schleyer, Edward W. Hagaman, Ernest Wenkert, "[2]Diadamantane, the first member of a new class of diamondoid hydrocarbons," J. Am. Chem. Soc. 95(22 August 1973):5785-5786.
67. Chris A. Cupas, Paul von R. Schleyer, David J. Trecker, "Congressane," J. Am. Chem. Soc. 87(20 February 1965):917-918.
68. Van Zandt Williams, Jr., Paul von R. Schleyer, Gerald Jay Gleicher, Lynn B. Rodewald, "Triamantane," J. Am. Chem. Soc. 88(20 August 1966):3862-3863.
69. O. Vogl, B.C. Anderson, D.M. Simons, "Synthesis of hexaoxadiamantanes," Tetrahedron Lett. (1966):415-418.
70. William Burns, Thomas R.B. Mitchell, M. Anthony McKervey, John J. Rooney, George Ferguson, Paul Roberts, "Gas-phase reactions on platinum. Synthesis and crystal structure of anti-tetramantane, a large diamondoid fragment," J. Chem. Soc. Chem. Commun. (1976):893-895.
71. William Burns, M. Anthony McKervey, Thomas R.B. Mitchell, John J. Rooney, "A new approach to the synthesis of diamondoid hydrocarbons: synthesis of anti-tetramantane," J. Am. Chem. Soc. 100(1978):906-911.
72. Mingzuo Shen, Henry F. Schaefer III, Congxing Liang, Jenn-Huei Lii, Norman L. Allinger, Paul von Rague Schleyer, "Finite Td symmetry models for diamond: From adamantane to superadamantane (C35H36)," J. Am. Chem. Soc. 114(1992):497-505.
73. Chris A. Cupas, Leonard Hodakowski, "Iceane," J. Am. Chem. Soc. 96(10 July 1974):4668-4669.
74. David P.G. Hamon, Garry F. Taylor, "A Synthesis of Tetracyclo[5,3,1,1^2.6,0^4.9]dodecane (Iceane)," Tetrahedron Lett. (1975):155-158; David P.G. Hamon, Garry F. Taylor, "A Synthesis of Tetracyclo[5,3,1,1^2.6,0^4.9]dodecane (Iceane)," Aust. J. Chem. 29(1976):1721-1734.
75. David P.G. Hamon, Colin L. Raston, Garry F. Taylor, Jose N. Varghese, Allan H. White, "Crystal Structure of Tetracyclo[5,3,1,1^2.6,0^4.9]dodecane (Iceane)," Aust. J. Chem. 30(1977):1837-1840.
76. P.D. Ownby, "First commercial source of hexagonal diamond powder," submitted, 2004; http://www.umr.edu/~ownby/publications.html
77. S. Fahy, S.G. Louie, "High-pressure structural and electronic properties of carbon," Phys. Rev. B 36(1987):3373-3385.
78. G. Laqua, H. Musso, W. Boland, R. Ahlrichs, "Force Field Calculations (MM2) of Carbon Lattices," J. Am. Chem. Soc. 112(1990):7391-7392.
79. P.D. Ownby, X. Yang, J. Liu, "Calculated x-ray diffraction data for diamond polytypes," J. Amer. Ceramic Soc. 75(1992):1876-1883.
80. Pierluigi Mercandelli, Massimo Moret, Angelo Sironi, "Molecular mechanics in crystalline media," Inorg. Chem. 37(1998):2563-2569.
81. M. Nomura, P. von R. Schleyer, A.A. Arz, "Alkyladamantanes by rearrangement from diverse starting materials," J. Am. Chem. Soc. 89(5 July 1967):3657-3659.
82. Zhu Feng, Marilee Brewer, Ian Brown, Kyriakos Komvopoulos, "Pretreatment process for forming a smooth surface diamond film on a carbon-coated substrate," U.S. Patent 5,308,661, 3 May 1994.
83. A.A. Morrish, Pehr E. Pehrsson, "Effects of surface pretreatments on nucleation and growth of diamond films on a variety of substrates," Appl. Phys. Lett. 59(22 July 1991):417-419; http://content.aip.org/APPLAB/v59/i4/417_1.html
84. Huimin Liu, David S. Dandy, "Studies on nucleation process in diamond CVD: An overview of recent developments," Diam. Relat. Mater. 4(1995):1173-1188; http://navier.engr.colostate.edu/pubs/DRM1-Full.pdf. See also: Huimin Liu, David S. Dandy, Diamond Chemical Vapor Deposition: Nucleation and Early Growth Stages, Noyes Publications, Park Ridge, New Jersey, 1995; http://navier.engr.colostate.edu/pubs/BookSummary.pdf
85. Paul W. May, "Diamond thin films: a 21st-century material," Phil. Trans. R. Soc. Lond. A 358(2000):473-495; http://www.chm.bris.ac.uk/pt/diamond/pdf/rscreview.pdf
86. Wei Zhu, Peichun Yang, Jeffrey T. Glass, "Method of fabricating oriented diamond films on nondiamond substrates and related structures," U.S. Patent 5,449,531, 12 September 1995.
87. Hiromu Shiomi, Naoji Fujimori, "Method for producing single crystal diamond film," U.S. Patent 5,387,310, 7 February 1995.
88. B. Lux, R. Haubner, "Nucleation and growth of low-pressure diamond," in R.E. Clausing, L.L. Horton, J.C. Angus, P. Koidl, eds., Diamond and Diamond-like Films and Coatings, Plenum Press, New York, 1991, p. 579-609.
89. B.V. Spitzyn, L.L. Bouilov, B.V. Derjaguin, J. Crystal Growth 52(1981):219.
90. J.W. Kim, Y.J. Baik, K.Y. Eun, in Y. Tzeng, M. Yoshikawa, M. Murakawa, A. Feldman, eds., Applications of Diamond Films and Related Materials, Elsevier Sci. Publ., New York 1991, p. 399.
91. Takahiro Imai, Naoji Fujimori, "Thin film single crystal diamond substrate," U.S. Patent 4,863,529, 5 September 1989.
92. R. Hultgren, P.D. Desai, D.T. Hawkins, M. Gleiser, K.K. Kelley, Selected Values of the Thermodynamic Properties of Binary Alloys, American Society for Metals, Metals Park, OH, 1973.
93. Robert A. Freitas Jr., Nanomedicine, Volume I: Basic Capabilities, Landes Bioscience, Georgetown, TX, 1999, Appendix A. http://www.nanomedicine.com/NMI/AppendixA.htm
94. H.O. Pierson, Handbook of Carbon, Graphite, Diamond and Fullerenes, Noyes Publications, Park Ridge, New Jersey, 1993; J.E. Field, The Properties of Diamond, Academic Press, London, 1979.
95. Michael J. Mehl, "Tight binding parameters for the elements," U.S. Naval Research Laboratory (NRL), 25 July 2002; http://cst-www.nrl.navy.mil/bind/index.html
96. Robert C. Weast, Handbook of Chemistry and Physics, 49th Edition, CRC Press, Cleveland OH, 1968.
97. E.A. Brandes, Smithells Metals Reference Book, 6th edition, Butterworth & Co, London, 1983.
98. "WebElements" website, 2003; http://www.webelements.com/webelements/elements/text/<Symbol>/heat.html
99. "Sapphire Properties Table," MarkeTech International Inc., Port Townsend WA, 21 February 2002; http://www.mkt-intl.com/sapphires/sapphphotos.htm
100. S.M. Sze, Physics of Semiconductor Devices, Wiley Interscience Publications, New York, 1981, pp. 848-849; http://www.veeco.com/learning/learning_lattice.asp
101. http://www.impex-hightech.de/Quartz.html; http://www.argusinternational.com/quartz.html
102. J. Shackelford, W. Alexander, The CRC Materials Science and Engineering Handbook, CRC Press, Boca Raton FL, 1992.
103. http://www.accuratus.com/silinit.html
104. http://www.lucasmilhaupt.com/htmdocs/brazing_support/everything_about_brazing/materials_comp_chart.html
105. M.E. Bachlechner, A. Omeltchenko, A. Nakano, R.K. Kalia, P. Vashishta, A. Madhukar, P. Messina, "Multimillion-atom molecular dynamics simulation of atomic level stresses in Si(111)/Si3N4(0001) nanopixels," Appl. Phys. Lett. 72(20 April 1998):1969-1971.
106. D.A. Papaconstantopoulos, M.J. Mehl, "First-principles study of superconductivity in high-pressure boron," 3 July 2003; http://arxiv.org/pdf/cond-mat/0111385
107. G.V. Samsonov, The Oxide Handbook, IFI/Plenum Data Corporation, New York, 1973.
108. http://www.umsl.edu/~fraundor/rworld/msa99.pdf
109. A.V. Postnikov, P. Entel, "Ab initio molecular dynamics and elastic properties of TiC and TiN nanoparticles," University of Duisburg, 2003; http://www.thp.uni-duisburg.de/Paper/Postnik/tin_final.pdf
110. Toshimichi Ito, "Method for synthesis of diamond and apparatus therefor," U.S. Patent 4,869,924, 26 September 1989.
111. K. Kobayashi, M. Kumagai, S. Karasawa, T. Watanabe, F. Togashi, J. Cryst. Growth 128(1993):408.
112. S.J. Lin, S.L. Lee, J. Hwang, C.S. Chang, H.Y. Wen, "Effects of local facet and lattice damage on nucleation of diamond grown by microwave plasma chemical vapor deposition," Appl. Phys. Lett. 60(30 March 1992):1559-1561; http://content.aip.org/APPLAB/v60/i13/1559_1.html
113. K. Hirabayashi, Y. Taniguchi, O. Takamatsu, T. Ikeda, K. Ikoma, N. Iwasaki-Kurihara, "Selective deposition of diamond crystals by chemical vapor deposition using a tungsten-filament method," Appl. Phys. Lett. 53(7 November 1988):1815-1817; http://content.aip.org/APPLAB/v53/i19/1815_1.html
114. S. Aisenberg, R. Chabot, "Ion-beam deposition of thin films of diamondlike carbon," J. Appl. Phys. 42(1971):2953.
115. V.E. Strelnitskii, I.I. Aksenov, S.I. Vakula, V.G. Padakula, Sov. Phys. Tech. Phys. 23(1978):222.
116. Mutsukazu Kamo, Seiichiro Matsumoto, Yoichiro Sato, Nobuo Setaka, "Method for synthesizing diamond," U.S. Patent 4,434,188, 28 February 1984.
117. Seiichiro Matsumoto, Mototsugu Hino, Yusuke Moriyoshi, Takashi Nagashima, Masayuki Tsutsumi, "Method for synthesizing diamond by using plasma," U.S. Patent 4,767,608, 30 August 1988.
118. John W. Rabalais, Srinandan R. Kasi, "Chemically bonded diamond films and method for producing same," U.S. Patent 4,822,466, 18 April 1989.
119. T.P. Ong, Fulin Xiong, R.P.H. Chang, C.W. White, "Nucleation and growth of diamond on carbon-implanted single crystal copper surfaces," J. Mater. Res. 7(September 1992):2429-2439; http://www.mrs.org/publications/jmr/jmra/1992/sep/P02429.PDF
120. Shuji Iino, Hideo Hotomi, Izumi Osawa, Mitsutoshi Nakamura, "Photosensitive member with hydrogen-containing carbon layer," U.S. Patent 4,743,522, 10 May 1988.
121. Kiyoshi Morimoto, Toshinori Takagi, "Ion beam deposition apparatus," U.S. Patent 4,559,901, 24 December 1985.
122. Karin Larsson, "Migration of species on a diamond (111) surface," in J.L. Davidson, W.D. Brown, A. Gicquel, B.V. Spitsyn, J.C. Angus, eds., Proc. Fifth Intl. Symp. on Diamond Materials, The Electrochemical Society, Pennington, NJ, 1998, pp. 247-253.
123. S.P. Mehandru, Alfred B. Anderson, "Adsorption of H, CH3, CH2 and C2H2 on 2 x 1 restructured diamond (100)," Surf. Sci. 248(June 1991):369-381.
124. Michael Frenklach, Sergei Skokov, "Surface migration in diamond growth," J. Phys. Chem. 101(1997):3025-3036.
125. E.J. Dawnkaski, D. Srivastava, B.J. Garrison, "Time dependent Monte Carlo simulations of H reactions on the diamond (001)(2×1) surface under chemical vapor deposition conditions," J. Chem. Phys. 102(15 June 1995):9401-9411; http://galilei.chem.psu.edu/pdf/147bjg.pdf
126. Frank Jansen, Mary A. Machonkin, "Processes for the preparation of polycrystalline diamond films," U.S. Patent 4,925,701, 15 May 1990.
127. Allen R. Kirkpatrick, "Diamond films and method of growing diamond films on nondiamond substrates," U.S. Patent 5,082,359, 21 January 1992.
128. Michael J. Ulczynski, Donnie K. Reinhard, Jes Asmussen, "Process for depositing adherent diamond thin films," U.S. Patent 5,897,924, 27 April 1999.
129. Japanese Patent Application, Abstract No. 2138395, June 1987; cited in Frank Jansen, Mary A. Machonkin, "Processes for the preparation of polycrystalline diamond films," U.S. Patent 4,925,701, 15 May 1990.
130. Bing-Kun Yu, "Preparation of diamond and diamond-like thin films," U.S. Patent 5,273,788, 28 December 1993.
131. James Chickos, Donald Hesse, Sarah Hosseini, Gary Nichols, Paul Webb, "Sublimation enthalpies at 298.15 K using correlation gas chromatography and differential scanning calorimetry measurements," Thermochimica Acta 313(1998):101-110; http://www.umsl.edu/~jscumsl/JSCPUBS/seale.pdf
132. C.W.F.T. Pistorius, H.C. Snyman, Z. Physik Chem. 43(1964):278; C.W.F.T. Pistorius, H.A. Resing, Mol. Cryst. Liq. Cryst. 5(1969):353.
133. Ilham Mokbel, Kvetoslav Ruzicka, Vladimir Majer, Vlastimil Ruzicka, Madeleine Ribeiro, Jacques Jose, Milan Zabransky, "Phase equilibria (v-s, v-l) for three compounds of petroleum interest: 1-phenyldodecane, (5a)-cholestane, adamantane," Fluid Phase Equilibria 169(2000):191-207.
134. Sang Tae Park, Sang Kyu Kim, Myung Soo Kim, "Observation of conformation-specific pathways in the photodissociation of 1-iodopropane ions," Nature 415(17 January 2002):306-308.
135. G. Lucovsky, P. D. Richards, R. J. Markura, paper presented at Workshop of Dielectric Systems for III-V Semiconductors, San Diego CA, 26-27 May 1984.
136. P.D. Richard, R.J. Markunas, G. Lucovsky, G.G. Fountain, A.N. Mansour, D.V. Tsu, "Remote plasma enhanced CVD deposition of silicon nitride and oxide for gate insulators in (In, Ga)AS FET devices," J. Vac. Sci. Technology A 3(May-June 1985):867-872.
137. Chandra V. Desphandey, Rointan F. Bunshah, Hans J. Doerr, "Process for making diamond, doped diamond, diamond-cubic boron nitride composite films," U.S. Patent 4,816,291, 28 March 1989.
138. S.W. Hla, L. Bartels, G. Meyer, K.-H. Rieder, "Inducing all steps of a chemical reaction with the scanning tunneling microscope tip: Towards single molecule engineering," Phys. Rev. Lett. 85(2000):2777-2780; http://www.phy.ohiou.edu/~hla/7.pdf
139. Saw-Wai Hla, Gerhard Meyer, Karl-Heinz Rieder, "Inducing single-molecule chemical reactions with a UHV-STM: A new dimension for nano-science and technology," Chem. Phys. Chem. 2(2001):361-366; http://plato.phy.ohiou.edu/~hla/HLA2001-1.pdf
140. Saw-Wai Hla, Karl-Heinz Rieder, "STM control of chemical reactions: single-molecule synthesis," Annu. Rev. Phys. Chem. 54(June 2003):307-330; http://www.phy.ohiou.edu/~hla/HLA-annualreview.pdf
141. William L. Masterton, Emil J. Slowinski, Chemical Principles, Second Edition, W.B. Saunders Co., Philadelphia, PA, 1969, pp. 215, 299.
142. Robert A. Freitas Jr., Nanomedicine, Volume IIA: Biocompatibility, Landes Bioscience, Georgetown, TX, 2003, p. 64. http://www.nanomedicine.com/NMIIA/15.3.3.3.htm#p6
143. R. Taylor, D.R.M. Walton, "The chemistry of fullerenes," Nature 363(1993):685.
144. F. Diederich, C. Thilgen, "Covalent fullerene chemistry," Science 271(19 January 1996):317-323.
145. B. Ni, S.B. Sinnott, "Chemical functionalization of carbon nanotubes through energetic radical collisions," Phys. Rev. B 61(2000):R16343-R16346.
146. Sarbajit Banerjee, Stanislaus S. Wong, "Functionalization of Carbon Nanotubes with a Metal-Containing Molecular Complex," Nano Letters 2(2002):49-53.
147. Keun Soo Kim, Kyung Ah Park, Hyun Jin Kim, Dong Jae Bae, Seong Chu Lim, Young Hee Lee, Jae Ryong Kim, Ju-Jin Kim, Won Bong Choi, "Band Gap Modulation of a Carbon Nanotube by Hydrogen Functionalization," J. Korean Phys. Soc. 42(February 2003):S137-S142; http://nanotube.skku.ac.kr/data/paper/KSKim_JKPS.pdf
148. Sarbajit Banerjee, Michael G.C. Kahn, Stanislaus S. Wong, "Rational Chemical Strategies for Carbon Nanotube Functionalization," Chem. Eur. J. 9(2003):1898-1908.
149. Frank J. Owens, Zafar Iqbal, "Electrochemical Functionalization Of Carbon Nanotubes With Hydrogen," 23rd Army Science Conference, Session L Poster Summaries: Nanotechnology, LP-11, 2002; http://www.asc2002.com/summaries/l/LP-11.pdf
150. J.L. Bahr, J.P. Yang, D.V. Kosynkin, M.J. Bronikowski, R.E. Smalley, J.M. Tour, "Functionalization of carbon nanotubes by electrochemical reduction of aryl diazonium salts: A bucky paper electrode," J. Am. Chem. Soc. 123(2001):6536-6542; http://smalley.rice.edu/rick’s publications/JACS123-6536.pdf
151. Kevin D. Ausman, Henry W. Rohrs, MinFeng Yu, Rodney S. Ruoff, "Nanostressing and mechanochemistry," Nanotechnology 10(September 1999):258-262.
152. X. Lu, X. Xu, N. Wang, Q. Zhang, "A DFT study of the 1,3-dipolar cycloadditions on the C(100)-2 x 1 surface," J. Org. Chem. 67(25 January 2002):515-520.
153. R. W. Hoffmann, Dehydrobenzene and Cycloalkynes, Verlag Chemie-Academic Press, New York, 1967.
154. Richard Jaffe, Jie Han, Al Globus, "Formation of Carbon Nanotube Based Gears: Quantum Chemistry and Molecular Mechanics Study of the Electrophilic Addition of o-Benzyne to Fullerenes, Graphene, and Nanotubes," First Electronic Molecular Modeling & Graphics Society Conference, 1996; http://www.nas.nasa.gov/Groups/Nanotechnology/publications/MGMS_EC1/quantum/paper.html
155. Al Globus, Richard Jaffe, "NanoDesign: Concepts and Software for a Nanotechnology Based on Functionalized Fullerenes," First Electronic Molecular Modeling & Graphics Society Conference, 1996; http://www.nas.nasa.gov/Groups/Nanotechnology/publications/MGMS_EC1/NanoDesign/paper.html
156. R.J. Hamers, S.K. Coulter, M.D. Ellison, J.S. Hovis, D.F. Padowitz, M.P. Schwartz, C.M. Greenlief, J.N. Russell Jr., "Cycloaddition chemistry of organic molecules with semiconductor surfaces," Acc. Chem. Res. 33(September 2000):617-624.
157. Yoshikatsu Namba, "Attempt to grow diamond phase carbon films from an organic solution," J. Vac. Sci. Technol. A 10(September/October 1992):3368-3370.
158. P. Aublanc, V.P. Novikov, L.V. Kuznetsova, M. Mermoux, "Diamond synthesis by electrolysis of acetates," Diam. Rel. Mater. 10(March-July 2001):942-946.
159. Steven H. Hoke, Jay Molstad, Dominique Dilettato, Mary Jennifer Jay, Dean Carlson, Bart Kahr, R. Graham Cooks, "Reaction of Fullerenes and Benzyne," J. Org. Chem. 57(11 September 1992):5069-5071.
160. M.S. Meier, G.W. Wang, R.C. Haddon, C.P. Brock, M.A. Lloyd, J.P. Selegue, "Benzyne adds across a closed 5-6 ring fusion in C70: Evidence for bond delocalization in fullerenes," J. Am. Chem. Soc. 120(1998):2337-2342
161. L. Giraud, V. Huber, T. Jenny, "2,2-Divinyladamantane: a new substrate for the modification of silicon surfaces," Tetrahedron 54(1998):11899-11906.
162. E. Leroy, O. M. Kuttel, L. Schlapbach, L. Giraud, T. Jenny, "Chemical vapor deposition of diamond growth using chemical precursor," Appl. Phys. Lett. 73(24 August 1998):1050-1052.
163. Anne Giraud, Titus Jenny, Eric Leroy, Olivier M. Kuttel, Louis Schlapbach, Patrice Vanelle, Luc Giraud, "Chemical nucleation for CVD diamond growth," J. Am. Chem. Soc. 123(2001):2271-2274.
164. Liliana Dumitrescu Buforn, Eberhard Blank, "Diamond nucleation on chemically modified silicon using HFCVD," Paper G/PII.35, Session X: Multilayer Coatings, Symposium G, E-MRS Spring Meeting 2003, 10-13 June 2003; http://www-emrs.c-strasbourg.fr/2003SPRING/2003ABSTRACTS/2003_G_ABS.PDF
165. L.H. Sommer, E.W. Pietrusza, F.C. Whitmore, J. Am. Chem. Soc. 69(1947):188.
166. C. Chatgilialoglu, "Organosilanes as radical-based reducing agents in synthesis," Acc. Chem. Res. 25(1992):188-194.
167. B. Kopping, C. Chatgilialoglu, M. Zehnder, B. Giese, J. Org. Chem. 57(1992):3994-4000.
168. W.I. Bevan, R.N. Haszeldine, J. Middleton, A.E. Tipping, J. Chem Soc., Perkin Trans. 1(1974):2305-2309.
169. K. Yamamoto, T. Hayashi, M. Kumada, J. Organomet. Chem. 28(1971):C37-C38.
170. J.L. Speier, "Homogeneous Catalysis Hydrosilation by Transition Metals," in F.G.A. Stones, R. West, eds., Advances in Organometallic Chemistry, Vol. 17, Academic Press, New York, 1979, pp. 407-447.
171. K. Tamao, T. Nakajima, R. Sumiya, H. Arai, N. Higuchi, Y. Ito, J. Am. Chem. Soc. 108(1986):6090-6093.
172. M. Tanaka, Y. Uchimari, H.J. Lautenschlager, Organometallics 10(1991):16-18.
173. L.N. Lewis, J. Stein, R.E. Colborn, Y. Gao, J. Dong, "The chemistry of fumarate and maleate inhibitors with platinum hydrosilylation catalysts," J. Organomet. Chem. 521(1996):221-227.
174. John C. Angus, Phillip W. Morrison, "Diamond Lab," Case Western Reserve University, 1998; http://web.archive.org/web/19981206235656/http://k2.scl.cwru.edu/cse/eche/faculty/angus/diamond.htm
175. K. Bando, K. Kamo, T. Ando, Y. Sato, "Deposition of diamond crystal at substrate temperature lower than 500 oC," in Russell Messier, Jeffrey T. Glass, James E. Butler, Rustum Roy, eds., Proceedings of the Second International Conference, New Diamond Science and Technology, Materials Research Society, Pittsburgh, PA, 1991, pp. 467-472.
176. Argonne National Laboratory, "Diamond Films for Microelectromechanical Systems (MEMS)"; http://www.techtransfer.anl.gov/techtour/diamondmems.html
177. M. Tomellini, "Evidence for nonclassical nucleation at solid surfaces in diamond deposition from the gas phase," J. Mater. Res. 8(July 1993):1596-1604.
178. A.R. Badzian, R.C. DeVries, "Crystallization of diamond from the gas phase: Part I," Mater. Res. Soc. Bull. 23(1988):385-400.
179. A.R. Badzian, T. Badzian, R. Roy, R. Messier, K.E. Spear, "Crystallization of diamond crystals and films by microwave assisted CVD (Part II)," Mater. Res. Soc. Bull. 23(April 1988):531-548.
180. P. Badziag, W.S. Verwoerd, W.P. Ellis, N.R. Greiner, "Nanometre-sized diamonds are more stable than graphite," Nature 343(1990):244-245.
181. Burak Atakan, Karsten Lummer, Katharina Kohse-Hoinghaus, "Diamond deposition in acetylene-oxygen: nucleation and early growth on molybdenum substrates for different pretreatment procedures," Phys. Chem. Chem. Phys. 1(1999):3151-3156; http://www.rsc.org/ej/CP/1999/F9901945.PDF
182. D.S. Knight, W.B. White, "Characterization of diamond films by Raman spectroscopy," J. Mater. Res. 4(March-April 1989):385-393.
183. L. Fayette, M. Mermoux, B. Marcus, F. Brunet, P. Germi, M. Pernet, L. Abello, G. Lucazeau, J. Garden, "Analysis of the fine structure of the Raman line and x-ray reflection profiles for textured CVD diamond films," Diam. Rel. Mater. 4(1995):1243-1250.
184. F.P. Bundy, J.S. Kasper, "Hexagonal diamond – A new form of carbon," J. Chem. Phys. 46(1 May 1967):3437-3446.
185. R.E. Hanneman, H.M. Strong, F.P. Bundy, "Hexagonal diamonds in meteorites: implications," Science (24 February 1967):995-997.
186. Hongliang He, T. Sekine, T. Kobayashi, "Direct transformation of cubic diamond to hexagonal diamond," Appl. Phys. Lett. 81(22 July 2002):610-612; http://content.aip.org/APPLAB/v81/i4/610_1.html
187. D.V. Fedoseev, V.L. Bukhovets, I.G. Varshavskaya, A.V. Lavrentev, B.V. Derjaguin, "Transition of graphite into diamond in a solid phase under the atmospheric pressure," Carbon 21(1983):237-241.
188. B.K. Roul, B.B. Nayak, P.K. Mishra, B.C. Mohanty, "Diamond and diamond-like-carbon growth on Si(100) by hot filament-assisted RF plasma CVD," J. Mater. Synth. Proc. 7(1999):281-288.
189. S. Bhargava, H.D. Bist, S. Sahli, M. Aslam, H.B. Tripathi, "Diamond polytypes in the chemical vapor deposited diamond films," Appl. Phys. Lett. 67(December 1995):1706-1708.
190. J.C. Angus, F.A. Buck, M. Sunkara, T.F. Groth, C.C. Hayman, R. Gat, "Diamond growth at low pressures," MRS Bull. 1989(October 1989):38-47.
191. John C. Angus, Cliff C. Hayman, “Low-pressure, metastable growth of diamond and "diamondlike’ phases," Science 241(19 August 1988):913-921.
192. R.E. Clausing, L. Heatherly, K.L. More, G.M. Begun, "Electron microscopy of the growth features and crystal structures of filament-assisted CVD diamond films," Surf. Coatings Technol. 39/40(1989):199-210.
193. B.E. Williams, J.T. Glass, R.F. Davis, K. Kobashi, "The analysis of defect structures and substrate/film interfaces of diamond thin films," J. Cryst. Growth 99(1990):1168-1176.
194. Keiji Hirabayashi, Noriko Iwasaki Kurihara, Naoto Ohtake, Masanori Yoshikawa, "Size dependence of morphology of diamond surfaces prepared by DC arc plasma jet chemical vapor deposition," Jpn. J. Appl. Phys. 31(February 1992):355-360.
195. R.C. DeVries, "Synthesis of diamond under metastable conditions," Annu. Rev. Mater. Sci. 17(1987):161-176.
196. E. Burgos, E. Halac, H. Bonadeo, "A semi-empirical potential for the statics and dynamics of covalent carbon systems," Chem. Phys. Lett. 298(18 December 1998):273-278.
197. L.V.Zhigilei, D.Srivastava, and B.J.Garrison, "Intermediate metastable structure of the C{111}/(1×1)H-C{111}/(2×1) surface phase transition," Phys. Rev. B 55(1997):1838-1843; http://galilei.chem.psu.edu/pdf/155bjg.pdf
198. B.E. Williams, J.T. Glass, R.F. Davis, K. Kobashi, K.L. More, in J.P. Dismukes, ed., Proc. First Intl. Symp. On Diamond and Diamond-like Films, Electrochemical Society, New York, 1989, p. 202.
199. John C. Angus, Alberto Argoitia, Roy Gat, Zhidan Li, Mahendra Sunkara, Long Wang, Yaxin Wang, "Chemical vapour deposition of diamond," in A. Lettington, J.W. Steeds, eds., Thin Film Diamond, Chapman and Hall, London, 1994, pp. 1-14; see also in: Phil. Trans. R. Soc. London A 342(1993):195-208.
200. C. Wild, R. Kohl, N. Herres, W. Muller-Sebert, P. Koidl, "Oriented CVD diamond films: twin formation, structure and morphology," Diam. Rel. Mater. 3(April 1994):373-381.
201. K.E. Spear, M. Frenklach, in K.E. Spear, J.P. Dismukes, eds., Synthetic Diamond: Emerging CVD Science and Technology, John Wiley & Sons, New York, 1993, pp. 243-304.
202. H. Sowa, E. Koch, "A proposal for a transition mechanism from the diamond to the lonsdaleite type," Acta Crystallogr. A 57(July 2001):406-413.
203. C.C. Battaile, D.J. Srolovitz, I.I. Oleinik, D.G. Pettifor, A.P. Sutton, S.J. Harris, J.E. Butler, "Etching effects during the chemical vapor deposition of (100) diamond," J. Chem. Phys. 111(1 September 1999):4291-4299; http://www.princeton.edu/~pmi/srolgroup/publications/JCP04291.pdf
204. K.E. Spear, "Diamond, ceramic coating of the future," J. Am. Ceram. Soc. 72(1989):171-191.
205. Andrzej Badzian, Teresa Badzian, "Diamond homoepitaxy by chemical vapor deposition," Diam. Rel. Mater. 2(31 March 1993):147-157.
206. C. Wild, P. Koidl, W. Muller-Sebert, H. Walcher, R. Kohl, N. Herres, R. Locher, R. Samlenski, R. Brenn, "Chemical vapour deposition and characterization of smooth {100}-faceted diamond films," Diam. Rel. Mater. 2(1993):158-168.
207. G. Janssen, J.J. Schermer, W.J.P. van Enckevort, L.J. Giling, "On the occurrence of (113) facets on CVD-grown diamond," J. Cryst. Growth 125(November 1992):42-50.
208. K.A. Snail, Z.P. Lu, R. Weimer, J. Heberlein, E. Pfender, L.M. Hanssen, "Confirmation of (113) facets on diamond grown by chemical vapor deposition," J. Cryst. Growth 137(April 1994):676-679.
209. C.J. Chu, M.P. D’Evelyn, R.H. Hauge, J.L. Margrave, "Mechanism of diamond growth by chemical vapor deposition on diamond (100), (111), and (110) surfaces: Carbon-13 studies," J. Appl. Phys. 70(1 August 1991):1695-1705.
210. C.J. Chu, R.H. Hauge, J.L. Margrave, M.P. D’Evelyn, "Growth kinetics of (100), (110), and (111) homoepitaxial diamond films," Appl. Phys. Lett. 61(21 September 1992):1393-1395.
211. S.S. Lee, D.W. Minsek, D.J. Vestyck, P. Chen, "Growth of diamond from atomic hydrogen and a supersonic free jet of methyl radicals," Science 263(18 March 1994):1596-1598.
212. R.E. Rawles, W.G. Morris, M.P. D’Evelyn, in D.L. Dreifus, A. Collins, T. Humphreys, K. Das, P.E. Pehrsson, eds., Diamond for Electronic Applications, Symp. Proc. 416, Materials Research Society, Pittsburgh, PA, 1996, pp. 13-18.
213. Nigel P. Hacker, George W. Tyndall, III, "Deposition of diamond films," U.S. Patent 4,948,629, 14 August 1990.
214. Richard A. Neifeld, "Method of preparing a thin diamond film," U.S. Patent 4,954,365, 4 September 1990.
215. T.P. Mollart, K.L. Lewis, "Optical-quality diamond growth from CO2-containing gas chemistries," Diam. Relat. Mater. 8(March 1999):236-241.
216. Marcus Asmann, Joachim Heberlein, Emil Pfender, "A review of diamond CVD utilizing halogenated precursors," Diam. Relat. Mater. 8(1 January 1999):1-16.
217. O. Tschauner, H.K. Mao, R.J. Hemley, "New transformations of CO(2) at high pressures and temperatures," Phys. Rev. Lett. 87(13 August 2001):075701.
218. Y. Namba, Jin Wie, T. Mohri, E.A. Heidarpour, "Large grain size thin films of carbon with diamond structure," J. Vac. Sci. Technol. A 7(January-February 1989):36-39.
219. C.V. Burton, "Artificial diamonds," Nature 72(24 August 1905):397.
220. Yadong Li, Yitai Qian, Hongwei Liao, Yi Ding, Li Yang, Cunyi Xu, Fangqing Li, Guien Zhou, "A reduction-pyrolysis-catalysis synthesis of diamond," Science 281(10 July 1998):246-247.
221. S. Feng, R. Xu, "New materials in hydrothermal synthesis," Acc. Chem. Res. 34(March 2001):239-247.
222. Stephen A. Godleski, Paul von Rague Schleyer, Eiji Osawa, Todd Wipke, "The systematic prediction of the most stable neutral hydrocarbon isomer," Prog. Phys. Org. Chem. 13(1981):63-117.
223. Stephen E. Stein, "Diamond and graphite precursors," Nature 346(9 August 1990):517.
224. Donald E. Patterson, Robert H. Hauge, C. Judith Chu, John L. Margrave, "Halogen-assisted chemical vapor deposition of diamond," U.S. Patent 5,071,677, 10 December 1991.
225. P.G. Lurie, J.M. Wilson, "The diamond surface. I. The structure of the clean surface and the interaction with gases and metals," Surf. Sci. 65(1977):453-475.
226. K. Bobrov, B. Fisgeer, H. Shechter, M. Folman, A. Hoffman, "Thermally-programmed desorption (TPD) of deuterium from Di(111) surface: presence of two adsorption states," Diam. Rel. Mater. 6(April 1997):736-742.
227. B.B. Pate, "The diamond surface: Atomic and electronic structure," Surf. Sci. 165(1986):83-142.
228. G.R. Brandes, A.P. Mills, Jr., "Work function and affinity changes associated with the structure of hydrogen-terminated diamond (100) surfaces," Phys. Rev. B 58(15 August 1998):4952-4962.
229. "Zyvex’s S100 Nanomanipulator System," http://www.zyvex.com/Products/S100_Faq.html; "Zyvex Microgrippers," http://www.zyvex.com/Products/Grippers.html
230. Klocke Nanotechnik, “Manipulators: Univeral Tools with 1 Nanometer Resolution,” http://www.nanomotor.de/p_nanomanipulator.htm; “SEM-Manipulators,” http://www.nanomotor.de/pdf/Compare_e_lo.PDF; “Processing Material in Electron Microscopes: Nanomanipulation With Several D.O.F.” http://www.nanomotor.de/aa_processing.htm
231. W.C. Gardiner Jr., Rates and Mechanisms of Chemical Reactions, Benjamin, New York, 1969.
232. W.F. Brunner Jr., T.H. Batzer, Practical Vacuum Techniques, Reinhold Publishing, New York, 1965, p. 124.
233. "Vacuum Pumps," McGraw Hill Encyclopedia of Science and Technology, Vol. 19, 1992, p. 128.
234. Robert A. Freitas Jr., Ralph C. Merkle, Diamond Surfaces and Diamond Mechanosynthesis, Landes Bioscience, Georgetown, TX, 2006. In preparation. http://www.MolecularAssembler.com/DSDM.htm. See also: Robert A. Freitas Jr., Ralph C. Merkle, "A Minimal Toolset for Positional Diamond Mechanosynthesis,"J. Comput. Theor. Nanosci. (2005). Submitted.
235. Jingping Peng, Robert A. Freitas Jr., Ralph C. Merkle, John N. Randall, George D. Skidmore, "Theoretical Analysis of Diamond Mechanosynthesis. Part III. Positional C2 Deposition on Diamond C(110) Surface using Si/Ge/Sn-based Dimer Placement Tools," J. Comput. Theor. Nanosci. (2005). Submitted.
© 2003-2004
Ten years ago, Netscape’s explosive IPO ignited huge piles of money. The brilliant flash revealed what had been invisible only a moment before: the World Wide Web. As Eric Schmidt (then at Sun, now at Google) noted, the day before the IPO, nothing about the Web mattered; the day after, everything did.
Computing pioneer Vannevar Bush outlined the Web’s core idea—hyperlinked pages—in 1945, but the first person to try to build out the concept was a freethinker named Ted Nelson who envisioned his own scheme in 1965. However, he had little success connecting digital bits on a useful scale, and his efforts were known only to an isolated group of disciples. Few of the hackers writing code for the emerging Web in the 1990s knew about Nelson or his hyperlinked dream machine.
At the suggestion of a computer-savvy friend, I got in touch with Nelson in 1984, a decade before Netscape. We met in a dark dockside bar in Sausalito, California. He was renting a houseboat nearby and had the air of someone with time on his hands. Folded notes erupted from his pockets, and long strips of paper slipped from overstuffed notebooks. Wearing a ballpoint pen on a string around his neck, he told me—way too earnestly for a bar at 4 o’clock in the afternoon—about his scheme for organizing all the knowledge of humanity. Salvation lay in cutting up 3 x 5 cards, of which he had plenty.
Although Nelson was polite, charming, and smooth, I was too slow for his fast talk. But I got an aha! from his marvelous notion of hypertext. He was certain that every document in the world should be a footnote to some other document, and computers could make the links between them visible and permanent. But that was just the beginning! Scribbling on index cards, he sketched out complicated notions of transferring authorship back to creators and tracking payments as readers hopped along networks of documents, what he called the docuverse. He spoke of "transclusion" and "intertwingularity" as he described the grand utopian benefits of his embedded structure. It was going to save the world from stupidity.
I believed him. Despite his quirks, it was clear to me that a hyperlinked world was inevitable—someday. But looking back now, after 10 years of living online, what surprises me about the genesis of the Web is how much was missing from Vannevar Bush’s vision, Nelson’s docuverse, and my own expectations. We all missed the big story. The revolution launched by Netscape’s IPO was only marginally about hypertext and human knowledge. At its heart was a new kind of participation that has since developed into an emerging culture based on sharing. And the ways of participating unleashed by hyperlinks are creating a new type of thinking—part human and part machine—found nowhere else on the planet or in history.
Not only did we fail to imagine what the Web would become, we still don’t see it today! We are blind to the miracle it has blossomed into. And as a result of ignoring what the Web really is, we are likely to miss what it will grow into over the next 10 years. Any hope of discerning the state of the Web in 2015 requires that we own up to how wrong we were 10 years ago.
1995
Before the Netscape browser illuminated the Web, the Internet did not exist for most people. If it was acknowledged at all, it was mischaracterized as either corporate email (as exciting as a necktie) or a clubhouse for adolescent males (read: pimply nerds). It was hard to use. On the Internet, even dogs had to type. Who wanted to waste time on something so boring?
The memories of an early enthusiast like myself can be unreliable, so I recently spent a few weeks reading stacks of old magazines and newspapers. Any promising new invention will have its naysayers, and the bigger the promises, the louder the nays. It’s not hard to find smart people saying stupid things about the Internet on the morning of its birth. In late 1994, Time magazine explained why the Internet would never go mainstream: "It was not designed for doing commerce, and it does not gracefully accommodate new arrivals." Newsweek put the doubts more bluntly in a February 1995 headline: "THE INTERNET? BAH!" The article was written by astrophysicist and Net maven Cliff Stoll, who captured the prevailing skepticism of virtual communities and online shopping with one word: "baloney."
This dismissive attitude pervaded a meeting I had with the top leaders of ABC in 1989. I was there to make a presentation to the corner office crowd about this "Internet stuff." To their credit, they realized something was happening. Still, nothing I could tell them would convince them that the Internet was not marginal, not just typing, and, most emphatically, not just teenage boys. Stephen Weiswasser, a senior VP, delivered the ultimate putdown: "The Internet will be the CB radio of the ’90s," he told me, a charge he later repeated to the press. Weiswasser summed up ABC’s argument for ignoring the new medium: "You aren’t going to turn passive consumers into active trollers on the Internet."
I was shown the door. But I offered one tip before I left. "Look," I said. "I happen to know that the address abc.com has not been registered. Go down to your basement, find your most technical computer guy, and have him register abc.com immediately. Don’t even think about it. It will be a good thing to do." They thanked me vacantly. I checked a week later. The domain was still unregistered.
While it is easy to smile at the dodos in TV land, they were not the only ones who had trouble imagining an alternative to couch potatoes. Wired did, too. When I examine issues of Wired from before the Netscape IPO (issues that I proudly edited), I am surprised to see them touting a future of high production-value content—5,000 always-on channels and virtual reality, with a side order of email sprinkled with bits of the Library of Congress. In fact, Wired offered a vision nearly identical to that of Internet wannabes in the broadcast, publishing, software, and movie industries: basically, TV that worked. The question was who would program the box. Wired looked forward to a constellation of new media upstarts like Nintendo and Yahoo!, not old-media dinosaurs like ABC.
Problem was, content was expensive to produce, and 5,000 channels of it would be 5,000 times as costly. No company was rich enough, no industry large enough, to carry off such an enterprise. The great telecom companies, which were supposed to wire up the digital revolution, were paralyzed by the uncertainties of funding the Net. In June 1994, David Quinn of British Telecom admitted to a conference of software publishers, "I’m not sure how you’d make money out of it."
The immense sums of money supposedly required to fill the Net with content sent many technocritics into a tizzy. They were deeply concerned that cyberspace would become cyburbia—privately owned and operated. Writing in Electronic Engineering Times in 1995, Jeff Johnson worried: "Ideally, individuals and small businesses would use the information highway to communicate, but it is more likely that the information highway will be controlled by Fortune 500 companies in 10 years." The impact would be more than commercial. "Speech in cyberspace will not be free if we allow big business to control every square inch of the Net," wrote Andrew Shapiro in The Nation in July 1995.
The fear of commercialization was strongest among hardcore programmers: the coders, Unix weenies, TCP/IP fans, and selfless volunteer IT folk who kept the ad hoc network running. The major administrators thought of their work as noble, a gift to humanity. They saw the Internet as an open commons, not to be undone by greed or commercialization. It’s hard to believe now, but until 1991, commercial enterprise on the Internet was strictly prohibited. Even then, the rules favored public institutions and forbade "extensive use for private or personal business."
In the mid-1980s, when I was involved in the WELL, an early nonprofit online system, we struggled to connect it to the emerging Internet but were thwarted, in part, by the "acceptable use" policy of the National Science Foundation (which ran the Internet backbone). In the eyes of the NSF, the Internet was funded for research, not commerce. At first this restriction wasn’t a problem for online services, because most providers, the WELL included, were isolated from one another. Paying customers could send email within the system—but not outside it. In 1987, the WELL fudged a way to forward outside email through the Net without confronting the acceptable use policy, which our organization’s own techies were reluctant to break. The NSF rule reflected a lingering sentiment that the Internet would be devalued, if not trashed, by opening it up to commercial interests. Spam was already a problem (one every week!).
This attitude prevailed even in the offices of Wired. In 1994, during the first design meetings for Wired‘s embryonic Web site, HotWired, programmers were upset that the innovation we were cooking up—what are now called clickthrough ad banners—subverted the great social potential of this new territory. The Web was hardly out of diapers, and already they were being asked to blight it with billboards and commercials. Only in May 1995, after the NSF finally opened the floodgates to ecommerce, did the geek elite begin to relax.
Three months later, Netscape’s public offering took off, and in a blink a world of DIY possibilities was born. Suddenly it became clear that ordinary people could create material anyone with a connection could view. The burgeoning online audience no longer needed ABC for content. Netscape’s stock peaked at $75 on its first day of trading, and the world gasped in awe. Was this insanity, or the start of something new?
2005
The scope of the Web today is hard to fathom. The total number of Web pages, including those that are dynamically created upon request and document files available through links, exceeds 600 billion. That’s 100 pages per person alive.
How could we create so much, so fast, so well? In fewer than 4,000 days, we have encoded half a trillion versions of our collective story and put them in front of 1 billion people, or one-sixth of the world’s population. That remarkable achievement was not in anyone’s 10-year plan.
The accretion of tiny marvels can numb us to the arrival of the stupendous. Today, at any Net terminal, you can get: an amazing variety of music and video, an evolving encyclopedia, weather forecasts, help wanted ads, satellite images of anyplace on Earth, up-to-the-minute news from around the globe, tax forms, TV guides, road maps with driving directions, real-time stock quotes, telephone numbers, real estate listings with virtual walk-throughs, pictures of just about anything, sports scores, places to buy almost anything, records of political contributions, library catalogs, appliance manuals, live traffic reports, archives to major newspapers—all wrapped up in an interactive index that really works.
This view is spookily godlike. You can switch your view of any spot in the world from map to satellite to 3-D just by clicking. Recall the past? It’s there. Or listen to the daily complaints and travails of almost anyone who blogs (and doesn’t everyone?). I doubt angels have a better view of humanity.
Why aren’t we more amazed by this fullness? Kings of old would have gone to war to win such abilities. Only small children would have dreamed such a magic window could be real. I have reviewed the expectations of waking adults and wise experts, and I can affirm that this comprehensive wealth of material, available on demand and free of charge, was not in anyone’s scenario. Ten years ago, anyone silly enough to trumpet the above list as a vision of the near future would have been confronted by the evidence: There wasn’t enough money in all the investment firms in the entire world to fund such a cornucopia. The success of the Web at this scale was impossible.
But if we have learned anything in the past decade, it is the plausibility of the impossible.
Take eBay. In some 4,000 days, eBay has gone from marginal Bay Area experiment in community markets to the most profitable spinoff of hypertext. At any one moment, 50 million auctions race through the site. An estimated half a million folks make their living selling through Internet auctions. Ten years ago I heard skeptics swear nobody would ever buy a car on the Web. Last year eBay Motors sold $11 billion worth of vehicles. EBay’s 2001 auction of a $4.9 million private jet would have shocked anyone in 1995—and still smells implausible today.
Nowhere in Ted Nelson’s convoluted sketches of hypertext transclusion did the fantasy of a global flea market appear. Especially as the ultimate business model! He hoped to franchise his Xanadu hypertext systems in the physical world at the scale of a copy shop or café—you would go to a store to do your hypertexting. Xanadu would take a cut of the action.
Instead, we have an open global flea market that handles 1.4 billion auctions every year and operates from your bedroom. Users do most of the work; they photograph, catalog, post, and manage their own auctions. And they police themselves; while eBay and other auction sites do call in the authorities to arrest serial abusers, the chief method of ensuring fairness is a system of user-generated ratings. Three billion feedback comments can work wonders.
What we all failed to see was how much of this new world would be manufactured by users, not corporate interests. Amazon.com customers rushed with surprising speed and intelligence to write the reviews that made the site’s long-tail selection usable. Owners of Adobe, Apple, and most major software products offer help and advice on the developer’s forum Web pages, serving as high-quality customer support for new buyers. And in the greatest leverage of the common user, Google turns traffic and link patterns generated by 2 billion searches a month into the organizing intelligence for a new economy. This bottom-up takeover was not in anyone’s 10-year vision.
No Web phenomenon is more confounding than blogging. Everything media experts knew about audiences—and they knew a lot—confirmed the focus group belief that audiences would never get off their butts and start making their own entertainment. Everyone knew writing and reading were dead; music was too much trouble to make when you could sit back and listen; video production was simply out of reach of amateurs. Blogs and other participant media would never happen, or if they happened they would not draw an audience, or if they drew an audience they would not matter. What a shock, then, to witness the near-instantaneous rise of 50 million blogs, with a new one appearing every two seconds. There—another new blog! One more person doing what AOL and ABC—and almost everyone else—expected only AOL and ABC to be doing. These user-created channels make no sense economically. Where are the time, energy, and resources coming from?
The audience.
I run a blog about cool tools. I write it for my own delight and for the benefit of friends. The Web extends my passion to a far wider group for no extra cost or effort. In this way, my site is part of a vast and growing gift economy, a visible underground of valuable creations—text, music, film, software, tools, and services—all given away for free. This gift economy fuels an abundance of choices. It spurs the grateful to reciprocate. It permits easy modification and reuse, and thus promotes consumers into producers.
The open source software movement is another example. Key ingredients of collaborative programming—swapping code, updating instantly, recruiting globally—didn’t work on a large scale until the Web was woven. Then software became something you could join, either as a beta tester or as a coder on an open source project. The clever "view source" browser option let the average Web surfer in on the act. And anyone could rustle up a link—which, it turns out, is the most powerful invention of the decade.
Linking unleashes involvement and interactivity at levels once thought unfashionable or impossible. It transforms reading into navigating and enlarges small actions into powerful forces. For instance, hyperlinks made it much easier to create a seamless, scrolling street map of every town. They made it easier for people to refer to those maps. And hyperlinks made it possible for almost anyone to annotate, amend, and improve any map embedded in the Web. Cartography has gone from spectator art to participatory democracy.
The electricity of participation nudges ordinary folks to invest huge hunks of energy and time into making free encyclopedias, creating public tutorials for changing a flat tire, or cataloging the votes in the Senate. More and more of the Web runs in this mode. One study found that only 40 percent of the Web is commercial. The rest runs on duty or passion.
Coming out of the industrial age, when mass-produced goods outclassed anything you could make yourself, this sudden tilt toward consumer involvement is a complete Lazarus move: "We thought that died long ago." The deep enthusiasm for making things, for interacting more deeply than just choosing options, is the great force not reckoned 10 years ago. This impulse for participation has upended the economy and is steadily turning the sphere of social networking—smart mobs, hive minds, and collaborative action—into the main event.
When a company opens its databases to users, as Amazon, Google, and eBay have done with their Web services, it is encouraging participation at new levels. The corporation’s data becomes part of the commons and an invitation to participate. People who take advantage of these capabilities are no longer customers; they’re the company’s developers, vendors, skunk works, and fan base.
A little over a decade ago, a phone survey by Macworld asked a few hundred people what they thought would be worth $10 per month on the information superhighway. The participants started with uplifting services: educational courses, reference books, electronic voting, and library information. The bottom of the list ended with sports statistics, role-playing games, gambling, and dating. Ten years later what folks actually use the Internet for is inverted. According to a 2004 Stanford study, people use the Internet for (in order): playing games, "just surfing," and shopping; the list ends with responsible activities like politics and banking. (Some even admitted to porn.) Remember, shopping wasn’t supposed to happen. Where’s Cliff Stoll, the guy who said the Internet was baloney and online catalogs humbug? He has a little online store where he sells handcrafted Klein bottles.
The public’s fantasy, revealed in that 1994 survey, began reasonably with the conventional notions of a downloadable world. These assumptions were wired into the infrastructure. The bandwidth on cable and phone lines was asymmetrical: Download rates far exceeded upload rates. The dogma of the age held that ordinary people had no need to upload; they were consumers, not producers. Fast-forward to today, and the poster child of the new Internet regime is BitTorrent. The brilliance of BitTorrent is in its exploitation of near-symmetrical communication rates. Users upload stuff while they are downloading. It assumes participation, not mere consumption. Our communication infrastructure has taken only the first steps in this great shift from audience to participants, but that is where it will go in the next decade.
With the steady advance of new ways to share, the Web has embedded itself into every class, occupation, and region. Indeed, people’s anxiety about the Internet being out of the mainstream seems quaint now. In part because of the ease of creation and dissemination, online culture is the culture. Likewise, the worry about the Internet being 100 percent male was entirely misplaced. Everyone missed the party celebrating the 2002 flip-point when women online first outnumbered men. Today, 52 percent of netizens are female. And, of course, the Internet is not and has never been a teenage realm. In 2005, the average user is a bone-creaking 41 years old.
What could be a better mark of irreversible acceptance than adoption by the Amish? I was visiting some Amish farmers recently. They fit the archetype perfectly: straw hats, scraggly beards, wives with bonnets, no electricity, no phones or TVs, horse and buggy outside. They have an undeserved reputation for resisting all technology, when actually they are just very late adopters. Still, I was amazed to hear them mention their Web sites.
"Amish Web sites?" I asked.
"For advertising our family business. We weld barbecue grills in our shop."
"Yes, but—"
"Oh, we use the Internet terminal at the public library. And Yahoo!"
I knew then the battle was over.
2015
The Web continues to evolve from a world ruled by mass media and mass audiences to one ruled by messy media and messy participation. How far can this frenzy of creativity go? Encouraged by Web-enabled sales, 175,000 books were published and more than 30,000 music albums were released in the US last year. At the same time, 14 million blogs launched worldwide. All these numbers are escalating. A simple extrapolation suggests that in the near future, everyone alive will (on average) write a song, author a book, make a video, craft a weblog, and code a program. This idea is less outrageous than the notion 150 years ago that someday everyone would write a letter or take a photograph.
What happens when the data flow is asymmetrical—but in favor of creators? What happens when everyone is uploading far more than they download? If everyone is busy making, altering, mixing, and mashing, who will have time to sit back and veg out? Who will be a consumer?
No one. And that’s just fine. A world where production outpaces consumption should not be sustainable; that’s a lesson from Economics 101. But online, where many ideas that don’t work in theory succeed in practice, the audience increasingly doesn’t matter. What matters is the network of social creation, the community of collaborative interaction that futurist Alvin Toffler called prosumption. As with blogging and BitTorrent, prosumers produce and consume at once. The producers are the audience, the act of making is the act of watching, and every link is both a point of departure and a destination.
But if a roiling mess of participation is all we think the Web will become, we are likely to miss the big news, again. The experts are certainly missing it. The Pew Internet & American Life Project surveyed more than 1,200 professionals in 2004, asking them to predict the Net’s next decade. One scenario earned agreement from two-thirds of the respondents: "As computing devices become embedded in everything from clothes to appliances to cars to phones, these networked devices will allow greater surveillance by governments and businesses." Another was affirmed by one-third: "By 2014, use of the Internet will increase the size of people’s social networks far beyond what has traditionally been the case."
These are safe bets, but they fail to capture the Web’s disruptive trajectory. The real transformation under way is more akin to what Sun’s John Gage had in mind in 1988 when he famously said, "The network is the computer." He was talking about the company’s vision of the thin-client desktop, but his phrase neatly sums up the destiny of the Web: As the OS for a megacomputer that encompasses the Internet, all its services, all peripheral chips and affiliated devices from scanners to satellites, and the billions of human minds entangled in this global network. This gargantuan Machine already exists in a primitive form. In the coming decade, it will evolve into an integral extension not only of our senses and bodies but our minds.
Today, the Machine acts like a very large computer with top-level functions that operate at approximately the clock speed of an early PC. It processes 1 million emails each second, which essentially means network email runs at 1 megahertz. Same with Web searches. Instant messaging runs at 100 kilohertz, SMS at 1 kilohertz. The Machine’s total external RAM is about 200 terabytes. In any one second, 10 terabits can be coursing through its backbone, and each year it generates nearly 20 exabytes of data. Its distributed "chip" spans 1 billion active PCs, which is approximately the number of transistors in one PC.
This planet-sized computer is comparable in complexity to a human brain. Both the brain and the Web have hundreds of billions of neurons (or Web pages). Each biological neuron sprouts synaptic links to thousands of other neurons, while each Web page branches into dozens of hyperlinks. That adds up to a trillion "synapses" between the static pages on the Web. The human brain has about 100 times that number—but brains are not doubling in size every few years. The Machine is.
Since each of its "transistors" is itself a personal computer with a billion transistors running lower functions, the Machine is fractal. In total, it harnesses a quintillion transistors, expanding its complexity beyond that of a biological brain. It has already surpassed the 20-petahertz threshold for potential intelligence as calculated by Ray Kurzweil. For this reason some researchers pursuing artificial intelligence have switched their bets to the Net as the computer most likely to think first. Danny Hillis, a computer scientist who once claimed he wanted to make an AI "that would be proud of me," has invented massively parallel supercomputers in part to advance us in that direction. He now believes the first real AI will emerge not in a stand-alone supercomputer like IBM’s proposed 23-teraflop Blue Brain, but in the vast digital tangle of the global Machine.
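A minimal back-of-the-envelope sketch of the arithmetic behind these figures; the inputs are the essay's own round numbers, except for the assumed average of about ten durable links per page, so the outputs are illustrative orders of magnitude rather than measurements:

```python
# Back-of-the-envelope arithmetic for the "Machine" figures above.
# Inputs are the essay's round numbers; links_per_page is an assumption.

emails_per_second = 1_000_000          # ~1 million emails handled per second
active_pcs = 1_000_000_000             # ~1 billion active PCs
transistors_per_pc = 1_000_000_000     # ~1 billion transistors in one PC
static_pages = 100_000_000_000         # ~hundreds of billions of pages (rounded down)
links_per_page = 10                    # assumed average number of durable links per page

print(f"email 'clock speed': ~{emails_per_second / 1e6:.0f} MHz")
print(f"total transistors:   ~{active_pcs * transistors_per_pc:.0e} (a quintillion)")
print(f"page-to-page links:  ~{static_pages * links_per_page:.0e} (a trillion 'synapses')")
```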
In 10 years, the system will contain hundreds of millions of miles of fiber-optic neurons linking the billions of ant-smart chips embedded into manufactured products, buried in environmental sensors, staring out from satellite cameras, guiding cars, and saturating our world with enough complexity to begin to learn. We will live inside this thing.
Today the nascent Machine routes packets around disturbances in its lines; by 2015 it will anticipate disturbances and avoid them. It will have a robust immune system, weeding spam from its trunk lines, eliminating viruses and denial-of-service attacks the moment they are launched, and dissuading malefactors from injuring it again. The patterns of the Machine’s internal workings will be so complex they won’t be repeatable; you won’t always get the same answer to a given question. It will take intuition to maximize what the global network has to offer. The most obvious development birthed by this platform will be the absorption of routine. The Machine will take on anything we do more than twice. It will be the Anticipation Machine.
One great advantage the Machine holds in this regard: It’s always on. It is very hard to learn if you keep getting turned off, which is the fate of most computers. AI researchers rejoice when an adaptive learning program runs for days without crashing. The fetal Machine has been running continuously for at least 10 years (30 if you want to be picky). I am aware of no other machine—of any type—that has run that long with zero downtime. While portions may spin down due to power outages or cascading infections, the entire thing is unlikely to go quiet in the coming decade. It will be the most reliable gadget we have.
And the most universal. By 2015, desktop operating systems will be largely irrelevant. The Web will be the only OS worth coding for. It won’t matter what device you use, as long as it runs on the Web OS. You will reach the same distributed computer whether you log on via phone, PDA, laptop, or HDTV.
In the 1990s, the big players called that convergence. They peddled the image of multiple kinds of signals entering our lives through one box—a box they hoped to control. By 2015 this image will be turned inside out. In reality, each device is a differently shaped window that peers into the global computer. Nothing converges. The Machine is an unbounded thing that will take a billion windows to glimpse even part of. It is what you’ll see on the other side of any screen.
And who will write the software that makes this contraption useful and productive? We will. In fact, we’re already doing it, each of us, every day. When we post and then tag pictures on the community photo album Flickr, we are teaching the Machine to give names to images. The thickening links between caption and picture form a neural net that can learn. Think of the 100 billion times per day humans click on a Web page as a way of teaching the Machine what we think is important. Each time we forge a link between words, we teach it an idea. Wikipedia encourages its citizen authors to link each fact in an article to a reference citation. Over time, a Wikipedia article becomes totally underlined in blue as ideas are cross-referenced. That massive cross-referencing is how brains think and remember. It is how neural nets answer questions. It is how our global skin of neurons will adapt autonomously and acquire a higher level of knowledge.
The human brain has no department full of programming cells that configure the mind. Rather, brain cells program themselves simply by being used. Likewise, our questions program the Machine to answer questions. We think we are merely wasting time when we surf mindlessly or blog an item, but each time we click a link we strengthen a node somewhere in the Web OS, thereby programming the Machine by using it.
What will most surprise us is how dependent we will be on what the Machine knows—about us and about what we want to know. We already find it easier to Google something a second or third time rather than remember it ourselves. The more we teach this megacomputer, the more it will assume responsibility for our knowing. It will become our memory. Then it will become our identity. In 2015 many people, when divorced from the Machine, won’t feel like themselves—as if they’d had a lobotomy.
Legend has it that Ted Nelson invented Xanadu as a remedy for his poor memory and attention deficit disorder. In this light, the Web as memory bank should be no surprise. Still, the birth of a machine that subsumes all other machines so that in effect there is only one Machine, which penetrates our lives to such a degree that it becomes essential to our identity—this will be full of surprises. Especially since it is only the beginning.
There is only one time in the history of each planet when its inhabitants first wire up its innumerable parts to make one large Machine. Later that Machine may run faster, but there is only one time when it is born.
You and I are alive at this moment.
We should marvel, but people alive at such times usually don’t. Every few centuries, the steady march of change meets a discontinuity, and history hinges on that moment. We look back on those pivotal eras and wonder what it would have been like to be alive then. Confucius, Zoroaster, Buddha, and the latter Jewish patriarchs lived in the same historical era, an inflection point known as the axial age of religion. Few world religions were born after this time. Similarly, the great personalities converging upon the American Revolution and the geniuses who commingled during the invention of modern science in the 17th century mark additional axial phases in the short history of our civilization.
Three thousand years from now, when keen minds review the past, I believe that our ancient time, here at the cusp of the third millennium, will be seen as another such era. In the years roughly coincidental with the Netscape IPO, humans began animating inert objects with tiny slivers of intelligence, connecting them into a global field, and linking their own minds into a single thing. This will be recognized as the largest, most complex, and most surprising event on the planet. Weaving nerves out of glass and radio waves, our species began wiring up all regions, all processes, all facts and notions into a grand network. From this embryonic neural net was born a collaborative interface for our civilization, a sensing, cognitive device with power that exceeded any previous invention. The Machine provided a new way of thinking (perfect search, total recall) and a new mind for an old species. It was the Beginning.
In retrospect, the Netscape IPO was a puny rocket to herald such a moment. The product and the company quickly withered into irrelevance, and the excessive exuberance of its IPO was downright tame compared with the dotcoms that followed. First moments are often like that. After the hysteria has died down, after the millions of dollars have been gained and lost, after the strands of mind, once achingly isolated, have started to come together—the only thing we can say is: Our Machine is born. It’s on.
© 2005 Kevin Kelly. Reprinted with permission.
My dangerous idea is the near-term inevitability of radical life extension and expansion. The idea is dangerous, however, only when contemplated from current linear perspectives.
First the inevitability: the power of information technologies is doubling each year, and moreover encompasses areas beyond computation, most notably our knowledge of biology and of our own intelligence. It took 15 years to sequence HIV, and from that perspective the genome project seemed impossible in 1990. But the amount of genetic data we were able to sequence doubled every year while the cost came down by half each year.
We finished the genome project on schedule and were able to sequence SARS in only 31 days. We are also gaining the means to reprogram the ancient information processes underlying biology. RNA interference can turn genes off by blocking the messenger RNA that expresses them. New forms of gene therapy are now able to place new genetic information in the right place on the right chromosome. We can create or block enzymes, the workhorses of biology. We are reverse-engineering (and gaining the means to reprogram) the information processes underlying disease and aging, and this process is accelerating, doubling every year. If we think linearly, then the idea of turning off all disease and aging processes appears to lie far off in the future, just as the genome project did in 1990. On the other hand, if we factor in the doubling of the power of these technologies each year, the prospect of radical life extension is only a couple of decades away.
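As a rough illustration of why linear and exponential extrapolation diverge so sharply, here is a minimal sketch; the yearly-doubling assumption is the essay's, while the starting value and the fixed linear increment are arbitrary illustrative choices:

```python
# Illustrative sketch: a capability that doubles every year versus one that
# grows by a fixed yearly increment ("linear thinking").
start = 1.0          # capability in year 0 (arbitrary units)
linear_step = 1.0    # fixed gain per year for the linear projection (arbitrary)

for years in (10, 20):
    exponential = start * 2 ** years       # doubling every year
    linear = start + linear_step * years   # adding the same amount every year
    print(f"after {years} years: exponential ~{exponential:,.0f}x, linear ~{linear:.0f}x")

# after 10 years: exponential ~1,024x, linear ~11x
# after 20 years: exponential ~1,048,576x, linear ~21x
```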
In addition to reprogramming biology, we will be able to go substantially beyond biology with nanotechnology in the form of computerized nanobots in the bloodstream. If the idea of programmable devices the size of blood cells performing therapeutic functions in the bloodstream sounds like far-off science fiction, I would point out that we are doing this already in animals. One scientist cured type I diabetes in rats with blood-cell-sized devices containing 7-nanometer pores that let insulin out in a controlled fashion and that block antibodies. If we factor in the exponential advance of computation and communication (price-performance multiplying by a factor of a billion in 25 years while at the same time shrinking in size by a factor of thousands), these scenarios are highly realistic.
The apparent dangers are not real, while unapparent dangers are real. The apparent dangers are that a dramatic reduction in the death rate will create overpopulation and thereby strain energy and other resources while exacerbating environmental degradation. However, we need to capture only 1 percent of 1 percent of the sunlight to meet all of our energy needs (3 percent of 1 percent by 2025), and nanoengineered solar panels and fuel cells will be able to do this, thereby meeting all of our energy needs in the late 2020s with clean and renewable methods. Molecular nanoassembly devices will be able to manufacture a wide range of products, just about everything we need, with inexpensive tabletop devices. The power and price-performance of these systems will double each year, much faster than the doubling rate of the biological population. As a result, poverty and pollution will decline and ultimately vanish despite growth of the biological population.
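A back-of-the-envelope sanity check of the "1 percent of 1 percent" figure above; the round numbers used here are my own assumptions rather than the essay's (roughly 1.7 × 10^17 W of sunlight intercepted by Earth, and roughly 18 TW of average worldwide power demand):

```python
# Rough sanity check using assumed round figures (not from the essay).
solar_power_on_earth_w = 1.7e17   # ~total sunlight intercepted by Earth, in watts (assumed)
world_power_demand_w = 1.8e13     # ~average worldwide primary power demand, ~18 TW (assumed)

captured_w = solar_power_on_earth_w * 0.01 * 0.01   # "1 percent of 1 percent"
print(f"captured: {captured_w:.1e} W vs. demand: {world_power_demand_w:.1e} W")
print(f"captured / demand ~ {captured_w / world_power_demand_w:.1f}")
# captured: 1.7e+13 W vs. demand: 1.8e+13 W -> same order of magnitude as current demand
```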
There are real downsides, however, and this is not a utopian vision. We have a new existential threat today in the potential of a bioterrorist to engineer a new biological virus. We actually do have the knowledge to combat this problem (for example, new vaccine technologies and RNA interference, which has been shown capable of destroying arbitrary biological viruses), but it will be a race. We will have similar issues with the feasibility of self-replicating nanotechnology in the late 2020s. Containing these perils while we harvest the promise is arguably the most important issue we face.
Some people see these prospects as dangerous because they threaten their view of what it means to be human. There is a fundamental philosophical divide here. In my view, it is not our limitations that define our humanity. Rather, we are the species that seeks and succeeds in going beyond our limitations.
[Continued on Edge]
It's not clear to me whether the Singularity is a technical belief system or a spiritual one.
The Singularity (a notion that's crept into a lot of skiffy, and whose most articulate in-genre spokesmodel is Vernor Vinge) describes the black hole in history that will be created at the moment when human intelligence can be digitized. When the speed and scope of our cognition are hitched to the price-performance curve of microprocessors, our "progress" will double every eighteen months, and then every twelve months, and then every ten, and eventually, every five seconds.
Singularities are, literally, holes in space from whence no information can emerge, and so SF writers occasionally mutter about how hard it is to tell a story set after the information Singularity. Everything will be different. What it means to be human will be so different that what it means to be in danger, or happy, or sad, or any of the other elements that make up the squeeze-and-release tension in a good yarn will be unrecognizable to us pre-Singletons.
It's a neat conceit to write around. I've committed Singularity a couple of times, usually in collaboration with gonzo Singleton Charlie Stross, the mad antipope of the Singularity. But those stories have the same relation to futurism as romance novels do to love: a shared jumping-off point, but radically different morphologies.
Of course, the Singularity isn't just a conceit for noodling with in the pages of the pulps: it's the subject of serious-minded punditry, futurism, and even science.
Ray Kurzweil is one such pundit-futurist-scientist. He's a serial entrepreneur who founded successful businesses that advanced the fields of optical character recognition (machine-reading) software, text-to-speech synthesis, synthetic musical instrument simulation, computer-based speech recognition, and stock-market analysis. He cured his own Type-II diabetes through a careful review of the literature and the judicious application of first principles and reason. To a casual observer, Kurzweil appears to be the star of some kind of Heinlein novel, stealing fire from the gods and embarking on a quest to bring his maverick ideas to the public despite the dismissals of the establishment, getting rich in the process.
Kurzweil believes in the Singularity. In his 1990 manifesto, "The Age of Intelligent Machines," Kurzweil persuasively argued that we were on the brink of meaningful machine intelligence. A decade later, he continued the argument in a book called The Age of Spiritual Machines, whose most audacious claim is that the world's computational capacity has been slowly doubling since the crust first cooled (and before!), and that the doubling interval has been growing shorter and shorter with each passing year, so that now we see it reflected in the computer industry's Moore's Law, which predicts that microprocessors will get twice as powerful for half the cost about every eighteen months. The breathtaking sweep of this trend has an obvious conclusion: computers more powerful than people; more powerful than we can comprehend.
Now Kurzweil has published two more books, The Singularity Is Near: When Humans Transcend Biology (Viking, Spring 2005) and Fantastic Voyage: Live Long Enough to Live Forever (with Terry Grossman, Rodale, November 2004). The former is a technological roadmap for creating the conditions necessary for ascent into Singularity; the latter is a book about life-prolonging technologies that will assist baby-boomers in living long enough to see the day when technological immortality is achieved.
See what I meant about his being a Heinlein hero?
I still don't know if the Singularity is a spiritual or a technological belief system. It has all the trappings of spirituality, to be sure. If you are pure and kosher, if you live right and if your society is just, then you will live to see a moment of Rapture when your flesh will slough away leaving nothing behind but your ka, your soul, your consciousness, to ascend to an immortal and pure state.
I wrote a novel called Down and Out in the Magic Kingdom where characters could make backups of themselves and recover from them if something bad happened, like catching a cold or being assassinated. It raises a lot of existential questions, most prominently: are you still you when you've been restored from backup?
The traditional AI answer is the Turing Test, invented by Alan Turing, the gay pioneer of cryptography and artificial intelligence who was forced by the British government to take hormone treatments to "cure" him of his homosexuality, culminating in his suicide in 1954. Turing cut through the existentialism about measuring whether a machine is intelligent by proposing a parlor game: a computer sits behind a locked door with a chat program, and a person sits behind another locked door with his own chat program, and they both try to convince a judge that they are real people. If the computer fools a human judge into thinking that it's a person, then to all intents and purposes, it's a person.
So how do you know if the backed-up you that you've restored into a new body (or a jar with a speaker attached to it) is really you? Well, you can ask it some questions, and if it answers the same way that you do, you're talking to a faithful copy of yourself.
Sounds good. But the me who sent his first story into Asimov's seventeen years ago couldn't answer the question, "Write a story for Asimov's," the same way the me of today could. Does that mean I'm not me anymore?
Kurzweil has the answer.
"If you follow that logic, then if you were to take me ten years ago, I could not pass for myself in a Ray Kurzweil Turing Test. But once the requisite uploading technology becomes available a few decades hence, you could make a perfect-enough copy of me, and it would pass the Ray Kurzweil Turing Test. The copy doesnt have to match the quantum state of my every neuron, either: if you meet me the next day, Id pass the Ray Kurzweil Turing Test. Nevertheless, none of the quantum states in my brain would be the same. There are quite a few changes that each of us undergo from day to day, we dont examine the assumption that we are the same person closely.
"We gradually change our pattern of atoms and neurons but we very rapidly change the particles the pattern is made up of. We used to think that in the brainthe physical part of us most closely associated with our identitycells change very slowly, but it turns out that the components of the neurons, the tubules and so forth, turn over in only days. Im a completely different set of particles from what I was a week ago.
"Consciousness is a difficult subject, and Im always surprised by how many people talk about consciousness routinely as if it could be easily and readily tested scientifically. But we cant postulate a consciousness detector that does not have some assumptions about consciousness built into it.
"Science is about objective third party observations and logical deductions from them. Consciousness is about first-person, subjective experience, and theres a fundamental gap there. We live in a world of assumptions about consciousness. We share the assumption that other human beings are conscious, for example. But that breaks down when we go outside of humans, when we consider, for example, animals. Some say only humans are conscious and animals are instinctive and machinelike. Others see humanlike behavior in an animal and consider the animal conscious, but even these observers dont generally attribute consciousness to animals that arent humanlike.
"When machines are complex enough to have responses recognizable as emotions, those machines will be more humanlike than animals."
The Kurzweil Singularity goes like this: computers get better and smaller. Our ability to measure the world gains precision and grows ever cheaper. Eventually, we can measure the world inside the brain and make a copy of it in a computer that's as fast and complex as a brain, and voilà, intelligence.
Here in the twenty-first century we like to view ourselves as ambulatory brains, plugged into meat-puppets that lug our precious grey matter from place to place. We tend to think of that grey matter as transcendently complex, and we think of it as being the bit that makes us us.
But brains aren't that complex, Kurzweil says. Already, we're starting to unravel their mysteries.
"We seem to have found one area of the brain closely associated with higher-level emotions, the spindle cells, deeply embedded in the brain. There are tens of thousands of them, spanning the whole brain (maybe eighty thousand in total), which is an incredibly small number. Babies dont have any, most animals dont have any, and they likely only evolved over the last million years or so. Some of the high-level emotions that are deeply human come from these.
"Turing had the right insight: base the test for intelligence on written language. Turing Tests really work. A novel is based on language: with language you can conjure up any reality, much more so than with images. Turing almost lived to see computers doing a good job of performing in fields like math, medical diagnosis and so on, but those tasks were easier for a machine than demonstrating even a childs mastery of language. Language is the true embodiment of human intelligence."
If we're not so complex, then it's only a matter of time until computers are more complex than us. When that comes, our brains will be model-able in a computer and that's when the fun begins. That's the thesis of Spiritual Machines, which even includes a (Heinlein-style) timeline leading up to this day.
Now, it may be that a human brain contains n logic-gates and runs at x cycles per second and stores z petabytes, and that n and x and z are all within reach. It may be that we can take a brain apart and record the position and relationships of all the neurons and sub-neuronal elements that constitute a brain.
But there are also a nearly infinite number of ways of modeling a brain in a computer, and only a vanishingly small (or possibly nonexistent) fraction of that space will yield a conscious copy of the original meat-brain. Science fiction writers usually hand-wave this step: in Heinlein's The Moon Is a Harsh Mistress, the gimmick is that once the computer becomes complex enough, with enough "random numbers," it just wakes up.
Computer programmers are a little more skeptical. Computers have never been known for their skill at programming themselves; they tend to be no smarter than the people who write their software.
But there are techniques for getting computers to program themselves, based on evolution and natural selection. A programmer creates a system that spits out lots (thousands or even millions) of randomly generated programs. Each one is given the opportunity to perform a computational task (say, sorting a list of numbers from greatest to least) and the ones that solve the problem best are kept aside while the others are erased. Now the survivors are used as the basis for a new generation of randomly mutated descendants, each based on elements of the code that preceded them. By running many instances of a randomly varied program at once, and by culling the least successful and regenerating the population from the winners very quickly, it is possible to evolve effective software that performs as well as or better than the code written by human authors.
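To make that loop concrete, here is a minimal Python sketch of the generate, test, cull, mutate cycle just described. It is purely illustrative and deliberately simplified: the "programs" are just candidate orderings of a list of numbers rather than evolved code, and every parameter below is an arbitrary assumption rather than anything drawn from Kurzweil's or Doctorow's work.

```python
import random

# Toy version of the evolutionary loop described above. The "programs"
# here are just candidate orderings of a list of numbers, and fitness
# rewards orderings that are closer to fully sorted (greatest to least).
# Real genetic programming evolves actual code, but the cycle is the
# same: generate, test, cull, mutate, repeat.

LIST_LEN = 20       # size of each candidate
POP_SIZE = 200      # candidates per generation
GENERATIONS = 300   # how long to run

def random_candidate():
    values = list(range(LIST_LEN))
    random.shuffle(values)
    return values

def fitness(candidate):
    # Count adjacent pairs already in descending order.
    return sum(1 for a, b in zip(candidate, candidate[1:]) if a >= b)

def mutate(candidate):
    child = candidate[:]
    i, j = random.sample(range(LIST_LEN), 2)  # swap two random positions
    child[i], child[j] = child[j], child[i]
    return child

population = [random_candidate() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 10]          # cull the least fit
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]
    if fitness(population[0]) == LIST_LEN - 1:        # fully sorted
        break

print(generation, population[0])
```

The two ingredients the article goes on to discuss are both visible here: heritable variation (the mutate step) and a fitness function that decides which candidates survive.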
Indeed, evolutionary computing is a promising and exciting field that's realizing real returns through cool offshoots like "ant colony optimization" and similar approaches that are showing good results in fields as diverse as piloting military UAVs and efficiently provisioning car-painting robots at automotive plants.
So if you buy Kurzweil's premise that computation is getting cheaper and more plentiful than ever, then why not just use evolutionary algorithms to evolve the best way to model a scanned-in human brain such that it "wakes up" like Heinlein's Mike computer?
Indeed, this is the crux of Kurzweil's argument in Spiritual Machines: if we have computation to spare and a detailed model of a human brain, we need only combine them and out will pop the mechanism whereby we may upload our consciousness to digital storage media and transcend our weak and bothersome meat forever.
But it's a cheat. Evolutionary algorithms depend on the same mechanisms as real-world evolution: heritable variation of candidates and a system that culls the least-suitable candidates. This latter, the fitness factor that determines which individuals in a cohort breed and which vanish, is the key to a successful evolutionary system. Without it, there's no pressure for the system to achieve the desired goal: merely mutation and more mutation.
But how can a machine evaluate which of a trillion models of a human brain is "most like" a conscious mind? Or better still: which one is most like the individual whose brain is being modeled?
"It is a sleight of hand in Spiritual Machines," Kurzweil admits. "But in The Singularity Is Near, I have an in-depth discussion about what we know about the brain and how to model it. Our tools for understanding the brain are subject to the Law of Accelerating Returns, and weve made more progress in reverse-engineering the human brain than most people realize." This is a tasty Kurzweilism that observes that improvements in technology yield tools for improving technology, round and round, so that the thing that progress begets more than anything is more and yet faster progress.
"Scanning resolution of human tissueboth spatial and temporalis doubling every year, and so is our knowledge of the workings of the brain. The brain is not one big neural net, the brain is several hundred different regions, and we can understand each region, we can model the regions with mathematics, most of which have some nexus with chaos and self-organizing systems. This has already been done for a couple dozen regions out of the several hundred.
"We have a good model of a dozen or so regions of the auditory and visual cortex, how we strip images down to very low-resolution movies based on pattern recognition. Interestingly, we dont actually see things, we essentially hallucinate them in detail from what we see from these low resolution cues. Past the early phases of the visual cortex, detail doesnt reach the brain.
"We are getting exponentially more knowledge. We can get detailed scans of neurons working in vivo, and are beginning to understand the chaotic algorithms underlying human intelligence. In some cases, we are getting comparable performance of brain regions in simulation. These tools will continue to grow in detail and sophistication.
"We can have confidence of reverse-engineering the brain in twenty years or so. The reason that brain reverse engineering has not contributed much to artificial intelligence is that up until recently we didnt have the right tools. If I gave you a computer and a few magnetic sensors and asked you to reverse-engineer it, you might figure out that theres a magnetic device spinning when a file is saved, but youd never get at the instruction set. Once you reverse-engineer the computer fully, however, you can express its principles of operation in just a few dozen pages.
"Now there are new tools that let us see the interneuronal connections and their signaling, in vivo, and in real-time. Were just now getting these tools and theres very rapid application of the tools to obtain the data.
"Twenty years from now we will have realistic simulations and models of all the regions of the brain and [we will] understand how they work. We wont blindly or mindlessly copy those methods, we will understand them and use them to improve our AI toolkit. So well learn how the brain works and then apply the sophisticated tools that we will obtain, as we discover how the brain works.
"Once we understand a subtle science principle, we can isolate, amplify, and expand it. Air goes faster over a curved surface: from that insight we isolated, amplified, and expanded the idea and invented air travel. Well do the same with intelligence.
"Progress is exponentialnot just a measure of power of computation, number of Internet nodes, and magnetic spots on a hard diskthe rate of paradigm shift is itself accelerating, doubling every decade. Scientists look at a problem and they intuitively conclude that since weve solved 1 percent over the last year, itll therefore be one hundred years until the problem is exhausted: but the rate of progress doubles every decade, and the power of the information tools (in price-performance, resolution, bandwidth, and so on) doubles every year. People, even scientists, dont grasp exponential growth. During the first decade of the human genome project, we only solved 2 percent of the problem, but we solved the remaining 98 percent in five years."
But Kurzweil doesn't think that the future will arrive in a rush. As William Gibson observed, "The future is here, it's just not evenly distributed."
"Sure, itd be interesting to take a human brain, scan it, reinstantiate the brain, and run it on another substrate. That will ultimately happen."
"But the most salient scenario is that well gradually merge with our technology. Well use nanobots to kill pathogens, then to kill cancer cells, and then theyll go into our brain and do benign things there like augment our memory, and very gradually theyll get more and more sophisticated. Theres no single great leap, but there is ultimately a great leap comprised of many small steps.
"In The Singularity Is Near, I describe the radically different world of 2040, and how well get there one benign change at a time. The Singularity will be gradual, smooth.
"Really, this is about augmenting our biological thinking with nonbiological thinking. We have a capacity of 1026 to 1029 calculations per second (cps) in the approximately 1010 biological human brains on Earth and that number wont change much in fifty years, but nonbiological thinking will just crash through that. By 2049, nonbiological thinking capacity will be on the order of a billion times that. Well get to the point where bio thinking is relatively insignificant.
"People didnt throw their typewriters away when word-processing started. Theres always an overlapitll take time before we realize how much more powerful nonbiological thinking will ultimately be."
It's well and good to talk about all the stuff we can do with technology, but it's a lot more important to talk about the stuff we'll be allowed to do with technology. Think of the global freak-out caused by the relatively trivial advent of peer-to-peer file-sharing tools: Universities are wiretapping their campuses and disciplining computer science students for writing legitimate, general-purpose software; grandmothers and twelve-year-olds are losing their life savings; privacy and due process have sailed out the window without so much as a by-your-leave.
Even P2P's worst enemies admit that this is a general-purpose technology with good and bad uses, but when new tech comes along it often engenders a response that countenances punishing an infinite number of innocent people to get at the guilty.
What's going to happen when the new technology paradigm isn't song-swapping, but transcendent super-intelligence? Will the reactionary forces be justified in razing the whole ecosystem to eliminate a few parasites who are doing negative things with the new tools?
"Complex ecosystems will always have parasites. Malware [malicious software] is the most important battlefield today.
"Everything will become softwareobjects will be malleable, well spend lots of time in VR, and computhought will be orders of magnitude more important than biothought.
"Software is already complex enough that we have an ecological terrain that has emerged just as it did in the bioworld.
"Thats partly because technology is unregulated and people have access to the tools to create malware and the medicine to treat it. Todays software viruses are clever and stealthy and not simpleminded. Very clever.
"But heres the thing: you dont see people advocating shutting down the Internet because malware is so destructive. I mean, malware is potentially more than a nuisanceemergency systems, air traffic control, and nuclear reactors all run on vulnerable software. Its an important issue, but the potential damage is still a tiny fraction of the benefit we get from the Internet.
"I hope itll remain that waythat the Internet wont become a regulated space like medicine. Malwares not the most important issue facing human society today. Designer bioviruses are. People are concerted about WMDs, but the most daunting WMD would be a designed biological virus. The means exist in college labs to create destructive viruses that erupt and spread silently with long incubation periods.
"Importantly, a would-be bio-terrorist doesnt have to put malware through the FDAs regulatory approval process, but scientists working to fix bio-malware do.
"In Huxleys Brave New World, the rationale for the totalitarian system was that technology was too dangerous and needed to be controlled. But that just pushes technology underground where it becomes less stable. Regulation gives the edge of power to the irresponsible who wont listen to the regulators anyway.
"The way to put more stones on the defense side of the scale is to put more resources into defensive technologies, not create a totalitarian regime of Draconian control.
"I advocate a one hundred billion dollar program to accelerate the development of anti-biological virus technology. The way to combat this is to develop broad tools to destroy viruses. We have tools like RNA interference, just discovered in the past two years to block gene expression. We could develop means to sequence the genes of a new virus (SARS only took thirty-one days) and respond to it in a matter of days.
"Think about it. Theres no FDA for software, no certification for programmers. The government is thinking about it, though! The reason the FCC is contemplating Trusted Computing mandates,"a system to restrict what a computer can do by means of hardware locks embedded on the motherboard"is that computing technology is broadening to cover everything. So now you have communications bureaucrats, biology bureaucrats, all wanting to regulate computers.
"Biology would be a lot more stable if we moved away from regulationwhich is extremely irrational and onerous and doesnt appropriately balance risks. Many medications are not available today even though they should be. The FDA always wants to know what happens if we approve this and will it turn into a thalidomide situation that embarrasses us on CNN?
"Nobody asks about the harm that will certainly accrue from delaying a treatment for one or more years. Theres no political weight at all, people have been dying from diseases like heart disease and cancer for as long as weve been alive. Attributable risks get 100-1000 times more weight than unattributable risks."
Is this spirituality or science? Perhaps it is the melding of both: more shades of Heinlein, this time the weird religions founded by people who took Stranger in a Strange Land way too seriously.
After all, this is a system of belief that dictates a means by which we can care for our bodies virtuously and live long enough to transcend them. It is a system of belief that concerns itself with the meddling of non-believers, who work to undermine its goals through irrational systems predicated on their disbelief. It is a system of belief that asks and answers the question of what it means to be human.
It's no wonder that the Singularity has come to occupy so much of the science fiction narrative in these years. Science or spirituality, you could hardly ask for a subject better tailored to technological speculation and drama.
© 2005 Cory Doctorow. Reprinted with permission.
Karl Sarnow says:
Welcome to the Xplora scheduled chat with Ray Kurzweil about robots and artificial intelligence.
My name is Karl Sarnow and I am a member of the Xplora team, trying to enable teachers of mathematics and science to give fascinating science lessons.
It is a great pleasure and honour to have Ray Kurzweil here in the chat. The topic itself is really fascinating and inspiring. But having one of the early birds of AI applications here in person is simply wonderful.
A cordial welcome, Ray.
Ray Kurzweil says:
Hi, glad to be here.
Karl Sarnow says:
We now start with the procedure as discussed in the mail in advance. Ray will start with a short introduction and then the others follow with a short sentence.
Ray Kurzweil says:
Yes, here is something I wrote recently that is a brief introduction to my latest book, The Singularity Is Near: When Humans Transcend Biology:
Ray Kurzweil says:
So what is the Singularity?
Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses (like The Matrix), "experience beaming" (like Being John Malkovich), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.
And that's the Singularity?
No, that's just the precursor. Nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We'll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the Singularity.
When will that occur?
I set the date for the Singularity (representing a profound and disruptive transformation in human capability) as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.
Why is this called the Singularity?
The term Singularity in my book is comparable to the use of this term by the physics community. Just as we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the event horizon of the historical Singularity. How can we, with our limited biological brains, imagine what our future civilization, with its intelligence multiplied trillions-fold, will be capable of thinking and doing? Nevertheless, just as we can draw conclusions about the nature of black holes through our conceptual thinking, despite never having actually been inside one, our thinking today is powerful enough to have meaningful insights into the implications of the Singularity. That's what I've tried to do in this book.
Okay, let's break this down. It seems a key part of your thesis is that we will be able to capture the intelligence of our brains in a machine.
Indeed.
So how are we going to achieve that?
We can break this down further into hardware and software requirements. In the book, I show how we need about 10 quadrillion (10^16) calculations per second (cps) to provide a functional equivalent to all the regions of the brain. Some estimates are lower than this by a factor of 100. Supercomputers are already at 100 trillion (10^14) cps, and will hit 10^16 cps around the end of this decade. Several supercomputers with 1 quadrillion cps are already on the drawing board, with two Japanese efforts targeting 10 quadrillion cps around the end of the decade. By 2020, 10 quadrillion cps will be available for around $1,000. Achieving the hardware requirement was controversial when my last book on this topic, The Age of Spiritual Machines, came out in 1999, but is now pretty much a mainstream view among informed observers. Now the controversy is focused on the algorithms.
And how will we recreate the algorithms of human intelligence?
To understand the principles of human intelligence we need to reverse-engineer the human brain. Here, progress is far greater than most people realize. The spatial and temporal (time) resolution of brain scanning is also progressing at an exponential rate, roughly doubling each year, like most everything else having to do with information. Just recently, scanning tools have become able to see individual interneuronal connections and watch them fire in real time. Already, we have mathematical models and simulations of a couple dozen regions of the brain, including the cerebellum, which comprises more than half the neurons in the brain. IBM is now creating a simulation of about 10,000 cortical neurons, including tens of millions of connections. The first version will simulate the electrical activity, and a future version will also simulate the relevant chemical activity. By the mid-2020s, it's conservative to conclude that we will have effective models for all of the brain.
So at that point we'll just copy a human brain into a supercomputer?
I would rather put it this way: At that point, well have a full understanding of the methods of the human brain. One benefit will be a deep understanding of ourselves, but the key implication is that it will expand the toolkit of techniques we can apply to create artificial intelligence. We will then be able to create nonbiological systems that match human intelligence in the ways that humans are now superior, for example, our pattern recognition abilities. These superintelligent computers will be able to do things we are not able to do, such as share knowledge and skills at electronic speeds.
By 2030, a thousand dollars of computation will be about a thousand times more powerful than a human brain. Keep in mind also that computers will not be organized as discrete objects as they are today. There will be a web of computing deeply integrated into the environment, our bodies and brains.
Okay, that should do it. Sorry it had to be cut into a number of pieces.
Karl Sarnow says:
Okay, let's start with an introduction. I propose we go in the order people appear on the right. So Alexa will be the first one.
Alexa Joyce says:
I’m Alexa Joyce, I’m project manager of Xplora. I’m a biologist originally but now working in technology.
Donelle Batty says:
Hello, I'm Tom Steele and I'm a student from Riverside High School in Tasmania. I'll be representing the Pegasus project during this chat
Donelle Batty says:
(sorry, I may be a bit tired, it is 1 AM over here)
Karl Sarnow says to Donelle Batty:
You're welcome, Tom. A real student is fine. James now?
Ray Kurzweil says:
Where are you?
James Whipple says:
Hi I’m James, I’m a design student who has been interested in the singularity for several years.
Damon Zucconi says:
Sorry I was reading… I’m Damon Zucconi, a student at Maryland Institute College of Art in Interactive Media.
Karl Sarnow says:
I am a physicist with a PhD in biophysics, teaching mathematics, physics and computer science at a German Gymnasium for about 30 years. Now I am seconded to the European Schoolnet to help set up Xplora.
Matt Neil says:
I'm Matt, I am in Australia too. I work in technology as an IT Architect designing systems for large corporates; the organisation I work for is busy building a grid mesh for global computing. In my last role I was working for a health organisation changing the face of radiology from traditional photography to digital, and I have a passion for what we are talking about
Ray Kurzweil says:
I’m Ray Kurzweil, an inventor, author, and futurist. Delighted to be with all of you, including all the students "listening" in.
Sally H says:
I am Sally. I am in Loughborough, UK. I work in IT (eLearning) and Home Educate my 15 year old son. He is very interested in robots, the future of computing, as well as computer gaming!
Karl Sarnow says:
Ok, now we know who you are (almost).
Karl Sarnow says:
Now for the questions, only one per person please.
Karl Sarnow says:
(for this round).
Donelle Batty says:
Maybe I should start with a simple one
Karl Sarnow says:
Just go ahead
James Whipple says:
I will ask after Donelle's question
Donelle Batty says:
Some of the students wish to know what inspired you to create all of the fantastic things you’ve created and they also wish to know what troubles you encountered as they wish to be inventors themselves.
Ray Kurzweil says:
I've had the idea of being an inventor since I was 5. What is exciting and inspiring about being an inventor is the link between dry formulas on a blackboard (the invention) and transformations in people's lives. As for challenges, the biggest issue is timing. Most inventors get their inventions to "work" but most of the time the timing is wrong, so it fails in the marketplace. That's why I got into tracking technology trends over 30 years ago.
Ray Kurzweil says:
One more comment on Donelle's question: the most gratifying project I've been involved in has been reading machines for the blind. I introduced the first one 30 years ago, and have been involved ever since. We just introduced a print-to-speech reading machine that fits in your pocket—it's 10,000 times smaller and lighter than the first one.
Donelle Batty says:
Thanks.
Matt Neil says:
Ray, my question is around how we are going to transfer/copy/replicate those higher cognitive functions of the mind, such as ESP, that may actually lie dormant in the mind, that we do not understand, and that from observation may look like garbage as they are not normally activated. Are these functions inherent? And if we copy the code, will these extra services come along with it?
Ray Kurzweil says:
WRG Matt's question, we're now embarked on a grand project to "reverse-engineer" the human brain. We're in the early stages but the progress will be exponential, not linear. So we will ultimately understand in precise terms how the brain performs its functions, including ESP to the extent that it can actually do that. We will routinely have brain-to-brain communication when we have nanobots in our brains that are on the Internet. We already have simulations of 20 regions of the brain that perform well on tests compared to human function, for example, the cerebellum which comprises more than half the neurons in the brain, and 15 regions of the auditory cortex. There are several hundred more regions to go. (end of response to Matt).
Matt Neil says to Ray Kurzweil:
Cheers
James Whipple says:
Will there be room for the human ego in a post-singularity society, or will we be led to less and less individuation as our interconnections grow? How much of the brain’s baggage will we want to take with us as we integrate with machines?
Ray Kurzweil says:
WRG James’ question, I suppose we’ll have to reconsider what ego means. It’s not always a bad thing if people are driven to perform creative work. Machines can have it both ways—they can be individuals and they can also merge to form one larger intelligence. Humans "merge" also in societies but not with the same ease. We’ll still have ego and conflicting agendas but we’ll have more capability.
James Whipple says:
Interesting, reminds me of Howard Bloom’s "Global Brain" book. Thanks, Ray.
Ray Kurzweil says:
As for the brain’s baggage—we will ultimately have the "source code" for our intelligence—we’ll have a precise description of its algorithms and be able to modify them. We’ll have to proceed with caution, of course, but there will be obvious dysfunctions we’d like to fix. (end of response to James).
Sally H says:
With the known vulnerabilities in current O/S software and computer security still being a nightmare—do you think that this vision of the future could be scarily nightmarish, and can you see a way that this would be countered?
Ray Kurzweil says:
WRG Sally's question, there are definitely downsides to all 3 overlapping revolutions—G (genetics or biotech), N (nanotech) and R (robotics, which really refers to strong AI, AI at the human level). We have the downside of "G" now—the potential for bioengineered biological viruses. I wrote (coauthored with Bill Joy) an op-ed piece in the New York Times recently criticizing the US Government for publishing the genome of the 1918 flu virus, for example, as it could aid bioterrorists. WRG software, I actually think we're doing reasonably well. We have "mission critical" software running intensive care units in hospitals, flying and landing airplanes, running factories, etc., and this software almost never fails. We do know how to create reliable software. And with regard to software pathogens (software viruses, etc.), we're also reasonably keeping pace. Our technological "immune system" responds generally within hours of a new type of attack. (end of response to Sally).
Karl Sarnow says:
You mention "We've already created simulations of ~20 regions (out of several hundred) of the brain." Do you mean computer programs that behave like the brain regions? How can you test that? Is there any interface to living beings?
Ray Kurzweil says:
WRG Karl’s question, yes these are computer programs—computer simulations and yes, these are tested—for example by applying psychoacoustic tests to the simulation and applying the same tests to human auditory perception. It does not prove that the simulations are perfect, and undoubtedly they are not, but it shows we are moving in the right direction.
Ray Kurzweil says:
WRG interfacing to living beings, we have of course a variety of neural implants now—cochlear implants, and implants for Parkinson's patients which replace diseased brain tissue. The latest generation of the Parkinson's implant allows you to download new software to your neural implant from outside the patient. There are stroke patients who have an implant that can now communicate with their computer and, by extension, with the rest of the world, and control their environment. (end)
Alexa Joyce says:
Yes, exactly, I'm rather worried to read about the re-creation of the live virus by bio-engineering too
Karl Sarnow says:
Did everybody have his question?
Matt Neil says to Ray Kurzweil:
Not a question but an interruption—are you using speech to text for today's session?
Jan Kapoun says:
Hi, Ray. I was interested in Martin Rees' book "Our Final Hour", especially the part about dangers of new technology. Have you read the book? What do you think about it?
Ray Kurzweil says:
WRG Kapoun's question:
Ray Kurzweil says:
I alluded earlier to the downsides—the perils—of GNR. I’m familiar with Rees’ book. He talks about these perils as well as natural ones, like an asteroid hitting Earth. On that last one, this happens infrequently (at least a big one) and I’m confident we’ll have the technology to blast it out of the sky before that happens. The more daunting challenges are the downsides of the self-replicating technology we’re creating. I mentioned bioterrorists creating a modified biological virus. When we have full nanotechnology manufacturing, there will ultimately be the potential for self-replicating nanotechnology. There are strategies for dealing with these issues. The issue for society is one of priorities. We need to put a higher priority on the defenses—I gave testimony to the U.S. Congress on a proposal for a $100 billion program to develop new technologies (like RNA interference) to combat new biological viruses. President Bush recently proposed a $7 billion program for this—it’s a start but not enough. We also need to be smart about not disseminating overtly dangerous information. No one proposes putting the design of an atom bomb on the web, so why put the design of a killer virus there? (end)
Jan Kapoun says:
Thank you. But is it certain that we will avoid a situation such as the example in Michael Crichton's Prey?
Ray Kurzweil says:
Now with regard to Crichton's Prey.
Ray Kurzweil says:
He was writing about self-replicating nanotech, although he mixed it up with biological viruses and there are some scientifically unrealistic aspects to his novel, but the basic danger of self-replicating nanotech is—or I should say will be—real. The existential danger we face now is with biological viruses; we don't yet have full molecular nanotechnology assembly. When we do, the basic message for society is the same—which is to put more stones on the defensive side of the scale by developing explicitly defensive technologies. I describe some strategies in Singularity is Near. There are also ethical guidelines that need to be followed by responsible practitioners. The Foresight Institute, founded by nanotechnology pioneer Eric Drexler, has articulated a set of these. (end)
Damon Zucconi says:
With technologies such as the Fritz chip coming into play, do you think that the fragmentation of THE computer (the Internet) is something that is possible and a legitimate concern for the imminent singularity?
Ray Kurzweil says:
WRG Damon’s question—by fragmentation I assume you mean decentralization. This is a very good trend. Decentralized technologies are more stable. We are moving to a "world wide mesh" in which computing and communications will be distributed among billions of devices in a flexible and self-organizing manner. (end)
Damon Zucconi says:
I think I meant fragmentation more like cut of government controlled networks.
Damon Zucconi says:
cut-off rather
Ray Kurzweil says:
I’m not using speech to text—I type faster.
Ray Kurzweil says:
WRG Damon's question—do you mean attempts by the Chinese government to control the web? I think these will fail—they may nominally "control" overt expression of a few sensitive political issues, but people will work around them. There is already an enormous explosion of expression on Chinese web sites including a healthy and exploding blogger community. This is a democratizing force. I wrote in the 1980s that the decentralized communication technologies that would emerge would ultimately destroy the Soviet Union, which they did. That 1991 coup against Gorbachev failed not because of Yeltsin standing on a tank but because of the clandestine network of fax machines and early email using teletype machines. I mentioned this to Gorbachev recently at a lunch I had with him and he heartily agreed. Of course, anything to put Yeltsin down.
Alexa Joyce says:
Do you think then, by extension of what you say about bio-viruses, in the next few years we’ll see teenage bio-hackers the same way we have young hackers on the Internet now?
Ray Kurzweil says:
WRG Alexa's concern:
Yes, it will ultimately get easier and easier to do this kind of work, so the answer is we need to develop a (very) rapid response system that can combat ANY new biological virus, whether natural (like bird flu) or bioengineered. The good news is that the tools to accomplish this are coming into place. RNA interference can turn any gene off—we send pieces of RNA in as a medication; they latch on to the messenger RNA expressing a gene and destroy it.
This has been effective for stopping biological viruses. I described a plan in which we would have a rapid response system that could quickly sequence a virus, create an RNAi medication and gear up production, all in a matter of days. We have the tools to do this but we need to put it in place. There are other protective ideas as well.
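(An aside on what the first automatable step of such a rapid-response pipeline might look like: the deliberately naive Python sketch below scans a sequenced viral transcript for candidate 21-nucleotide siRNA target sites. Real siRNA design applies far more sophisticated rules; the sequence, the filters and the thresholds here are purely illustrative assumptions, not anything Kurzweil describes.)

```python
# Deliberately naive sketch of one early, automatable step in the
# rapid-response idea described above: scan a sequenced viral
# transcript for candidate 21-nucleotide siRNA target sites.
# Real siRNA design uses far more sophisticated rules (off-target
# screening, strand thermodynamics, and so on); the sequence and the
# filters below are purely illustrative.

def candidate_sites(transcript, length=21, gc_min=0.35, gc_max=0.60):
    sites = []
    for i in range(len(transcript) - length + 1):
        window = transcript[i:i + length]
        gc = (window.count("G") + window.count("C")) / length
        # Keep windows with moderate GC content and no long single-base runs.
        if gc_min <= gc <= gc_max and "AAAA" not in window and "TTTT" not in window:
            sites.append((i, window))
    return sites

# Hypothetical fragment of a sequenced viral transcript (DNA alphabet).
fragment = "ATGGCATTACGGATCCTTAGCAGTACGGTTAACCGTAGCATCGATTAGGCT"
for position, site in candidate_sites(fragment)[:3]:
    print(position, site)
```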
Alexa Joyce says:
You mean antisense RNA here to block the genes?
Alexa Joyce says:
This concept of an open source biology community is very intriguing.
Ray Kurzweil says:
Well, first to clarify in response to Joyce—there are two competing technologies that can block the messenger RNA expressing a gene: antisense technology and RNA interference (RNAi). RNAi works very well; antisense technology has been disappointing.
Ray Kurzweil says:
WRG Joyce's comment—yes there will be open source biology. Everything of importance will ultimately be information. Even manufacturing products. In the 2020s we'll be able to manufacture almost anything we need/want with our own table top manufacturing devices—and there will be open source versions of designs—sneakers, meals, etc. (end)
Karl Sarnow says:
I have read your PP with great interest. In slide 56 there are a lot of abbreviations. Is there some information available on what these abbreviations mean? Could you give us a pointer?
Ray Kurzweil says:
Which slide was that?
Karl Sarnow says:
It is about reverse engineering the brain and shows a diagram with many three-letter abbreviations.
Damon Zucconi says:
Reverse engineering the human brain.
Ray Kurzweil says:
See endnote 96 for chapter 5 of Singularity is Near.
Karl Sarnow says to Ray Kurzweil:
Ok.
Ray Kurzweil says:
Page 546. The main text discussion is on pages 183-185. Chapter 4 is about reverse engineering the brain.
Ray Kurzweil says:
Did I miss a question?
Karl Sarnow says:
I don't think so, what say the others?
Jan Kapoun says:
Your great book, "The Age of Intelligent Machines," celebrated 15 years this year. Would you change something in it now?
Ray Kurzweil says:
I think it actually does track quite well. Obviously some things are off by a few years—I would have said 1997 for a computer taking the world chess championship, not 1998. This brings up the issue as to whether or not we can predict the future. The common wisdom is that we cannot. But there are certain measures of information technology—price-performance, capacity, bandwidth, etc.—that are very predictable. And it's not just computer devices, but information technology is deeply influencing everything of value. So we can anticipate many scenarios quite accurately. We might wonder: how can this be? Specific projects are indeed not predictable, yet the overall impact is predictable. We see a similar phenomenon in thermodynamics—the path of each particle in a gas is unpredictable, yet the overall properties of the gas—made of a vast number of chaotic unpredictable particles—are very predictable according to the laws of thermodynamics, to a high degree of precision. So it is with information technology, also a complex and chaotic system. (end)
Matt Neil says to Ray Kurzweil:
Which companies do you think will get the jump: those coming from the bio/health side or those coming from the tech/molecular computing side?
Ray Kurzweil says:
There’s clearly a role for both. If you look at how biology is done now it is becoming an information technology. It used to be hit and miss—we would just find something that happened to work with no theory of operation, using "drug discovery." Now we’re actually learning the precise information processes underlying biological processes like atherosclerosis (the cause of most heart disease) and also gaining the means to reprogram these processes away from disease. Biochemical simulators are playing a big role. Drug development is already quite specialized with smaller companies doing the "risk removal" of specific treatments, then doing deals with the larger pharma companies. (end)
Karl Sarnow says:
I am not sure about this chat room, but I assume it is kicking us out at exactly 15h00 Brussels time. But I would not be happy to finish the session before saying a big, big thank you to Ray. It was very inspiring to read your answers and the questions from all of you. You will be able to read an edited version of the chat on Xplora. Thanks a lot again and I hope to see/read you again at Xplora somehow.
Good bye and thanks.
Karl
James Whipple says:
Thanks!
Alexa Joyce says:
Actually I think we can carry on if Ray is still happy to take a couple more questions…
Ray Kurzweil says:
I’ve got time for one or two more.
Karl Sarnow says:
I just heard that the tool will probably not kick us out, but nobody will be able to get in. So feel free to continue. I will record until the end of the debate.
Jan Kapoun says:
Thanks, Ray! It was a nice experience to chat with you.
James Whipple says:
I'm sure you know about people such as Hugo de Garis and their pessimistic visions of society's reaction to a technological singularity. How will society react to change it can barely keep track of? How can the transition be smoothed out?
Ray Kurzweil says:
WRG de Garis, as I said before, I am concerned with the downsides. Bill Joy's pessimistic piece in WIRED stemmed from a conversation we had in 1998 and his reading Age of Spiritual Machines. I do think that de Garis' particular scenario does not make sense. He envisions a war between the "cosmists" (those who have enhanced themselves by merging with nonbiological intelligence) and the "terrists" (those who have not). It's kind of absurd – like a war between those who use cell phones and those who don't, or between the Amish and the armed forces. There are certainly concerns about "strong AI" run amok, but a war between those eschewing technology and those embracing it does not make sense. Such a "war" would be a non-starter. (end)
Donelle Batty says:
I was just wondering about the use of nanobots to achieve immortality. Wouldn't this create many more problems, such as over-population?
James Whipple says:
I agree!
Matt Neil says to Ray Kurzweil:
Yes, and on that, how many tablets are you taking a day for longevity—some say it's in the hundreds!
Ray Kurzweil says:
WRG over population:
Ray Kurzweil says:
That would be a problem if we had radical life extension and NO other changes. But nanotechnology will also enable us to create any physical product we will need from inexpensive raw materials being reorganized by massively parallel computerized processes using table top nanotech fabricators (2020s scenario). We'll be able to meet the needs of any conceivable size biological population. I describe in Singularity is Near a scenario for energy—by capturing just 3% of 1% of the sunlight that falls on the Earth we can meet the projected energy needs of 2030. We'll be able to do this with nanoengineered solar panels and store the energy in highly decentralized nanoengineered fuel cells. (end)
Ray Kurzweil says:
Finally, last question, wrg supplements:
Ray Kurzweil says:
I do take a lot of supplements (about 250 pills a day) to "reprogram" my biochemistry. I take a lot of tests (50 or 60 blood levels) every few months to see how I'm doing. And I'm doing fine. I had type II diabetes 22 years ago but for 20 years have had no indication of this. My cholesterol many years ago was 280, but it's been 130 for a long time. And all my other levels are relatively ideal. And according to biological aging tests, I was biologically 38 when I was chronologically 40. Now that I'm chronologically 57, I come out about 40 biologically. So there may be controversy about the validity of these biological aging tests, but I do a lot of other testing and feel I'm doing well. People may think this is a lot of trouble to go to, but actually I think it's a lot more trouble to get sick. For young people in their 20s and 30s, they only need to stay reasonably healthy and perhaps take a good multivitamin. But for my contemporaries, people in their 50s and 60s, if they really want to be in good shape when we have these dramatic new technologies from biotech and nanotech, then they need to be aggressive about reprogramming their biochemistry now.
Ray Kurzweil says:
Thanks again for chatting—enjoyed it a great deal!
© 2005 xplora. Reprinted with permission.
President Berkey, trustees, esteemed faculty, honored graduates, proud parents and guests, it's a pleasure to be here. It's a great honor to receive this distinction. Congratulations to all of you. I've long been an admirer of WPI and this is a terrific way to start your career. Actually, judging by the practical experience you've had and the entrepreneurship which is blossoming on this campus, you've already started your career.
A commencement is a good time to reflect on the future, on your future, and I've actually spent a few decades thinking about the future, trying to model technology trends. I suppose that's one reason you asked me to share my ideas with you on what the future will hold, which will be rather different and empowering in terms of our ability to create knowledge, more so than many people realize.
I started thinking about the future and trying to anticipate it because of my interest in being an inventor myself. I realized that my inventions had to make sense when I finished a project, which would be three or four years later, and the world would be a different place. Everything would be different—the channels of distribution, the development tools. Most inventions, most technology projects fail not because the R&D department can't get it to work—if you read business plans, 90 percent of those groups will do exactly what they say if they're given the opportunity, yet 90 percent of those projects will still fail because the timing is wrong. Not all the enabling factors will be in place when they're needed. So realizing that, I began to try to model technology trends, attempting to anticipate where technology will be. This has taken on a life of its own. I have a team of 10 people that gathers data in many different fields and we try to build mathematical models of what the future will look like.
Now, people say you can't predict the future. And for some things that turns out to be true. If you ask me, Will the stock price of Google be higher or lower three years from now? that's hard to predict. What will the next common wireless standard be? WiMAX, G-3, CDMA? That's hard to predict. But if you ask me, What will the cost of a MIPS of computing be in 2010? or, How much will it cost to sequence a base pair of DNA in 2012? or, What will the spatial and temporal resolution of non-invasive brain scanning be in 2014?, I can give you a figure and it's likely to be accurate because we've been making these predictions for several decades based on these models. There's smooth, exponential growth in the power of these information technologies and computation that goes back a century—very smooth, exponential growth, basically doubling the power of electronics and communication every year. That's a 50 percent deflation rate.
The same thing is true in biology. It took us 15 years to sequence HIV. We sequenced SARS in 31 days. We'll soon be able to sequence a virus in just a few days' time. We're basically doubling the power of these technologies every year.
And that's going to lead to three great revolutions that sometimes go by the letters GNR: genetics, nanotechnology and robotics. Let me describe these briefly and talk about the implications for our lives ahead.
G, genetics, which is really a term for biotechnology, means that we are gaining the tools to actually understand biology as information processes and reprogram them. Now, 99 percent of the drugs that are on the market today were not done that way. They were done through drug discovery, basically finding something. Oh, here's something that lowers blood pressure. We have no idea why it works or how it works and invariably it has lots of side effects, similar to primitive man and woman when they discovered their first tools. Oh, here's a rock, this will make a good hammer. But we didn't have the means of shaping the tools to actually do a job. We're now understanding the information processes underlying disease and aging and getting the tools to reprogram them.
We have little software programs inside us called genes, about 23 thousand of them. They were designed or evolved tens of thousands of years ago when conditions were quite different. I'll give you just one example. The fat insulin receptor gene says, "Hold on to every calorie because the next hunting season may not work out so well." And that's a gene we'd like to reprogram. It made sense 20 thousand years ago when calories were few and far between. What would happen if we blocked that? We have a new technology that can turn genes off called RNA interference. So when that gene was turned off in mice, these mice ate ravenously and yet they remained slim. They got the health benefits of being slim. They didn't get diabetes, didn't get heart disease or cancer. They lived 20 to 25 percent longer while eating ravenously. There are several pharmaceutical companies who have noticed that this might be a good human drug.
There are many other genes we'd like to turn off. There are genes that are necessary for atherosclerosis, the cause of heart disease, to progress. There are genes that cancer relies on to progress. If we can turn these genes off, we could turn these diseases off. Turning genes off is just one of the methodologies. There are new forms of gene therapy that actually add genes, so we'll not just have designer babies but designer baby boomers. And you probably read this Korean announcement a couple of days ago of a new form of cell therapy where we can actually create new cells with your DNA, so if you need a new heart or new heart cells you will be able to grow them with your own DNA, have them DNA-corrected, and thereby rejuvenate all your cells and tissues.
Ten or 15 years from now, which is not that far away, we'll have the maturing of these biotechnology techniques and we'll dramatically overcome the major diseases that we've struggled with for eons, and these techniques will also allow us to slow down, stop and even reverse aging processes.
The next revolution is nanotechnology, where we're applying information technology to matter and energy. We'll be able to overcome major problems that human civilization has struggled with. For example, energy. We have a little bit of sunlight here today. If we captured .03 percent (that's three ten-thousandths) of the sunlight that falls on the Earth, we could meet all of our energy needs. We can't do that today because solar panels are very heavy, expensive and inefficient. New nano-engineered designs, designed at the molecular level, will enable us to create very inexpensive, very efficient, light-weight solar panels, store the energy in nano-engineered fuel cells, which are highly decentralized, and meet all of our energy needs.
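The solar claim in that paragraph can be checked roughly against commonly cited order-of-magnitude figures; the two constants below (total sunlight reaching Earth, average world demand in the mid-2000s) are assumptions for the sake of illustration, not numbers from the speech.

```python
# Rough check of "three ten-thousandths of the sunlight" against
# approximate, commonly cited figures (assumptions, not from the speech).
incident_solar_watts = 1.7e17    # roughly 170,000 TW reaching Earth
world_demand_watts = 1.5e13      # roughly 15 TW average demand, mid-2000s
captured = 0.0003 * incident_solar_watts
print(captured / world_demand_watts)  # -> about 3.4x current demand
```

On those assumptions, the captured fraction would still cover even a tripling of demand, which is consistent with the argument being made here.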
The killer app of nanotechnology is something called nanobots, basically little robots the size of blood cells. If that sounds very futuristic, there are already four major conferences on the topic, and these devices are already performing therapeutic functions in animals. One scientist cured Type-1 diabetes with these blood cell-sized nano-engineered capsules.
In regard to the 2020s, these devices will be able to go inside the human body and keep us healthy by destroying pathogens, correcting DNA errors, killing cancer cells and so on, and even go into the brain and interact with our biological neurons. If that sounds futuristic, there are already neural implants that are FDA-approved, so there are people walking around who have computers in their brains, and the biological neurons in their vicinity are perfectly happy to interact with these computerized devices. And the latest generation of the neural implant for Parkinson's disease allows the patients to download new software to their neural implant from outside the patient. By the 2020s, we'll be able to greatly enhance human intelligence, provide full-immersion virtual reality, for example, from within the nervous system using these types of technologies.
And finally R, which stands for robotics, which is really artificial intelligence at the human level, we'll see that in the late 2020s. By that time this exponential growth of computation will provide computer systems that are more powerful than the human brain. We'll have completed the reverse engineering of the human brain to get the software algorithms, the secrets, the principles of operation of how human intelligence works. A side benefit of that is we'll have greater insight into ourselves, how human intelligence works, how our emotional intelligence works, what human dysfunction is all about. We'll be able to correct, for example, neurological diseases and also expand human intelligence. And this is not going to be an alien invasion of intelligent machines. We already routinely do things in our civilization that would be impossible without our computer intelligence. If all the AI programs, narrow AI, that's embedded in our economic infrastructure were to stop today, our human civilization would grind to a halt. So we're already very integrated with our technology. Computer technology used to be very remote. Now we carry it in our pockets. It'll soon be in our clothing. It's already begun migrating into our bodies and brains. We will become increasingly intimate with our technology.
The implication of all this is that we will extend human longevity. We've already done that. A thousand years ago, human life expectancy was about 23. So most of you would be senior citizens if this were taking place a thousand years ago. In 1800, 200 years ago, human life expectancy was 37. So most of the parents here, including myself, wouldn't be here. It was 50 years in 1900. It's now pushing 80. Every time there's been some advance in technology we've pushed it forward: sanitation, antibiotics. This biotechnology revolution will expand it again. Nanotechnology will solve problems that we don't get around to with biotechnology. We'll have dramatic expansion of human longevity.
But actually life would get boring if we were sitting around for a few hundred years—we would be doing the same things over and over again—unless we had radical life expansion. And this technology will also expand our opportunities, expand our ability to create and appreciate knowledge. And creating knowledge is what the human species is all about. We're the only species that has knowledge that we pass down from generation to generation. That's what you've been doing for the last four years. That's what you will continue doing indefinitely. We are exponentially expanding human knowledge, and that is really what is exciting about the future.
I was told that commencement addresses should have a vision, which I've tried to share with you, and some practical advice. And my practical advice is that creating knowledge is what will be most exciting in life. And in order to create knowledge you have to have passion. So find a challenge that you can be passionate about, and there are many of them that are worthwhile. And if you're passionate about a worthwhile challenge, you can find the ideas to overcome that challenge. Those ideas exist and you can find them. And persistence usually pays off. You've all had timed tests where you had two or three hours to complete a test. But the tests in life are not timed. If you need an extra hour you can take it. Or an extra day, an extra week, an extra year, an extra decade. You're the only one that will determine your own success or failure. Thomas Edison tried thousands of filaments to get his light bulb to work and none of them worked. And he easily could have said, I guess all those skeptics who said that a practical light bulb was impossible were right. Obviously he didn't do that. You know the rest of the story.
If you have a challenge that you feel passionately about that's really worthwhile, then you should never give in. To quote Winston Churchill: "Never give in. Never give in. Never, never, never, never, in nothing great or small, large or petty, never give in."
Congratulations once again. This is a great achievement. I wish all of you long lives—very long lives—of success, creativity, health and happiness. And may the Force be with you.
© 2005 KurzweilAI.net