The Drexler-Smalley debate on molecular assembly

December 1, 2003

Nanotechnology pioneer Eric Drexler and Rice University Professor and Nobelist Richard Smalley have engaged in a crucial debate on the feasibility of molecular assembly. Smalley’s position, which denies both the promise and the peril of molecular assembly, will ultimately backfire and will fail to guide nanotechnology research in the needed constructive direction, says Ray Kurzweil. By the 2020s, molecular assembly will provide tools to effectively combat poverty, clean up our environment, overcome disease, extend human longevity, and pursue many other worthwhile goals, he predicts.

Nanotechnology pioneer Eric Drexler and Rice University Professor and Nobelist Richard Smalley have engaged in a crucial debate on the feasibility of molecular assembly, which is the key to the most revolutionary capabilities of nanotechnology. Although Smalley was originally inspired by Drexler’s ground-breaking works and has himself become a champion of contemporary research initiatives in nanotechnology, he has also taken on the role of key critic of Drexler’s primary idea of precisely guided molecular manufacturing. The debate has picked up intensity with the publication of several rounds of this dialogue between the two pioneers. First, some background:

Background: The Roots of Nanotechnology

Nanotechnology promises the tools to rebuild the physical world, our bodies and brains included, molecular fragment by molecular fragment, potentially atom by atom. We are shrinking the key feature size of technology, in accordance with what I call the “law of accelerating returns,” at the exponential rate of approximately a factor of 4 per linear dimension per decade. At this rate, the key feature sizes for most electronic and many mechanical technologies will be in the nanotechnology range, generally considered to be under 100 nanometers, by the 2020s (electronics has already dipped below this threshold, albeit not yet in three-dimensional structures and not self-assembling). Meanwhile, there has been rapid progress, particularly in the last several years, in preparing the conceptual framework and design ideas for the coming age of nanotechnology.
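
As a rough illustration of this rate, the short sketch below projects feature size forward under the article’s stated shrinkage of roughly a factor of 4 per linear dimension per decade; the starting size and year are illustrative placeholders (roughly micron-scale mechanical features today), not figures from the article.

    # Illustrative only: project feature size under ~4x shrinkage per linear
    # dimension per decade. The starting size and year are assumed placeholders.
    def feature_size_nm(start_nm, start_year, year, factor_per_decade=4.0):
        decades = (year - start_year) / 10.0
        return start_nm / (factor_per_decade ** decades)

    start_nm, start_year = 2000.0, 2003   # assumed ~2-micron mechanical feature size today
    for year in range(2003, 2031, 5):
        size = feature_size_nm(start_nm, start_year, year)
        marker = "  <- under 100 nm" if size < 100 else ""
        print(f"{year}: {size:7.1f} nm{marker}")

On these assumptions the 100-nanometer threshold is crossed around the mid-2020s, consistent with the timeframe stated above.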

Most nanotechnology historians date the conceptual birth of nanotechnology to physicist Richard Feynman’s seminal speech in 1959, “There’s Plenty of Room at the Bottom,” in which he described the profound implications and the inevitability of engineering machines at the level of atoms:

“The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible . . . for a physicist to synthesize any chemical substance that the chemist writes down. . . . How? Put the atoms down where the chemist says, and so you make the substance. The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed – a development which I think cannot be avoided.”

An even earlier conceptual root for nanotechnology was formulated by the information theorist John Von Neumann in the early 1950s with his model of a self-replicating system based on a universal constructor combined with a universal computer. In this proposal, the computer runs a program that directs the constructor, which in turn constructs a copy of both the computer (including its self-replication program) and the constructor. At this level of description, Von Neumann’s proposal is quite abstract — the computer and constructor could be made in a great variety of ways, as well as from diverse materials, and could even be a theoretical mathematical construction. He took the concept one step further and proposed a “kinematic constructor,” a robot with at least one manipulator (arm) that would build a replica of itself from a “sea of parts” in its midst.
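
Von Neumann’s scheme is abstract enough that its core logic can be caricatured in a few lines. The sketch below is purely illustrative (the class and names are invented for this example, and nothing here models real machinery); it shows only the self-referential loop: a program drives a constructor, and the constructor’s output carries a copy of both the constructor and the program.

    # Toy illustration of Von Neumann's universal-constructor logic.
    # Names are invented for this sketch; only the copy-the-builder-and-its-program
    # loop is being illustrated.
    class Constructor:
        def __init__(self, program):
            self.program = program      # the (self-)construction instructions

        def build(self):
            # Following its program, the constructor produces a new constructor
            # carrying the same program, so replication can continue.
            return Constructor(self.program)

    parent = Constructor(program="copy the constructor and its program")
    child = parent.build()
    assert child.program == parent.program   # the copy carries its own instructions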

It was left to Eric Drexler to found the modern field of nanotechnology, with a draft of his seminal Ph.D. thesis in the mid-1980s, by essentially combining these two intriguing suggestions. Drexler described a Von Neumann kinematic constructor, which for its “sea of parts” used atoms and molecular fragments, as suggested in Feynman’s speech. Drexler’s vision cut across many disciplinary boundaries and was so far-reaching that no one was daring enough to be his thesis advisor, except for my own mentor, Marvin Minsky. Drexler’s doctoral thesis (premiered in his book Engines of Creation in 1986 and articulated technically in his 1992 book Nanosystems) laid out the foundation of nanotechnology and provided the road map still being pursued today.

Von Neumann’s Universal Constructor, as applied to atoms and molecular fragments, was now called a “universal assembler.” Drexler’s assembler was universal because it could essentially make almost anything in the world. A caveat is in order here. The products of a universal assembler necessarily have to follow the laws of physics and chemistry, so only atomically stable structures would be viable. Furthermore, any specific assembler would be restricted to building products from its sea of parts, although the feasibility of using individual atoms has been repeatedly demonstrated.

Although Drexler did not provide a detailed design of an assembler, and such a design has still not been fully specified, his thesis did provide extensive existence proofs for each of the principal components of a universal assembler, which include the following subsystems:

  • The computer: to provide the intelligence to control the assembly process. As with all of the subsystems, the computer needs to be small and simple. Drexler described an intriguing mechanical computer with molecular “locks” instead of transistor gates. Each lock required only 5 cubic nanometers of space and could switch 20 billion times a second. This proposal remains more competitive than any known electronic technology, although electronic computers built from three-dimensional arrays of carbon nanotubes may be a suitable alternative.
  • The instruction architecture: Drexler and his colleague Ralph Merkle have proposed a “SIMD” (single instruction, multiple data) architecture, in which a single data store would record the instructions and transmit them to trillions of molecular-sized assemblers (each with its own simple computer) simultaneously. Thus each assembler would not have to store the entire program for creating the desired product. This “broadcast” architecture also addresses a key safety concern: a self-replication process that got out of control could be shut down by terminating the centralized source of the replication instructions (a small illustrative sketch of this broadcast scheme follows the description of assembler operation below). However, as Drexler points out,[1] a nanoscale assembler does not necessarily have to be self-replicating. Given the inherent dangers of self-replication, the ethical standards proposed by the Foresight Institute contain prohibitions against unrestricted self-replication, especially in a natural environment.
  • Instruction transmission: transmission of the instructions from the centralized data store to each of the many assemblers would be accomplished electronically, if the computer is electronic, or through mechanical vibrations, if Drexler’s concept of a mechanical computer is used.
  • The construction robot: the constructor would be a simple molecular robot with a single arm, similar to Von Neumann’s kinematic constructor, but on a tiny scale. The feasibility of building molecular-based robot arms, gears, rotors, and motors has been demonstrated in the years since Drexler’s thesis, as I discuss below.
  • The robot arm tip: Drexler’s follow-up book in 1992, Nanosystems: Molecular Machinery, Manufacturing, and Computation, provided a number of feasible chemistries for the tip of the robot arm that would be capable of grasping (using appropriate atomic force fields) a molecular fragment, or even a single atom, and then depositing it in a desired location. We know from the chemical vapor deposition process used to construct artificial diamonds that it is feasible to remove individual carbon atoms, as well as molecular fragments that include carbon, and then place them in another location through precisely controlled chemical reactions at the tip. The process of building artificial diamond is a chaotic one involving trillions of atoms, but the underlying chemistry has been harnessed to design a robot arm tip that can remove hydrogen atoms from a source material and deposit them at desired locations in a molecular machine being constructed. In this proposal, the tiny machines are built out of a diamond-like (“diamondoid”) material. In addition to having great strength, the material can be doped with impurities in a precise fashion to create electronic components such as transistors. Simulations have shown that gears, levers, motors, and other mechanical systems can also be constructed from these carbon arrays. Additional proposals have been made in the years since, including several innovative designs by Ralph Merkle.[2] In recent years, there has been a great deal of attention on carbon nanotubes, composed of hexagonal arrays of carbon atoms assembled in three dimensions, which are also capable of providing both mechanical and electronic functions at the molecular level.
  • The assembler’s internal environment needs to prevent environmental impurities from interfering with the delicate assembly process. Drexler’s proposal is to maintain a near vacuum and build the assembler walls out of the same diamondoid material that the assembler itself is capable of making.
  • The energy required for the assembly process can be provided either through electricity or through chemical energy. Drexler proposed a chemical process with the fuel interlaced with the raw building material. More recent proposals utilize nanoengineered fuel cells incorporating hydrogen and oxygen or glucose and oxygen.

Although many configurations have been proposed, the typical assembler has been described as a tabletop unit that can manufacture any physically possible product for which we have a software description. Products can range from computers, clothes, and works of art to cooked meals. Larger products, such as furniture, cars, or even houses, can be built in a modular fashion, or using larger assemblers. Of particular importance, an assembler can create copies of itself. The incremental cost of creating any physical product, including the assemblers themselves, would be pennies per pound, basically the cost of the raw materials. The real cost, of course, would be the value of the information describing each type of product, that is, the software that controls the assembly process. Thus everything of value in the world, including physical objects, would be comprised essentially of information. We are not that far from this situation today, since the “information content” of products is rapidly asymptoting to 100 percent of their value.

In operation, the centralized data store sends out commands simultaneously to all of the assembly robots. There would be trillions of robots in an assembler, each executing the same instruction at the same time. The assembler creates these molecular robots by starting with a small number and then using these robots to create additional ones in an iterative fashion, until the requisite number of robots has been created.

Each local robot has a local data store that specifies the type of mechanism it is building. This local data store is used to mask the global instructions being sent from the centralized data store, so that certain instructions are blocked and local parameters are filled in. In this way, even though all of the assemblers are receiving the same sequence of instructions, there is a level of customization in the part being built by each molecular robot. Each robot extracts the raw materials it needs, which include individual carbon atoms and molecular fragments, from the source material. This source material also includes the requisite chemical fuel. All of the design requirements, including routing of the instructions and of the source material, were described in detail in Drexler’s two classic works.
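
To make the broadcast-and-mask scheme concrete, here is a minimal sketch; the instruction names, mask format, and parameter scheme are invented for illustration and do not come from any actual assembler design. A central store streams one instruction at a time to every robot; each robot’s small local mask decides whether to obey it and fills in its local parameters, and halting the broadcast halts everything at once.

    # Toy illustration of the "broadcast" (SIMD-style) architecture with local
    # instruction masking. All names and data formats here are invented.
    GLOBAL_PROGRAM = [
        ("position_tip", {"site": "A"}),
        ("deposit_fragment", {}),          # fragment chosen locally by each robot
        ("position_tip", {"site": "B"}),
        ("deposit_fragment", {}),
    ]

    class MolecularRobot:
        def __init__(self, name, allowed_ops, local_params):
            self.name = name
            self.allowed_ops = allowed_ops      # local mask: which broadcast steps to obey
            self.local_params = local_params    # local customization of parameters

        def execute(self, op, params):
            if op not in self.allowed_ops:
                return                          # masked out: ignore this broadcast step
            merged = {**params, **self.local_params.get(op, {})}
            print(f"{self.name}: {op} {merged}")

    robots = [
        MolecularRobot("robot-1", {"position_tip", "deposit_fragment"},
                       {"deposit_fragment": {"fragment": "CH3"}}),
        MolecularRobot("robot-2", {"deposit_fragment"},
                       {"deposit_fragment": {"fragment": "H"}}),
    ]

    # The centralized data store broadcasts each instruction to every robot;
    # terminating this broadcast loop shuts down all assembly and replication at once.
    for op, params in GLOBAL_PROGRAM:
        for robot in robots:
            robot.execute(op, params)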

The Biological Assembler

Nature shows that molecules can serve as machines because living things work by means of such machinery. Enzymes are molecular machines that make, break, and rearrange the bonds holding other molecules together. Muscles are driven by molecular machines that haul fibers past one another. DNA serves as a data-storage system, transmitting digital instructions to molecular machines, the ribosomes, that manufacture protein molecules. And these protein molecules, in turn, make up most of the molecular machinery.

— Eric Drexler

The ultimate existence proof of the feasibility of a molecular assembler is life itself. Indeed, as we deepen our understanding of the information basis of life processes, we are discovering specific ideas to address the design requirements of a generalized molecular assembler. For example, proposals have been made to use a molecular energy source of glucose and ATP, similar to that used by biological cells.

Consider how biology solves each of the design challenges of a Drexler assembler. The ribosome represents both the computer and the construction robot. Life does not use centralized data storage, but provides the entire code to every cell. The ability to restrict the local data storage of a nanoengineered robot to only a small part of the assembly code (using the “broadcast” architecture), particularly when doing self-replication, is one critical way nanotechnology can be engineered to be safer than biology.

With the advent of full-scale nanotechnology in the 2020s, we will have the potential to replace biology’s genetic information repository in the cell nucleus with a nanoengineered system that would maintain the genetic code and simulate the actions of RNA, the ribosome, and other elements of the computer in biology’s assembler. There would be significant benefits in doing this. We could eliminate the accumulation of DNA transcription errors, one major source of the aging process. We could introduce DNA changes to essentially reprogram our genes (something we’ll be able to do long before this scenario, using gene-therapy techniques).

With such a nanoengineered system, the recommended broadcast architecture could enable us to turn off unwanted replication, thereby defeating cancer, autoimmune reactions, and other disease processes. Although most of these disease processes will have already been defeated by genetic engineering, reengineering the computer of life using nanotechnology could eliminate any remaining obstacles and create a level of durability and flexibility that goes vastly beyond the inherent capabilities of biology.

Life’s local data storage is, of course, the DNA strands, broken into specific genes on the chromosomes. The task of instruction-masking (blocking genes that do not contribute to a particular cell type) is controlled by the short RNA molecules and peptides that govern gene expression. The internal environment in which the ribosome is able to function is the particular chemical environment maintained inside the cell, which includes a particular acid-alkaline equilibrium (pH between 6.8 and 7.1 in human cells) and other chemical balances needed for the delicate operations of the ribosome. The cell membrane is responsible for protecting this internal cellular environment from disturbance by the outside world.

The biological equivalent of the robot arm tip is the ribosome’s use of enzymatic reactions to break off each amino acid from its specific transfer RNA and to connect it to the adjoining amino acid with a peptide bond.

However, the goal of molecular manufacturing is not merely to replicate the molecular assembly capabilities of biology. Biological systems are limited to building systems from protein, which has profound limitations in strength and speed. Nanobots built from diamondoid gears and rotors can be thousands of times faster and stronger than biological cells. The comparison is even more dramatic with regard to computation: the switching speed of nanotube-based computation would be millions of times faster than the extremely slow transaction speed of the electrochemical switching used in mammalian interneuronal connections (typically around 200 transactions per second, although the nonlinear transactions that take place in the dendrites and synapses are more complex than single computations).
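
Using only figures already quoted in this article (roughly 200 interneuronal transactions per second, and the roughly 20-billion-switchings-per-second figure given earlier for the proposed molecular locks), a back-of-the-envelope ratio shows the scale of the gap; both numbers are illustrative estimates, not measurements.

    # Back-of-the-envelope speed comparison using figures quoted in this article.
    neuron_rate = 200        # approximate interneuronal transactions per second
    nanolock_rate = 2e10     # approximate switchings per second for proposed molecular locks

    print(f"ratio ~ {nanolock_rate / neuron_rate:.0e}")   # ~1e+08, i.e. ~100 million times faster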

The concept of a diamondoid assembler described above uses a consistent input material (for construction and fuel). This is one of several protections against molecule-scale replication of robots in an uncontrolled fashion in the outside world. Biology’s replication robot, the ribosome, also requires carefully controlled source and fuel materials, which are provided by our digestive system. As nano-based replicators become more sophisticated, more capable of extracting carbon atoms and carbon-based molecular fragments from less well-controlled source materials, and able to operate outside of controlled replicator enclosures such as in the biological world, they will have the potential to present a grave threat to that world, particularly in view of the vastly greater strength and speed of nano-based replicators over any biological system. This is, of course, the source of great controversy, which is alluded to in the Drexler-Smalley debate article and letters.

In the decade since publication of Drexler’s Nanosystems, each aspect of Drexler’s conceptual designs has been strengthened through additional design proposals, supercomputer simulations, and, most importantly, actual construction of molecular machines. Boston College chemistry professor T. Ross Kelly reported in the journal Nature the construction of a chemically powered nanomotor built from 78 atoms.[3] A biomolecular research group headed by C. D. Montemagno created an ATP-fueled nanomotor.[4] Another molecule-sized motor, fueled by solar energy, was created by Ben Feringa at the University of Groningen in the Netherlands out of 58 atoms.[5] Similar progress has been made on other molecular-scale mechanical components such as gears, rotors, and levers. Systems demonstrating the use of chemical energy and acoustic energy (as originally described by Drexler) have been designed, simulated, and, in many cases, actually constructed. Substantial progress has also been made in developing various types of electronic components from molecule-scale devices, particularly in the area of carbon nanotubes, an area that Smalley has pioneered.

Fat and Sticky Fingers

In the wake of rapidly expanding development of each facet of future nanotechnology systems, no serious flaw in Drexler’s universal assembler concept has been discovered or described. Smalley’s highly publicized objection in Scientific American[6] was based on a distorted description of the Drexler proposal; it ignored the extensive body of work in the past decade. As a pioneer of carbon nanotubes, Smalley has gone back and forth between enthusiasm and skepticism, having written that “nanotechnology holds the answer, to the extent there are answers, to most of our pressing material needs in energy, health, communication, transportation, food, water ….”

Smalley describes Drexler’s assembler as consisting of five to ten “fingers” (manipulator arms) to hold, move, and place each atom in the machine being constructed. He then goes on to point out that there isn’t room for so many fingers in the cramped space in which a nanobot assembly robot has to work (which he calls the “fat fingers” problem) and that these fingers would have difficulty letting go of their atomic cargo because of molecular attraction forces (the “sticky fingers” problem). Smalley describes the “intricate three-dimensional waltz that is carried out” by five to fifteen atoms in a typical chemical reaction. But Drexler’s proposal doesn’t look anything like the straw-man description that Smalley criticizes: Drexler’s proposal, and most of those that have followed it, use a single probe, or “finger.”

Moreover, there have been extensive descriptions and analyses of viable tip chemistries that do not involve grasping and placing atoms as if they were mechanical pieces to be deposited in place. For example, the feasibility of moving hydrogen atoms using Drexler’s “propynyl hydrogen abstraction” tip[7] has been extensively confirmed in the intervening years.[8] The ability of scanning probe microscopes (SPMs), beginning with the scanning tunneling microscope developed at IBM in 1981, and of the more sophisticated atomic force microscope, to place individual atoms through specific reactions of a tip with a molecular-scale structure provides additional existence proofs. Indeed, if Smalley’s critique were valid, none of us would be here to discuss it, because life itself would be impossible.

Smalley also objects that, despite “working furiously . . . generating even a tiny amount of a product would take [a nanobot] . . . millions of years.” Smalley is correct, of course, that an assembler with only one nanobot wouldn’t produce any appreciable quantity of a product. However, the basic concept of nanotechnology is that we will need trillions of nanobots to accomplish meaningful results. This is also the source of the safety concerns that have received ample attention. Creating trillions of nanobots at reasonable cost will require the nanobots to make themselves. This self-replication solves the economic issue while introducing grave dangers. Biology used the same solution to create organisms with trillions of cells, and indeed we find that virtually all diseases derive from biology’s self-replication process gone awry.
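
The arithmetic behind “trillions” is simple exponential doubling. Assuming, purely for illustration, that every existing nanobot builds one copy of itself per generation, the population doubles each generation and reaches a trillion in about forty generations:

    # How many doubling generations to reach a trillion nanobots?
    # Assumes (for illustration only) that each nanobot builds one copy per generation.
    import math

    target = 1e12
    generations = math.ceil(math.log2(target))
    print(generations, 2 ** generations)   # 40 generations; 2**40 is about 1.1e12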

Earlier challenges to the concepts underlying nanotechnology have also been effectively addressed. Critics pointed out that nanobots would be subject to bombardment by the thermal vibration of nuclei, atoms, and molecules. This is one reason conceptual designers of nanotechnology have emphasized building structural components from diamondoid or carbon nanotubes. Increasing the strength or stiffness of a system reduces its susceptibility to thermal effects. Analyses of these designs have shown them to be thousands of times more stable in the presence of thermal effects than biological systems, so they can operate in a far wider temperature range.[9]
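
One way to see why stiffness matters: for a component held in place by an elastic restoring force at temperature T, the classical mean-square positional fluctuation is kB·T divided by the stiffness, so the root-mean-square displacement shrinks as the square root of the stiffness grows. The stiffness values in the sketch below are illustrative placeholders, not numbers taken from Drexler’s or Merkle’s analyses.

    # Classical thermal positional fluctuation of an elastically restrained part:
    # <x^2> = kB*T / k_s, so rms displacement = sqrt(kB*T / k_s).
    # The stiffness values below are illustrative placeholders only.
    import math

    kB = 1.380649e-23   # Boltzmann constant, J/K
    T = 310.0           # roughly body temperature, K

    for label, k_s in [("floppy, protein-like (0.1 N/m)", 0.1),
                       ("stiff, diamondoid-like (10 N/m)", 10.0)]:
        rms_nm = math.sqrt(kB * T / k_s) * 1e9
        print(f"{label}: rms displacement ~ {rms_nm:.3f} nm")

A hundredfold increase in stiffness cuts the thermal positional spread tenfold, which is the qualitative point of building structural components from stiff diamondoid or nanotube material.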

Similar challenges were made regarding positional uncertainty due to quantum effects, based on the extremely small feature size of nanoengineered devices. Quantum effects are significant for an electron, but a single carbon-atom nucleus is more than 20,000 times more massive than an electron. A nanobot will be constructed from hundreds of thousands to millions of carbon and other atoms, so a nanobot will be billions of times more massive than an electron. Plugging this ratio into the fundamental equation for quantum positional uncertainty shows it to be an insignificant factor.
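
That scaling can be made concrete with a standard order-of-magnitude estimate: for a mass m with a thermal momentum spread, the minimum quantum positional uncertainty is roughly ħ / (2·√(m·kB·T)), which falls as 1/√m. The “nanobot” mass below (a million carbon atoms) is an assumed illustrative figure, not a design value.

    # Order-of-magnitude quantum positional uncertainty for a thermally agitated mass:
    # dx ~ hbar / (2 * sqrt(m * kB * T)). The nanobot mass is an assumed example.
    import math

    hbar = 1.054571817e-34    # J*s
    kB = 1.380649e-23         # J/K
    T = 300.0                 # K
    amu = 1.66053906660e-27   # kg

    for label, m in [("electron", 9.1093837015e-31),
                     ("carbon nucleus (12 amu)", 12 * amu),
                     ("nanobot (~1e6 carbon atoms)", 1e6 * 12 * amu)]:
        dx = hbar / (2.0 * math.sqrt(m * kB * T))
        print(f"{label:30s} dx ~ {dx:.1e} m")

On this estimate the uncertainty is on the order of a nanometer for an electron, far below a bond length for a single carbon nucleus, and utterly negligible for an object of nanobot mass.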

Power has represented another challenge. Drexler’s original proposals involved glucose-oxygen fuel cells, which have held up well in feasibility studies. An advantage of the glucose-oxygen approach is that nanomedicine applications can harness the glucose, oxygen, and ATP resources already provided by the human digestive system. A nanoscale motor was recently created using propellers made of nickel and powered by an ATP-based enzyme.[10]

However, recent progress in implementing MEMS-scale and even nanoscale hydrogen-oxygen fuel cells has provided an alternative approach. Hydrogen-oxygen fuel cells, with the hydrogen provided by safe methanol fuel, have made substantial progress in recent years. A small company in Massachusetts, Integrated Fuel Cell Technologies, Inc.,[11] has demonstrated a MEMS-based fuel cell. Each postage-stamp-sized device contains thousands of microscopic fuel cells and includes the fuel lines and electronic controls. NEC plans to introduce fuel cells based on nanotubes in 2004 for notebook computers and other portable electronics. The company claims these small power sources will run devices for up to 40 hours before the user needs to change the methanol canister.

The Debate Heats Up

On April 16, 2003, Drexler responded to Smalley’s Scientific American article with an open letter. He cited 20 years of research by himself and others and responded specifically to the fat and sticky fingers objections. As I discussed above, molecular assemblers were never described as having fingers at all, but rather as relying on precise positioning of reactive molecules. Drexler cited biological enzymes and ribosomes as examples of precise molecular assembly in the natural world. Drexler closed by quoting Smalley’s own observation that “when a scientist says something is possible, they’re probably underestimating how long it will take. But if they say it’s impossible, they’re probably wrong.”

Three more rounds of this debate were published today. Smalley responds to Drexler’s open letter by backing off of his fat and sticky fingers objections and acknowledging that enzymes and ribosomes do indeed engage in the precise molecular assembly that Smalley had earlier indicated was impossible. Smalley then argues that biological enzymes work only in water and that such water-based chemistry is limited to biological structures such as “wood, flesh and bone.” As Drexler has stated,[12] this is erroneous. Many enzymes, even those that ordinarily work in water, can also function in anhydrous organic solvents, and some enzymes can operate on substrates in the vapor phase, with no liquid at all.[13]

Smalley goes on to state (without any derivation or citations) that enzyme-like reactions can take place only with biological enzymes. This is also erroneous. It is easy to see why biological evolution adopted water-based chemistry. Water is the most abundant substance found on our planet. It also comprises 70 to 90 percent of our bodies, our food, and indeed of all organic matter. Most people think of water as fairly simple, but it is a far more complex phenomenon than conventional wisdom suggests.

As every grade school child knows, water is composed of molecules, each containing two atoms of hydrogen and one atom of oxygen, giving it the most commonly known chemical formula, H2O. However, consider some of water’s complications and their implications. In the liquid state, the two hydrogen atoms make a 104.5° angle with the oxygen atom, which increases to 109.5° when water freezes. This is why water molecules are more spread out in the form of ice, giving ice a lower density than liquid water, and this is why ice floats.

Although the overall water molecule is electrically neutral, the placement of the electrons creates polarization effects. The side with the hydrogen atoms is relatively positive in electrical charge, whereas the oxygen side is slightly negative. So water molecules do not exist in isolation; rather, they combine with one another in small groups to assume, typically, pentagonal or hexagonal shapes.[14] These multi-molecule structures can change back and forth between hexagonal and pentagonal configurations 100 billion times a second. At room temperature, only about 3 percent of the clusters are hexagonal, but this increases to 100 percent as the water gets colder. This is why snowflakes are hexagonal.

These three-dimensional electrical properties of water are quite powerful and can break apart the strong chemical bonds of other compounds. Consider what happens when you put salt into water. Salt is quite stable when dry but is quickly torn apart into its ionic components when placed in water. The negatively charged oxygen side of the water molecules attracts positively charged sodium ions (Na+), while the positively charged hydrogen side of the water molecules attracts the negatively charged chloride ions (Cl−). In the dry form of salt, the sodium and chlorine atoms are tightly bound together, but these bonds are easily broken by the electrical charge of the water molecules. Water is considered “the universal solvent” and is involved in most of the biochemical pathways in our bodies. So we can regard the chemistry of life on our planet primarily as water chemistry.

However, the primary thrust of our technology has been to develop systems that are not limited to the restrictions of biological evolution, which exclusively adopted water-based chemistry and proteins as its foundation. Biological systems can fly, but if you want to fly at 30,000 feet and at hundreds or thousands of miles per hour, you would use our modern technology, not proteins. Biological systems such as human brains can remember things and do calculations, but if you want to do data mining on billions of items of information, you would want to use our electronic technology, not unassisted human brains.

Smalley is ignoring the past decade of research on alternative means of positioning molecular fragments using precisely guided molecular reactions. Precisely controlled synthesis of diamondoid (diamond-like material formed into precise patterns) has been extensively studied, including the ability to remove a single hydrogen atom from a hydrogenated diamond surface.[15] Related research supporting the feasibility of hydrogen abstraction and precisely guided diamondoid synthesis has been conducted at the Materials and Process Simulation Center at Caltech; the Department of Materials Science and Engineering at North Carolina State University; the Institute for Molecular Manufacturing; the University of Kentucky; the United States Naval Academy; and the Xerox Palo Alto Research Center.[16]

Smalley is also ignoring the well-established scanning probe microscope mentioned above, which uses precisely controlled molecular reactions. Building on these concepts, Ralph Merkle has described tip reactions that can involve up to four reactants.[17] There is extensive literature on site-specific reactions that can be precisely guided and that would be feasible for the tip chemistry in a molecular assembler.[18] Smalley ignores this body of literature when he maintains that only biological enzymes in water can perform this type of reaction. And many tools that go beyond SPMs, able to reliably manipulate atoms and molecular fragments, are now emerging.

On September 3, 2003, Drexler responded to Smalley’s response by alluding once again to the extensive body of literature that Smalley ignores. He cited the analogy of a modern factory, only at a nanoscale. He also cited analyses of transition-state theory indicating that positional control would be feasible at megahertz frequencies for appropriately selected reactants.

The latest installment of this debate is a follow-up letter by Smalley. This letter is short on specifics and science and long on imprecise metaphors that avoid the key issues. He writes, for example, that “much like you can’t make a boy and a girl fall in love with each other simply by pushing them together, you cannot make precise chemistry occur as desired between two molecular objects with simple mechanical motion . . . cannot be done simply by mushing two molecular objects together.” He again acknowledges that enzymes do in fact accomplish this, but refuses to acknowledge that such reactions could take place outside of a biological-like system: “this is why I led you . . . to talk about real chemistry with real enzymes. . . . any such system will need a liquid medium. For the enzymes we know about, that liquid will have to be water, and the types of things that can be synthesized with water around cannot be much broader than meat and bone of biology.”

I can understand Drexler’s frustration in this debate, because I have had many critics who do not bother to read or understand the data and arguments that I have presented for my own conceptions of future technologies. Smalley’s argument is of the form “we don’t have ‘X’ today, therefore ‘X’ is impossible.” I encounter this class of argument repeatedly in the area of artificial intelligence. Critics cite the limitations of today’s systems as proof that such limitations are inherent and can never be overcome. These critics ignore the extensive list of contemporary examples of AI (for example, airplanes and weapons that fly and guide themselves, automated diagnosis of electrocardiograms and blood-cell images, automated detection of credit card fraud, automated investment programs that routinely outperform human analysts, telephone-based natural-language response systems, and hundreds of others): working systems that are commercially available today but were only research programs a decade ago.

Those of us who attempt to project into the future based on well-grounded methodologies are at a disadvantage. Certain future realities may be inevitable, but they are not yet manifest, so they are easy to deny. There was a small body of thought at the beginning of the twentieth century that heavier-than-air flight was feasible, but mainstream skeptics could simply point out that if it was so feasible, why had it never been demonstrated? In 1990, Garry Kasparov scoffed at the idea that a machine chess player could ever possibly defeat him. When it happened in 1997, observers were quick to downplay the achievement by dismissing the importance of chess.

Smalley reveals at least part of his motives at the end of his most recent letter when he writes:

“A few weeks ago I gave a talk on nanotechnology and energy titled ‘Be a Scientist, Save the World’ to about 700 middle and high school students in the Spring Branch ISD, a large public school system here in the Houston area. Leading up to my visit, the students were asked to write an essay on ‘Why I Am a Nanogeek.’ Hundreds responded, and I had the privilege of reading the top 30 essays, picking my favorite top 5. Of the essays I read, nearly half assumed that self-replicating nanobots were possible, and most were deeply worried about what would happen in their future as these nanobots spread around the world. I did what I could to allay their fears, but there is no question that many of these youngsters have been told a bedtime story that is deeply troubling. You and people around you have scared our children.”

I would point out to Smalley that earlier critics also expressed skepticism that worldwide communication networks, or the software viruses that would spread across them, were feasible. Today we have the benefits, and the damage, from both of these capabilities. Along with the danger of software viruses, however, a technological immune system has also emerged. While it does not completely protect us, few people would advocate eliminating the Internet in order to eliminate software viruses. We are obtaining far more benefit than damage from this latest example of intertwined promise and peril.

Smalley’s approach to reassuring the public about the potential abuse of this future technology is not the right strategy. Denying the feasibility of both the promise and the peril of molecular assembly will ultimately backfire and fail to guide research in the needed constructive direction. By the 2020s, molecular assembly will provide tools to effectively combat poverty, clean up our environment, overcome disease, extend human longevity, and pursue many other worthwhile goals.

Like every other technology that humankind has created, it can also be used to amplify and enable our destructive side. It is important that we approach this technology in a knowledgeable manner to gain the profound benefits it promises, while avoiding its dangers. Drexler and his colleagues at the Foresight Institute have been in the forefront of developing the ethical guidelines and design considerations needed to guide the technology in a safe and constructive direction.

Denying the feasibility of an impending technological transformation is a short-sighted strategy.

Notes

[1] Chemical & Engineering News, December 1, 2003

[2] Ralph C. Merkle, “A proposed ‘metabolism’ for a hydrocarbon assembler,” Nanotechnology 8 (1997): 149-162; http://www.zyvex.com/nanotech/hydroCarbonMetabolism.html.

[3] T.R. Kelly, H. De Silva, R.A. Silva, “Unidirectional rotary motion in a molecular system,” Nature 401 (September 9, 1999): 150-152.

[4] C.D. Montemagno, G.D. Bachand, “Constructing nanomechanical devices powered by biomolecular motors,” Nanotechnology 10 (1999): 225-231; G.D. Bachand, C.D. Montemagno, “Constructing organic/inorganic NEMS devices powered by biomolecular motors,” Biomedical Microdevices 2 (2000): 179-184.

[5] N. Koumura, R.W. Zijlstra, R.A. van Delden, N. Harada, B.L. Feringa, “Light-driven monodirectional molecular rotor,” Nature 401 (September 9, 1999): 152-155.

[6] Richard E. Smalley, “Of chemistry, love, and nanobots,” Scientific American 285 (September, 2001): 76-77. http://smalley.rice.edu/rick’s%20publications/SA285-76.pdf.

[7] K. Eric Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, John Wiley & Sons, New York, 1992.

[8] See, for example, Charles B. Musgrave, Jason K. Perry, Ralph C. Merkle, and William A. Goddard III, “Theoretical studies of a hydrogen abstraction tool for nanotechnology,” Nanotechnology 2 (1991): 187-195.

[9] See the equation and explanation on page 3 of Ralph C. Merkle, “‘That’s Impossible!’ How good scientists reach bad conclusions,” http://www.zyvex.com/nanotech/impossible.html.

[10] C.D. Montemagno, G.D. Bachand, “Constructing nanomechanical devices powered by biomolecular motors,” Nanotechnology 10 (1999): 225-231.

[11] By way of disclosure, the author is an advisor and investor in this company.

[12] Chemical & Engineering News, December 1, 2003

[13] A. Zaks and A.M. Klibanov, Science 224 (1984): 1249-1251.

[14] “The apparent simplicity of the water molecule belies the enormous complexity of its interactions with other molecules, including other water molecules” (A. Soper, “Water and ice,” Science 297 (2002): 1288-1289). There is much that is still up for debate, as shown by the numerous articles still being published about this most basic of molecules, H2O. For example, D. Klug, “Glassy water,” Science 294 (2001): 2305-2306; P. Geissler et al., “Autoionization in liquid water,” Science 291 (2001): 2121-2124; J.K. Gregory et al., “The water dipole moment in water clusters,” Science 275 (1997): 814-817; and K. Liu et al., “Water clusters,” Science 271 (1996): 929-933.

A water molecule has slightly negative and slightly positive ends, which means water molecules interact with other water molecules to form networks. The partially positive hydrogen atom on one molecule is attracted to the partially negative oxygen on a neighboring molecule (hydrogen bonding). Three-dimensional hexamers involving 6 molecules are thought to be particularly stable, though none of these clusters lasts longer than a few picoseconds.

The polarity of water results in a number of anomalous properties. One of the best known is that the solid phase (ice) is less dense than the liquid phase. This is because the volume of water varies with the temperature, and the volume increases by about 9% on freezing. Due to hydrogen bonding, water also has a higher-than-expected boiling point.

[15] http://www.foresight.org/SciAmDebate/SciAmResponse.html, http://www.imm.org/SciAmDebate2/smalley.html, http://www.rfreitas.com/Nano/DimerTool.htm.

[16] The analysis of the hydrogen abstraction tool has involved many people, including: Donald W. Brenner, Richard J. Colton, K. Eric Drexler, William A. Goddard, III, J. A. Harrison, Jason K. Perry, Ralph C. Merkle, Charles B. Musgrave, O. A. Shenderova, Susan B. Sinnott, and Carter T. White.

[17] Ralph C. Merkle, “A proposed ‘metabolism’ for a hydrocarbon assembler,” Nanotechnology 8(1997):149-162; http://www.zyvex.com/nanotech/hydroCarbonMetabolism.html

[18] Wilson Ho, Hyojune Lee, “Single bond formation and characterization with a scanning tunneling microscope,” Science 286(26 November 1999):1719-1722; http://www.physics.uci.edu/~wilsonho/stm-iets.html.

K. Eric Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, John Wiley & Sons, New York, 1992, Chapter 8.

Ralph C. Merkle, “A proposed ‘metabolism’ for a hydrocarbon assembler,” Nanotechnology 8(1997):149-162; http://www.zyvex.com/nanotech/hydroCarbonMetabolism.html.

Charles B. Musgrave, Jason K. Perry, Ralph C. Merkle, William A. Goddard III, “Theoretical studies of a hydrogen abstraction tool for nanotechnology,” Nanotechnology 2(1991):187-195; http://www.zyvex.com/nanotech/Habs/Habs.html.

Michael Page, Donald W. Brenner, “Hydrogen abstraction from a diamond surface: Ab initio quantum chemical study using constrained isobutane as a model,” J. Am. Chem. Soc. 113(1991):3270-3274.

Susan B. Sinnott, Richard J. Colton, Carter T. White, Donald W. Brenner, “Surface patterning by atomically-controlled chemical forces: molecular dynamics simulations,” Surf. Sci. 316(1994):L1055-L1060.

D.W. Brenner, S.B. Sinnott, J.A. Harrison, O.A. Shenderova, “Simulated engineering of nanostructures,” Nanotechnology 7(1996):161-167; http://www.zyvex.com/nanotech/nano4/brennerPaper.pdf

S.P. Walch, W.A. Goddard III, R.C. Merkle, “Theoretical studies of reactions on diamond surfaces,” Fifth Foresight Conference on Molecular Nanotechnology, 1997; http://www.foresight.org/Conferences/MNT05/Abstracts/Walcabst.html.

Stephen P. Walch, Ralph C. Merkle, “Theoretical studies of diamond mechanosynthesis reactions,” Nanotechnology 9(1998):285-296.

Fedor N. Dzegilenko, Deepak Srivastava, Subhash Saini, “Simulations of carbon nanotube tip assisted mechano-chemical reactions on a diamond surface,” Nanotechnology 9(December 1998):325-330.

J.W. Lyding, K. Hess, G.C. Abeln, D.S. Thompson, J.S. Moore, M.C. Hersam, E.T. Foley, J. Lee, Z. Chen, S.T. Hwang, H. Choi, P.H. Avouris, I.C. Kizilyalli, “UHV-STM nanofabrication and hydrogen/deuterium desorption from silicon surfaces: implications for CMOS technology,” Appl. Surf. Sci. 130(1998):221-230.

E.T. Foley, A.F. Kam, J.W. Lyding, P.H. Avouris, “Cryogenic UHV-STM study of hydrogen and deuterium desorption from Si(100),” Phys. Rev. Lett. 80 (1998): 1336-1339.

M.C. Hersam, G.C. Abeln, J.W. Lyding, “An approach for efficiently locating and electrically contacting nanostructures fabricated via UHV-STM lithography on Si(100),” Microelectronic Engineering 47(1999):235-.

L.J. Lauhon, W. Ho, “Inducing and observing the abstraction of a single hydrogen atom in bimolecular reaction with a scanning tunneling microscope,” J. Phys. Chem. 105(2000):3987-3992.

Ralph C. Merkle, Robert A. Freitas Jr., “Theoretical analysis of a carbon-carbon dimer placement tool for diamond mechanosynthesis,” J. Nanosci. Nanotechnol. 3(August 2003):319-324. http://www.rfreitas.com/Nano/JNNDimerTool.pdf

Jingping Peng, Robert A. Freitas Jr., Ralph C. Merkle, “Theoretical Analysis of Diamond Mechanosynthesis. Part I. Stability of C2 Mediated Growth of Nanocrystalline Diamond C(110) Surface,” J. Comp. Theor. Nanosci. 1(March 2004). In press.

David J. Mann, Jingping Peng, Robert A. Freitas Jr., Ralph C. Merkle, “Theoretical Analysis of Diamond Mechanosynthesis. Part II. C2 Mediated Growth of Diamond C(110) Surface via Si/Ge-Triadamantane Dimer Placement Tools,” J. Comp. Theor. Nanosci. 1(March 2004). In press.

© 2003 KurzweilAI.net