Transcending Moore’s Law with Molecular Electronics and Nanotechnology

September 27, 2004 by Steve T. Jurvetson

While the future is becoming more difficult to predict with each passing year, we should expect an accelerating pace of technological change. Nanotechnology is the next great technology wave and the next phase of Moore’s Law. Nanotech innovations enable myriad disruptive businesses that were not possible before, driven by entrepreneurship.

Much of our future context will be defined by the accelerating proliferation of information technology as it innervates society and begins to subsume matter into code. It is a period of exponential growth in the impact of the learning-doing cycle where the power of biology, IT and nanotech compounds the advances in each formerly discrete domain.

Originally published in Nanotechnology Law & Business, March 2004. Published on KurzweilAI.net September 27, 2004.

The history of technology is one of disruption and exponential growth, epitomized in Moore’s law, and generalized to many basic technological capabilities that are compounding independently from the economy. More than a niche subject of interest only to chip designers, the continued march of Moore’s Law will affect all of the sciences, just as nanotech will affect all industries. Thinking about Moore’s Law in the abstract provides a framework for predicting the future of computation and the transition to a new substrate: molecular electronics. An analysis of progress in molecular electronics provides a detailed example of the commercialization challenges and opportunities common to many nanotechnologies.

Introduction to Technology Exponentials:

Despite a natural human tendency to presume linearity, accelerating change from positive feedback is a common pattern in technology and evolution. We are now crossing a threshold where the pace of disruptive shifts is no longer inter-generational and begins to have a meaningful impact over the span of careers and eventually product cycles.

As early stage VCs, we look for disruptive businesses run by entrepreneurs who want to change the world. To be successful, we have to identify technology waves early and act upon those beliefs. At DFJ, we believe that nanotech is the next great technology wave, the nexus of scientific innovation that revolutionizes most industries and indirectly affects the fabric of society. Historians will look back on the upcoming epoch with no less portent than the Industrial Revolution.

The aforementioned are some long-term trends. Today, from a seed-stage venture capitalist perspective (with a broad sampling of the entrepreneurial pool), we are seeing more innovation than ever before. And we are investing in more new companies than ever before.

In the medium term, disruptive technological progress is relatively decoupled from economic cycles. For example, for the past 40 years in the semiconductor industry, Moore’s Law has not wavered in the face of dramatic economic cycles. Ray Kurzweil’s abstraction of Moore’s Law (from transistor-centricity to computational capability and storage capacity) shows an uninterrupted exponential curve for over 100 years, again without perturbation during the Great Depression or the World Wars. Similar exponentials can be seen in Internet connectivity, medical imaging resolution, genes mapped and solved 3D protein structures. In each case, the level of analysis is not products or companies, but basic technological capabilities.

In his forthcoming book, Kurzweil summarizes the exponentiation of our technological capabilities, and our evolution, with the near-term shorthand: the next 20 years of technological progress will be equivalent to the entire 20th century. For most of us, who do not recall what life was like one hundred years ago, the metaphor is a bit abstract. In 1900, in the U.S., there were only 144 miles of paved road, and most Americans (94%+) were born at home, without a telephone, and never graduated high school. Most (86%+) did not have a bathtub at home or reliable access to electricity. Consider how much technology-driven change has compounded over the past century, and consider that an equivalent amount of progress will occur in one human generation, by 2020. It boggles the mind, until one dwells on genetics, nanotechnology, and their intersection. Exponential progress perpetually pierces the linear presumptions of our intuition. “Future Shock” is no longer on an inter-generational time-scale.

The history of humanity is that we use our tools and our knowledge to build better tools and expand the bounds of our learning. We are entering an era of exponential growth in our capabilities in biotech, molecular engineering and computing. The cross-fertilization of these formerly discrete domains compounds our rate of learning and our engineering capabilities across the spectrum. With the digitization of biology and matter, technologists from myriad backgrounds can decode and engage the information systems of biology as never before. And this inspires new approaches to bottom-up manufacturing, self-assembly, and layered complex systems development.

Moore’s Law:

Moore’s Law is commonly reported as a doubling of transistor density every 18 months. But this is not something the co-founder of Intel, Gordon Moore, has ever said. It is a blending of his two predictions: in 1965, he predicted an annual doubling of transistor counts in the most cost-effective chip, and in 1975 he revised that to a doubling every 24 months. With a little hand waving, most reports attribute 18 months to Moore’s Law, but there is quite a bit of variability. The popular perception of Moore’s Law is that computer chips are compounding in their complexity at near constant per unit cost. This is one of the many abstractions of Moore’s Law, and it relates to the compounding of transistor density in two dimensions. Others relate to speed (the signals have less distance to travel) and computational power (speed x density).
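To see how much the quoted doubling period matters, consider a back-of-the-envelope sketch (the periods below are simply the figures quoted above):

```python
# Back-of-the-envelope compounding: growth over `years` if a capability
# doubles every `doubling_months` months.
def growth_factor(years: float, doubling_months: float) -> float:
    doublings = years * 12 / doubling_months
    return 2 ** doublings

for months in (12, 18, 24):
    print(f"doubling every {months} months -> ~{growth_factor(10, months):,.0f}x per decade")

# doubling every 12 months -> ~1,024x per decade
# doubling every 18 months -> ~102x per decade
# doubling every 24 months -> ~32x per decade
```

Over a single decade, the 12-, 18-, and 24-month readings already diverge by more than an order of magnitude, which is why the abstraction chosen matters more than the slogan.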

So as to not miss the long-term trend while sorting out the details, we will focus on the 100-year abstraction of Moore’s Law below. But we should digress for a moment to underscore the importance of continued progress in Moore’s law to a broad set of industries.

Importance of Moore’s Law:

Moore’s Law drives chips, communications and computers and has become the primary driver in drug discovery and bioinformatics, medical imaging and diagnostics. Over time, the lab sciences become information sciences, modeled on a computer rather than trial and error experimentation.

NASA Ames shut down their wind tunnels this year. As Moore’s Law provided enough computational power to model turbulence and airflow, there was no longer a need to test iterative physical design variations of aircraft in the wind tunnels, and the pace of innovative design exploration dramatically accelerated.

Eli Lilly processed 100x fewer molecules this year than they did 15 years ago. But their annual productivity in drug discovery did not drop proportionately; it went up over the same period. “Fewer atoms and more bits” is their coda.

Accurate simulation demands computational power, and once a sufficient threshold has been crossed, simulation acts as an innovation accelerant over physical experimentation. Many more questions can be answered per day.

Recent accuracy thresholds have been crossed in diverse areas, such as modeling the weather (predicting a thunderstorm six hours in advance) and automobile collisions (a relief for the crash test dummies), and the thresholds have yet to be crossed for many areas, such as protein folding dynamics.

Long Term Abstraction of Moore’s Law:

Unless you work for a chip company and focus on fab-yield optimization, you do not care about transistor counts. Integrated circuit customers do not buy transistors. Consumers of technology purchase computational speed and data storage density. When recast in these terms, Moore’s Law is no longer a transistor-centric metric, and this abstraction allows for longer-term analysis.

The exponential curve of Moore’s Law extends smoothly back in time for over 100 years, long before the invention of the semiconductor. Through five paradigm shifts, such as electro-mechanical calculators and vacuum tube computers, the computational power that $1000 buys has doubled every two years. For the past 30 years, it has been doubling every year.

Each horizontal line on this logarithmic graph represents a 100x improvement. A straight diagonal line would be an exponential, or geometrically compounding, curve of progress. Kurzweil plots a slightly upward curving line: a double exponential.

Each dot represents a human drama. They did not realize that they were on a predictive curve. Each dot represents an attempt to build the best computer with the tools of the day. Of course, we use these computers to make better design software and manufacturing control algorithms. And so the progress continues.

One machine was used in the 1890 Census; one cracked the Nazi Enigma cipher in World War II; one predicted Eisenhower’s win in the Presidential election. And there is the Apple ][, and the Cray 1, and just to make sure the curve had not petered out recently, I looked up the cheapest PC available for sale on Wal*Mart.com, and that is the green dot that I have added to the upper right corner of the graph.

And notice the relative immunity to economic cycles. The Great Depression and the World Wars and various recessions do not introduce a meaningful delay in the progress of Moore’s Law. Certainly, the adoption rates, revenue, profits and inventory levels of the computer companies behind the various dots on the graph may go through wild oscillations, but the long-term trend emerges nevertheless.

Any one technology, such as the CMOS transistor, follows an elongated S-shaped curve of slow progress during initial development, upward progress during a rapid adoption phase, and then slower growth from market saturation over time. But a more generalized capability, such as computation, storage, or bandwidth, tends to follow a pure exponential, bridging across a variety of technologies and their cascade of S-curves.
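A toy model makes the bridging concrete. The sketch below assumes, purely for illustration, that a new substrate arrives every ten years with roughly ten times the capability ceiling of its predecessor; no single S-curve is exponential, yet the envelope of the cascade is:

```python
import math

def logistic(t: float, t_mid: float, ceiling: float, rate: float = 1.0) -> float:
    """One technology's S-curve: slow start, rapid adoption, saturation at `ceiling`."""
    return ceiling / (1 + math.exp(-rate * (t - t_mid)))

def capability(t: float) -> float:
    # Hypothetical cascade: substrate k matures around year 10k + 5 with ceiling 10^k.
    return max(logistic(t, t_mid=10 * k + 5, ceiling=10 ** k) for k in range(6))

for year in range(0, 60, 10):
    print(f"year {year:2d}: best available capability ~ {capability(year):,.2f}")
# The envelope climbs roughly 10x per decade: an exponential built from S-curves.
```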

If history is any guide, Moore’s Law will continue on and will jump to a different substrate than CMOS silicon. It has done so five times in the past, and will need to again in the future.

Problems With the Current Paradigm:

Intel co-founder Gordon Moore has chuckled at those who have predicted the imminent demise of Moore’s Law in decades past. But the traditional semiconductor chip is finally approaching some fundamental physical limits. Moore recently admitted that Moore’s Law, in its current form, with CMOS silicon, will run out of gas in 2017.

One of the problems is that the chips are getting very hot. The following graph of power density is also on a logarithmic scale:

This provides the impetus for chip cooling companies, like Nanocoolers, to provide a breakthrough solution for removing 100 Watts per square centimeter. In the long term, the paradigm has to change.

Another physical limit is the atomic limit: the indivisibility of atoms. Intel’s current gate oxide is 1.2nm thick. Intel’s 45nm process is expected to have a gate oxide that is only 3 atoms thick. It is hard to imagine many more doublings from there, even with further innovation in insulating materials. Intel has recently announced a breakthrough in a nano-structured gate oxide (high k dielectric) and metal contact materials that should enable the 45nm node to come on line in 2007. None of the industry participants has a CMOS roadmap for the next 50 years.

A major issue with thin gate oxides, and one that will also come to the fore with high-k dielectrics, is quantum mechanical tunneling. As the oxide becomes thinner, the gate current can approach and even exceed the channel current so that the transistor cannot be controlled by the gate.

Another problem is the escalating cost of a semiconductor fab plant, which is doubling every three years, a phenomenon dubbed Moore’s Second Law. Human ingenuity keeps shrinking the CMOS transistor, but with increasingly expensive manufacturing facilities, currently $3 billion per fab.

A large component of fab cost is the lithography equipment that patterns the wafers with successive sub-micron layers. Nanoimprint lithography from companies like Molecular Imprints can dramatically lower cost and leave room for further improvement from the field of molecular electronics.

We have been investing in a variety of companies, such as Coatue, D-Wave, FlexICs, Nantero, and ZettaCore that are working on the next paradigm shift to extend Moore’s Law beyond 2017. One near term extension to Moore’s Law focuses on the cost side of the equation. Imagine rolls of wallpaper embedded with inexpensive transistors. FlexICs deposits traditional transistors at room temperature on plastic, a much cheaper bulk process than growing and cutting crystalline silicon ingots.

Molecular Electronics:

The primary contender for the post-silicon computation paradigm is molecular electronics, a nano-scale alternative to the CMOS transistor. Eventually, molecular switches will revolutionize computation by scaling into the third dimension, overcoming the planar deposition limitations of CMOS. Initially, they will substitute for the transistor bottleneck on an otherwise standard silicon process with standard external I/O interfaces.

For example, Nantero employs carbon nanotubes suspended above metal electrodes on silicon to create high-density nonvolatile memory chips (the weak Van der Waals bond can hold a deflected tube in place indefinitely with no power drain). Carbon nanotubes are small (~10 atoms wide), 30x stronger than steel at 1/6 the weight, and perform the functions of wires, capacitors and transistors with better speed, power, density and cost. Cheap nonvolatile memory enables important advances, such as “instant-on” PCs.

Other companies, such as Hewlett Packard and ZettaCore, are combining organic chemistry with a silicon substrate to create memory elements that self-assemble using chemical bonds that form along pre-patterned regions of exposed silicon.

There are several reasons why molecular electronics is the next paradigm for Moore’s Law:

Size: Molecular electronics has the potential to dramatically extend the miniaturization that has driven the density and speed advantages of the integrated circuit (IC) phase of Moore’s Law. In 2002, using a scanning tunneling microscope (STM) to manipulate individual carbon monoxide molecules, IBM built a 3-input sorter by arranging those molecules precisely on a copper surface. It is 260,000x smaller than the equivalent circuit built in the most modern chip plant.

For a memorable sense of the difference in scale, consider a single drop of water. There are more molecules in a single drop of water than all transistors ever built. Think of the transistors in every memory chip and every processor ever built: there are about 100x more molecules in a drop of water. Sure, water molecules are small, but an important part of the comparison depends on the 3D volume of a drop. Every IC, in contrast, is a thin veneer of computation on a thick and inert substrate.
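A rough order-of-magnitude check supports the comparison (the drop volume and the cumulative transistor count below are illustrative assumptions, not sourced figures):

```python
AVOGADRO = 6.022e23            # molecules per mole
DROP_VOLUME_ML = 0.05          # assume a ~1/20 mL drop of water
WATER_MOLAR_MASS_G = 18.0      # g/mol, with density ~1 g/mL

molecules_in_drop = (DROP_VOLUME_ML / WATER_MOLAR_MASS_G) * AVOGADRO

# Illustrative guess at cumulative transistors manufactured by the early 2000s:
transistors_ever_built = 1e19  # assumption chosen for scale, not a sourced count

print(f"molecules in one drop : {molecules_in_drop:.1e}")   # ~1.7e21
print(f"transistors (assumed) : {transistors_ever_built:.0e}")
print(f"ratio                 : ~{molecules_in_drop / transistors_ever_built:.0f}x")
```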

Power: One of the reasons that transistors are not stacked into 3D volumes today is that the silicon would melt. The inefficiency of the modern transistor is staggering. It is much less efficient at its task than the internal combustion engine. The brain provides an existence proof of what is possible; it is 100 million times more efficient in power/calculation than our best processors. Sure it is slow (under a kHz) but it is massively interconnected (with 100 trillion synapses between 60 billion neurons), and it is folded into a 3D volume. Power per calculation will dominate clock speed as the metric of merit for the future of computation.
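The efficiency figure can be sanity-checked with a hedged estimate; every input below is a rough assumption, and the ratio lands in the 10^7 to 10^8 range depending on what one counts as an operation:

```python
# All inputs are rough assumptions for an order-of-magnitude comparison only.
brain_power_w = 20.0        # approximate power draw of the human brain
brain_ops_per_s = 1e16      # assumed synaptic events per second
cpu_power_w = 100.0         # typical 2004-era desktop processor
cpu_ops_per_s = 3e9         # assumed instructions per second

brain_j_per_op = brain_power_w / brain_ops_per_s
cpu_j_per_op = cpu_power_w / cpu_ops_per_s

print(f"brain : {brain_j_per_op:.1e} J per operation")
print(f"cpu   : {cpu_j_per_op:.1e} J per operation")
print(f"ratio : ~{cpu_j_per_op / brain_j_per_op:.0e}x")   # order of magnitude ~10^7
```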

Manufacturing Cost: Many of the molecular electronics designs use simple spin coating or molecular self-assembly of organic compounds. The process complexity is embodied in the synthesized molecular structures, and so they can literally be splashed on to a prepared silicon wafer. The complexity is not in the deposition or the manufacturing process or the systems engineering. Much of the conceptual difference of nanotech products derives from a biological metaphor: complexity builds from the bottom up and pivots about conformational changes, weak bonds, and surfaces. It is not engineered from the top with precise manipulation and static placement.

Low Temperature Manufacturing: Biology does not tend to assemble complexity at 1000 degrees in a high vacuum. It tends to be room temperature or body temperature. In a manufacturing domain, this opens the possibility of cheap plastic substrates instead of expensive silicon ingots.

Elegance: In addition to these advantages, some of the molecular electronics approaches offer elegant solutions to non-volatile and inherently digital storage. We go through unnatural acts with CMOS silicon to get an inherently analog and leaky medium to approximate a digital and non-volatile abstraction that we depend on for our design methodology. Many of the molecular electronic approaches are inherently digital, and some are inherently non-volatile.

Other research projects, from quantum computing to using DNA as a structural material for directed assembly of carbon nanotubes, have one thing in common: they are all nanotechnology.

Why the term “Nanotechnology”?

Nanotech is often defined as the manipulation and control of matter at the nanometer scale (critical dimensions of 1-100nm). It is a bit unusual to describe a technology by a length scale. We certainly didn’t get very excited by “inch-o-technology.” As venture capitalists, we start to get interested when there are unique properties of matter that emerge at the nanoscale, and that are not exploitable at the macroscale world of today’s engineered products. We like to ask the startups that we are investing in: “Why now? Why couldn’t you have started this business ten years ago?” Our portfolio of nanotech startups has a common thread in its response to this question: recent developments in the capacity to understand and engineer nanoscale materials have enabled new products that could not have been developed at larger scale.

There are various unique properties of matter that are expressed at the nanoscale and are quite foreign to our “bulk statistical” senses (we do not see single photons or quanta of electric charge; we feel bulk phenomena, like friction, at the statistical or emergent macroscale). At the nanoscale, the bulk approximations of Newtonian physics are revealed for their inaccuracy, and give way to quantum physics. Nanotechnology is more than a linear improvement with scale; everything changes. Quantum entanglement, tunneling, ballistic transport, frictionless rotation of superfluids, and several other phenomena have been regarded as “spooky” by many of the smartest scientists, even Einstein, upon first exposure.

For a simple example of nanotech’s discontinuous divergence from the “bulk” sciences, consider the simple aluminum Coke can. If you take the inert aluminum metal in that can and grind it down into a powder of 20-30nm particles, it will spontaneously explode in air. It becomes a rocket fuel catalyst. The energetic properties of matter change at that scale. The surface area to volume ratios become relevant, and even the inter-atomic distances in a metal lattice change from surface effects.

Innovation from the Edge:

Disruptive innovation, the driver of growth and renewal, occurs at the edge. In startups, innovation occurs out of the mainstream, away from the warmth of the herd. In biological evolution, innovative mutations take hold at the physical edge of the population, at the edge of survival. In complexity theory, structure and complexity emerge at the edge of chaos, the dividing line between predictable regularity and chaotic indeterminacy. And in science, meaningful disruptive innovation occurs at the inter-disciplinary interstices between formal academic disciplines.

Herein lies much of the excitement about nanotechnology: in the richness of human communication about science. Nanotech exposes the core areas of overlap in the fundamental sciences, the place where quantum physics and quantum chemistry can cross-pollinate with ideas from the life sciences.

Over time, each of the academic disciplines develops its own proprietary systems vernacular that isolates it from neighboring disciplines. Nanoscale science requires scientists to cut across the scientific languages to unite the isolated islands of innovation.

Nanotech is the nexus of the sciences.

In academic centers and government labs, nanotech is fostering new conversations. At Stanford, Duke and many other schools, the new nanotech buildings are physically located at the symbolic hub of the schools of engineering, computer science and medicine.

Nanotech is the nexus of the sciences, but outside of the science and research itself, the nanotech umbrella conveys no business synergy whatsoever. The marketing, distribution and sales of a nanotech solar cell, memory chip or drug delivery capsule will be completely different from each other, and will present few opportunities for common learning or synergy.

Market Timing:

As an umbrella term for a myriad of technologies spanning multiple industries, nanotech will eventually disrupt these industries over different time frames, but most are long-term opportunities. Electronics, energy, drug delivery and materials are areas of active nanotech research today. Medicine and bulk manufacturing are future opportunities. The NSF predicts that nanotech will have a trillion dollar impact on various industries inside of 15 years.

Of course, if one thinks far enough in the future, every industry will eventually be revolutionized by a fundamental capability for molecular manufacturing, from the inorganic structures to the organic and even the biological. Analog manufacturing becomes digital, engendering a profound restructuring of the substrate of the physical world.

The science futurism and predictions of potential nanotech products have a near-term benefit. They help attract some of the best and brightest scientists to work on hard problems that are stepping-stones to the future vision. Scientists relish exploring the frontier of the unknown, and nanotech embodies the inner frontier.

Given that much of the abstract potential of nanotech is a question of “when” not “if”, the challenge for the venture capitalist is one of market timing. When should we be investing, and in which sub-sectors? It is as if we need to pull the sea of possibilities through an intellectual chromatograph to tease apart the various segments into a timeline of probable progression. That is an ongoing process of data collection (e.g., the growing pool of business plan submissions), business and technology analysis, and intuition.

Two touchstone events for the scientific enthusiasm for the timing of nanotech were the decoding of the human genome and the dazzling visual images from the Scanning Tunneling Microscope (e.g., the arrangement of individual Xenon atoms into the IBM logo). They represent the digitization of biology and matter, symbolic milestones for accelerated learning and simulation-driven innovation.

And more recently, nanotech publication has proliferated, much like the early days of the Internet. Besides the popular press, the number of scientific publications on nanotech has grown 10x in the past ten years. According to the U.S. Patent Office, the number of nanotech patents granted each year has skyrocketed 3x in the past seven years. Ripe with symbolism, IBM has more lawyers than engineers working on nanotech.

With the recent codification of the National Nanotech Initiative into law, federal funding will continue to fill the pipeline of nanotech research. With $847 million earmarked for 2004, nanotech was a rarity in the tight budget process; it received more funding than was requested. And now nanotech is second only to the space race for federal funding of science. And the U.S. is not alone in funding nanotechnology. Unlike many previous technological areas, we aren’t even in the lead. Japan outspends the U.S. each year on nanotech research. In 2003, the U.S. government spending was one fourth of the world total.

Federal funding is the seed corn for nanotech entrepreneurship. All of our nanotech portfolio companies are spin-offs (with negotiated IP transfers) from universities or government labs, and all got their start with federal funding. Often these companies need specialized equipment and expensive laboratories to do the early tinkering that will germinate a new breakthrough. These are typically lacking in the proverbial garage of the entrepreneur at home.

And corporate investors have discovered a keen interest in nanotechnology, with internal R&D, external investments in startups, and acquisitions of promising companies, such as AMD’s recent acquisition of the molecular electronics company Coatue.

Despite all of this excitement, there are a fair number of investment dead-ends, and so we continue to refine the filters we use in selecting companies to back. Every entrepreneur wants to present their business as fitting an appropriate timeline to commercialization. How can we guide our intuition on which of these entrepreneurs are right?

The Vertical Integration Question:

Nanotech involves the reengineering of the lowest level physical layer of a system, and so a natural business question arises: How far forward do you need to vertically integrate before you can sell a product on the open market? For example, in molecular electronics, if you can ship a DRAM-compatible chip, you have found a horizontal layer of standardization, and further vertical integration is not necessary. If you have an incompatible 3D memory block, you may have to vertically integrate to the storage subsystem level, or further, to bring product to market. That may require industry partnerships, and will, in general, take more time and money as change is introduced farther up the product stack. 3D logic with massive interconnectivity may require a new computer design and a new form of software; this would take the longest to commercialize. And most startups on this end of the spectrum would seek partnerships to bring their vision to market. The success and timeliness of that endeavor will depend on many factors, including IP protection, the magnitude of improvement, the vertical tier at which that value is recognized, the number of potential partners, and the degree of tooling and other industry accommodations.

Product development timelines are impacted by the cycle time of the R&D feedback loop. For example, outdoor lifetime testing for organic LEDs will take longer than in silico simulation spins of digital products. If the product requires partners in the R&D loop or multiple nested tiers of testing, it will take longer to commercialize.

The “Interface Problem”:

As we think about the startup opportunities in nanotechnology, an uncertain financial environment underscores the importance of market timing and revenue opportunities over the next five years. Of the various paths to nanotech, which are 20-year quests in search of a government grant, and which are market-driven businesses that will attract venture capital? Are there co-factors of production that require a whole industry to be in place before a company ships product?

As a thought experiment, imagine that I could hand you today any nanotech marvel of your design: a molecular machine as advanced as you would like. What would it be? A supercomputer? A bloodstream submarine? A matter compiler capable of producing diamond rods or arbitrary physical objects? Pick something.

Now, imagine some of the complexities: Did it blow off my hand as I offer it to you? Can it autonomously move to its intended destination? What is its energy source? How do you communicate with it?

These questions draw the “interface problem” into sharp focus: Does your design require an entire nanotech industry to support, power, and “interface” to your molecular machine? As an analogy, imagine that you have one of the latest Pentium processors out of Intel’s wafer fab. How would you make use of the Pentium chip? You then need to wire-bond the chip to a larger lead frame in a package that connects to a larger printed circuit board, fed by a bulky power supply that connects to the electrical power grid. Each of these successive layers relies on the larger-scale precursors from above (which were developed in reverse chronological order), and the entire hierarchy is needed to access the potential of the microchip.

For molecular nanotech, where is the scaling hierarchy?

Today’s business-driven paths to nanotech diverge into two strategies to cross the “interface” chasm: the biologically inspired bottom-up path, and the top-down approach of the semiconductor industry. The non-biological MEMS developers are addressing current markets in the micro-world while pursuing an ever-shrinking spiral of miniaturization that builds the relevant infrastructure tiers along the way. Not surprisingly, this is very similar to the path that has been followed in the semiconductor industry, and many of its adherents see nanotech as inevitable, but in the distant future.

On the other hand, biological manipulation presents myriad opportunities to effect great change in the near-term. Drug development, tissue engineering, and genetic engineering are all powerfully impacted by the molecular manipulation capabilities available to us today. And genetically modified microbes, whether by artificial evolution or directed gene splicing, give researchers the ability to build structures from the bottom up.

The Top Down “Chip Path”:

This path is consonant with the original vision of physicist Richard Feynman (in his 1959 lecture at Caltech) of the iterative miniaturization of our tools down to the nano scale. Some companies, like Zyvex, are pursuing the gradual shrinking of semiconductor manufacturing technology from the micro-electro-mechanical systems (MEMS) of today into the nanometer domain of NEMS. SiWave engineers and manufactures MEMS structures with applications in the consumer electronics, biomedical and communications markets. These precision mechanical devices are built utilizing a customized semiconductor fab.

MEMS technologies have already revolutionized the automotive industry with airbag sensors and the printing sector with ink jet nozzles, and are on track to do the same in medical devices, photonic switches for communications and mobile phones. In-Stat/MDR forecasts that the $4.7 billion of MEMS revenue in 2003 will grow to $8.3 billion by 2007. But progress is constrained by the pace (and cost) of the semiconductor equipment industry, and by the long turnaround time for fab runs. Microfabrica in Torrance, CA, is seeking to overcome these limitations to expand the market for MEMS to 3D structures in more materials than just silicon and with rapid turnaround times.

Many of the nanotech advances in storage, semiconductors and molecular electronics can be improved, or in some cases enabled, by tools that allow for the manipulation of matter at the nanoscale. Here are three examples:

• Nanolithography

Molecular Imprints is commercializing a unique imprint lithographic technology developed at the University of Texas at Austin. The technology uses photo-curable liquids and etched quartz plates to dramatically reduce the cost of nanoscale lithography. This lithography approach, recently added to the ITRS Roadmap, has special advantages for applications in the areas of nano-devices, MEMS, microfluidics, optical components and devices, as well as molecular electronics.

• Optical Traps

Arryx has developed a breakthrough in nano-material manipulation. They generate hundreds of independently controllable laser tweezers that can manipulate molecular objects in 3D (move, rotate, cut, place), all from one laser source passing through an adaptive hologram. The applications span from cell sorting, to carbon nanotube placement, to continuous material handling. They can even manipulate the organelles inside an unruptured living cell (and weigh the DNA in the nucleus).

• Metrology

Imago’s LEAP atom probe microscope is being used by the chip and disk drive industries to produce 3D pictures that depict both chemistry and structure of items on an atom-by-atom basis.  Unlike traditional microscopes, which zoom in to see an item on a microscopic level, Imago’s nanoscope analyzes structures, one atom at a time, and "zooms out" as it digitally reconstructs the item of interest at a rate of millions of atoms per minute.  This creates an unprecedented level of visibility and information at the atomic level.

Advances in nanoscale tools help us control and analyze matter more precisely, which in turn, allows us to produce better tools.

To summarize, the top-down path is designed and engineered with:

• Semiconductor industry adjacencies (with the benefits of market extensions and revenue along the way and the limitation of planar manufacturing techniques)

• Interfaces of scale inherited from the top

The Biological Bottom Up Path:

In contrast to the top-down path, the biological bottom up archetype is:

• Grown via replication, evolution, and self assembly in a 3D, fluid medium

• Constrained at interfaces to the inorganic world

• Limited by learning and theory gaps (in systems biology, complexity theory and the pruning rules of emergence)

• Bootstrapped by a powerful pre-existing hierarchy of interpreters of digital molecular code.

To elaborate on this last point, the ribosome takes digital instructions in the form of mRNA and manufactures almost everything we care about in our bodies from a sequential concatenation of amino acids into proteins. The ribosome is a wonderful existence proof of the power and robustness of a molecular machine. It is roughly 20nm on a side and consists of only 99 thousand atoms. Biological systems are replicating machines that parse molecular code (DNA) and a variety of feedback to grow macro-scale beings. These highly evolved systems can be hijacked and reprogrammed to great effect.

So how does this help with the development of molecular electronics or nanotech manufacturing? The biological bootstrap provides a more immediate path to nanotech futures. Biology provides us with a library of pre-built components and subsystems that can be repurposed and reused, and scientists in various labs are well underway in re-engineering the information systems of biology.

For example, researchers at NASA Ames are taking self-assembling heat shock proteins from thermophiles and genetically modifying them so that they will deposit a regular array of electrodes with a 17nm spacing. This could be useful for patterned magnetic media in the disk drive industry or electrodes in a polymer solar cell.

At MIT, researchers are using accelerated artificial evolution to rapidly breed M13 bacteriophage to infect bacteria in such a way that they bind and organize semiconducting materials with molecular precision.

At IBEA, Craig Venter and Hamilton Smith are leading the Minimal Genome Project. They take the Mycoplasma genitalium from the human urogenital tract, and strip out 200 unnecessary genes, thereby creating the simplest organism that can self-replicate. Then they plan to layer new functionality on to this artificial genome, such as the ability to generate hydrogen from water using the sun’s energy for photonic hydrolysis.

The limiting factor is our understanding of these complex systems, but our pace of learning has been compounding exponentially. We will learn more about genetics and the origins of disease in the next 10 years than we have in all of human history. And for the minimal genome microbes, the possibility of understanding the entire proteome and metabolic pathways seems tantalizingly close to achievable. These simpler organisms have a simple “one gene: one protein” mapping, and lack the nested loops of feedback that make the human genetic code so rich.

Hybrid Molecular Electronics Example:

In the near term, there are myriad companies who are leveraging the power of organic self-assembly (bottom up) and the market interface advantages of top down design. The top down substrate constrains the domain of self-assembly.

Based in Denver, ZettaCore builds molecular memories from energetically elegant molecules that are similar to chlorophyll. ZettaCore’s synthetic organic porphyrin molecule self-assembles on exposed silicon. These molecules, called multiporphyrin nanostructures, can be oxidized and reduced (electrons removed or replaced) in a way that is stable, reproducible, and reversible. In this way, the molecules can be used as a reliable storage medium for electronic devices. Furthermore, the molecules can be engineered to store multiple bits of information and to maintain that information for relatively long periods of time before needing to be refreshed.

Recall the water drop to transistor count comparison, and realize that these multiporphyrins have already demonstrated up to eight stable digital states per molecule.

The technology has future potential to scale to 3D circuits with minimal power dissipation, but initially it will enhance the weakest element of an otherwise standard 2D memory chip. The ZettaCore memory chip looks like a standard memory chip to the end customer; nobody needs to know that it has “nano inside.” The I/O pads, sense amps, row decoders and wiring interconnect are produced with a standard semiconductor process. As a final manufacturing step, the molecules are splashed on the wafer where they self-assemble in the pre-defined regions of exposed metal.

From a business perspective, the hybrid product design allows an immediate market entry because the memory chip defines a standard product feature set, and the molecular electronics manufacturing process need not change any of the prior manufacturing steps. The inter-dependencies with the standard silicon manufacturing steps are also avoided given this late coupling; the fab can process wafers as they do now before spin coating the molecules. In contrast, new materials for gate oxides or metal interconnects can have a number of effects on other processing steps that need to be tested, which introduces delay (as was seen with copper interconnect).

For these reasons, ZettaCore is currently in the lead in the commercialization of molecular electronics, with a working megabit chip, technology tested to a trillion read/write cycles, and manufacturing partners. In a symbolic nod to the future, Intel co-founder Les Vadasz (badge #3), has just joined the Board of Directors of ZettaCore. He was formerly the design manager for the world’s first DRAM, EPROM and microprocessor.

Generalizing from the ZettaCore experience, the early revenue in molecular electronics will likely come from simple 1D structures such as chemical sensors and self-assembled 2D arrays on standard substrates, such as memory chips, sensor arrays, displays, CCDs for cameras and solar cells.

IP and business model:

Beyond product development timelines, the path to commercialization is dramatically impacted by the cost and scale of the manufacturing ramp. Partnerships with industry incumbents can be the accelerant or albatross for market entry.

The strength of the IP protection for nanotech relates to the business models that can be safely pursued. For example, if the composition of matter patents afford the nanotech startup the same degree of protection as a biotech startup, then a “biotech licensing model” may be possible in nanotech. For example, a molecular electronics company could partner with a large semiconductor company for manufacturing, sales and marketing, just as a biotech company partners with a big pharma partner for clinical trials, marketing, sales and distribution. In both cases, the cost to the big partner is on the order of $100 million, and the startup earns a royalty on future product sales.

Notice how the transaction costs and viability of this business model option pivots around the strength of IP protection. A software business, on the other end of the IP spectrum, would be very cautious about sharing their source code with Microsoft in the hopes of forming a partnership based on royalties.

Manufacturing partnerships are common in the semiconductor industry, with the “fabless” business model. This layering of the value chain separates the formerly integrated functions of product conceptualization, design, manufacturing, testing, and packaging. This has happened in the semiconductor industry because the capital cost of manufacturing is so large. The fabless model is a useful way for a small company with a good idea to bring its own product to market, but the company then has to face the issue of gaining access to its market and funding the development of marketing, distribution, and sales.

Having looked at the molecular electronics example in some depth, we can now move up the abstraction ladder to aggregates, complex systems, and the potential to advance the capabilities of Moore’s Law in software.

Systems, Software, and other Abstractions:

Unlike memory chips, which have a regular array of elements, processors and logic chips are limited by the rats’ nest of wires that span the chip on multiple layers. The bottleneck in logic chip design is not raw numbers of transistors, but a design approach that can utilize all of that capability in a timely fashion. For a solution, several next generation processor companies have redesigned “systems on silicon” with a distributed computing bent; wiring bottlenecks are localized, and chip designers can be more productive by using a high-level programming language, instead of wiring diagrams and logic gates. Chip design benefits from the abstraction hierarchy of computer science.

Compared to the relentless march of Moore’s Law, the cognitive capability of humans is relatively fixed. We have relied on the compounding power of our tools to achieve exponential progress. To take advantage of accelerating hardware power, we must further develop layers of abstraction in software to manage the underlying complexity. For the next 1000-fold improvement in computing, the imperative will shift to the growth of distributed complex systems. Our inspiration will likely come from biology.

As we race to interpret the now complete map of the human genome, and embark upon deciphering the proteome, the accelerating pace of learning is not only opening doors to the better diagnosis and treatment of disease, it is also a source of inspiration for much more powerful models for computer programming and complex systems development.

Biological Muse:

Many of the interesting software challenges relate to growing complex systems or have other biological metaphors as inspiration. Some of the interesting areas include: Biomimetics, Artificial Evolution, Genetic Algorithms, A-life, Emergence, IBM’s Autonomic Computing initiative, Viral Marketing, Mesh, Hives, Neural Networks and the Subsumption architecture in robotics. The Santa Fe Institute just launched a BioComp research initiative.

In short, biology inspires IT and IT drives biology.

But how inspirational are the information systems of biology? If we took your entire genetic code, the entire biological program that resulted in your cells, organs, body and mind, and burned it onto a CD, it would be smaller than Microsoft Office. Just as images and text can be stored digitally, two digital bits can encode each of the four DNA bases (A, T, C and G), resulting in a 750MB file that can be compressed further, given the preponderance of structural filler in the DNA chain.

If, as many scientists believe, most of the human genome consists of vestigial evolutionary remnants that serve no useful purpose, then we could compress it to 60MB of concentrated information. Having recently reinstalled Office, I am humbled by the comparison between its relatively simple capabilities and the wonder of human life. Much of the power in bio-processing comes from the use of non-linear fuzzy logic and feedback in the electrical, physical and chemical domains.
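The arithmetic behind those figures is simple enough to check directly; the 60MB estimate depends entirely on the assumed fraction of non-functional sequence:

```python
BASE_PAIRS = 3.0e9        # approximate length of the human genome
BITS_PER_BASE = 2         # A, T, C, G: two bits each

raw_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"raw genome size : {raw_bytes / 1e6:.0f} MB")   # ~750 MB

# Assumption: if ~92% of the sequence carries no useful information,
# the concentrated payload is roughly:
functional_fraction = 0.08
print(f"compressed size : {raw_bytes * functional_fraction / 1e6:.0f} MB")   # ~60 MB
```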

For example, in a fetus, the initial inter-neuronal connections, or "wiring" of the brain, follow chemical gradients. The massive number of inter-neuron connections in an adult brain could not be simply encoded in our DNA, even if the entire DNA sequence was dedicated to this one task. There are on the order of 100 trillion synaptic connections between 60 billion neurons in your brain.
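A quick worked check shows why explicit encoding is impossible; the per-connection cost below is an assumption, but the conclusion survives any reasonable choice:

```python
import math

NEURONS = 6e10            # ~60 billion neurons (figure from the text)
SYNAPSES = 1e14           # ~100 trillion connections (figure from the text)
GENOME_BITS = 3e9 * 2     # ~3 billion base pairs at two bits each

bits_per_synapse = math.log2(NEURONS)        # just to name the target neuron
bits_to_list_wiring = SYNAPSES * bits_per_synapse

print(f"bits to list the wiring : {bits_to_list_wiring:.1e}")   # ~3.6e15
print(f"bits in the genome      : {GENOME_BITS:.1e}")           # 6.0e9
print(f"shortfall               : ~{bits_to_list_wiring / GENOME_BITS:.0e}x")
```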

This incredibly complex system is not ‘installed’ like Microsoft Office from your DNA. It is grown, first through widespread connectivity sprouting from ‘static storms’ of positive electro-chemical feedback, and then through the pruning of many underused connections via continuous usage-based feedback. In fact, at the age of 2 to 3 years old, humans hit their peak with a quadrillion synaptic connections, and twice the energy burn of an adult brain.

The brain has already served as an inspirational model for artificial intelligence (AI) programmers. The neural network approach to AI involves the fully interconnected wiring of nodes, and then the iterative adjustment of the strength of these connections through numerous training exercises and the back-propagation of feedback through the system.

Moving beyond rules-based AI systems, these artificial neural networks are capable of many human-like tasks, such as speech and visual pattern recognition with a tolerance for noise and other errors. These systems shine precisely in the areas where traditional programming approaches fail.
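As a minimal sketch of the approach described above (a tiny network learning XOR, with layer sizes and learning rate chosen arbitrarily), fully connected weights are adjusted by back-propagating the error from repeated training passes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR, a classic task a single linear unit cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units; weights and biases randomly initialized.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass through the fully connected layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the network.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # converges toward [[0], [1], [1], [0]]
```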

The coding efficiency of our DNA extends beyond the leverage of numerous feedback loops to the complex interactions between genes. The regulatory genes produce proteins that respond to external or internal signals to regulate the activity of previously produced proteins or other genes. The result is a complex mesh of direct and indirect controls.

This nested complexity implies that genetic re-engineering can be a very tricky endeavor when we have only partial system-wide knowledge about the side effects of tweaking any one gene. For example, recent experiments show that genetically enhanced memory comes at the expense of enhanced sensitivity to pain.

By analogy, our genetic code is a dense network of nested hyperlinks, much like the evolving Web. Computer programmers already tap into the power and efficiency of indirect pointers and recursive loops. More recently, biological systems have inspired research in evolutionary programming, where computer programs are competitively grown in a simulated environment of natural selection and mutation. These efforts could transcend the local optimization inherent to natural evolution.
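A minimal sketch of the evolutionary-programming idea follows; the “organisms” here are just bit strings scored against a hypothetical target, whereas real systems evolve programs or designs. Selection plus mutation climbs toward a fitness peak without anyone specifying the solution directly:

```python
import random

random.seed(42)
TARGET = [1] * 32                      # hypothetical "ideal" genome
POP, GENERATIONS, MUTATION = 50, 60, 0.02

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if gen % 15 == 0:
        print(f"generation {gen:2d}: best fitness {fitness(population[0])}/{len(TARGET)}")
    survivors = population[: POP // 5]             # selection: keep the fittest 20%
    population = [mutate(random.choice(survivors)) for _ in range(POP)]

population.sort(key=fitness, reverse=True)
print("best individual fitness:", fitness(population[0]), "of", len(TARGET))
```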

But therein lies great complexity. We have little experience with the long-term effects of the artificial evolution of complex systems. Early subsystem work can determine emergent and higher-level capabilities, as with the neuron: witness the Cambrian explosion of structural complexity and intelligence in biological systems once the neuron enabled something other than nearest-neighbor inter-cellular communication. Prior to the neuron, most multi-cellular organisms were small blobs.

Recent breakthroughs in robotics were inspired by the "subsumption architecture" of biological evolution, using a layered approach to assembling reactive rules into complete control systems from the bottom up. The low-level reflexes are developed early on, and remain unchanged as complexity builds. Early subsystem work in any subsumptive system can have profound effects on its higher order constructs. We may not have a predictive model of these downstream effects as we are developing the architectural equivalent of the neuron.

The Web is the first distributed experiment in biological growth in technological systems. Peer-to-peer software development and the rise of low-cost Web-connected embedded systems raise the possibility that complex artificial systems will arise on the Internet, rather than on one programmer’s desktop. We already use biological metaphors, such as viral marketing, to describe the network economy.

Nanotech Accelerants: Quantum Simulation and High-Throughput Experimentation:

We have already discussed the migration of the lab sciences to the innovation cycles of the information sciences and Moore’s Law. Advances in multi-scale molecular modeling are helping some companies design complex molecular systems in silico. But the quantum effects that underlie the unique properties of nano-scale systems are a double-edged sword. Although scientists have known for nearly 100 years how to write down the equations that an engineer needs to solve in order to understand any quantum system, no computer has ever been built that is powerful enough to solve them. Even today’s most powerful supercomputers choke on systems bigger than a single water molecule.
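A simple counting sketch shows where classical machines hit the wall; the numbers below merely illustrate the exponential growth of the state space, since the full quantum state of n two-level particles requires 2^n complex amplitudes:

```python
# Memory needed to store the full state vector of n two-level quantum systems,
# at 16 bytes per complex amplitude (two 64-bit floats).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n: int) -> float:
    return (2 ** n) * BYTES_PER_AMPLITUDE

for n in (10, 30, 50, 100):
    print(f"{n:3d} two-level particles -> {state_vector_bytes(n) / 2**30:.3g} GiB")

# 10 particles  -> ~1.5e-05 GiB (trivial)
# 30 particles  -> 16 GiB       (a large server)
# 50 particles  -> ~1.7e+07 GiB (beyond any existing machine)
# 100 particles -> ~2e+22 GiB   (hopeless for classical hardware)
```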

This means that the behavior of nano-scale systems can only be reliably studied by empirical methods: building something in a lab, and poking and prodding it to see what happens.

This observation is distressing on several counts. We would like to design and visualize nano-scale products in the tradition of mechanical engineering, using CAD-like programs. Unfortunately this future can never be accurately realized using traditional computer architectures. The structures of interest to nano-scale scientists present intractable computational challenges to traditional computers.

The shortfall in our ability to use computers to shorten and cheapen the design cycles of nano-scale products has serious business ramifications. If the development of all nano-scale products fundamentally requires long R&D cycles and significant investment, the nascent nanotechnology industry will face many of the difficulties that the biotechnology industry faces, without having a parallel to the pharmaceutical industry to shepherd products to markets.

In a wonderful turn of poetic elegance, quantum mechanics itself turns out to be the solution to this quandary. Machines known as quantum computers, built to harness some simple properties of quantum systems, can perform accurate simulations of any nano-scale system of comparable complexity. The type of simulation that a quantum computer does results in an exact prediction of how a system will behave in nature, something that is literally impossible for any traditional computer, no matter how powerful.

Once quantum computers become available, engineers working at the nano-scale will be able to use them to model and design nano-scale systems just like today’s aerospace engineers model and design airplanes: completely virtually, with no wind tunnels (or their chemical analogues).

This may seem strange, but really it’s not. Think of it like this: conventional computers are really good at modeling conventional (that is, non-quantum) stuff, like automobiles and airplanes. Quantum computers are really good at modeling quantum stuff. Each type of computer speaks a different language.

Based in Vancouver, Canada, D-Wave is building a quantum computer using aluminum-based circuits. The company projects that by 2008 it will be building thumbnail-sized chips that, when applied to simulating the behavior and predicting the properties of nano-scale systems, will have more computing power than the aggregate of all computers ever built, highlighting the vast difference in capabilities between quantum and conventional computers. This would be of great value to the development of the nanotechnology industry. And it’s a jaw-dropping claim. Professor David Deutsch of Oxford summarized: “Quantum computers have the potential to solve problems that would take a classical computer longer than the age of the universe.”

While any physical experiment can be regarded as a complex computation, we will need quantum computers to transcend Moore’s law into the quantum domain to make this equivalence realizable. In the meantime, scientists will perform experiments. Until recently, the methods used for the discovery of new functional materials differed little from those used by scientists and engineers a hundred years ago. It was very much a manual, skilled labor-intensive process. One sample was prepared from millions of possibilities, then it was tested, the results recorded and the process repeated. Discoveries routinely took years.

Companies like Affymetrix, Intematix and Symyx have made major improvements in a new methodology: high throughput experimentation. For example, Intematix performs high throughput synthesis and screening of materials to produce and characterize these materials for a wide range of technology applications. This technology platform enables them to discover compound materials solutions more than one hundred times faster than conventional methods. Initial materials developed have application in wireless communications, fuel cells, batteries, x-ray imaging, semiconductors, LEDs, and phosphors.

Combinatorial materials discovery replaces the traditional method by generating a multitude of combinations, possibly all feasible combinations, of a set of raw materials simultaneously. This "Materials Library" contains all combinations of a set of materials, and they can be quickly tested in parallel by automated methods similar to those used in combinatorial chemistry and the pharmaceutical industry. What used to take years to develop now only takes months.
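The workflow can be sketched schematically; the component list, mixing ratios, and screening function below are hypothetical placeholders, and a real library is synthesized and measured rather than computed:

```python
from itertools import combinations
from concurrent.futures import ProcessPoolExecutor

# Hypothetical raw materials and mixing ratios; a real library is synthesized as a
# physical array on a substrate and screened by automated instruments.
COMPONENTS = ["A", "B", "C", "D", "E", "F"]
RATIOS = [0.25, 0.5, 0.75]

def screen(candidate):
    """Placeholder figure of merit; a real screen would measure the material."""
    (m1, m2), ratio = candidate
    return ((ord(m1) * 7 + ord(m2) * 13) % 17) * ratio

def build_library():
    # Every binary combination of components, at every ratio.
    return [(pair, r) for pair in combinations(COMPONENTS, 2) for r in RATIOS]

if __name__ == "__main__":
    candidates = build_library()
    with ProcessPoolExecutor() as pool:          # screen the whole library in parallel
        scores = list(pool.map(screen, candidates))
    best = max(zip(candidates, scores), key=lambda pair: pair[1])
    print(f"library size  : {len(candidates)} candidates")
    print(f"best candidate: {best[0]} with score {best[1]:.2f}")
```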

Timeline:

Given our discussion of the various factors affecting the commercialization of nanotechnologies, how do we see them sequencing?

• Early Revenue

– Tools and bulk materials (powders, composites). Several revenue stage and public companies already exist in this category.

– 1D chemical and biological sensors. Out of body medical sensors and diagnostics

– Larger MEMS-scale devices

• Medium Term

– 2D Nanoelectronics: memory, displays, solar cells

– Hierarchically-structured nanomaterials

– Hybrid Bio-nano, efficient energy storage and conversion

– Passive drug delivery & diagnostics, improved implantable medical devices

• Long Term

– 3D Nanoelectronics

– Nanomedicine, therapeutics, and artificial chromosomes

– Quantum computers used in small molecule design

– Machine-phase manufacturing

– The safest long-term prediction is that the most important nanotech developments will be the unforeseen opportunities, something that we could not predict today.

In the long term, nanotechnology research could ultimately enable miniaturization of a magnitude never before seen, and could restructure and digitize the basis of manufacturing, such that matter becomes code. Like the digitization of music, the importance is not just in the fidelity of reproduction, but in the decoupling of content from distribution. New opportunities arise once a product is digitized, such as online music swapping, transforming an industry.

With replicating molecular machines, physical production itself migrates to the rapid innovation cycle of information technology. With physical goods, the basis of manufacturing governs inventory planning and logistics, and the optimal distribution and retail supply chain has undergone little radical change for many decades. Flexible, low-cost manufacturing near the point of consumption could transform the physical goods economy, and even change our notion of ownership&#8212especially for infrequently used objects.

These are profound changes to the manufacturing of everything, and they will ripple through the fabric of society. The science futurists have pondered the implications of being able to manufacture anything for $1 per pound. And as some of these technologies couple tightly to our biology, they will draw into question the nature and extensibility of our humanity.

Genes, Memes and Digital Expression:

These changes may not be welcomed smoothly, especially with regard to reengineering the human germ line. At the societal level, we will likely try to curtail “genetic free speech” and the evolution of evolvability. Larry Lessig predicts that we will recapitulate the 200-year debate about the First Amendment to the Constitution. Pressures to curtail free genetic expression will focus on the dangers of “bad speech”, and others will argue that good genetic expression will crowd out the bad, as it did with memetic evolution (in the scientific method and the free exchange of ideas). Artificial chromosomes with adult trigger events can decouple the agency debate about parental control. And, with a touch of irony, China may lead the charge.

We subconsciously cling to the selfish notion that humanity is the endpoint of evolution. In the debates about machine intelligence and genetic enhancements, there is a common and deeply rooted fear of being surpassed in our lifetime. When framed as a question of parenthood (would you want your great grandchild to be smarter and healthier than you?), the emotion often shifts from a selfish sense of supremacy to a universal human search for symbolic immortality.

Summary:

While the future is becoming more difficult to predict with each passing year, we should expect an accelerating pace of technological change. We conclude that nanotechnology is the next great technology wave and the next phase of Moore’s Law. Nanotech innovations enable myriad disruptive businesses that were not possible before, driven by entrepreneurship.

Much of our future context will be defined by the accelerating proliferation of information technology as it innervates society and begins to subsume matter into code. It is a period of exponential growth in the impact of the learning-doing cycle where the power of biology, IT and nanotech compounds the advances in each formerly discrete domain.

So, at DFJ, we conclude that it is a great time to invest in startups. As in evolution and the Cambrian explosion, many will become extinct. But some will change the world. So we pursue the strategy of a diversified portfolio, or in other words, we try to make a broad bet on mammals.

© 2003 Steve T. Jurvetson. Reprinted with permission.