Statement for Extropy Institute Vital Progress Summit

February 18, 2004 by Ray Kurzweil

Responding to the President’s Council on Bioethics report, “Beyond Therapy,” Ray Kurzweil has written a keynote statement for the Extropy Institute’s Vital Progress Summit, a virtual discussion and debate conducted over the Internet.

Published on Extropy Institute Vital Progress Summit site and KurzweilAI.net, February 18, 2004

Technology has always been a double-edged sword, bringing us longer and healthier life spans, freedom from physical and mental drudgery, and many new creative possibilities on the one hand, while introducing new and salient dangers on the other. Technology empowers both our creative and destructive natures. Genetic engineering is in the early stages of making enormous strides in reversing disease and the aging process.

Ubiquitous nanotechnology, now about two decades away, will continue an exponential expansion of these benefits. These technologies will create extraordinary wealth, thereby overcoming poverty, and enabling us to provide for all of our material needs by transforming inexpensive raw materials and information into virtually any type of product. Lingering problems from our waning industrial age will be overcome. We will be able to reverse remaining environmental destruction.

Nanoengineered fuel cells and solar cells will provide clean energy. Nanobots in our physical bodies will destroy pathogens, remove debris such as misfolded proteins and protofibrils, repair DNA, and reverse aging. We will be able to redesign all of the systems in our bodies and brains to be far more capable and durable. And that’s only the beginning.

There are also salient dangers. The means and knowledge exist in a routine college bioengineering lab to create unfriendly pathogens more dangerous than nuclear weapons. Unrestrained nanobot replication ("unrestrained" being the operative word here) would endanger all physical entities, biological or otherwise. As for "unfriendly" AI, that’s the most daunting challenge of all, because intelligence is inherently the most powerful force in the universe.

Awareness of these dangers has resulted in calls for broad relinquishment. Bill McKibben, the environmentalist who was one of the first to warn against global warming, takes the position that we have sufficient technology and that further progress should end. In his latest book titled "Enough: Staying Human in an Engineered Age," he metaphorically compares technology to beer and writes that "one beer is good, two beers may be better; eight beers, you’re almost certainly going to regret." McKibben’s metaphor comparing continued engineering to gluttony misses the point, and ignores the extensive suffering that remains in the human world, which we will be in a position to alleviate through sustained technological progress.

Another level of relinquishment, the one recommended in Bill Joy’s Wired magazine cover story, would be to forgo certain fields, such as nanotechnology, that might be regarded as too dangerous. But such sweeping strokes of relinquishment are equally untenable. Nanotechnology is simply the inevitable end result of the persistent trend towards miniaturization that pervades all of technology. It is far from a single centralized effort, but is being pursued by a myriad of projects with many diverse goals.

Abandonment of broad areas of technology will only push them underground, where development would continue unimpeded by ethics and regulation. In such a situation, it would be the less-stable, less-responsible practitioners (e.g., terrorists) who would have all the expertise.

The siren calls for broad relinquishment are effective because they paint a picture of future dangers as if they were released on today’s unprepared world. The reality is that the sophistication and power of our defensive technologies and knowledge will grow along with the dangers. When we have "gray goo" (unrestrained nanobot replication), we will also have "blue goo" ("police" nanobots that combat the "bad" nanobots). The story of the 21st century has not yet been written, so we cannot say with assurance that we will successfully avoid all misuse. But the surest way to prevent the development of the defensive technologies would be to relinquish the pursuit of knowledge in broad areas. This was the primary moral of the novel Brave New World.

Consider software viruses. We have largely been able to control harmful software virus replication because the requisite knowledge is widely available to responsible practitioners. Attempts to restrict this knowledge would have created a far less stable situation. Responses to new challenges would have been far slower, and it is likely that the balance would have shifted towards the more destructive applications (that is, the software pathogens).

Stopping the "GNR" technologies (genetics, nanotechnology, and robotics) is not feasible, at least not without adopting a totalitarian system, and pursuit of such broad forms of relinquishment will only distract us from the vital task in front of us. In terms of public policy, the task at hand is to rapidly develop the defensive steps needed, which include ethical standards, legal standards, and defensive technologies. It is quite clearly a race. There is simply no alternative. We cannot relinquish our way out of this challenge.

There have been useful proposals for protective strategies, such as Ralph Merkle’s "broadcast" architecture, in which replicating entities need to obtain replication codes from a secure server. We need to realize, of course, that each level of protection will only work to a certain level of sophistication.
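To make the pattern concrete, here is a minimal illustrative sketch in Python of the idea behind such a broadcast architecture: a replicator cannot copy itself until it obtains a single-use, cryptographically verified replication code from a trusted server. The names (ReplicationServer, Replicator, request_code, redeem) are hypothetical and the cryptography is deliberately simplified; this is a toy model of the concept, not Merkle's actual design.

```python
from __future__ import annotations

import hmac
import hashlib
import secrets


class ReplicationServer:
    """Hypothetical central authority that issues single-use replication codes."""

    def __init__(self, key: bytes):
        self._key = key          # signing key held only by the server
        self._issued = set()     # outstanding, not-yet-redeemed codes

    def request_code(self, replicator_id: str):
        """Issue a one-time code (nonce plus signature) to a replicator."""
        nonce = secrets.token_hex(16)
        signature = hmac.new(self._key,
                             f"{replicator_id}:{nonce}".encode(),
                             hashlib.sha256).hexdigest()
        self._issued.add(nonce)
        return nonce, signature

    def redeem(self, replicator_id: str, nonce: str, signature: str) -> bool:
        """Verify and consume a code; each code authorizes exactly one replication."""
        expected = hmac.new(self._key,
                            f"{replicator_id}:{nonce}".encode(),
                            hashlib.sha256).hexdigest()
        if nonce in self._issued and hmac.compare_digest(expected, signature):
            self._issued.discard(nonce)  # single use: cannot be replayed
            return True
        return False


class Replicator:
    """Hypothetical replicating entity that cannot copy itself without a valid code."""

    def __init__(self, replicator_id: str, server: ReplicationServer):
        self.replicator_id = replicator_id
        self.server = server

    def replicate(self) -> Replicator | None:
        nonce, signature = self.server.request_code(self.replicator_id)
        if self.server.redeem(self.replicator_id, nonce, signature):
            return Replicator(self.replicator_id + ".child", self.server)
        return None  # no valid code from the server, no replication


if __name__ == "__main__":
    server = ReplicationServer(secrets.token_bytes(32))
    parent = Replicator("replicator-0", server)
    child = parent.replicate()
    print("replication authorized:", child is not None)
```

The point of such a design is that the capacity to replicate never resides in the replicators themselves: the rate of copying is capped by what the server chooses to issue, and disabling the server (or revoking its codes) halts replication everywhere at once.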

The "meta" lesson here is that we will need to place society’s highest priority during the 21st century on continuing to advance the defensive technologies and to keep them one or more steps ahead of destructive misuse. In this way, we can realize the profound promise of these accelerating technologies, while managing the peril.