In Response to “Stop Everything…It’s Techno-Horror!”

July 25, 2001 by Ray Kurzweil

Although George Gilder and Richard Vigilante share Ray Kurzweil’s grave concerns about Bill Joy’s apparently neo-Luddite calls for relinquishing broad areas of technology, Kurzweil is critical of Gilder and Vigilante’s skepticism regarding the feasibility of the dangers.

Portions of this response were published in The American Spectator, March 2001. Published on KurzweilAI.net July 25, 2001.

George Gilder’s “Stop Everything…It’s Techno-Horror!” can be read here.

Fundamentally, George Gilder and Richard Vigilante and I share a deeply critical reaction to Bill Joy’s prescription of relinquishment of “our pursuit of certain types of knowledge.” Just as George Soros attracted attention by criticizing the capitalist system of which he was a primary beneficiary, the credibility of Joy’s treatise on the dangers of future technology has been enhanced by his reputation as a primary architect of contemporary technology. Being a technologist, Joy claims not to be anti-technology, saying that we should keep the beneficial technologies, and relinquish only those dangerous ones, like nanotechnology. The problem with Joy’s view is that the dangerous technologies are exactly the same as the beneficial ones. The same biotechnology tools and knowledge that will save millions of future lives from cancer and other diseases could potentially provide a terrorist with the means for creating a bioengineered pathogen. The same nanotechnology that will eventually help clean up the environment and provide material products at almost no cost could be misused to introduce new nonbiological pathogens.

I call this the deeply intertwined promise and peril of technology, and it’s not a new story. Technology empowers both our creative and destructive natures. Stalin’s tanks and Hitler’s trains used technology. Yet few people today would really want to go back to the short (human life span less than half of today’s), brutish, disease-filled, poverty-stricken, labor-intensive, disaster-prone lives that ninety-nine percent of the human race struggled through a few centuries ago.

We can’t have the benefits without at least the potential dangers. The only way to avoid the dangerous technologies would be to relinquish essentially all of technology. And the only way to accomplish that would be a totalitarian system (e.g., Brave New World) in which the state has exclusive use of technology to prevent everyone else from advancing technology. Joy’s recommendation does not go that far obviously, but his call for relinquishing broad areas of the pursuit of knowledge is based on an unrealistic assumption that we can parse safe and risky areas of knowledge.

Gilder and Vigilante write, “in the event of … an unplanned bio-catastrophe, we would be far better off with a powerful and multifarious biotech industry with long and diverse experience in handling such perils, constraining them, and inventing remedies than if we had ‘relinquished’ these technologies to a small elite of government scientists, their work closely classified and shrouded in secrecy.”

I agree quite heartily with this eloquent perspective. Consider, as a contemporary test case, how we have dealt with one recent technological challenge. There exists today a new form of fully nonbiological self-replicating entity that didn’t exist just a few decades ago: the computer virus. When this form of destructive intruder first appeared, strong concerns were voiced that, as they became more sophisticated, software pathogens had the potential to destroy the computer network medium they live in. Yet the “immune system” that has evolved in response to this challenge has been largely effective. Although destructive self-replicating software entities do cause damage from time to time, the injury is but a tiny fraction of the benefit we receive from the computers and communication links that harbor them.

One might counter that computer viruses do not have the lethal potential of biological viruses or of destructive future nanotechnology. Although true, this only strengthens my observation. The fact that computer viruses are not usually deadly to humans (although they can be if they intrude on mission-critical systems such as airplanes and intensive care units) only means that more people are willing to create and release them. It also means that our response to the danger is relatively relaxed. Conversely, when it comes to future self-replicating entities that may be potentially lethal on a large scale, our response on all levels will be vastly more intense.

Joy’s treatise is effective because he paints a picture of future dangers as if they were released on today’s unprepared world. The reality is that the sophistication and power of our defensive technologies and knowledge will grow along with the dangers. When we have gray goo, we will also have blue goo (“police” nanobots that combat the “bad” nanobots). The story of the twenty-first century has not yet been written, so we cannot say with assurance that we will successfully avoid all misuse. But the surest way to prevent the development of the defensive technologies would be to relinquish the pursuit of knowledge in broad areas, which would only drive these efforts underground where they would be dominated by the least reliable practitioners (e.g., the terrorists).

There is still a great deal of suffering in the world. Are we going to tell the millions of cancer patients that we’re canceling all cancer research despite very promising emerging treatments because the same technology might be abused by a terrorist? Consider the following tongue-in-cheek announcement, which I read during a radio debate with Joy: “Sun Microsystems announced today that it was relinquishing all research and development that might improve the intelligence of its software, the computational power of its computers, or the effectiveness of its networks due to concerns that the inevitable result of progress in these fields may lead to profound and irreversible dangers to the environment and even to the human race itself. ‘Better to be safe than sorry,’ Sun’s Chief Scientist Bill Joy was quoted as saying. Trading of Sun shares was automatically halted in accordance with Nasdaq trading rules after dropping by 90 percent in the first hour of trading.” Joy did not find my mock announcement amusing, but my point is a serious one: advancement in a broad array of technologies is an economic imperative.

Although I agree with Gilder and Vigilante’s opposition to the essentially totalitarian nature of the call for relinquishment of broad areas of the pursuit of knowledge and technology, their American Spectator article directs a significant portion of its argument against the technical feasibility of the dangers. In my view, this is not the best strategy for countering Joy’s thesis. We don’t have to look further than today to see that technology is a double-edged sword.

They write, for example, “But there are, to date, no nanobots,” and go on to cast doubt on their feasibility. Of course, it is the nature of future technology that it doesn’t exist today. But Gilder, as the author of two outstanding books (Microcosm and Telecosm) that document the exponential growth of diverse technologies, recognizes that these trends are not likely to stop any time soon. Combined with the equally compelling trend of miniaturization (we’re currently shrinking both electronic and mechanical technology by a factor of 5.6 per linear dimension per decade), it is reasonable to conclude that technologies such as nanobots are inevitable within a few decades. There are also many positive reasons to develop nanobots, including dramatic benefits for health, the environment, and the economy.
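
To make the arithmetic of that miniaturization trend concrete, here is a back-of-the-envelope calculation. It is only a sketch: it assumes the 5.6-per-decade shrink rate quoted above holds constant, and it takes a starting feature size of roughly 100 microns (typical of today’s micro-electromechanical devices) purely for illustration.

```latex
% Feature size under a constant linear shrink factor of 5.6 per decade
% (t in years; s_0 is the illustrative starting size of ~100 microns):
s(t) = s_0 \cdot 5.6^{-t/10}

% Years needed to go from ~100\,\mu\mathrm{m} to ~100\,\mathrm{nm},
% a linear shrink factor of 1000:
5.6^{t/10} = 1000
\quad\Longrightarrow\quad
t = \frac{10 \ln 1000}{\ln 5.6}
  \approx \frac{10 \times 6.91}{1.72}
  \approx 40~\text{years}
```

On those assumptions, devices with nanometer-scale key features arrive in roughly four decades, consistent with the “few decades” claim above.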

Gilder and Vigilante refer to the “Joy-Drexler-Lovins, GNR, trinity of Techno-Horror.” I would suggest not including Eric Drexler in this line-up. As the principal original theorist of the feasibility of technology on a nanometer scale, and the founder, along with his wife, Christine Peterson, of the Foresight Institute, a leading nanotechnology think tank, Drexler is hardly anti-nanotechnology. I would not call Drexler’s vision “bipolar” or “manic-depressive” simply because his original treatise describes the potential dangers of self-replicating entities built on a nanometer scale (which, incidentally, does not mean that the entities are one nanometer in size, but rather that their key features are measured in nanometers). We clearly don’t consider the nuclear power industry to be anti-nuclear power, yet we would nonetheless expect it to recognize the potential dangers of a reactor meltdown, and to take stringent steps to avoid such a disaster.

The Foresight Institute has been developing ethical guidelines and technology strategies to avoid potential dangers of future nanotechnology, but that doesn’t make them anti-nanotechnology. An example of an ethical guideline is the avoidance of physical entities that can self-replicate in a natural environment. An example of a technology strategy is what nanotechnologist Ralph Merkle calls the “Broadcast Architecture.” Merkle’s idea is that replicating entities would have to obtain self-replicating codes from a centralized secure server, which would guard against undesirable replication. The Broadcast Architecture is impossible in the biological world, which represents at least one way in which nanotechnology can be made safer than biotechnology.
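
To make the idea concrete, here is a minimal sketch of the Broadcast Architecture’s control pattern. The class names and the population cap are hypothetical, invented purely for illustration; nothing here reflects an actual specification by Merkle or the Foresight Institute.

```python
# A toy illustration of the Broadcast Architecture idea: replicators carry
# no replication instructions of their own and must fetch a single-use code
# from a centralized secure server before every copy. All names here
# (CodeServer, Replicator, max_population) are hypothetical.

import secrets


class CodeServer:
    """Central authority that broadcasts single-use replication codes."""

    def __init__(self, max_population):
        self.max_population = max_population
        self.granted = 0

    def request_code(self):
        # Refusing a code is the safeguard: without one, replication
        # simply cannot proceed, unlike biological self-replication.
        if self.granted >= self.max_population:
            return None
        self.granted += 1
        return secrets.token_hex(16)  # stands in for signed instructions


class Replicator:
    """An entity unable to copy itself without the server's code."""

    def __init__(self, server):
        self.server = server

    def replicate(self):
        code = self.server.request_code()
        if code is None:
            return None  # no authorization, no copy
        return Replicator(self.server)


server = CodeServer(max_population=3)
colony = [Replicator(server)]
for _ in range(10):
    child = colony[0].replicate()
    if child is not None:
        colony.append(child)
print(len(colony))  # 4: the original plus the 3 authorized copies
```

The design choice the sketch highlights is that the replication instructions never reside in the replicator itself, so shutting down the central server (or simply having it refuse requests) halts all further copying, an option that biology does not offer.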

Much of Gilder and Vigilante’s criticism of the feasibility of future technologies centers on genetic algorithms and other self-organizing programs, as if the plan were simply (and mindlessly) to recreate the powers of the natural world by rerunning evolution. We find a variety of self-organizing paradigms in the few dozen regions of the human brain of which we currently have an understanding (with several hundred regions left to be reverse engineered). Self-organization is a powerful concept, but it is hardly automatic. We use a variety of self-organizing methods in my own field of pattern recognition, and they are critical to achieving a variety of intelligent behaviors. But the accelerating progression of technology is not fueled by an automatic process of simulating evolution. Rather, it is the result of many interacting trends: vastly more powerful computation and communication technologies, about which Gilder has written so extensively; the exponentially shrinking size of technology; our exponentially growing knowledge of the human biogenetic system and of the human brain and nervous system; and many other salient accelerating and intersecting developments.
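
For readers who may wonder what a “self-organizing method” looks like in practice, here is a minimal sketch of one of the simplest examples: a genetic algorithm evolving a population of bit strings toward a target pattern. The target, population size, and rates are arbitrary illustrative choices, not anything drawn from Kurzweil’s pattern-recognition work.

```python
# A minimal genetic algorithm: a population of bit strings "self-organizes"
# toward a target pattern via selection, crossover, and mutation.
# The target and parameters are arbitrary illustrative choices.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]


def fitness(genome):
    # Number of bits that match the target pattern.
    return sum(g == t for g, t in zip(genome, TARGET))


def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]


def mutate(genome, rate=0.02):
    # Flip each bit with small probability.
    return [1 - g if random.random() < rate else g for g in genome]


population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half; refill by breeding random pairs from it.
    survivors = population[: len(population) // 2]
    children = [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in survivors
    ]
    population = survivors + children

print(generation, fitness(population[0]))
```

The matching string emerges without anyone writing it down explicitly, which is the sense of “self-organizing” at issue; at the same time, the representation, the fitness measure, and the parameters are all deliberate engineering choices, which is precisely the point that self-organization is powerful but hardly automatic.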

Gilder and Vigilante cite Joy’s “respectful” quotations of Unabomber Ted Kaczynski. These Kaczynski quotes come from my book, and I cited them to analyze specifically where Kaczynski’s thinking goes wrong. For example, I quoted the following statement from his Unabomber manifesto: “You can’t get rid of the ‘bad’ parts of technology and retain only the ‘good’ parts. Take modern medicine, for example. Progress in medical science depends on progress in chemistry, physics, biology, computer science and other fields. Advanced medical treatments require expensive, high-tech equipment that can be made available only by a technologically progressive, economically rich society. Clearly you can’t have much progress in medicine without the whole technological system and everything that goes with it.”

As far as it goes, this statement of Kaczynski’s is essentially correct. Where Kaczynski and I part company (and I am sure Gilder and Vigilante as well) is his conclusion that the “bad” parts greatly outweigh the good parts. Given that premise, it is only logical to get rid of all further technology development. Joy’s position is that we relinquish only the “bad” parts, but on this point I believe that Kaczynski’s articulation of the infeasibility of such parsing is correct. We have a fundamental choice to make. Kaczynski stands for the violent suppression of the pursuit of knowledge, and of the values of freedom that go along with it. Joy would relinquish only broad areas of knowledge and leave this task, presumably, to some sort of government enforcement. But nanotechnology is not a simple unified field; rather, it is the inevitable end result of the ongoing exponential trend of miniaturization in all areas of technology, which continues to move forward on hundreds of fronts.

Gilder has written with great enthusiasm and insight in his books and newsletters of the exponential growth of many technologies, including Gilder’s Law on the explosion of bandwidth. In my own writings, I have shown how the exponential growth of the power of technology is pervasive and affects a great multiplicity of areas. The impact of these interacting and accelerating revolutions is significant in the short-term (i.e., over years), but revolutionary in the long term (i.e., over decades). I believe that the most cogent strategy to oppose the allure of the suppression of the pursuit of knowledge is not to deny the potential dangers of future technology nor the theoretical feasibility of disastrous scenarios, but rather to build the case that the continued relatively open pursuit of knowledge is the most reliable (albeit not foolproof) way to reap the promise while avoiding the peril of profound twenty-first century technologies.

I believe that Gilder and Vigilante and I are in essential agreement on this issue. They write the following, which persuasively articulates the point: