Embrace, Don’t Relinquish, the Future

February 21, 2001 by Max More

Extropy Institute head Max More finds Bill Joy’s Wired essay uninformed, unworkable, and even unethical because it will slow down progress in medicine and other vital areas, he believes.

Originally published May 7, 2000 at Extropy.org. Published on KurzweilAI.net February 26, 2001. Read Ray Kurzweil’s response to Bill Joy here.

When a scientist publishes a paper, her peers expect to see evidence that she has read prior work relevant to her topic. They expect the scientist to have studied the field thoroughly before contributing a paper, especially in a controversial field. Bill Joy, as Chief Scientist at Sun Microsystems, should understand this. In reading his essay “Why the Future Doesn’t Need Us” I am struck by his public display of ignorance of prior thinking about future technologies, his unrealistic thoughts about “relinquishment”, and his dismissal of those who have considered these issues deeply as lacking in common sense. At the same time, I appreciate his courage in publicly laying out his fears.

As a philosopher, I find his comments about losing our humanity to be frustratingly offhand. I will address this issue in a separate response. Here I wish to focus on Joy’s call for relinquishment of the technologies of genetic engineering, molecular nanotechnology, and robotics (and all associated fields). As someone who has thought about these issues for many years, I wish to challenge Joy’s relinquishment policy on two grounds: First, it’s unworkable. Second, it’s ethically appalling. (A third reason–that in practice it would result in authoritarian control while still failing to achieve its purpose–I will leave for a separate response.)

According to Joy’s extensive essay, his apocalyptic thinking was set off by hearing a conversation between Ray Kurzweil and Hans Moravec. Apart from attending a Foresight Institute conference back in 1989, Joy shows no sign of having read any of the writings or listened to any of the talks of those who have devoted themselves to the issues he raises. Despite the brilliant clarity of Kurzweil’s writing, Joy still isn’t clear whether we are supposed to “become robots or fuse with robots or something like that.” He gives no credit to the years of work by the Foresight Institute, not only in promoting the idea of nanotechnology but in planning for its potential dangers by considering both technical and policy-based approaches. Certainly we here at Extropy Institute–a multi-disciplinary think tank and educational organization devoted to the human future–never heard from Joy before he released his missive to the masses.

Someone in Joy’s influential position has a responsibility to delve into prior thinking on these issues before scaring a public already unreasonably afraid of some advanced technologies, including genetic engineering. I find it incredible that Joy cites Carl Sagan, one of my intellectual inspirations, in the course of criticizing us leading advocates of 21st century technologies as lacking in common sense. Those who advocate obviously unrealistic policies such as global relinquishment should not make accusations about common sense. This would be less galling if Joy had actually bothered to find out what we advocates for the future had to say over the last twelve years. (In 1988, a year before the Foresight conference that Joy attended, we founded Extropy magazine, which evolved into Extropy Institute–a transhumanist organization devoted to “Incubating Better Futures”.) Joy also accuses us of lacking humility while in an interview he draws a (misleading) parallel between himself and Einstein’s 1939 letter to President Roosevelt.

While acknowledging the tremendously beneficial possibilities of emerging technologies, Bill Joy judges them too dangerous for us to handle. The only acceptable course in his view is relinquishment. He wants everyone in the world “to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge”. Joy joins the centuries-old procession of theocrats, autocrats, and technocrats in attacking our pursuit of unlimited knowledge. He mentions the myth of Pandora’s box. He might have thrown in the anti-humanistic and anti-transhumanistic myths of the Garden of Eden, the Tower of Babel, and the demise of Icarus. Moving from myth to reality, he should have been explicit in describing the necessary means deployed throughout history: burning books, proscribing the reading of dangerous ideas, state control of science.

PART 1: RELINQUISHMENT CANNOT WORK

The first of my objections to relinquishment has already been well made by Ray Kurzweil. Joy’s fantasies about relinquishment ride on the assumption that “we could agree, as a species” to hold back from developing the “GNR” technologies (genetic engineering, nanotechnology, and robotics) and presumably any enabling or related technologies. Perhaps Joy’s experience in having a staff of engineers to do his bidding has blinded him to the incredibly obvious fact that the six billion humans on this planet do not and will not agree to relinquish technologies that offer massive benefits as well as defensive and offensive military capabilities.

We have failed to prevent the spread of nuclear weapons technology, despite its terrifying nature and relative ease in detection. How are we to prevent all companies, all governments, all hidden groups in the world from working on these technologies? Bill, all six billion of these people–many desperately in need of the material and medical benefits offered by these technologies–will not read the Dalai Lama and go along with your master plan. Relinquishment is a utopian fantasy worthy of the most blinkered hippies of the ’60s. Adding coercive enforcement to the mix moves the idea from utopian fantasy to frightening dystopia.

Ray Kurzweil points to a fine-grained relinquishment that can at least reduce the dangers of runaway technologies among those willing to play this game. Nanotechnology pioneer Eric Drexler has long recommended designing nanomachines that will quickly cease functioning if not fed some essential and naturally uncommon ingredient. Ralph Merkle’s broadcast architecture offers another way to develop nanomachines under control. These and other proposals can reduce the hazards of accidental nanotechnological disasters.

However, we can pursue intelligent design, ethical guidelines, and oversight only piecemeal, not universally. Less cautious or less benevolent developers will refuse even this fine-grained relinquishment. That fact makes it imperative to accelerate the development of advanced technologies in open societies. Only by possessing the most advanced technological knowledge can we hope to defend ourselves against the attacks and accidents from outside our sphere of influence. We should be pushing for better understanding of nanotech defenses, accelerated decoding and deactivation of genetically-engineered pathogens, and putting more thought into means of limiting runaway independent superintelligent AI.

I will not address genetic engineering since I regard this as an insignificant danger compared to those of nanotechnology and runaway artificial intelligence (AI). The dangers of runaway artificial superintelligence have received less attention than those of nanotechnology. Perhaps this is because the prospect of AI seems to move further away every time we take a step forward. Bill Joy cites only Hans Moravec on this issue, perhaps because Moravec’s view is the most frightening available. In Moravec’s view of the future, superintelligent machines, initially harnessed for human benefit, soon leave us behind. In the most pessimistic Terminator-like scenario, they might remove us from the scene as an annoyance. Oddly, despite having read Kurzweil’s book, Joy never discusses Ray’s thoroughly different (and more plausible) scenario. In Ray’s future projections, we gradually augment ourselves with computer and robotic technology, becoming superhumanly intelligent. Moravec’s apartheid of human and machine is replaced with the integration of biology and technology.

While a little research would have shown Joy that extropian and other transhumanist thinkers have indeed addressed the danger of explosively evolving, unfriendly AI, I grant that we must continue to address this issue. Again, global relinquishment is not an option. Rather than a futile effort to prevent AI development, we should concentrate on warding off dangers within our circle of influence and developing preventative measures against rogue AIs.

Human beings are the dominant species on this planet. Joy wants to protect our dominance by blocking the development of smarter and more powerful beings. I find it odd that Joy, working at a company like Sun Microsystems, can think only of the old corporate strategy where dominant companies attempted to suppress disruptive innovations. Perhaps he should take a look at Cisco Systems, or Microsoft, both of which have adopted a different strategy: Embrace and extend. Humanity would do well to borrow from the new business strategists’ approach. Realistically, we cannot prevent the rise of non-biological intelligence. We can embrace it and extend ourselves to incorporate it. The more quickly and continuously we absorb computational advances, the easier the transition will be and the lower the risk of a technological runaway. Absorption and integration will include economic interweaving of these emerging technologies with our organizations as well as directly interfacing our biology with sensors, displays, computers, and other devices. This way we avoid an us-versus-them situation. They become part of us.

PART 2: RELINQUISHMENT IS UNETHICAL

Some people reach ethical conclusions by consulting an ultimate authority. Their authority gives them answers that are received and applied without questioning. For those of us who prefer a more rational approach to ethical thought, reaching a conclusion involves consulting our basic values then carefully deciding which of the available paths ahead will best reflect those values. Our factual beliefs about how the world works will therefore profoundly affect our moral reasoning. Two individuals may share values but reach differing conclusions due to divergent factual beliefs. I suspect that my ethical disagreement with Joy over relinquishment results both from differing beliefs about the facts and differing basic values.

Joy assigns a high probability to the extinction of humanity if we do not relinquish certain emerging technologies. Joy’s implicit calculus reminds me of Pascal’s Wager. Finding no rational basis for accepting or rejecting belief in a God, Pascal claimed that belief was the best bet. Choosing not to believe had minimal benefits and the possibility of an infinitely high cost (eternal damnation). Choosing to believe carried small costs and offered potentially infinite rewards (eternity in Heaven). Now, the extinction of the human race is not as bad as eternity in Hell, but most of us would agree that it’s an utterly rotten result. If relinquishment can drastically reduce the odds of such a large loss, while costing us little, then relinquishment is the rational and moral choice. A clear, simple, easy answer. Alas, Joy, like Pascal, loads the dice to produce his desired result.

I view the chances of success for global relinquishment as practically zero. Worse, I believe that partial relinquishment will frighteningly increase the chances of disaster by disarming the responsible while leaving powerful abilities in the hands of those full of hatred, resentment, and authoritarian ambition. We may find a place for the fine-grained voluntary relinquishment of inherently dangerous means where safer technological paths are available. But unilateral relinquishment means unilateral disarmament. I can only hope that Bill Joy never becomes a successful Neville Chamberlain of 21st century technologies. In place of relinquishment, we would do better to accelerate our development of these technologies, while focusing on developing protections against and responses to their destructive uses.

My assessment of the costs of relinquishment differs from Joy’s for another reason. Billions of people continue to suffer illness, damage, starvation, and the whole plethora of woes humanity has had to endure through the ages. The emerging technologies of genetic engineering, molecular nanotechnology, and biological-technological interfaces offer solutions to these problems. Joy would stop progress in robotics, artificial intelligence, and related fields. Too bad for those now regaining hearing and sight thanks to implants. Too bad for the billions who will continue to die of numerous diseases that could be dispatched through genetic and nanotechnological solutions. I cannot reconcile the deliberate indulgence of continued suffering with any plausible ethical perspective.

Like Joy, I too worry about the extinction of human beings. I see it happening every day, one by one. We call this serial extinction of humanity “aging and death”. Because aging and death have always been with us and have seemed inevitable, we often rationalize this serial extinction as natural and even desirable. We cry out against the sudden death of large numbers of humans. But, unless it touches someone close, we rarely concern ourselves with the drip, drip, drip of individual lives decaying and disintegrating into nothingness. Some day, not too far in the future, people will look back on our complacency and rationalizations with horror and disgust. They will wonder why people gathered in crowds to protest genetic modification of crops yet never demonstrated in favor of accelerating anti-aging research. Holding back from developing the technologies targeted by Joy will not only shift power into the hands of the destroyers, it will mean an unforgivable lassitude and complicity in the face of entropy and death.

Joy’s concerns about technological dangers may seem responsible. But his unbalanced fear-mongering and lack of emphasis on the enormous benefits can only put a drag on progress. We are already seeing fear, ignorance, and various hidden agendas spurring resistance to genetic research and biotechnology. Of course we must take care in how we develop these technologies. But we must also recognize how they can tackle cancer, heart disease, birth defects, crippling accidents, Parkinson’s disease, schizophrenia, depression, chronic pain, aging and death.

On the basis of Joy’s recent writing and speaking, I have to assume that we disagree not only about the facts but also in our basic values. Joy seems to value safety, stability, and caution above all. I value relief of humanity’s historical ills, challenge, and the drive to transcend our existing limitations, whether biological, intellectual, emotional, or spiritual.

Joy quotes the fragmented yet brilliant figure of Friedrich Nietzsche to support his call for an abandonment of the unfettered pursuit of knowledge. Nietzsche is telling the reader that our trust in science “cannot owe its origin to a calculus of utility; it must have originated in spite of the fact that the disutility and dangerousness of the ‘will to truth’, or ‘truth at any price’ is proved to it constantly.” Joy has understood Nietzsche so poorly that he thinks Nietzsche here is supporting his call for relinquishing the unchained quest for knowledge in favor of safety and comfort. Nietzsche was no friend to “utility”. He despised the English Utilitarian philosophers for enthroning pleasure or happiness as the ultimate value. Even a cursory reading of Nietzsche should make it obvious that he did not value comfort, ease, or certainty. Nietzsche liked the dangerousness of the will to truth. He liked that the search for knowledge endangered dogma and the comforts and delusions of dogma.

Nietzsche’s Zarathustra says: “The most cautious people ask today: ‘How may man still be preserved?’” He might have been talking of Bill Joy when he continues: “Zarathustra, however, asks as the sole and first one to do so: ‘How shall man be overcome?’… Overcome for me these masters of the present, o my brothers–these petty people: they are the overman’s greatest danger!” If we interpret Nietzsche’s inchoate notion of the overman as the transhumans who will emerge from the integration of biology and the technologies feared by Joy, we can see with whom Nietzsche would likely side. I will limit myself to one more quotation from Nietzsche:

And life itself confided this secret to me: “Behold,” it said, “I am that which must always overcome itself. Indeed, you call it a will to procreate or a drive to an end, to something higher, farther, more manifold: but all this is one… Rather would I perish than forswear this; and verily, where there is perishing… there life sacrifices itself–for [more] power… Whatever I create and however much I love it–soon I must oppose it and my love; … ‘will to existence’: that will does not exist… not will to life but… will to power. There is much that life esteems more highly than life itself.” Zarathustra II 12 (K: 248)

Like Nietzsche, I find mere survival ethically and spiritually inadequate. Even if, contrary to my view, relinquishment improved our odds of survival, that would not make it the most ethical choice if we value the unfettered search for knowledge and intellectual, emotional, and spiritual progress. Does that mean doing nothing while technology surges ahead? No. We can minimize the dangers, ease the cultural transition, and accelerate the arrival of benefits in two ways: We can develop a sophisticated philosophical perspective on the issues. And we can seek to use new technologies to enhance emotional and psychological health, freeing ourselves from the irrationalities and destructiveness built into the genes of our species.

We should be spurring understanding of emotions and the neural basis of feeling and motivation. I’ve seen some good work in this area (such as Joseph LeDoux’s The Emotional Brain), but until very recently cognitive science has ignored emotions. If we are to flourish in the presence of incredible new technological abilities, we would do well to focus on using them to debug human nature. Power can corrupt, but knowledge that brings the power to self-modify so as to refine our psychology can ward off corruption and destruction. I have spoken on this topic more than I have yet publicly written, but I would stress the importance of advancing our abilities for refinement of our own emotions.

Improving philosophical understanding will speed the absorption and integration of new technologies. If we continue to approach rapid and profound technological change with philosophical worldviews rooted in old myths and pre-scientific story-making, we will needlessly fear change, miss out on potential advances, and be caught unprepared. When the announcement came from Scotland proclaiming the first successful mammalian cloning, the Pope issued a statement opposing cloning on grounds that made no sense. (His vague objection would apply equally to identical twins.) President Clinton and other leaders also automatically moved to ban human cloning, with no indication of clear thinking based in science and philosophy.

Extropians and other transhumanists have been developing philosophical thinking fitted to these powerful emerging technologies. In our books, essays, talks, and email forums, we have explored a vast range of philosophical issues in depth. Just last year, in August 1999, I chaired Extropy Institute’s fourth conference: Biotech Futures: Challenges and Choices of Life Extension and Genetic Engineering. The conference laid out the likely path of emerging technologies and dissected the issues they raise. In my own talk, I analyzed implicit philosophical mistakes that engender fear and resistance to the changes we anticipate. I summarized our own goals in a letter to Mother Nature, and have laid out some guiding values in The Extropian Principles.

Bill Joy’s essay and subsequent talks may feed the public’s fear and misunderstanding of our potential future. On the other hand, perhaps his thoughts will raise interest in the philosophical, ethical, and policy issues in a productive way. As a philosopher committed to incubating better futures, I along with my colleagues in Extropy Institute welcome constructive input from Joy in this continuing learning process. Humanity is on the edge of a grand evolutionary leap. Let’s not pull back from the edge, but by all means let’s check our flight equipment as we prepare for takeoff.