We’re underestimating the risk of human extinction

March 7, 2012 | Source: The Atlantic


Unthinkable as it may be, humanity, every last person, could someday be wiped from the face of the Earth. We have learned to worry about asteroids and supervolcanoes, but the more-likely scenario, according to Nick Bostrom, a professor of philosophy at Oxford, is that we humans will destroy ourselves.

Most worrying to Bostrom is the subset of existential risks that arise from human technology, a subset that he expects to grow in number and potency over the next century.

“For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems,” he said. “You could also have risks associated with certain advancements in synthetic biology. …

“We’re also developing better and better DNA synthesis machines, which are machines that can take one of these digital blueprints as an input, and then print out the actual RNA string or DNA string. Soon they will become powerful enough that they can actually print out … [pathogenic] viruses. …

“I think the definition of an existential risk goes beyond just extinction, in that it also includes the permanent destruction of our potential for desirable future development. Our permanent failure to develop the sort of technologies that would fundamentally improve the quality of human life would count as an existential catastrophe. …

“In the longer run, I think artificial intelligence—once it gains human and then superhuman capabilities—will present us with a major risk area. There are also different kinds of population control that worry me, things like surveillance and psychological manipulation pharmaceuticals. …

“Hollywood renditions of existential risk scenarios are usually quite bad. For instance, the artificial intelligence risk is usually represented by an invasion of a robot army that is fought off by some muscular human hero wielding a machine gun or something like that. If we are going to go extinct because of artificial intelligence, it’s not going to be because there’s this battle between humans and robots with laser eyes. … There isn’t a lot of good literature on existential risk, and one needs to think of these things not in terms of vivid scenarios, but rather in more abstract terms. …

“If one day you have the ability to create a machine intelligence that is greater than human intelligence, how would you control it, how would you make sure it was human-friendly and safe? There is work that can be done there. …

“There is no particular reason to think that we might reach some intermediate stage where we would harness the energy of one star like our sun. By the time we can do that, I suspect we’ll be able to engage in large-scale space colonization, to spread into the galaxy and then beyond, so I don’t think harnessing a single star is a relevant step on the ladder.”

Regarding the Kardashev Scale (which plots the advancement of a civilization according to its ability to harness energy, specifically the energy of its planet, its star, and finally its galaxy): “If I wanted some sort of scheme that laid out the stages of civilization, the period before machine superintelligence and the period after machine superintelligence would be a more relevant dichotomy. When you look at what’s valuable or interesting in examining these stages, it’s going to be what is done with these future resources and technologies, as opposed to their structure. …
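(For context on the scale Bostrom is setting aside, and not a claim made in the interview: Carl Sagan’s commonly cited continuous interpolation rates a civilization by its total power use $P$ in watts,

$$K = \frac{\log_{10} P - 6}{10},$$

which places a planetary Type I civilization near $10^{16}$ W, a stellar Type II near $10^{26}$ W, and a galactic Type III near $10^{36}$ W.)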

“If we think of the space of possible modes of being as a large cathedral, then humanity in its current stage might be like a little cowering infant sitting in the corner of that cathedral having only the most limited sense of what is possible.”