Let the AIs, not us, formulate a billion-year plan!
October 12, 2012 by Robert L. Blum
In What our civilization needs is a billion-year plan, posted on KurzweilAI September 23, 2012, Lt Col Peter Garretson calls for a long-term plan to assure humanity’s survival, “moving everyone and everything we value off Earth.”
He cites the coming big extinction events for planet Earth, including asteroid collisions, the Sun engulfing the Earth during its transformation to a red giant, and ultimately, the heat death of the Universe. Human survival, he argues, justifies an ambitious future space program, “with articulated goals of space development and space settlement … pushing the technology and logistical capabilities to be able to attain those goals.”
To accomplish this, he predicts, people (perhaps augmented) will be the great interstellar engineers — in charge of intelligent civilization over the next few billion years.
Unfortunately, Garretson does not mention the single most important development in the future: the coming technological Singularity, when machine intelligence will surpass human intelligence, leading to “technological change so rapid and profound it will represent a rupture in the fabric of human history,” according to Ray Kurzweil.
Is humanity capable of planning beyond the Singularity?
I agree that long-term planning is essential, but our political process in the U.S. barely allows planning beyond the next election. (Watch any session of Congress on C-SPAN for an hour. Does that look like the face of wisdom that we want drafting a billion-year plan?)
Looking at the march of evolution through the eyes of Teilhard de Chardin or Arthur C. Clarke, a prevailing belief (held strongly by me) is that humanity is not the last word in intelligence or its highest expression. Rather, we are just a warm-up act — a stepping stone to what comes next in evolution.
But meanwhile, humanity (all 7 billion of us) is a mixed curse. Throughout our history, we have seen the face of evil with the megadeaths of the Stalin Era, with Hitler during the Third Reich, in Cambodia during the Pol Pot regime, during the Rwanda massacres, and currently in Syria. (The Civil War, the bloodiest war in U.S. history, was a mere 150 years ago.) That is humanity’s inhumanity to man.
To other species we are even worse — they don’t even count. Except among environmentalists, there is usually not much protest as the scourge of our propagation envelops Planet Earth with new homes, roads, buildings, agribusinesses, and dumps. The loss of habitat is leading to an extinction rate comparable to that caused by the asteroid strike at the end of the Cretaceous, 65 million years ago.
The biosphere of Planet Earth is one large garbage dump. We have poisoned the soil, rivers, oceans, and atmosphere. The exhaust from our civilization creates more than 98% of new CO2 flowing into the atmosphere. It is likely that the IPCC has underestimated the extent to which Earth will warm during this century: a rise of 5 degrees C is quite possible, according to Paul Ehrlich.
A catastrophic disruption to agriculture may precipitate global resource wars. The numerous feedback mechanisms among global warming, global toxification, declining ecological services (e.g., death of pollinators), and overpopulation all point toward collapse (video). This litany of threats has been widely documented by Paul Ehrlich, James Hansen, and others.
As another 2.5 billion people are added to the planet by 2050, catastrophic collapse may be all but inevitable. We may not make it to the Singularity.
Can future enhanced humans solve these problems?
If our civilization produced literary and scientific giants like Shakespeare and Einstein, isn’t it reasonable to expect that humans augmented by new drugs, stem cells, implants, etc. will be even more talented and will be the astronauts who will build Garretson’s Dyson spheres and travel to the stars?
My answer is no. First, despite future medical advances, drugs, biologicals, and devices intended for human use always require lengthy and costly testing. At present, a new drug typically requires at least ten years and a billion dollars to develop. Frustratingly slow, speaking as a former emergency-room physician!
Several neurobiologists have published articles and videos that promote scanning and uploading the detailed microanatomy of the human brain (Sebastian Seung, Stephen Smith, Randal Koene, Ken Hayworth, and Anders Sandberg).
I’m optimistic that such an approach will help to elucidate the principles of neurophysiology, but I have reservations (some are discussed here): the role of local fields and oscillations, the roles of glia and of gap junctions, and unexplained intricacy at synapses. I side more with Tony Movshon than with Sebastian Seung in this must-see debate.
But suppose my “head freezer” friends succeed in being immortalized as Han Solo was in The Empire Strikes Back when he was embedded in carbonite. Whether run on a mainframe or downloaded into a new titanium exoskeleton in 2100, Humanity 2.0 will still have all our current psychological failings (detailed by Daniel Kahneman in Thinking, Fast and Slow).
More fundamentally, early design commitments, frozen into us as we evolved from single cells over billions of years, were superbly adapted to local materials and conditions on Earth but are not suitable for space. It’s time for replacement. You cannot make a (bio) neuron that spikes at 3 GHz or that conducts neural impulses at 300 million meters per second. (The way to speed up a cheetah is not by strapping on a jet engine.)
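To put rough numbers on that biological speed ceiling, here is a quick back-of-the-envelope comparison. The figures below are order-of-magnitude textbook values I am supplying for illustration (fast neurons fire near 1 kHz, fast myelinated axons conduct at roughly 120 m/s, and electrical signals in wires travel at a large fraction of light speed), not numbers from the article itself:

```python
# Back-of-the-envelope: biological vs. electronic signaling speeds.
# All figures are rough, illustrative textbook values.

neuron_max_firing_hz = 1e3       # fast-spiking neurons top out near 1 kHz
cpu_clock_hz = 3e9               # a commodity 3 GHz processor clock

axon_conduction_m_s = 120.0      # fast myelinated axons, ~120 m/s
electronic_signal_m_s = 2e8      # wire propagation, roughly 2/3 of c

rate_speedup = cpu_clock_hz / neuron_max_firing_hz
propagation_speedup = electronic_signal_m_s / axon_conduction_m_s

print(f"switching-rate advantage: ~{rate_speedup:.0e}x")
print(f"signal-speed advantage:   ~{propagation_speedup:.0e}x")
```

Either way you measure it, electronics enjoys a roughly million-fold raw speed advantage over neural tissue, which is the substance of the cheetah-and-jet-engine remark above.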
I also reject Kurzweil’s premise that we will merge with AIs — that’s like merging that cheetah with a jet plane.
AIs as the next intelligence carriers
Instead of humans, post-Singularity AIs (not us) will be the highest intelligences. And they will be calling the shots on the design and execution of massive space-based engineering projects à la O’Neill and Dyson.
Long-term, humanity (whether augmented, re-engineered, or uploaded) will be left in the dust by the machines, which will stand in relation to us as we do to bacteria. OK, that has a heavy-metal Skynet ring to it, so let me replace it immediately with a term I’ve come to love (from David Grinspoon’s book Lonely Planets): the Immortals.
Who are the Immortals? Perhaps we know who we want them to be: wise, superintelligent, compassionate, and just. And powerful! More powerful than a light-speed rocket, able to leap into intergalactic space in a single bound, and imbued with truth, justice, and the Western democratic way!
Whatever we choose to call them, further evolution of themselves and their tools will be in their hands and not ours. While future advances will greatly benefit humans, humans will be replaced as the helmsmen of a space-faring civilization before the Singularity — probably by 2040 (Philip K. Dick nailed this prediction in Do Androids Dream of Electric Sheep?, the novel behind Blade Runner).
The evolving prototypes that will eventually leap to the stars will be electronic — informed by human design and concerns, but not constrained by them. Their decisions and wisdom will encompass all that is on the Web and all that is perceived by the world’s sensors. With a solar system full of effectors they will accomplish engineering that we cannot imagine. That is how they will begin their evolution and their journey to the stars.
So let’s leave the really long-term planning (post-Singularity) to the Immortals.
Sometime before the Cambrian period, over 500 million years ago, the first differentiated, multicellular creatures arose. As the reproductive unit changed from a single cell to a multicellular organism, the individual cells surrendered their autonomy for a greater chance of survival.
I think about the coming superorganism as something that will (at least initially) encompass human beings and confer upon them greater survival and quality of life (see Greg Stock’s Metaman). Just as the Web will embrace all of humanity and our culture, machines will evolve that understand and contribute to the Web.
Robots will autonomously update their databases and plans from the Web. The “rise of the machines” and their gradual metamorphosis into the wise Immortals won’t take place overnight. This will be a gradual evolution, dictated as always by “technology push and demand pull” (initially from human consumers, later from AI consumers).
So what projects should we humans undertake now?
These predictions will not happen automatically as a consequence of accelerating technology. They will require concerted science and engineering specifically focused on AI, including machine learning, robotics, computer vision, and knowledge representation; non-von Neumann architectures, including neuromorphic engineering and other large-scale parallel designs; materials science; neurobiology; neural nets and cognitive science (to mine their principles); and the mathematics of dynamical and stochastic systems (among others).
Funding this type of R&D is civilization’s near-term path to the stars.
To advance the ball down the field, a thriving, productive, high-tech human civilization may be required for another century or two. Whatever slows or halts that progress could kill the project entirely.
As Garretson has pointed out, the spoilers include all those near-term, extinction level catastrophes that could derail the phase transition of intelligence: asteroids, propagation and rogue use of WMDs (nuclear and biologic), accidental worldwide war, pandemics, ecologic calamities, resource depletion, natural disasters, economic and societal chaos, etc.
There are also possible theoretical spoilers. Perhaps it is simply not possible to create intelligence or consciousness at parity with humans, for as-yet-unknown reasons, as famously argued by Roger Penrose. But recent progress in computer vision (the Google driverless car) makes me intensely skeptical of such limitations.
Primates in Space
Unfortunately, the current funding environment for science and engineering is extremely limited, so NASA and DOE have had to kill promising projects in annual and decadal reviews.
For example, two decades passed before the spectacularly successful Kepler telescope was funded. The SETI Project, formerly a part of NASA Ames, lost its funding 15 years ago. The promising Terrestrial Planet Finder was cut, and even the Hubble repair mission — another spectacular success — had to beg for funding. The vital follow-on to Hubble, the James Webb Space Telescope (JWST), has limped along with continued funding always in doubt, and the launch is now pushed back to 2018.
Manned space missions typically cost 100X the price of unmanned missions without a commensurate return. Put another way, a single manned expedition may kill scores of science-based, unmanned robotic probes and telescopes. The example du jour is the MSL/Curiosity rover now on Mars.
It cost (a mere) $2.5 billion, compared to a manned mission that would cost 100X as much, if funded. My views on manned spaceflight coincide with those of Astronomer Royal Sir Martin Rees (video). I favor humans on the Moon but not on Mars — the key difference is travel time.
Putting humans into space requires the launch of consumables (food, water, shielding, medical supplies) and engenders great concern and over-engineering to assure safe return of the astronauts. Mars missions that are being sketched out for the late 2020s would involve at least two other launches to pre-position caches of consumables.
In 2012 it is easy to tout the superior dexterity, adaptability, intelligence, and autonomy of humans over robots. In 2025 to 2030, when the earliest manned Mars missions might be launching, it is far less clear whether astronauts’ superior abilities will justify their 100-fold expense. When we hit 2100, it’s surely game-over for what I call “primates in space” — which began with the Astrochimps Ham and Enos, who flew in the early sixties.
My view is that humans or other species will go to the stars only if the Immortals (the AIs) think it is desirable and cost-effective to do so. I think they will want to transport us away from Earth to prevent our destruction.
Just as our biologists delight in the manifold diversity of Nature, I believe that the Immortals will be interested in preserving and studying us and many other species. Like anthropologists they may find value in studying our primitive culture.
If they decide to transport us, they will easily be able to do so. An old idea (on which I based an unpublished novel that my cell biologist son grew up with) is simply to transport the DNA sequences of a collection of humans and other animals to a remote in vitro fertilization machine constructed from local material in a distant world.
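As a sanity check on the idea of transmitting genomes rather than bodies, consider the raw data involved. The arithmetic below is my own rough illustration (a human genome of about 3.1 billion base pairs, 2 bits per base for A/C/G/T), not figures from the article:

```python
# Rough estimate of the raw data needed to transmit one human genome.
# Assumptions (illustrative): ~3.1 billion base pairs, 2 bits per base.

base_pairs = 3.1e9
bits_per_base = 2            # four possible bases: A, C, G, T

total_bits = base_pairs * bits_per_base
total_bytes = total_bits / 8
total_megabytes = total_bytes / 1e6

print(f"raw genome size: ~{total_megabytes:.0f} MB")
```

Under these assumptions an uncompressed human genome fits in well under a gigabyte — a trivially small payload for an interstellar transmission compared with launching living passengers and their consumables.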
So, what’s to become of humanity? My view is that we primates will be on Planet Earth for a long time (even if augmented). My hope is that we mature into a wise old race of beings living in harmony with our biosphere.
Humans may even achieve immortality as predicted by Aubrey de Grey’s SENS (but perhaps not on his ambitious timescale).
I glossed over the crucial notion of whether the Immortal AIs will share our values or our feelings — and the key issue of Friendly AI. I can only prognosticate that they will share our values over the next several decades of development. In doing so (by incorporating human values into their utility functions or emulating human emotions) they will assure their commercial success as assistants (Siri and Asimo), researchers (Watson), or drivers of our cars (the Google car). (I’m personally skeptical of the efficacy of friendly AI long-term, concurring with Hugo de Garis; but read Steve Omohundro’s defense here.)
To paraphrase the quote from Lee Valentine that closes Garretson’s article: Mine the sky (by the AIs). Defend the Earth (by humans now, later by AIs). Settle the stars (by the Immortals — our mind children — if we do our job as good parents).