book review | The Intelligent Universe: Foreword by Ray Kurzweil
February 2, 2007
The universe might end in intelligent life, not a Big Crunch or oblivion in an infinite expansion, says James Gardner in The Intelligent Universe: AI, ET, and the Emerging Mind of the Cosmos (February 2007).
Gardner envisions a final state of the cosmos in which a highly evolved form of group intelligence — a cosmic community — marshals the assets of matter and energy bequeathed by the Big Bang and engineers a cosmic renewal: the birth of a new baby universe endowed with the same life-giving propensity that our cosmos enjoys.
“My first book, Biocosm, was one long argument that the cosmos possesses a utility function (i.e., some value or outcome that is being maximized) and that the specific utility function of our cosmos is propagation of baby universes exhibiting the same life-friendly physical qualities as their parent-universe, a sort of cosmic reproductive organ,” Gardner said.
“The purpose of this book is to tell an extraordinary story. You will meet a senior NASA official whose passion is investigating the probable impact on religion of the discovery of extraterrestrial intelligence; a computer scientist who is coaxing software to undergo a special kind of Darwinian evolution, thus becoming ever more adept and financially valuable over time; and a technology prophet who, in my view, is the true contemporary heir to Darwin’s intellectual legacy.
“You will also meet a fascinating cast of non-human players likely to have leading roles on tomorrow’s cosmic stage. They include:
• Super-smart machines capable of out-thinking humans without breaking a sweat.
• Speedy and cost-efficient interstellar probes consisting of elaborate software algorithms capable of “living” in the innards of alien computers they may encounter on far-off planets.
• Intelligent extraterrestrials, which SETI researchers have not yet discovered but whose probable existence is strongly predicted by my Biocosm hypothesis.
“The Intelligent Universe, then, is a kind of projected travelogue — an imagined future history — of the cosmic journey that lies ahead. The foundation for that projection — the defining leitmotif of that imagined future — is a vision of the deep linkage between three ostensibly separate phenomena: the appearance of life, the emergence of intelligence, and the seemingly mindless physical evolution of the cosmos. In discussing these topics, the book will not only provide news dispatches from the frontiers of cosmological science but also offer musings about the philosophical implications of emerging scientific insights for our self-image as a species.”
“Gardner has taken Gaia to its furthest conceivable magnitude: extending the role and influence of life to the stars and beyond,” said Seth Shostak, Senior Astronomer at the SETI Institute. Gardner is a Kurzweil Network contributor and also a Lifeboat Foundation Scientific Advisory Board member, along with Shostak.
Ray Kurzweil’s foreword to James Gardner’s book, The Intelligent Universe, published by New Page Books in February 2007:
Consider that the price-performance of computation has grown at a superexponential rate for over a century. The doubling time (of computations per dollar) was three years in 1900 and two years in the middle of the 20th century; and price-performance is now doubling each year.
This progression has been remarkably smooth and predictable through five paradigms of computing substrate: electromechanical calculators, relay-based computers, vacuum tubes, transistors, and now several decades of Moore’s Law (which is based on shrinking the size of key features on a flat integrated circuit).
The sixth paradigm — three-dimensional molecular computing — is already beginning to work and is waiting in the wings. We see similar smooth exponential progressions in every other aspect of information technology, a phenomenon I call the law of accelerating returns.
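The compounding effect of those shrinking doubling times can be made concrete with a toy calculation. The sketch below (Python, purely illustrative; the three doubling times are the figures quoted above, not measured data) shows how much a single decade of progress multiplies price-performance at each historical rate.

```python
# Toy model of the law of accelerating returns: total growth over a
# period, given a fixed doubling time. The doubling times (3 years in
# 1900, 2 years mid-century, 1 year today) are the text's own estimates.

def growth_factor(years: float, doubling_time: float) -> float:
    """Total multiplication of price-performance over `years`."""
    return 2.0 ** (years / doubling_time)

# A decade of progress at each historical doubling time:
for label, dt in [("1900 (3-yr doubling)", 3.0),
                  ("1950 (2-yr doubling)", 2.0),
                  ("today (1-yr doubling)", 1.0)]:
    print(f"{label}: x{growth_factor(10, dt):,.0f} per decade")
```

A decade at a one-year doubling time yields a factor of 1,024, versus roughly 10 at the 1900 rate, which is why the curve looks superexponential when the doubling time itself shrinks.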
Where is all this headed? It is leading inexorably to the intelligent universe that Jim Gardner envisions.
Consider the following: As with all of the other manifestations of information technology, we are also making exponential gains in reverse-engineering the human brain. The spatial resolution in 3D volume of in-vivo brain scanning is doubling each year, and the latest generation of scanners is capable of imaging individual interneuronal connections and seeing them interact in real time.
For the first time, we can see the brain create our thoughts, and also see our thoughts create our brain (that is, we create new spines and synapses as we learn). The amount of data we are gathering about the brain is doubling each year, and we are showing that we can turn this data into working models and simulations.
Already, about 20 regions of the human brain have been modeled and simulated. We can then apply tests to the simulations and compare these results to the performance of the actual human brain regions. These tests have had impressive results, including a simulation of the cerebellum, the region responsible for physical skill, which comprises about half of the neurons in the brain.
I make the case in my book (The Singularity Is Near) that we will have models and simulations of all of the several hundred regions, including the cerebral cortex, within 20 years. Already, IBM is building a detailed simulation of a substantial portion of the cerebral cortex. The result of this activity will be greater insight into ourselves, as well as a dramatic expansion of the AI tool kit to incorporate all of the methods of human intelligence.
By 2029, sufficient computation to simulate the entire human brain, which I estimate at about 10^16 (10 million billion) calculations per second (cps), will cost about a dollar. By that time, intelligent machines will combine the subtle and supple skills that humans now excel in (essentially our powers of pattern recognition) with ways in which machines are already superior, such as remembering trillions of facts accurately, searching quickly through vast databases, and downloading skills and knowledge.
But this will not be an alien invasion of intelligent machines. It will be an expression of our own civilization, as we have always used our technology to extend our physical and mental reach. We will merge with this technology by sending intelligent nanobots (blood-cell-sized computerized robots) into our brains through the capillaries to intimately interact with our biological neurons. If this scenario sounds very futuristic, I would point out that we already have blood-cell-sized devices that are performing sophisticated therapeutic functions in animals, such as curing Type I diabetes and identifying and destroying cancer cells. We already have a pea-sized device approved for human use that can be placed in patients’ brains to replace the biological neurons destroyed by Parkinson’s disease, the latest generation of which allows new software to be downloaded to the neural implant from outside the patient.
If you consider what machines are already capable of, and apply a billion-fold increase in price-performance and capacity of computational technology over the next quarter century (while at the same time we shrink the key features of both electronic and mechanical technology by a factor of 100,000), you will get some idea of what will be feasible in 25 years.
By the mid-2040s, the nonbiological portion of the intelligence of our human-machine civilization will be about a billion times greater than the biological portion (we have about 10^26 cps among all human brains today; nonbiological intelligence in 2045 will provide about 10^35 cps). Keep in mind that, as this happens, our civilization will become capable of performing more ambitious engineering projects.
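The 10^26 cps figure for biological intelligence follows from the per-brain estimate given earlier. A quick consistency check (Python; the assumption of roughly 10^10 humans is mine, an order-of-magnitude placeholder):

```python
# Consistency check on the text's own order-of-magnitude estimates:
# ~10^16 cps per human brain, and (my assumption) ~10^10 humans,
# giving ~10^26 cps of biological intelligence in total. The projected
# 10^35 cps of nonbiological intelligence in 2045 is then a
# billion-fold (10^9) larger, as the text states.
cps_per_brain = 1e16
num_humans = 1e10                              # rough placeholder
biological_total = cps_per_brain * num_humans  # ~1e26 cps
nonbiological_2045 = 1e35                      # text's projection
ratio = nonbiological_2045 / biological_total  # ~1e9
print(f"biological ≈ {biological_total:.0e} cps, ratio ≈ {ratio:.0e}")
```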
One of these projects will be to keep this exponential growth of computation going. Another will be to continually redesign the source code of our own intelligence. We cannot easily redesign human intelligence today, given that our biological intelligence is largely hard-wired. But our future — largely nonbiological — intelligence will be able to apply its own intelligence to redesign its own algorithms.
So what are the limits of computation? I show in my book that the ultimate one-kilogram computer (less than the weight of a typical notebook computer today) could perform about 10^42 cps if we want to keep the device cool, and about 10^50 cps if we allow it to get hot. By hot, I mean the temperature of a hydrogen bomb going off, so we are likely to asymptote to a figure just short of 10^50 cps. Consider, however, that by the time we get to 10^42 cps per kilogram of matter, our civilization will possess a vast amount of intelligent engineering capability to figure out how to get to 10^43 cps, and then 10^44 cps, and so on.
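One standard route to a bound of order 10^50 operations per second for a kilogram of matter is the Margolus-Levitin theorem, which limits a system of energy E to at most 2E/(pi * hbar) elementary operations per second. The sketch below applies it to the full mass-energy of one kilogram; this is my illustration of where such a figure can come from, not necessarily the derivation used in the book.

```python
import math

# Margolus-Levitin bound: at most 2E / (pi * hbar) operations per
# second for a system of energy E. Applying it to the entire
# mass-energy of 1 kg (E = m c^2) lands close to the ~10^50 cps
# "hot computing" figure quoted above.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m = 1.0                  # kilograms

E = m * c**2                         # ~9.0e16 joules
ops_per_sec = 2 * E / (math.pi * hbar)
print(f"~{ops_per_sec:.1e} ops/sec per kilogram")
```

The result is about 5.4 x 10^50 operations per second, i.e. "just short of" an order of magnitude above 10^50 only if the energy could be fully devoted to computation, which is why the hot-computing figure is an upper limit rather than an engineering target.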
So what happens then? Once we saturate the ability of matter and energy to support computation, continuing the ongoing expansion of human intelligence and knowledge (which I see as the overall mission of our human-machine civilization), will require converting more and more matter into this ultimate computing substrate, sometimes referred to as “computronium.”
What is that limit? The overall solar system, which is dominated by the sun, has a mass of about 2 × 10^30 kilograms. If we apply our 10^50 cps per kilogram limit to this figure, we get a crude estimate of 10^80 cps for the computational capacity of our solar system. There are some practical considerations here, in that we won’t want to convert the entire solar system into computronium, and some of it is not suitable for this purpose anyway. If we devote 1/20th of 1 percent (0.0005) of the matter of the solar system to computronium, we get capacities of 10^69 cps for “cold” computing and 10^77 cps for “hot” computing. I show in my book how we will get to these levels using the resources in our solar system within about a century.
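These solar-system estimates follow directly from multiplying the inputs given above. A short check (Python; every number is one of the text's own order-of-magnitude figures):

```python
# Reproducing the solar-system computronium estimates from the text's
# own inputs (all order-of-magnitude figures, not measurements).
solar_mass_kg = 2e30       # mass of the solar system (sun-dominated)
cold_cps_per_kg = 1e42
hot_cps_per_kg = 1e50
fraction = 0.0005          # 1/20th of 1 percent devoted to computronium

computronium_kg = fraction * solar_mass_kg        # ~1e27 kg
cold_total = computronium_kg * cold_cps_per_kg    # ~1e69 cps
hot_total = computronium_kg * hot_cps_per_kg      # ~1e77 cps
print(f"cold: {cold_total:.0e} cps, hot: {hot_total:.0e} cps")
```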
I’d say that’s pretty rapid progress. Consider that in 1850, a state-of-the-art method to transmit messages was the Pony Express, and calculations were performed with an ink stylus on paper. Only 250 years later, we will have vastly expanded the intelligence of our civilization. Just taking the 10^69 cps figure, if we compare that to the 10^26 cps figure, which represents the capacity of all human biological intelligence today, that will represent an expansion by a factor of 10^43 (10 million trillion trillion trillion).
Now for the intelligent universe. At this point, the ongoing expansion of our intelligence will require moving out into the rest of the universe. Indeed, this process will start before we saturate the resources in our midst. When this happens, we will immediately confront a key issue — the speed of light — which we understand to be the cosmic speed limit. But what is it a speed limit for? We can easily cite examples of phenomena that exceed the speed of light. For example, we know the universe to be expanding, and the speed with which galaxies recede from each other exceeds the speed of light if the distance between the two galaxies is greater than what is called the Hubble distance.
But the speed of light, as postulated by Einstein in his special theory of relativity, represents a limit on the speed with which we can transmit information. The phenomenon of receding galaxies does not violate Einstein’s theory because it is caused by space expanding, rather than the galaxies moving through space. As such, it does not help us to transmit information at speeds faster than the speed of light.
Another phenomenon that appears to exceed the speed of light is quantum disentanglement of two entangled particles. Two particles created together may be “quantum entangled,” meaning that if we resolve the ambiguity of an undetermined property (such as the phase of its spin) in one of the paired particles (by measuring it), it will also be resolved in the other particle as the same value, and at exactly the same time. There is the appearance of some sort of communication link between the two particles, and this phenomenon has been experimentally measured at many times the speed of light. But again, this does not allow us to transmit information (such as a file), because what is being “communicated” by quantum disentanglement is not information, but quantum randomness. As such, it can be used to generate profoundly random encryption codes (and that application has already been exploited in a new generation of quantum encryption devices), but it does not allow faster-than-light communication.
There are suggestions that the speed of light has changed slightly. In 2001, astronomer John Webb presented results that suggested that the speed of light may have changed by 4.5 parts out of 10^8 over the past 2 billion years. These observations need confirmation. That may not seem like much of a change, but it is the nature of engineering to take a subtle effect and amplify it. So perhaps there are ways to engineer a change in the speed of light.
The theory that the early universe went through a rapid expansion in an inflationary period does postulate a speed far greater than the speed of light, so we may be able to find an engineering approach to harness the conditions that existed in the early universe.
The most compelling idea of circumventing the speed of light is not to change it at all, but simply to find shortcuts to places in the universe that seem to be far away. The theory of general relativity does not rule out the existence of wormholes in time-space that could allow us to travel to a far-off location in a short period of time. California Institute of Technology physicists Michael Morris, Kip Thorne, and Uri Yurtsever have described theoretical methods to engineer wormholes to get to faraway locations in a brief period of time. The amount of energy required might make it difficult to set up a passageway for biological humans to pass through, but our exploration and colonization of the universe requires only nanobots.
Physicists David Hochberg and Thomas Kephart have shown how gravity was strong enough in the very early universe to have provided the energy required to spontaneously create massive numbers of self-stabilizing wormholes. A significant portion of these wormholes is likely to still be around and may be pervasive, providing a vast network of corridors that reach far and wide throughout the universe. It might be easier to discover and use these natural wormholes than to create new ones.
We have to regard these proposals to exceed or bypass the speed of light as speculative. But while this may be regarded as an interesting intellectual reflection today, it will be the primary issue confronting human civilization a century from now. And keep in mind that we’re talking about a civilization that will be trillions of trillions of times more capable than we are today. So one thing we can be confident of is that if there is any way to transmit devices and information at speeds exceeding the speed of light (or circumventing it through wormholes), our future civilization will be both motivated and able to discover and exploit that insight.
The price-performance of computation went from 10^-5 to 10^8 cps per thousand dollars in the 20th century. We also went from about a million dollars to a trillion dollars in the amount of capital devoted to computation, so overall progress in nonbiological intelligence went from 10^-2 to 10^17 cps in the 20th century, which is still short of the human biological figure of 10^26 cps. We will achieve around 10^69 cps by the end of the 21st century. If we can circumvent the speed of light, we only need about another 20 orders of magnitude to convert the entire universe into computronium, and that can be done well within another century. On the other hand, if the speed of light remains unperturbed by the vast intelligence that will seek to overcome it, it will take billions of years. But it will still happen.
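The 20th-century totals above combine per-dollar price-performance with the total capital devoted to computation. A quick check of that arithmetic (Python; all inputs are the text's own figures):

```python
# Total nonbiological computation = (cps per thousand dollars) x
# (capital in thousands of dollars), using the text's own numbers.
cps_per_kilodollar_1900 = 1e-5
cps_per_kilodollar_2000 = 1e8
capital_1900 = 1e6            # dollars devoted to computation in 1900
capital_2000 = 1e12           # dollars devoted to computation in 2000

total_1900 = cps_per_kilodollar_1900 * (capital_1900 / 1e3)  # ~1e-2 cps
total_2000 = cps_per_kilodollar_2000 * (capital_2000 / 1e3)  # ~1e17 cps
print(f"{total_1900:.0e} cps -> {total_2000:.0e} cps over the century")
```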
I make this case more fully in my book, and Jim makes it quite forcefully in this book. It is remarkable to me that almost all of the discussions of cosmology fail to mention the role of intelligence. In the common cosmological view, intelligence is just a bit of froth, something interesting that happens on the sidelines of the great cosmic story. But in the standard view, whether the universe ends in fire (a great crunch and a new Big Bang), in ice (an ever-expanding and ultimately dead universe), or somewhere in between depends only on measures of dark matter, dark energy, and other parameters we have yet to discover. That the story of the universe is a story yet to be written by the intelligence it will spawn is almost never mentioned. This book will help to change the common “unintelligent” view.
So what will we do when our intelligence is in the range of a googol (10^100) cps? One thing we may do is to engineer new universes. Similarly, our universe may be the creation of some superintelligences in another universe. In this case, there was an intelligent designer of our universe — that designer would be the evolved intelligence of some other universe that created ours. Perhaps our universe is a science fair experiment of a student in another universe. (Reading the news of the day, you might get the impression that this erstwhile adolescent superintelligence who designed our universe is not going to get a very good grade on his or her project.)
But the evolution of intelligence here on Earth is actually going very well. All of the vagaries (and tragedies) of human history, such as two world wars, the cold war, the great depression, and other notable events, did not make even the slightest dent in the ongoing exponential progressions I previously mentioned.
Clearly, the universe we live in does appear to be an intelligent design, in that the constants in nature are precisely those required for the universe to have grown in complexity. If the cosmological constant, the Planck constant, and the many other constants of physics were set to just slightly different values, atoms, molecules, stars, planets, organisms, humans, and this book would have been impossible. As Jim Gardner says, “A multitude of…factors are fine-tuned with fantastic exactitude to a degree that renders the cosmos almost spookily bio-friendly.” How the rules of the universe happened to be just so is a profound question, one that Gardner explores in fascinating detail.
Or perhaps our universe is not someone’s science experiment, but rather the result of an evolutionary process. Leonard Susskind, one of the developers of string theory, and Lee Smolin, a theoretical physicist and expert on quantum gravity, have suggested that universes give rise to other universes in a natural, evolutionary process that gradually refines the natural constants. Smolin postulates that universes best able to produce black holes are the ones that are most likely to reproduce. Smolin explains, “Reproduction through black holes leads to a multiverse in which the conditions for life are common — essentially because some of the conditions life requires, such as plentiful carbon, also boost the formation of stars massive enough to become black holes.”[1]
As an alternative to Smolin’s concept of it being a coincidence that black holes and biological life both need similar conditions (such as large amounts of carbon), Jim Gardner and I have put forth the conjecture that it is precisely the intelligence that derives from biological life and its technological creations that is likely to engineer new universes with intelligently set parameters. In this thesis, there is still an important role for black holes, because black holes represent the ultimate computer. Now that Stephen Hawking has conceded that we can get information out of a black hole (because the particles comprising the Hawking radiation remain quantum-entangled with particles flying into the black hole), the extreme density of matter and energy in a black hole makes it the ultimate computer. If we think of evolving universes as the ultimate evolutionary algorithm, the utility function (that is, the property being optimized in an evolutionary process) would be its ability to produce intelligent computation.
This line of reasoning sheds some light on the Fermi paradox. The Drake formula provides a means to estimate the number of intelligent civilizations in a galaxy or in the universe. Essentially, the likelihood of a planet evolving biological life that has created sophisticated technology is tiny, but there are so many star systems that there should still be many millions of such civilizations. Carl Sagan’s analysis of the Drake formula concludes that there should be around a million civilizations with advanced technology in our galaxy, while Frank Drake estimated around 10,000. And there are many billions of galaxies. Yet we don’t notice any of these intelligent civilizations, hence the paradox that Fermi described in his famous comment. As Jim Gardner and others have asked, where is everyone?
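The Drake formula itself is a simple product of factors. The sketch below implements it; the parameter values are illustrative placeholders of my own, not the estimates Sagan or Drake actually used, and varying them is exactly what swings the answer from a handful of civilizations to millions.

```python
# Minimal sketch of the Drake equation:
#   N = R* * fp * ne * fl * fi * fc * L
# Each factor below is an illustrative placeholder, not a value from
# Sagan's or Drake's analyses.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilizations in a galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

n = drake(R_star=10,   # star formation rate (stars per year)
          f_p=0.5,     # fraction of stars with planets
          n_e=2,       # habitable planets per planetary system
          f_l=0.5,     # fraction of those where life arises
          f_i=0.1,     # fraction of those evolving intelligence
          f_c=0.1,     # fraction developing detectable technology
          L=10_000)    # years a civilization remains detectable
print(f"N ≈ {n:,.0f} civilizations in the galaxy")
```

With these placeholders N comes out in the hundreds; stretching the last factor, L, by a few orders of magnitude (as optimistic analyses do) is what produces estimates in the millions.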
We can readily explain why any one of these civilizations might be quiet. Perhaps it destroyed itself. Perhaps it is following the Star Trek ethical guideline to avoid interference with primitive civilizations (such as ours). These explanations make sense for any one civilization, but it is not credible, in my view, that every one of the billions of technology-capable civilizations that should exist has destroyed itself or decided to remain quiet.
The SETI project is sometimes described as trying to find a needle (evidence of a technical civilization) in a haystack (all the natural signals in the universe). But actually, any technologically sophisticated civilization would be generating trillions of trillions of needles (noticeably intelligent signals). Even if they have switched away from electromagnetic transmissions as a primary form of communication, there would still be vast artifacts of electromagnetic phenomena generated by all of the many computational and communication processes that such a civilization would need to engage in.
Now let’s factor in the law of accelerating returns. The common wisdom (based on what I call the intuitive linear perspective) is that it would take many thousands, if not millions, of years for an early technological civilization to become capable of technology that spanned a solar system. But as I argued previously, because of the explosive nature of exponential growth, it will only take a quarter of a millennium (in our own case) to go from sending messages on horseback to saturating the matter and energy in our solar system with sublimely intelligent processes.
According to most analyses of the Drake equation, there should be billions of civilizations, and a substantial fraction of these should be ahead of us by millions of years. That’s enough time for many of them to be capable of vast galaxy-wide technologies. So how can it be that we haven’t noticed any of the trillions of trillions of “needles” that each of these billions of advanced civilizations should be creating?
My own conclusion is that they don’t exist. If it seems unlikely that we would be in the lead in the universe, here on the third planet of a humble star in an otherwise undistinguished galaxy, it’s no more perplexing than the existence of our universe with its ever so precisely tuned formulas to allow life to evolve in the first place.
It is not possible to do justice to this dilemma in a foreword. It would take a book to do that, and Jim Gardner has written that book. Muriel Rukeyser wrote, “The universe is made of stories, not atoms,” and in this book, Gardner tells us the universe’s own fascinating and unfinished story. Perhaps even more intriguing, Gardner relays in a clear and compelling manner the gripping stories of the rich, intellectual ferment from which our understanding of the universe is emerging.
1. “Smolin vs. Susskind: The Anthropic Principle.” Edge: The Third Culture, August 18, 2004.