Response to ‘The Singularity Is Always Near’
May 3, 2006 by Ray Kurzweil
In “The Singularity Is Always Near,” an essay in The Technium, an online “book in progress,” author Kevin Kelly critiques arguments on exponential growth made in Ray Kurzweil’s book, The Singularity Is Near. Kurzweil responds.
Allow me to clarify the metaphor implied by the term “singularity.” As applied to future human history, the metaphor refers not to a point of infinity, but rather to the event horizon surrounding a black hole. Densities are not infinite at the event horizon but merely large enough that it is difficult to see past the event horizon from outside.
I say difficult rather than impossible because the Hawking radiation emitted from the event horizon is likely to be quantum entangled with events inside the black hole, so there may be ways of retrieving the information. This was the concession made recently by Hawking. However, without getting into the details of this controversy, it is fair to say that seeing past the event horizon is difficult (impossible from a classical physics perspective) because the gravity of the black hole is strong enough to prevent classical information from inside the black hole getting out.
We can, however, use our intelligence to infer what life is like inside the event horizon even though seeing past the event horizon is effectively blocked. Similarly, we can use our intelligence to make meaningful statements about the world after the historical singularity, but seeing past this event horizon is difficult because of the profound transformation that it represents.
So discussions of infinity are not relevant. You are correct that exponential growth is smooth and continuous. From a mathematical perspective, an exponential looks the same everywhere, and this applies to the exponential growth of the power (as expressed in price-performance, capacity, bandwidth, etc.) of information technologies. However, despite being smooth and continuous, exponential growth is nonetheless explosive once the curve reaches transformative levels. Consider the Internet. When the Arpanet went from 10,000 nodes to 20,000 in one year, then to 40,000 and then 80,000, it was of interest only to a few thousand scientists. When, ten years later, it went from 10 million nodes to 20 million, then 40 million and 80 million, the curve looked identical (especially when viewed on a log plot), but the consequences were profoundly more transformative. There is a point in the smooth exponential growth of these different aspects of information technology when they transform the world as we know it.
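The observation that the two eras look identical on a log plot, while differing enormously in absolute impact, can be sketched numerically (the node counts below are the approximate figures from the text):

```python
# Two eras of network growth: each doubles annually, so on a log plot
# both series are straight lines with the same slope.
arpanet = [10_000 * 2**k for k in range(4)]       # 10k -> 80k nodes
internet = [10_000_000 * 2**k for k in range(4)]  # 10M -> 80M nodes

# The year-over-year ratios are identical in both eras, which is why
# the curves "look identical" on a log plot.
ratios_a = [b / a for a, b in zip(arpanet, arpanet[1:])]
ratios_i = [b / a for a, b in zip(internet, internet[1:])]
print(ratios_a == ratios_i)  # True

# ...but the absolute growth per year differs by a factor of 1000.
print(internet[1] - internet[0])  # 10000000 new nodes in one year
print(arpanet[1] - arpanet[0])    # 10000 new nodes in one year
```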
You cite the extension made by Kevin Drum of the log-log plot that I provide of key paradigm shifts in biological and technological evolution (which appears on page 17 of The Singularity Is Near). This extension is utterly invalid; you cannot extend a log-log plot in this way, for just the reasons you cite. The only straight line that is valid to extend on a log plot is one representing exponential growth, with the time axis on a linear scale and the value (such as price-performance) on a log scale. Then you can extend the progression, but even here you have to make sure that the paradigms to support this ongoing exponential progression are available and will not saturate. That is why I discuss at length the paradigms that will support ongoing exponential growth of both hardware and software capabilities. But it is not valid to extend the straight line when the time axis is on a log scale. The only point of these graphs is that there has been acceleration in paradigm shift in biological and technological evolution.
If you want to extend this type of progression, then you need to put time on a linear x axis and the number of years (for the paradigm shift or for adoption) as a log value on the y axis. Then it may be valid to extend the chart. I have a chart like this on page 50 of the book.
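As a minimal sketch of this valid form of extrapolation, assume paradigm-shift intervals that halve with each shift (the specific numbers below are illustrative, not taken from the book):

```python
import math

# Hypothetical intervals (in years) between successive paradigm shifts;
# each interval is roughly half the one before it.
intervals = [40.0, 20.0, 10.0, 5.0]

# With shift index on a linear x axis and log(interval) on the y axis,
# a halving trend is a straight line, so a linear fit/extension is valid.
logs = [math.log2(y) for y in intervals]
slope = (logs[-1] - logs[0]) / (len(logs) - 1)  # approx -1: one halving per step

# Extrapolating the straight line one more step predicts the next interval.
next_interval = 2 ** (logs[-1] + slope)  # approx 2.5 years
print(next_interval)
```

The same straight-line extension would be meaningless if the x axis were also logarithmic, which is the flaw in the extension cited above.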
This acceleration is a key point. These charts show that technological evolution emerges smoothly from the biological evolution that created the technology-creating species. You mention that an evolutionary process can create greater complexity—and greater intelligence—than existed prior to the process. And it is precisely that intelligence-creating process that will go into hyperdrive once we can master, understand, model, simulate, and extend the methods of human intelligence through reverse-engineering it and applying these methods to computational substrates of exponentially expanding capability.
That chimps are just below the threshold needed to understand their own intelligence is a result of the fact that they lack the prerequisites to create technology. Only a few small genetic changes, comprising a few tens of thousands of bytes of information, distinguish us from our primate ancestors: a bigger skull (allowing a larger brain), a larger cerebral cortex, and a workable opposable appendage. There were a few other changes that other primates share to some extent, such as mirror neurons and spindle cells.
As I pointed out in my Long Now talk, a chimp’s hand looks similar, but the pivot point of the thumb does not allow facile manipulation of the environment. In contrast, our human ability to look inside the human brain and to model, simulate, and recreate the processes we encounter there has already been demonstrated. The scale and resolution of these simulations will continue to expand exponentially. I make the case that we will reverse-engineer the principles of operation of the several hundred information-processing regions of the human brain within about twenty years, and then apply these principles (along with the extensive tool kit we are creating through other means in the AI field) to computers that will be many times (by the 2040s, billions of times) more powerful than needed to simulate the human brain.
You write that “Kurzweil found that if you make a very crude comparison between the processing power of neurons in human brains and the processing powers of transistors in computers, you could map out the point at which computer intelligence will exceed human intelligence.” That is an oversimplification of my analysis. In the book I provide four different approaches to estimating the amount of computation required to simulate all regions of the human brain, based on actual functional recreations of brain regions. These all come up with answers in the same range, from 10^14 to 10^16 cps for creating a functional recreation of all regions of the human brain, so I’ve used 10^16 cps as a conservative estimate.
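To see how an estimate of this kind translates into a date, here is a rough worked calculation; the 2006 price-performance figure and the doubling time below are illustrative assumptions, not values from the text:

```python
import math

# When does exponentially growing price-performance reach the ~1e16 cps
# estimate for a functional simulation of the brain?
# Illustrative assumptions (not from the text): ~1e11 cps per $1,000
# in 2006, doubling roughly every 1.2 years.
cps_2006 = 1e11
target = 1e16
doubling_years = 1.2

doublings_needed = math.log2(target / cps_2006)  # ~16.6 doublings
year = 2006 + doublings_needed * doubling_years
print(round(year))  # roughly 2026, under these assumptions
```

Changing the assumed starting figure or doubling time shifts the crossover date, but the exponential form means it shifts only by years, not decades.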
This refers only to the hardware requirement. As noted above, I have an extensive analysis of the software requirements. Reverse-engineering the human brain is not the only source of intelligent algorithms (and, in fact, has not been a major source at all until just recently, because we did not have scanners that could see into the human brain with sufficient resolution). Rather, my analysis of reverse-engineering the human brain is along the lines of an existence proof that we will have the software methods underlying human intelligence within a couple of decades.
Another important point in this analysis is that the design of the human brain is about a billion times simpler than the actual complexity we find in the brain. This is because the brain (like all biology) is a probabilistic, recursively expanded fractal. This discussion goes beyond what I can write here (although it is in the book). We can ascertain the complexity of the design of the human brain because the design is contained in the genome, and I show that the genome (including non-coding regions) holds only about 30 to 100 million bytes of compressed information, due to the massive redundancies in the genome.
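The arithmetic behind this compression claim can be sketched as follows; the 2-bits-per-base raw figure is a standard calculation, while the mid-range design estimate and the billion-fold multiplier are taken from the text:

```python
# Raw information content of the genome: ~3.2 billion base pairs,
# 2 bits each (four possible bases A, C, G, T).
raw_mb = 3.2e9 * 2 / 8 / 1e6
print(raw_mb)  # 800.0 MB uncompressed

# The text's estimate after removing redundancy: 30-100 million bytes.
design_bytes = 50e6  # mid-range figure

# "About a billion times simpler" then implies realized brain complexity
# on the order of design_bytes * 1e9.
implied_brain_bytes = design_bytes * 1e9
print(f"{implied_brain_bytes:.0e}")  # 5e+16 bytes
```

So a design of tens of megabytes, recursively and probabilistically expanded, yields a structure whose realized state is on the order of tens of petabytes.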
So in summary, I agree that the singularity is not a discrete event. A single point of infinite growth or capability is not the metaphor being applied. Yes, the exponential growth of all facets of information technology is smooth, but it is nonetheless explosive and transformative.
© 2006 Ray Kurzweil