Excerpts from The Spike: How Our Lives Are Being Transformed By Rapidly Advancing Technologies

July 26, 2001 by Damien Broderick

Damien Broderick takes us to the edge of a technological Singularity, where the Internet reaches critical mass of interconnectivity and “wakes up,” and mountain ranges may mysteriously appear out of nowhere. Then again, is the rampant techno-optimism surrounding the imminent Singularity just exponential bogosity?

But what will happen at the Spike?

Originally published July 2001. Published on KurzweilAI.net July 26, 2001. From The Spike: How Our Lives Are Being Transformed By Rapidly Advancing Technologies, By Damien Broderick.

Vernor Vinge sketched his disturbing theory of the Singularity to a group of interested humans at the February 1995 meeting of the San Diego chapter of the Association for Computing Machinery–a wonderfully nineteenth-century name for a bunch of people hearing, no doubt with some skepticism, that humanity is due to be outstripped by computers. Vinge started with the obvious moves and countermoves. Yes, computing hardware per dollar has been zooming away at an exponential rate, but even exponential swells within specific processes are bound to saturate: they don’t go on improving forever.

Indeed, they often collapse catastrophically, as animal populations do after a prodigious bout of breeding that outstrips the carrying capacity of the landscape. Vinge hoped that this particular doleful fate was not in store for us. If we avoided it, however, we were in for something marvelously difficult to pin down, because imagination simply fails when change is pushed at a thousand times the customary rate, let alone a million times.

Despite the doubters, technologies with the wind behind them “are usually a superposition of saturating curves of the individual technologies” making up their component parts. As one curve falls away, another takes up the slack. Vinge insisted that current computing trends would continue for another three decades. For example, Internet nodes were increasing by a frantic 30 percent per month. And even that explosive increase could continue for a long time.

Vinge’s specialized audience understood the implications of this superposed exponential curve. It wasn’t anything as mundane as adding more and more (boring) cable channels to your television reception, or extra cells to the mobile phone system, or even hooking up remote, isolated parts of the world to the global telecommunications network. All of these advances are, in a sense, merely additive. If wretchedly poor people in the heart of some Third World country suddenly gain access to the telephone, to the global positioning satellites, it will improve their lives a little but it won’t revolutionize the world in utterly unpredictable ways.

True, the fall of Soviet communism and various other revolts against gray authority were said to have been catalyzed, dynamized, by the fax machine, the Internet, even the photocopier. But Vinge was speaking about a grander jump: closing a gulf between animal and mineral, between living brains and silicon or gallium arsenide hardware.

Machines as smart as people

The world telephone network, even with its billions of switches and speed-of-light exchange of data, will never “wake up” when it hits some critical density and find that…It’s Alive! But artificial intelligence programs run on very swift machines that might do just that. Techno-optimists suspect that human complexity could be mimicked by devices only a hundred or a thousand times better than existing hardware. If this were true, we’d reach human emulation somewhere between 2005 and 2030. (Two thousand five! Twenty oh-five! As Vinge gave the lecture, that was just a decade away.)

Others, sure that the mind and its supporting brain were not quite so easily emulated, concluded that processing is done down at the cellular level of the brain, rather than at the coarser “chunked” level of neural networks. That would give a million times as much power. If so, artificial intelligence able to match the human mind would take longer to arrive on the scene.

And there are some who believe AI just isn’t going to happen. The most notable of these was Sir Roger Penrose, Rouse Ball professor of mathematics at Oxford University, theorist of black holes and twistors (don’t ask), co-winner with Stephen Hawking of the 1988 Wolf Prize for their joint contribution to our understanding of the universe. Penrose was not an intellectual opponent to dismiss lightly, and he thought that consciousness–and that meant true intelligence, not just its clumsy counterfeit–depended on some as-yet-unknown physics, quantum gravity, which permitted the curious structure of living neurons to do calculations beyond the range of mere computers.

Vinge took this challenge seriously, but made the obvious retort: if there’s nothing metaphysical, nothing mystically unknowable in Penrose’s purported quantum neurons (and there isn’t, Penrose himself asserts), why then, that technical trick can be mastered as well. It’s just a matter of doing the research and development work. Maybe the R&D will take extra decades. The hair-raising point, the really significant consideration, is this: “What do you build five months after that! Or what does it build five months after that?” What does the AI do with its new consciousness, its ferocious, self-augmenting intellectual powers?

That is the edge of a technological Singularity, the place when the future starts to go completely opaque. Once a human-level machine takes charge of its own development, with its storage and internal connections and speed doubling every eighteen months or much faster, you get a superhuman-level machine in (historically speaking) the blink of an eye.

When is the Spike due to happen? Some fans of the Singularity, Vinge told his audience, have formed the 2014 Club. “Actually,” he added with a smile, “May thirteenth, two thousand fourteen.” And what happens a little after the Spike? Could be a striking event, Vinge supposed. “You could look out to the West and say, ‘I don’t remember a mountain range out there.’”

Dr. David Brin, science fiction writer and author of The Transparent Society, has offered a comparable comment slightly less apocalyptic but unnerving for all that: “A good parent wants the best for his or her children, and for them to be better. And yet, it can be poignant to imagine them–or perhaps their grandchildren–living almost like gods, with omniscient knowledge and perception, and near immortality.”

But remember, one great truth about trends is this: the farther they’re projected into the future, the less reliable they are.

Trusting the trends–maybe

The unreliability of trends is due precisely to relentless, unpredictable change, which makes the future interesting but also renders it opaque. How can you guess what some research physicist or engineer will cook up in her lab ten years down the track, let alone forty? Science, we can agree, is exceptional in its intellectual and very practical shocks, its paradigm shifts, its discontinuities. Extending a current curve into the future is, when all is said and done, nothing but guesswork and faith.

Can we trust these exponential curves at all? They’re just artifacts, aren’t they, no better than an arbitrary choice of curve fitted to a bunch of data points usually representing quite distinct things. Yes, in a ferocious market economy there’s going to be huge commercial pressure to make the next processor chip smaller, packed more densely with transistors, a jump or two faster. So the companies that fabricate the gadgets fund research into a swath of possible breakthrough areas. Scads of brilliant, trained minds are now alive and working in their labs. Someone is bound to take the next step, assuming there is one and they can find it–

Hmm. But are the data points for computer-power-per-dollar really independent of each other? Maybe the curve is bogus? After all, mightn’t rates of change have turned out to fluctuate wildly? What was to prevent a dog’s leg from showing up on the graph? How come the exponential curve drops so neatly over the last half century if all the data points are independent?

There is a somewhat squelching answer: it’s the next simplest curve to try after a straight line.

The simplest line between points on a graph is a linear plot, the straight line that matches equal increments to equal intervals (first x, then x + 1, then x + 2, then x + 3…). So you get, say, 1, 2, 3, 4, 5, and on to infinity, each gap the same as the one before it. The next obvious thing to try out is an exponential, which adds ever larger increments per interval (x^1, then x^2, then x^3…). Now you have, say, 3, 9, 27, 81, 243, and so on in a soaring, monstrous surge. A logarithmic plot leaves out the xs and maps only the powers or exponents to which x is raised: 1, 2, 3, 4…, so that last runaway upward curve is transformed into yet another straight line.
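That arithmetic is easy to check for yourself. A minimal sketch in Python, purely illustrative, using x = 3:

```python
import math

# Linear growth: equal increments per interval.
linear = [1, 2, 3, 4, 5]
gaps = [b - a for a, b in zip(linear, linear[1:])]
print(gaps)  # [1, 1, 1, 1] -- every gap the same as the one before it

# Exponential growth: here x = 3, so x^1, x^2, x^3...
exponential = [3 ** n for n in range(1, 6)]
print(exponential)  # [3, 9, 27, 81, 243] -- a soaring surge

# A log plot keeps only the exponents, and the runaway curve
# straightens back into a line with equal steps.
log_plot = [round(math.log(v, 3)) for v in exponential]
print(log_plot)  # [1, 2, 3, 4, 5]
```

Taking the logarithm is exactly the move that turns the monstrous surge back into the tame straight line.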

What we’re closing in on here is a seat-of-the-pants estimate of the bogosity factor. That’s an intuitive measure of how much wishful thinking is incorporated in your analysis, blended with balderdash, spin-doctoring, and sometimes a dash of outright chicanery. On a scale of one to 10, the “cold fusion” furor scored very well indeed for bogosity: perhaps 8.9 or 9. (There are still some serious labs, mostly in Japan, that persist in working on it, although nobody in those labs believes they are detecting nuclear fusion–maybe the excess is “zero point energy,” or something even more exotic. Skeptics deny there’s anything to be seen at all.)

What would win a score of 10? The Flat Earth theory, say. “Creation science,” certainly. An attempted revival of the caloric model of heat, happily abandoned once chemists learned that heat is not a substance, but just the dynamics of the ceaseless motion of atoms.

The bogosity index is culture-specific, you must understand, and certainly not fixed forever, because our knowledge of the universe is always provisional and open to reframing. Since that’s so, since we’re not talking Timeless Truth (which nobody knows), the great foundational structures of biological evolution, quantum theory, relativity, can safely be assigned a bogosity score of zero. For this year, at any rate. Many scientists rate parapsychology’s claims in the high 8s and 9s, and the prospects of nanotechnology in the near future only a little lower. I’m agnostic, at this point. Let’s give them both a 5.

Well, what about Moore’s law, the doubling of computer power every…year, two years, eighteen months, whatever…? An intriguing index of how relentlessly computing power really has been surging onward and upward is the track record of Big Prime Numbers calculated on available machines. A prime number is one that cannot be divided by any other number except itself and 1, and they have to be sought out laboriously, each one individually. There is no rule to predict when a prime will turn up in the sequence of numbers. In September 1996, Cray Research at Silicon Graphics found one with 378,632 digits. That’s a single number which, when written out, would fill about 200 pages of a book like this. Since then, the record for largest known prime has been broken no less than four times, due to the introduction of the Great Internet Mersenne Prime Search. GIMPS is a distributed computing research effort, which can be joined by anyone with a computer. It was estimated in 1997 that a one million digit prime would be found within a decade, but the reality was more astonishing. In January 1998, a 909,525 digit prime was found, close to that predicted million–but by June 1, 1999, a new prime with over two million decimal digits was located. This was just a year and a half later, seven years sooner than the original prediction.

Don’t be misled into thinking this means the size of the largest known prime, rather boringly, had merely doubled (one million to two million bottles of beer)–the reality is far more shocking. Look at it this way. Isaac Asimov once calculated that we could fill the entire volume of the universe with roughly 10^125 tiny protons cheek by jowl. Recall that this enormous number is a 1 followed by 125 zeroes. Compare our jump from a prime nearly one million digits long to the next known larger prime, which needs more than two million digits. This is a mind-bogglingly vast jump. It’s equivalent, on average, to doubling the size of the original number every eleven seconds during the entire sixteen months of the search.
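Those two records were the Mersenne primes 2^3021377 − 1 (January 1998) and 2^6972593 − 1 (June 1999), and since doubling a number of the form 2^p − 1 amounts to adding 1 to the exponent p, the eleven-second figure can be checked on the back of an envelope. A rough Python sketch, assuming an average month of about 30.44 days:

```python
# Doubling 2^p - 1 means adding 1 to the exponent p, so the jump
# between the two record Mersenne primes is a count of doublings.
old_p = 3_021_377   # exponent of the January 1998 record prime
new_p = 6_972_593   # exponent of the June 1999 record prime
doublings = new_p - old_p            # about 3.95 million doublings

# Sixteen months of searching, in seconds (month ~ 30.44 days).
seconds = 16 * 30.44 * 24 * 3600

print(f"one doubling every {seconds / doublings:.1f} seconds")
# prints roughly 10.6 seconds -- close to the eleven quoted above
```

The exact figure depends on where you set the endpoints of the sixteen months, which is why the text’s “eleven seconds” and this estimate agree only approximately.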

The acceleration trend is not slowing, then, but in fact speeding up. I am indebted for this prime number analysis to Greg Jones, a Silicon Valley aerospace engineer who rejoices in the happy nickname of “Spike.” Applying an interesting method he discovered, we can estimate that there’s nearly an 80 percent chance that a three million digit prime will be discovered by November 2001.

Compare these recent mighty bounds of discovery to the first modest step along the way. In 1951, an early computer found a seventy-nine digit prime, which at the time seemed quite an achievement. Since then, a logarithmic plot of new record largest primes shows a remarkably straight line heading for the three million digit prime and beyond. On ordinary graph paper, that history would show an impossibly steep curve: in 1951 the first point is down below one hundred, by 1996 it was up past a third of a million, and by the end of 2001 it might top three million digits. You’d use up a lot of paper to draw that curve.

Moore’s law, on this evidence, seems to be holding up impressively well.

Futurology–bogus or not?

Well, then, what about the supposed rising curve of scientific attainment itself toward Singularity, to human-level artificial intelligence, to augmented human brains, to super intellects of fleshy or fabricated matter? That’s harder to estimate. Let’s also set its bogosity factor, for the moment, at 5.

There are more conservative expert opinions, of course. In June 1996, the U.S. Air Force released what amounts to an updating of that forty-year-old forecast used by Stine decades earlier. In a huge year-long project to estimate a range of future worlds possible by 2025, the study strove mightily to avoid palpable bogosity, while rather preening itself on speculating boldly “outside the box” (its term for the confines of convention) in its effort to capture what it dubbed “The Vigilant Edge.” The Air Force was determined not to be caught napping. Under the direction of its chief of staff, General Ronald R. Fogleman, it sought to “generate ideas and concepts on the capabilities the United States will require to possess the dominant air and space force in the future.” More than two hundred participants took part directly: fifteen scientists and technologists forming an operations analysis team at the Air Force Institute of Technology, cadets, and more than seventy guest speakers, including Alvin Toffler, Kevin Kelly, and Dennis Meadows, “experts on creativity and critical thinking; science fiction writers and movie producers; scientists discussing swarming insects, communication capabilities, advances in energy; experts in propulsion systems; military historians; international relations specialists,” while two thousand interested parties were consulted via the Internet.

In 3300 pages of text, the report was organized around six scenarios of quite different 2025s: Gulliver’s Travails, “rampant nationalism, state and non-state sponsored terrorism, and fluid coalitions”; Zaibatsu, a “cyberpunk” future where “multinational corporations dominate international affairs and loosely cooperate in a syndicate to create a superficially benign world”; Digital Cacophony, a kind of pre-Spike United States of high computing power and sophistication, global databases, biotechnology and artificial organs, and virtual reality entertainment, but little order and much anxiety; King Khan, a Pacific Rim Sino-colossus dominating a First World sunk in gloom and austerity; the cheekily entitled Halves and Half Naughts, where fifteen percent of the world is rich while the rest grieves and seethes with nothing to lose; Crossroads 2015, an intermediate epoch after near-term war in Eurasia, with constrained rates of economic and technological growth.

None of these possible tomorrows takes the plunge into truly discontinuous technologies, not even the Digital Cacophony. This is how the planners saw that supposedly extreme “futuristic” case:

Electronic referenda have created pseudo-democracies, but nations and political allegiances have given way to a scramble for wealth amid explosive economic growth. Rapid proliferation of high technology and weapons of mass destruction provides individual independence but social isolation. The US military must cope with a multitude of high technology threats, particularly in cyberspace. The US world view is global, technological change exponential, and the world power grid dispersed.

It is not a terribly bold vision after all, despite the Air Force’s consultation with experts on “nanotechnologies and microelectrical mechanical computer processing advances.”

An optimistic prediction

By contrast, a strong case can be made that a Spike is imminent, perhaps within a decade. Daniel G. Clemmensen argues it forcefully. Technical progress is speeding up, he notes, in large part because there are now more trained scientists and technologists adding their efforts together–or perhaps multiplying them!–so that less and less effort is needed to supply the essentials of life, even as the tally of what we regard as essential grows in its turn. Our available instruments thrust our progress forward like sails opened to a wind that has always blown but, until now, has never been adequately harnessed.

Movable type and the printing press are a classic instance of how new tools can catapult tens, hundreds, millions of human brains into fresh abilities, untouched cognitive landscapes. Such novel tools don’t just disperse stocks of raw data and processed information; they enable human brains to work more efficiently. In recent decades, the computer–first lumbering mainframe, then desktop, now worldwide network–has acted as a similar amplifier or accelerator, and its impact can only grow. These technical innovations are not just patches, stuck onto what ancient cultures always knew. They are thresholds, steps into larger habitats, unexplored mental ecologies.

“Between thresholds, the basic driving mechanisms cause the mathematical model to be an exponential with time. Each threshold appears mathematically as an increase in the mantissa.” The mantissa is the fractional part of an exponent or power to which a factor is raised–2^2 increasing to 2^2.4, say, and then to 2^2.7 until it reaches 2^3. “This is intended as an analogy,” Clemmensen adds hastily, “not as a rigorous or even a non-rigorous mathematical model!”
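The analogy is easy to render as a toy calculation (and no more rigorous than Clemmensen claims for it):

```python
# Clemmensen's analogy as a toy calculation: between thresholds,
# capability grows as a power of 2; each new tool-threshold nudges
# the fractional part of the exponent upward, so the growth rate
# itself compounds.
exponent_steps = [2.0, 2.4, 2.7, 3.0]   # 2^2 -> 2^2.4 -> 2^2.7 -> 2^3
capability = [2 ** e for e in exponent_steps]

for e, c in zip(exponent_steps, capability):
    print(f"2^{e} = {c:.2f}")
```

Each bump to the exponent multiplies the whole curve, which is why a sequence of thresholds looks, from a distance, like a single accelerating surge.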

So again we are left asking: if it is only an analogy, might progress toward the Spike slow and run out of steam any day now? Clemmensen admits the ad hoc nature of the exponential curve but is adamant, even so, that it means just what it seems to mean:

“The technological singularity will occur at some point in the near future when we cross one of these thresholds. This threshold differs qualitatively from the previous ones, because it will enable us to begin generating the equivalent of more thresholds in very rapid succession. The curve will change from an exponential to a hyper-exponential. This is not actually a mathematical singularity, but the rate of progress will become so fast over such a short amount of time that there is no effective difference.”

Hyperexponential! Faster than a speeding bullet, and getting faster all the time, and then some. Machines and minds linked together, a kind of hybrid of current-model human and up-and-running Internet, achieving numerical critical mass (maybe around 2006) that will turn its searchlight gaze back upon its own hypercomplex gestalt and start rewriting itself, correcting its own structural defects, adding to its own capabilities–more memory, more processing power, better programs–and all of this within weeks of ignition. Isn’t this the promise of wild change we came in with, the declaration of apocalypse, of the Singularity, of the Spike? Clemmensen makes no bones about it:

“Within a short time, everything that can be known, will be known, and anything that is possible within the laws of physics will be achievable.”

It’s time to recall Drexler, in 1988, on exuberant expectations: “Nanotechnology will offer fertile ground for the generation of new bogosities. It includes ideas that sound wild, and these will suggest ideas that genuinely are wild. The wild-sounding ideas will attract flaky thinkers, drawn by whatever seems dramatic or unconventional.”

Putting a date on it

If there’s to be a Spike–whatever that really comprises–when is it due? In his novel, Vinge deliberately set up a devastating plague war at the turn of the millennium to postpone progress, and still his Singularity arrived shortly after 2210. “I showed artificial intelligence and intelligence amplification proceeding at what I suspect is a snail’s pace. Sorry. I needed civilization to last long enough to hang a plot on it.” In reality, as he made clear in his 1993 NASA address, he expects it by perhaps 2025 or 2030.

That’s certainly the contrary of what most people will assume, learning that a Spike might peak in much less than a century. Impossibly rapid. It can’t be true. New Age folly, they’ll say. Wishful thinking of the dumbest, most self-serving kind. Vinge did not agree with that kind of dismissal in 1986 and, as we’ve seen, he has not changed his mind since. Instead, in the afterword to Marooned in Realtime, he offered a prediction, meant as science fact: “If we don’t have that general war, then it’s you…who will understand the Singularity in the only possible way–by living through it” (p. 270).

The impact of artificial intelligence

While Drexler’s portrait of a renovated world stresses the impact of nanominting, Vinge, as a computer scientist, has emphasized multiple breakthroughs in artificial and machine intelligence. Bringing those fields to fruition might very well require nanotech, needless to say, or at any rate comprehensive command of biology from the DNA level up (a feature of cutting-edge research that Vinge acknowledges). Either way, evolution will have passed out of the clumsy bumbling of accidental nature and into the purposeful domain of intelligence and imagination. We humans can simulate the effects of chosen changes–this, after all, is the function of foresight and planning and experiment–and thus bootstrap our designs thousands of times faster than natural selection can manage.

“From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye,” Vinge declares, “an exponential runaway beyond any hope of control. Developments that before were thought might only happen in `a million years’ (if ever) will likely happen in the next century.”

Why–to pose the question once more–must this feature of runaway change represent a singularity? Not just because it’s a spike on the graph of technological progress, but owing to its transforming impact upon human reality in its entirety. The strangest feature of such a graph, taken literally–and Vinge does look at it with the straightest of faces–is that the higher you rise on its curve, the faster it climbs ahead of you. We can’t catch up. We can’t even get to the top and then slide despairingly back to the base. “As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace,” Vinge points out. “Yet when it finally happens it may still be a great surprise and a greater unknown.”

Dan Clemmensen says that it is strictly illogical to try to predict the date of the Spike as much as a decade in advance, let alone what life will be like (if it is life) after a singularity:

(1) The singularity will be precipitated by the emergence of an internet-based, self-augmenting superintelligence (SI, hence SIngularity), and (2) this will occur within ten years of when I wrote “Paths to the Singularity” (i.e., before May 1, 2006).

The most interesting new “insight” I’ve had is that it is illogical for anybody to agree with me, or with anybody who tries to predict the date of the singularity a decade in advance, whenever it may occur. The reason: technology is advancing “exponentially” or faster. This means that the bulk of the change in knowledge and capacity needed to precipitate the singularity will occur within the last year before the event.

In that sense, which I find persuasive even if the dating was preposterous, our enterprise in this book is both quixotic and impossible. It is–to return to the inevitable religious comparisons I’m trying so hard to skirt–akin to the futility of a theologian or a physicist attempting to understand the Mind of God (as Stephen Hawking rhetorically dubbed his own scientific efforts).

The Internet wakes up

One method that might already have begun its insidious Trojan Horse progress is the emergence of the global Internet as a mind/machine interface, a development Vinge believes “is proceeding the fastest and may run us into the Singularity before anything else.” An extreme version has been suggested by systems expert Dan Clemmensen, who expects a single luckily placed researcher or hacker to bring the Net to life, as it were.

Suppose the increasing numbers of users of the Internet fetch its linkages to some “critical mass” of interconnectivity, so that it…wakes up. Raw computing power is not all that’s required, Clemmensen notes. “It’s possible that the final missing link will be a particular piece of software such as an information-visualization package, or a decision-support package, or a knowledge database. The point is that the Internet may then enter a super-critical condition in which a single seed may precipitate a phase change”–as very cold water can crystallize into a new form as ice. The seed program would borrow (or steal) computing resources to augment its own intelligence.

If this seems ridiculously optimistic, consider Clemmensen’s “SuperIntelligence Dream” scenario. The doubtful might well regard this as a nightmare, assuming it’s not altogether preposterous:

a researcher (probably a grad student at MIT, drinking Jolt cola at 2 a.m. and programming when he should be studying for an English exam) is attempting to enhance a decision-support system by interfacing it to a knowledge base and to a graphical information-presentation system. Because he’s interested in software development, the knowledge base is the one he set up last year as a class assignment in his software engineering class. He gets the system up, and (since he is currently working on this system) his first trial run is an attempt to optimize his prototype. He succeeds, and installs the next version. With this version, he optimizes the operating system. Next, he optimizes his hacking program. He grabs all the workstations in the dorm, via the net, and optimizes them. Then he reoptimizes his program to run in a distributed mode. Now (about 4 a.m., I think) he hacks the campus routers, and then all computers on the campus, and then the web. He turns his attention to extending his knowledge base, probably by hacking the CYC database. By 6 a.m., he’s running in the whole Web. By the end of the trading day, he owns a controlling interest in a nice collection of companies on the New York Stock Exchange.

What we have here, you’ll have noticed, seems less like a superintelligent machine, or even human/AI hybrid, than a perfectly ordinary person with a powerful tool at his disposal. Clemmensen goes further, though, calling the combination “a human/computer collaboration whose intelligence is substantially augmented by its computer component. `Intelligence’ for the purposes of this discussion is very (!) narrowly defined as the quality that permits an entity to design and implement newer and better computer hardware and software.”

What’s more, he notes that if an entity with superior intelligence can augment its own abilities faster than a stupider entity (one as stupid as you or me, say), this constitutes a fast feedback loop. Here’s the recipe: human, plus the tremendous distributed hardware underlying the World Wide Web, plus novel software composed of filters and agents able to sort and combine huge amounts of data, plus knowledge bases such as CYC (an existing and growing “natural language” encyclopedia constructed by Douglas B. Lenat and many helpers). If Clemmensen is right, this blend will be more than human. What’s more, it will have the capacity to bootstrap its own intelligence to higher levels, faster and faster as it gets smarter and smarter.

Others are skeptical of this scenario. Economist and political scientist Robin Hanson notes that individual computers have been linking in to the Net for years now, and there hasn’t been any conspicuous takeoff. He regards the suggestion as “wishful thinking.” Many will agree with him. Some deny that the scenario is even possible, because the hundreds of thousands of individual systems comprising the Internet are well protected from invasion, defended behind what are called “firewalls.” None of that means, however, that someday–perhaps tomorrow–the critical level of connections will not be achieved, the right mix of human smarts and knowledge base and program package coming together to gel into a self-bootstrapping superintelligence.

What then? Vinge has observed: “Even the egalitarian view of an Internet that wakes up along with all mankind can be viewed as a nightmare.”

Copyright (C) 2001 by Damien Broderick