Ray Kurzweil responds to “Ray Kurzweil does not understand the brain”

August 20, 2010 by Ray Kurzweil

While most of PZ Myers’ comments (in his blog post entitled “Ray Kurzweil does not understand the brain” posted on Pharyngula on August 17, 2010) do not deserve a response, I do want to set the record straight, as he completely mischaracterizes my thesis.

For starters, I said that we would be able to reverse-engineer the brain sufficiently to understand its basic principles of operation within two decades, not one decade, as Myers reports.

Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain.

I mentioned the genome in a completely different context. I presented a number of arguments as to why the design of the brain is not as complex as some theorists have claimed. This was in response to the notion that it would require trillions of lines of code to create a comparable system. The argument from the amount of information in the genome is one of several such arguments. It is not a proposed strategy for accomplishing reverse-engineering. It is an argument from information theory, which Myers obviously does not understand.

The amount of information in the genome (after lossless compression, which is feasible because of the massive redundancy in the genome) is about 50 million bytes (down from 800 million bytes in the uncompressed genome). It is true that the information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.
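
To make the arithmetic explicit, here is a back-of-the-envelope sketch, assuming the commonly cited figure of roughly 3.2 billion base pairs at two bits per base; the 50-million-byte compressed figure is the estimate cited above, not computed here:

```python
# Back-of-the-envelope estimate of the genome's information content.
# Assumes ~3.2 billion base pairs, each one of 4 bases (log2(4) = 2 bits).

BASE_PAIRS = 3.2e9          # approximate length of the human genome
BITS_PER_BASE = 2           # 4 possible bases -> 2 bits per base

uncompressed_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"Uncompressed: ~{uncompressed_bytes / 1e6:.0f} million bytes")
# -> ~800 million bytes, matching the figure above

COMPRESSED_BYTES = 50e6     # lossless-compression estimate cited above
ratio = uncompressed_bytes / COMPRESSED_BYTES
print(f"Implied compression ratio: ~{ratio:.0f}x")
# -> ~16x, reflecting the genome's massive redundancy
```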

It is true that the brain gains a great deal of information by interacting with its environment – it is an adaptive learning system. But we should not confuse the information that is learned with the innate design of the brain. The question we are trying to address is: what is the complexity of this system (that we call the brain) that makes it capable of self-organizing and learning from its environment? The original source of that design is the genome (plus a small amount of information from the epigenetic machinery), so we can estimate the amount of design information in this way.

But we can take a much more direct route to understanding the amount of information in the brain’s innate design, which I also discussed: to look at the brain itself. There, we also see massive redundancy. Yes, there are trillions of connections, but they follow massively repeated patterns.

For example, the cerebellum (which has been modeled, simulated and tested) — the region responsible for part of our skill formation, like catching a fly ball — contains a module of four types of neurons. That module is repeated about ten billion times. The cortex, a region that only mammals have and that is responsible for our ability to think symbolically and in hierarchies of ideas, also has massive redundancy. It has a basic pattern-recognition module that is considerably more complex than the repeated module in the cerebellum, but that cortex module is repeated about a billion times. There is also information in the interconnections, but there is massive redundancy in the connection pattern as well.
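
The reason this redundancy matters for the complexity estimate can be made concrete with a minimal description-length sketch. The module-description size below is a hypothetical placeholder, not a measured quantity; only the scaling behavior matters:

```python
import math

# Illustrative description-length comparison: a design that repeats one
# module N times costs roughly (one module description + log2(N) bits for
# the repeat count), not N separate copies of the module description.

MODULE_DESC_BYTES = 1e6   # hypothetical size of one module's "blueprint"
REPEATS = 1e10            # cerebellar module repeated ~10 billion times

naive_bytes = MODULE_DESC_BYTES * REPEATS                   # every copy described separately
compact_bytes = MODULE_DESC_BYTES + math.log2(REPEATS) / 8  # blueprint + repeat count

print(f"Copy-by-copy description: ~{naive_bytes:.1e} bytes")
print(f"Blueprint + repeat count: ~{compact_bytes:.1e} bytes")
# The design information scales with the complexity of the repeated module,
# not with the total number of neurons or connections.
```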

Yes, the system learns and adapts to its environment. We have sufficiently high-resolution in-vivo brain scanners now that we can see how our brain creates our thoughts and see our thoughts create our brain. This type of plasticity or learning is an essential part of the paradigm and a capability of the brain’s design. The question is: how complex is the design of the system (the brain) that is capable of this level of self-organization in response to a complex environment?

To summarize, my discussion of the genome was one of several arguments for the information content of the brain prior to learning and adaptation, not a proposed method for reverse-engineering.

The goal of reverse-engineering the brain is the same as for any other biological or nonbiological system – to understand its principles of operation. We can then implement these methods using substrates other than a biochemical system that sends messages at speeds that are a million times slower than contemporary electronics. The goal of engineering is to leverage and focus the power of principles of operation once they are understood, just as we have leveraged the power of Bernoulli’s principle to create the entire world of aviation.

As for the time frame, some of my critics claim that I underestimate the complexity of the problem. I have studied these issues for over four decades, so I believe I have a good appreciation for the level of challenge. What I would say is that my critics underestimate the power of the exponential growth of information technology.

Halfway through the genome project, the project’s original critics were still going strong, pointing out that we were halfway through the 15-year project and only 1 percent of the genome had been identified. The project was declared a failure by many skeptics at this point. But the project had been doubling in price-performance and capacity every year, and at 1 percent it was only seven doublings (at one year per doubling) away from completion. It was indeed completed seven years later. Similarly, my projection of a worldwide communication network tying together tens and ultimately hundreds of millions of people, emerging in the mid to late 1990s, was scoffed at in the 1980s, when the entire U.S. Defense Budget could only tie together a few thousand scientists with the ARPANET. But it happened as I predicted, and again this resulted from the power of exponential growth.
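
The arithmetic behind the seven doublings is straightforward; here is a quick illustrative check:

```python
import math

# How many annual doublings does a project at 1% completion
# need in order to reach 100%?

completed = 0.01                                 # 1% of the genome identified
doublings = math.ceil(math.log2(1.0 / completed))
print(doublings)  # -> 7, i.e. seven years at one doubling per year
```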

Linear thinking about the future is hardwired into our brains. Linear predictions of the future were quite sufficient when our brains were evolving. At that time, our most pressing problem was figuring out where that animal running after us was going to be in 20 seconds. Linear projections worked quite well thousands of years ago and became hardwired. But exponential growth is the reality of information technology.
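
The gap between the two modes of projection is easy to quantify. Here is a minimal sketch with purely illustrative numbers, comparing a linear extrapolation of a capability’s first-year growth with the exponential reality:

```python
# Linear vs. exponential projection of a capability that doubles yearly.
# All numbers are illustrative, not measurements.

start = 1.0
years = 10

# Linear extrapolation from the first year's growth (start -> 2*start):
linear_10y = start + years * (2 * start - start)  # grows by +1 per year
exponential_10y = start * 2 ** years              # doubles each year

print(f"Linear projection after {years} years: {linear_10y:.0f}")
print(f"Exponential reality after {years} years: {exponential_10y:.0f}")
# 11 vs. 1024 -- the linear forecast misses by two orders of magnitude.
```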

We’ve seen smooth exponential growth in the price-performance and capacity of computing devices since the 1890 U.S. census, in the capacity of wireless data networks for over 100 years, and in biological technologies since before the genome project. There are dozens of other examples. This exponential progress applies to every aspect of the effort to reverse-engineer the brain.