Dialogue between Ray Kurzweil, Eric Drexler, and Robert Bradbury

December 3, 2002 by K. Eric Drexler, Ray Kurzweil, Robert Bradbury

What would it take to achieve successful cryonics reanimation of a fully functioning human brain, with memories intact? A conversation at the recent Alcor Conference on Extreme Life Extension between Ray Kurzweil and Eric Drexler sparked an email discussion of this question. They agreed that despite the challenges, the brain’s functions and memories can be represented surprisingly compactly, suggesting that successful reanimation of the brain may be achievable.

E-mail dialogue on November 23, 2002. Published on KurzweilAI.net Dec. 3, 2002. Comments by Robert Bradbury added Jan. 15, 2003.

Ray Kurzweil: Eric, I greatly enjoyed our brief opportunity to share ideas (difficulty of adding bits to quantum computing, cryonics reanimation, etc.). Also, it was exciting to hear your insightful perspective on the field you founded, now that it has gone from what the mainstream regarded as beyond-the-fringe speculation to, well, mainstream science and engineering.

I had a few questions and/or comments (depending on whether I’m understanding what you said correctly). Your lecture had a very high idea density, so I may have misheard some details.

With regard to cryonics reanimation, I fully agree with you that preserving structure (i.e., information) is the key requirement, that it is not necessary to preserve cellular functionality. I have every confidence that nanobots will be able to go in and fix every cell, indeed every little machine in every cell. The key is to preserve the information. And I’ll also grant that we could lose some of the information; after all, we lose some information every day of our lives anyway. But the primary information needs to be preserved. So we need to ask, what are the types of information required?

One is to identify the neurons, including their types. This is the easiest requirement. Unless the cryonics process has made a complete mess of things, the cells should be identifiable. By the time reanimation is feasible, we will fully understand the types of neurons and be able to readily identify them from the slightest clues. These neurons (or their equivalents) could then all be reconstructed.

The second requirement is the interconnections. This morphology is one key aspect of our knowledge and experience. We know that the brain is continually adding and pruning connections; it’s a primary aspect of its learning and self-organizing principle of operation. The interconnections are much finer than the neurons themselves (for example, with current brain imaging techniques, we can typically see the neurons but we do not yet clearly see the interneuronal connections). Again, I believe it’s likely that this can be preserved, provided that the vitrification has been done quickly enough. It would not be necessary that the connections be functional or even fully evident, as long as it can be inferred where they were. And it would be okay if some fraction were not identifiable.

It's the third requirement that concerns me: the neurotransmitter concentrations, which are contained in structures that are finer yet than the interneuronal connections. These are, in my view, also critical aspects of the brain's learning process. We see the analogue of the neurotransmitter concentrations in the simplified neural net models that I use routinely in my pattern recognition work. The learning of the net is reflected in the connection weights as well as in the connection topology (some neural net methods allow for self-organization of the topology, some do not, but all provide for self-organization of the weights). Without the weights, the net has no competence.

If the very-fine-resolution neurotransmitter concentrations are not identifiable, the downside is not equivalent to merely an amnesia patient who has lost his memory of his name, profession, family members, etc. Our learning, reflected as it is in both interneuronal connection topology and neurotransmitter concentration patterns, underlies knowledge that is far broader than these routine forms of memory, including our "knowledge" of language, how to think, how to recognize objects, how to eat, how to walk and perform all of our skills, etc. Loss of this information would result in a brain with no competence at all. It would be worse than a newborn’s brain, which is at least designed to begin reorganizing itself. A brain with the connections intact but none of the neurotransmitter concentrations would have no competence of any kind and a connection pattern that would be too specific to relearn all of these skills and basic knowledge.

It’s not clear whether the current vitrification-preservation process maintains this vital type of information. We could readily conduct an experiment to find out. We could vitrify the brain of a mouse and then do a destructive scan while still vitrified to see if the neurotransmitter concentrations are still evident. We could also confirm that the connections are evident as well.

The type of long-term memory that an amnesia patient has lost is just one type of knowledge in the brain. At the deepest level, the brain’s self-organizing paradigm underlies our knowledge and all competency that we have gained since our fetal days (even prior to birth).

As a second issue, you said something about it being sufficient to just have preserved the big toe or the nose to reconstruct the brain. I'm not sure what you meant by that. Clearly none of the brain structure is revealed by body parts outside the brain. The only conceivable way one could restore a brain from the toe would be from the genome, which one can discover from any cell. And indeed, one could grow a brain from the genome. This would be, however, a fetal brain, which is a genetic clone of the original person, equivalent to an identical twin (displaced in time). One could even provide a learning and maturing experience for this brain in which the usual 20-odd years were sped up to 20 days or less, but this would still be just a biological clone, not the original person.

Finally, you said (if I heard you correctly) that the amount of information in the brain (presumably needed for reanimation) is about 1 gigabyte. My own estimates are quite different. It is true that the genetic information is very small, although as I discussed above, genetic information is not at all sufficient to recreate a person. The genome has about 0.8 gigabytes of information. There is massive redundancy, however. For example, the sequence "ALU" is repeated 300,000 times. If one compresses the genome using standard data compression to remove this redundancy, estimates are that one can achieve about 30-to-1 lossless compression, which brings us down to about 25 megabytes. About half of that specifies the brain, or about 12 megabytes. That's the initial design plan.
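
[For illustration, a minimal sketch of the arithmetic above; the 0.8-gigabyte genome, the 30-to-1 compression ratio, and the one-half brain fraction are simply the figures Kurzweil states, not measured values. -Ed.]

```python
# Back-of-the-envelope check of the genome-size estimate quoted above.
genome_bytes = 0.8e9        # ~0.8 gigabytes of raw genetic information (stated figure)
compression_ratio = 30      # assumed lossless compression of the repetitive sequence
brain_fraction = 0.5        # assumed share of the compressed design devoted to the brain

compressed_genome = genome_bytes / compression_ratio
brain_design = compressed_genome * brain_fraction

print(f"compressed genome:   ~{compressed_genome / 1e6:.0f} MB")  # ~27 MB (text rounds to 25)
print(f"brain 'design plan': ~{brain_design / 1e6:.0f} MB")       # ~13 MB (text rounds to 12)
```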

If we consider the amount of information in a mature human brain, however, we have about 10^11 neurons with an average fan-out of 10^3 connections each, for an estimated total of 10^14 connections. For each connection, we need to specify (i) the neurons that the connection links to, (ii) some information about its pathway, since the pathway affects analog aspects of its electrochemical information processing, and (iii) the neurotransmitter concentrations in the associated synapses. If we estimate about 10^2 bytes of information to encode these details (which may be low), we have 10^16 bytes, considerably more than the 10^9 bytes that you mentioned.
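
[The same estimate as a short calculation, using only the figures stated in the preceding paragraph. -Ed.]

```python
# Kurzweil's estimate of the information in a mature brain, using the figures above.
neurons = 1e11               # ~10^11 neurons
fan_out = 1e3                # ~10^3 connections per neuron
bytes_per_connection = 1e2   # ~10^2 bytes for targets, pathway, and synaptic state (may be low)

connections = neurons * fan_out                    # ~10^14 connections
total_bytes = connections * bytes_per_connection   # ~10^16 bytes

print(f"connections: {connections:.0e}")
print(f"total:       {total_bytes:.0e} bytes")     # vs. the ~10^9 bytes Drexler cited
```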

One might ask: How do we get from the 10^7 bytes that specify the brain in the genome to 10^16 bytes in the mature brain? This is not hard to understand, since we do this type of meaningful data expansion routinely in our self-organizing software paradigms. For example, a genetic algorithm can be coded very compactly, yet it creates data far greater in size than itself through a stochastic process, and that data then self-organizes in response to a complex environment (the problem space). The result of this process is meaningful information far greater than the original program. We know that this is exactly how the creation of the brain works. The genome initially specifies semi-random interneuronal wiring patterns in specific regions of the brain (random within certain constraints and rules), and these patterns (along with the neurotransmitter-concentration levels) then undergo their own internal evolutionary process to self-organize to reflect the interactions of that person with their experiences and environment. That is how we get from 10^7 bytes of brain specification in the genome to 10^16 bytes of information in a mature brain. I think 10^9 bytes is a significant underestimate of the amount of information required to reanimate a mature human brain.
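
[A toy illustration of this data-expansion argument, not a model of neural development: a program of a few hundred bytes specifies constrained random wiring and a crude Hebbian-style update, and the self-organized state it produces is orders of magnitude larger than the program itself. All names and parameters are invented for the example. -Ed.]

```python
import numpy as np

rng = np.random.default_rng(0)

# The "genome": a handful of constraints plus the few lines of code below.
NEURONS, SPARSITY, EXPERIENCES = 1000, 0.05, 50

# Step 1: semi-random wiring within the stated constraints.
weights = (rng.random((NEURONS, NEURONS)) < SPARSITY) * rng.normal(0.0, 0.1, (NEURONS, NEURONS))

# Step 2: self-organization in response to an "environment" (random input patterns),
# using a crude Hebbian-style update purely for illustration.
for _ in range(EXPERIENCES):
    x = rng.random(NEURONS)                              # one "experience"
    y = np.tanh(weights @ x)                             # network response
    weights += 0.01 * np.outer(y, x) * (weights != 0)    # strengthen only existing connections

state_bytes = weights[weights != 0].size * 8             # 8 bytes per stored weight
print(f"program: a few hundred bytes; self-organized state: ~{state_bytes:,} bytes")
```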

I’d be interested in your own reflections on these thoughts, with my best wishes.

Eric Drexler: Ray–Thanks for your comments and questions. Our thinking seems closely parallel on most points.

Regarding neurotransmitters, I think it is best to focus not on the molecules themselves and their concentrations, but rather on the machinery that synthesizes, transports, releases, senses, and recycles them. The state of this machinery must closely track long-term functional changes (i.e., long-term memory, or LTM), and much of this machinery is an integral part of synaptic structure.

Regarding my toe-based reconstruction scenario [creating a brain from a bit of tissue containing intact DNA-Ed.], this is indeed no better than genetically based reconstruction together with loading of more-or-less default skills and memories—corresponding to a peculiar but profound state of amnesia. My point was merely that even this worst-case outcome is still what modern medicine would label a success: the patient walks out the door in good health. (Note that neurosurgeons seldom ask whether the patient who walks out is "the same patient" as the one who walked in.) Most of us wouldn’t look forward to such an outcome, of course, and we expect much better when suspension occurs under good conditions.

Information-theoretic content of long-term memory

Regarding the information content of the brain, both the input and output data sets for reconstruction must indeed be vastly larger than a gigabyte, for the reasons you outline. The lower number [10^9 bytes] corresponds to an estimate of the information-theoretic content of human long-term memory found (according to Marvin Minsky) by researchers at Bell Labs. They tried various methods to get information into and out of human LTM, and couldn't find learning rates above a few bits per second. Integrated over a lifespan, this yields the above number. If this is so, it suggests that information storage in the brain is indeed massively redundant, perhaps for powerful function-enabling reasons. (Identifying redundancy this way, of course, gives no hint of how to construct a compression and decompression algorithm.)
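
[The rough integration behind the ~10^9-byte figure, assuming a learning rate of a few bits per second over the waking hours of a lifetime; the particular rate and lifespan below are illustrative assumptions. -Ed.]

```python
# Rough integration of the Bell Labs learning-rate result over a lifetime.
bits_per_second = 2          # "a few bits per second" (assumed value)
waking_hours_per_day = 16    # assumed
years = 70                   # assumed lifespan

seconds = years * 365 * waking_hours_per_day * 3600
total_bytes = bits_per_second * seconds / 8
print(f"~{total_bytes:.1e} bytes")   # on the order of 10^8 to 10^9 bytes
```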

Best wishes, with thanks for all you’ve done.

P.S. A Google search yields a discussion of the Bell Labs result by, yes, Ralph Merkle.

Ray Kurzweil: Okay, I think we’re converging on some commonality.

On the neurotransmitter concentration level issue, you wrote: "Regarding neurotransmitters, I think it is best to focus not on the molecules themselves and their concentrations, but rather on the machinery that synthesizes, transports, releases, senses, and recycles them. The state of this machinery must closely track long-term functional changes (i.e., LTM), and much of this machinery is an integral part of synaptic structure."

I would compare the "machinery" to any other memory machinery. If we have the design for a bit of memory in a DRAM system, then we basically know the mechanics for the other bits. It is true that in the brain there are hundreds of different mechanisms that we could call memory, but each of these mechanisms is repeated many millions of times. This machinery, however, is not something we would need to infer from the preserved brain of a suspended patient. By the time reanimation is feasible, we will have long since reverse-engineered these basic mechanisms of the human brain, and thus would know them all. What we do need specifically for a particular patient is the state of that person’s memory (again, memory referring to all skills). The state of my memory is not the same as that of someone else, so that is the whole point of preserving my brain.

And that state is contained in at least two forms: the interneuronal connection patterns (which we know are part of how the brain retains knowledge, and are not a fixed structure) and the neurotransmitter concentration levels in the approximately 10^14 synapses.
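
[A schematic reading of the DRAM analogy: the shared machinery corresponds to code common to every synapse, which reverse-engineering would supply once, while the per-synapse state is the data that must be recovered from the preserved brain. The class and fields below are purely illustrative, not a claim about synaptic biology. -Ed.]

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    # Per-instance STATE: unique to the patient; must be read from the preserved tissue.
    pre_neuron: int
    post_neuron: int
    weight: float     # stands in for neurotransmitter levels / machinery counts

    # Shared MECHANISM: identical across all ~10^14 instances of this synapse type,
    # known once the type has been reverse-engineered; nothing patient-specific here.
    def transmit(self, signal: float) -> float:
        return self.weight * signal

# Reconstructing a particular patient means recovering the list of instances, not the class.
patient_state = [Synapse(0, 1, 0.7), Synapse(1, 2, 0.2)]
print(sum(s.transmit(1.0) for s in patient_state))
```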

My concern is that this memory state information (particularly the neurotransmitter concentration levels) may not be retained by current methods. However, this is testable right now. We don’t have to wait 40 to 50 years to find this out. I think it should be a high priority to do this experiment on a mouse brain as I suggested above (for animal lovers, we could use a sick mouse).

You appear to be alluding to a somewhat different approach, which is to extract the "LTM," which is likely to be a far more compact structure than the thousands of trillions of bytes represented by the connection and neurotransmitter patterns (CNP). As I discuss below, I agree that the LTM is far more compact. However, we are not extracting an efficient LTM during cryopreservation, so the only way to obtain it at reanimation will be to have retained its inefficient representation in the CNP.

You bring up some interesting and important issues when you wrote, "Regarding my toe-based reconstruction scenario, this is indeed no better than genetically-based reconstruction together with loading of more-or-less default skills and memories—corresponding to a peculiar but profound state of amnesia. My point was merely that even this worst-case outcome is still what modern medicine would label a success: the patient walks out the door in good health."

I agree that this would be feasible by the time reanimation is feasible. The means for "loading" these "default skills and memories" is likely to be along the lines I described above: "a learning and maturing experience for this brain in which the usual 20-odd years were sped up to 20 days or less." Since the human brain as currently designed does not allow for explicit "loading" of memories and skills, these attributes would need to be gained through experience, using the brain's self-organizing approach. Nevertheless, the result you describe could be achieved. We could even include in these "loaded" (or learned) "skills and memories" the memory of having been the original person who was cryonically suspended, including having made the decision to be suspended, having become ill, and so on.

False reanimation

And this process would indeed appear to be a successful reanimation. The doctors would point to the "reanimated" patient as the proof of the pudding. Interviews of this patient would reveal that he was very happy with the process, delighted that he made the decision to be cryonically suspended, grateful to Alcor and the doctors for their successful reanimation of him, and so on.

But this would be a false reanimation. This is clearly not the same person that was suspended. His "memories" of having made the decision to be suspended four or five decades earlier would be false memories. Given the technology available at that time, it would be feasible to create entirely new humans from a genetic code and an experience/learning loading program (which simulates the learning in a much faster substrate to create a design for the new person), so creating a new person would not be unusual. All this process has accomplished, then, is to create an entirely new person who happens to share the genetic code of the person who was originally suspended. It's not the same person.

One might ask, "Who cares?" Well no one would care except for the originally suspended person. And he, after all, is not around to care. But as we look to cryonic suspension as a means towards providing a "second chance," we should care now about this potential scenario.

It brings up an issue that I have been concerned with: "false" reanimations.

Now one could even raise this issue (of a false reanimation) if the reanimated person does have the exact CNP of the original. One could take the philosophical position that this is still a different person. An argument for that is that once this technology is feasible, you could scan my CNP (perhaps while I'm sleeping) and create a CNP-identical copy of me. If you then come to me in the morning and say, "Good news, Ray, we successfully created an exact copy of your CNP; we won't be needing your old body and brain anymore," I may beg to differ. I would wish the new Ray well, but feel that he's a different person. After all, I would still be here.

So even if I’m not still here, by the force of this thought experiment, he’s still a different person. As you and I discussed at the reception, if we are using the preserved person as a data repository, then it would be feasible to create more than one "reanimated" person. If they can’t all be the original person, then perhaps none of them are.

However, you might say that this argument is a subtle philosophical one, and that, after all, our actual particles are changing all the time anyway. But the scenario you described of creating a new person with the same genetic code, but with a very different CNP created through a learning simulation, is not just a matter of a subtle philosophical argument. This is clearly a different person. We have examples of this today in the case of identical twins. No one would say to an identical twin, "we don’t need you anymore because, after all, we still have your twin."

I would regard this scenario of a "false" reanimation as one of the potential failure modes of cryonics.

Reverse-engineering the brain

Finally, on the issue of the LTM (long-term memory), I think this is a good point and an interesting perspective. I agree that an efficient implementation of the knowledge in a human brain (and I am referring here to knowledge in the broadest sense: not just classical long-term memory, but all of our skills and competencies) would be far more compact than the 10^16 bytes I have estimated for its actual implementation.

As we come to understand biological mechanisms in a variety of domains, we find that we can redesign them (as we reverse-engineer their functionality) with roughly 10^6 times greater efficiency. Although biological evolution was remarkable in its ingenuity, it did get stuck in particular paradigms.

It's actually not permanently stuck, in that its method of getting unstuck is to have one of its products, Homo sapiens, discover and redesign these mechanisms.

We can point to several good examples of this comparison of our human engineered mechanisms to biological ones. One good example is Rob Freitas’ design for robotic blood cells, which are many orders of magnitude more efficient than their biological counterparts.

Another example is the reverse engineering of the human auditory system by Lloyd Watts and his colleagues. They have found that implementing in software the algorithms derived from reverse-engineering specific brain regions requires about a factor of 10^6 less computation than the theoretical potential of the brain regions being emulated.

Another good example is the extraordinarily slow computing speed of the interneuronal connections, which have about a 5 millisecond reset time. Today's conventional electronic circuits are already 100 million (10^8) times faster. Three-dimensional molecular circuits (e.g., nanotube-based circuitry) would be at least 10^9 times faster. Thus if we built a human brain equivalent with the same number of simulated neurons and connections (not just simulating the human brain with a smaller number of units that are operating at higher speeds), the resulting nanotube-based brain would operate at least 10^9 times faster than its biological counterpart.
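
[The switching times implied by the ratios quoted above; the 5-millisecond reset time and the 10^8 and 10^9 factors are the figures stated in the text. -Ed.]

```python
neural_reset_s = 5e-3      # ~5 millisecond interneuronal reset time (stated)
electronic_ratio = 1e8     # "100 million times faster" (stated, circa-2002 electronics)
nanotube_ratio = 1e9       # projected 3D molecular circuits (stated)

print(f"implied electronic switching time: {neural_reset_s / electronic_ratio:.0e} s")  # ~5e-11 s
print(f"implied nanotube switching time:   {neural_reset_s / nanotube_ratio:.0e} s")    # ~5e-12 s
# A simulated brain with the same number of units would then run ~10^9 times faster.
```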

Some of the inefficiency of the encoding of information in the human brain has a positive utility in that memory appears to have some holographic properties (meaningful information being distributed through a region), and this helps protect the information. It explains the usually gradual (as opposed to catastrophic) degradation of human memory and skill. But most of the inefficiency is not useful holographic encoding, but just this inherent inefficiency of biological mechanisms. My own estimate of this factor is around 10^6, which would reduce the LTM from my estimate of 10^16 bytes for the actual implementation to around 10^10 bytes for an efficient representation, but that is close enough to your and Minsky's estimate of 10^9.

However, as you point out, we don’t know the compression/decompression algorithm, and are not in any event preserving this efficient representation of the LTM with the suspended patients. So we do need to preserve the inefficient representation.

With deep appreciation for your own contributions.

Eric Drexler: With respect to inferring memory state, the neurotransmitter-handling machinery in a synapse differs profoundly from the circuit structure in a DRAM cell. Memory cells in a chip are all functionally identical, each able to store and report different data from millisecond to millisecond; synapses in a brain are structurally diverse, and their differences encode relatively stable information. Charge stored in a DRAM cell varies without changes in its stable structure; long-term neurotransmitter levels in a synapse vary as a result of changes in its stable structure. The quantities of different enzymes, transport molecules, and so forth, determine the neurotransmitter properties relevant to LTM, hence neurotransmitter levels per se needn’t be preserved.

My discussion of the apparent information-theoretic size of human LTM wasn’t intended to suggest that such a compressed representation can or should be extracted from the detailed data describing brain structures. I expect that any restoration process will work with these far larger and more detailed data sets, without any great degree of intermediate compression. Nonetheless, the apparently huge gap between the essential mental information to be preserved and the vastly more detailed structural information is reassuring—and suggests that false reanimation, while possible, shouldn’t be expected when suspension occurs under good conditions. (Current medical practice has analogous problems of false life-saving, but these don’t define the field.)

Ray Kurzweil: I’d like to thank you for an engaging dialogue. I think we’ve converged to a pretty close common vision of these future scenarios. Your point is well taken that human memory (for all of its purposes), to the extent that it involves the neurotransmitters, is likely to be redundantly encoded. I agree that differences in the levels of certain molecules are likely to be also reflected in other differences, including structural differences. Most biological mechanisms that we do understand tend to have redundant information storage (although not all; some single-bit changes in the DNA can be catastrophic). I would point out, however, that we don’t yet understand the synaptic structures sufficiently to be fully confident that the differences in neurotransmitter levels that we need (for reanimation) are all redundantly indicated by structural changes. However, all of this can be tested with today’s technology, and I would suggest that this would be worthwhile.

I also agree that "the apparently huge gap between the essential mental information to be preserved and the vastly more detailed structural information is reassuring." This is one example in which the inefficiency of biology is helpful.

Eric Drexler: Thank you, Ray. I agree that we’ve found good agreement, and I also enjoyed the interchange.


Additional comments on Jan. 15, 2003 by Robert Bradbury

Robert Bradbury: First, it is reasonable to assume that within this decade, well before cryonics reanimation is feasible, we will know the precise crystal structure of almost all human proteins, using either X-ray, NMR, or computational (e.g., Blue Gene) methods. Second, it seems likely that we will have both the experimental (yeast two-hybrid) and computational (Blue Gene and extensions thereof, and/or distributed protein modeling via @Home projects) tools to determine how proteins that interact typically do so. So we will have the ability to completely understand what happens at synapses and, to some extent, to model that computationally.

Now, Ray placed an emphasis on neurotransmitter "concentration" that Eric seemed to downplay. I tend to lean in Eric's direction here. I don't think the molecular concentration of specific neurotransmitters within a synapse is particularly critical for reanimating a brain. I do think the concentrations of the macroscale elements necessary for neurotransmitter release will need to be known. That is, one needs to be able to count mitochondria and synaptic vesicle size and type (contents), as well as the post-synaptic neurotransmitter receptors and the pre-synaptic reuptake receptors. It is the numbers of these "machines of transmission" that determine the Hebbian "weight" for each synapse, which is a point I think Ray was trying to make.

Furthermore, if there is some diffusion of neurotransmitters out of individual synapses, the location and density of nearby synapses may be important (see Rusakov & Kullmann below). Now, the counting of and determination of the location of these "macroscale" effectors of synapse activity is a much easier task than measuring the concentration of every neurotransmitter molecule in the synaptic cleft.

The neurotransmitter concentration may determine the instantaneous activity of the synapse, but I do not believe it holds the "weight" that Ray felt was important. That seems to be contained much more in the energy resources, enzymatic manufacturing capacity, and vesicle/receptor concentrations, which vary over much longer time periods. (The proteins have to be manufactured near the neuronal nucleus and be transported, relatively slowly, down to the terminal positions in the axons and dendrites.)
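
[A purely illustrative sketch of Bradbury's point that a synapse's effective "weight" might be recovered by counting its relatively large machines of transmission rather than by measuring molecular concentrations. The functional form and coefficients below are invented; the real mapping would have to come from reverse-engineering the synapse type. -Ed.]

```python
from dataclasses import dataclass

@dataclass
class SynapseInventory:
    # Countable "macroscale" machinery, in principle resolvable by a destructive scan.
    vesicles: int               # synaptic vesicles (release capacity)
    post_receptors: int         # post-synaptic neurotransmitter receptors
    reuptake_transporters: int  # pre-synaptic reuptake machinery
    mitochondria: int           # local energy supply

def effective_weight(s: SynapseInventory) -> float:
    """Hypothetical mapping from machinery counts to a Hebbian-style weight."""
    drive = s.vesicles * s.post_receptors * (1 + 0.1 * s.mitochondria)
    damping = 1 + s.reuptake_transporters
    return drive / damping

print(effective_weight(SynapseInventory(40, 120, 15, 3)))
```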

One can alter neurotransmitter concentrations, and probably pulse-transmission probabilities, at least within some range without disrupting the network terribly (that is, without risking a false reanimation). SSRIs [Selective Serotonin Reuptake Inhibitors] and drugs used to treat Parkinson's, such as L-dopa, are examples of drugs that may alter these aspects of interneuronal communication. Of more concern to me is whether there will be hurdles in attempting a "cold" brain restart. One can compare this to the difficulties of restarting the brain of someone in a coma or someone who has drowned.

The structure of the brain may be largely preserved but one just may not be able to get it running again. This implies there is some state information contained within the normal level of background activity. We haven’t figured out yet how to "shock" the brain back into a functional pattern of activity.

Ray also mentioned vitrification. I know this is a hot topic within the cryonics community because of Greg Fahy's efforts. But you have to realize that Greg is trying to get us to the point where we can preserve organs entirely without nanotech capabilities. I think vitrification is a red herring. Why? Because we will know the structure of just about everything in the brain under 50 nm in size, and once frozen, those structures do not change their shape or location significantly.

So I would argue that you could take a frozen head, drop it on the floor so it shatters into millions or billions of pieces and as long as it remains frozen, still successfully reassemble it (or scan it into an upload). In its disassembled state it is certainly one very large 3D jigsaw puzzle, but it can only be reassembled one correct way. Provided you have sufficient scanning and computational capacity, it shouldn’t be too difficult to figure out how to put it back together.

You have to keep in mind that all of the synapses have proteins binding the pre-synaptic side to the post-synaptic side (e.g., molecular velcro). The positions of those proteins on the synaptic surfaces are not specified at the genetic level and it seems unlikely that their locations would shift significantly during the freezing process (such that their number and approximate location could not be reconstructed).

As a result, each synapse should have a "molecular fingerprint" as to which pre-side goes with which post-side. So even if the freezing process pulls the synapse apart, it should be possible to reconstruct who the partners are. One needs to sit and study some freeze-fracture electron micrographs before this begins to become a clear idea for consideration.
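
[A toy version of the "molecular fingerprint" argument: if each pre-synaptic face and its former post-synaptic partner share a distinctive adhesion-protein pattern, pairing the fragments reduces to matching fingerprints. The data structures below are invented for illustration and say nothing about how such fingerprints would actually be read. -Ed.]

```python
# Each fractured synaptic face carries a fingerprint derived from the adhesion
# proteins ("molecular velcro") that bound the two sides together before fracture.
pre_faces  = {"f1a2": "fragment_17/axon_3",     "9c4d": "fragment_02/axon_8"}
post_faces = {"f1a2": "fragment_05/dendrite_1", "9c4d": "fragment_41/dendrite_5"}

# Reassembly: pair faces that share the same fingerprint.
rejoined = {fp: (pre_faces[fp], post_faces[fp]) for fp in pre_faces if fp in post_faces}
for fp, (pre, post) in rejoined.items():
    print(f"{fp}: {pre}  <->  {post}")
```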

So I think the essential components are the network configuration itself, the macroscale machinery architecture of the synapses and something that was not mentioned, the "transcriptional state of the nuclei of the neurons" (and perhaps glial cells), i.e., which genes are turned on/off. This may not be crucial for an instantaneous brain "reboot" but might be essential for having it function for more than brief periods (hours to days).

References

A good (relatively short but detailed) description of synapses and synaptic activity is Ch.5: Synaptic Activity from State University of New York at Albany.

Also see:

Understanding Neurological Functions through the Behavior of Molecules, Dr. Ryoji Yano

Three-Dimensional Structure of Synapses in the Brain and on the Web, J. C. Fiala, 2002 World Congress on Computational Intelligence, May 12-17, 2002

Assessing Accurate Sizes of Synaptic Vesicles in Nerve Terminals, Seongjai Kim, Harold L. Atwood & Robin L. Cooper

Extrasynaptic Glutamate Diffusion in the Hippocampus: Ultrastructural Constraints, Uptake, and Receptor Activation, Dimitri A. Rusakov & Dimitry M. Kullmann, The Journal of Neuroscience 18(9):3158-3170 (1 May 1998).

Ray Kurzweil: Robert, thanks for your interesting and thoughtful comments. I essentially agree with what you're saying, although we don't yet understand the mechanisms behind the "Hebbian weight" or the other vital state information needed for a non-false reanimation. It would be good if this state information were fully represented by mitochondria and synaptic vesicle size and type (contents), post-synaptic neurotransmitter receptors, and pre-synaptic reuptake receptors, i.e., by the number of these relatively large (compared to molecules) "machines of transmission."

Given that we have not yet reverse-engineered these mechanisms, I suppose it would be difficult to do a definitive experiment now to make sure we are preserving the requisite information.

I agree with your confidence that we will have reverse-engineered these mechanisms within the next one to two decades. I also agree that we need only preserve the information, and that reanimation technology will take full advantage of the knowledge of how these mechanisms work. Therefore the mechanisms don't need to be preserved in working order, so long as the information is there. I agree that Fahy's concerns apply primarily to revitalization without such detailed nanotech repair and reconstruction.

Of course, as I pointed out in the debate with Eric, such a complete reconstruction may essentially amount to creating a new brain/person, with the cryonically preserved brain/body serving only as a blueprint, in which case it would be just as easy to create more than one reanimated person. Eric responded to this notion by saying that the first one is the reanimated person and subsequent ones are just copies, because, after all, at that time we could make copies of anyone anyway.

With regard to your jigsaw puzzle, that may be a difficult puzzle to put together, although I suppose we’ll have the computational horsepower to do it.