Critique of ‘Against Naive Uploadism’

May 10, 2011 by Randal A. Koene


In “Against Naive Uploadism: Memory, Consciousness and Synaptic Homeostasis,” neuroscientist Seth Weisberg challenges the comparison of a neuron to a digital computer and the idea that an action potential (spike) fired by one neuron equals one calculation at each synapse. He also challenges the assumption that we are approaching computing power comparable to the human brain.

Overall, Seth Weisberg’s article is refreshing and clearly written, in that it lays out the differences between some of what we know about processing in the human brain and what we commonly think of in terms of computer programming.

Those explanations are enlightening, not only to transhumanists who wish to see the accomplishment of substrate-independent minds, but also to those designing artificial intelligence with the aim of achieving or exceeding human mental abilities in silico.

In fact, it is precisely the shallow consideration, or none at all, normally given to human brain processing, to the potential importance of its details, and to modular collaboration that prompted the inclusion of a neuro track at the AGI-11 conference in Mountain View this August, a track that I am chairing.

I wish Seth were right, that many indeed see the process of uploading to a substrate-independent existence as an “inevitable” step in human evolution. In reality, there is probably a small minority who even actively contemplate the possibility. The understanding of the mind and human experience as comprising phenomena elicited by purely mechanistic and computable processes is still a marginal one.

And the notion of possible steps beyond our current abilities and state of existence runs counter to the prevailing perception of the human species as the pinnacle and ultimate goal of evolution and nature. The unguided process of evolution is frequently misunderstood, even by those who accept its reality.

Neural vs. digital processing

Weisberg is correct to dispute the old computing-power calculations that compare brain and digital processing. It is incorrect to attempt such comparisons without noting the significant differences in how processing takes place on the two platforms. The architecture of a single neuron can, in effect, carry out processing that requires a large number of operations to model digitally; at the same time, a large number of neurons may be necessary to achieve any robust processing at all. It is therefore difficult to compare the two platforms directly.
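To make that contrast concrete, here is a minimal, purely illustrative sketch (with made-up parameter values, not taken from Weisberg’s article or from any measurement) of the digital cost of even a drastically simplified single neuron. A leaky integrate-and-fire unit with a thousand synapses already costs on the order of millions of arithmetic operations per simulated second, before any dendritic geometry, ion channels, or plasticity enter the picture.

```python
# Toy sketch: digital cost of one highly simplified neuron.
# All parameter values below are assumptions chosen for illustration.
import numpy as np

dt = 1e-4                                     # 0.1 ms timestep (assumed)
tau_m, v_rest, v_thresh = 0.02, -65.0, -50.0  # textbook-style membrane constants
n_synapses = 1_000                            # assumed synapse count for one neuron

rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.5, n_synapses)    # assumed synaptic weights

v = v_rest
spikes_out = 0
for step in range(int(1.0 / dt)):             # simulate one second of activity
    presyn = rng.random(n_synapses) < 0.001   # ~10 Hz Poisson input per synapse
    i_syn = weights @ presyn.astype(float)    # roughly 2 * n_synapses operations
    v += dt / tau_m * (v_rest - v) + i_syn    # leak toward rest plus synaptic drive
    if v >= v_thresh:                         # threshold crossing produces a spike
        v = v_rest
        spikes_out += 1

ops_per_second = int(1.0 / dt) * (2 * n_synapses + 10)  # rough operation count
print(f"output spikes: {spikes_out}; approx. operations per simulated second: {ops_per_second:,}")
```

Even this toy count leaves out everything that makes real neurons expensive to emulate, which is exactly why direct platform-to-platform comparisons are so easy to get wrong.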

A correct comparison of processing power might instead attempt to take into account all of the input information arriving at the human brain, all of the behavioral output being generated, the knowledge being stored, skills learned, and thoughts contemplated, and then to estimate in some manner what processing capacity a digital platform would need to have to accomplish the same.

That is not an easy calculation to make. An alternative is to consider the digital processing power needed to carry out a highly accurate, real-time emulation of all of the biophysical processes of a brain that play a significant role in the processing that is of interest.
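As a purely illustrative sketch of that alternative estimate, the fragment below plugs assumed figures into two levels of modeling detail. The neuron and synapse counts, firing rates, operations per event, and timestep are all assumptions, not measurements; the point is the form of the calculation, not the particular numbers.

```python
# Back-of-envelope estimate of compute for real-time brain emulation.
# Every figure below is an assumption for illustration only.

n_neurons = 8.6e10          # commonly cited estimate of neurons in a human brain
syn_per_neuron = 1e4        # assumed average synapses per neuron
n_synapses = n_neurons * syn_per_neuron

# Level 1: point-neuron model, synapses updated only on presynaptic spikes.
mean_rate = 1.0             # assumed average firing rate, spikes/s
ops_per_syn_event = 10      # assumed operations per synaptic event
level1 = n_synapses * mean_rate * ops_per_syn_event

# Level 2: multi-compartment biophysical model, updated every timestep.
compartments_per_neuron = 100   # assumed
timestep = 1e-4                 # 0.1 ms integration step (assumed)
ops_per_compartment_step = 100  # assumed (channel kinetics, cable equation)
level2 = n_neurons * compartments_per_neuron * ops_per_compartment_step / timestep

for label, ops in [("point-neuron, event-driven", level1),
                   ("multi-compartment, clock-driven", level2)]:
    print(f"{label}: ~{ops:.1e} ops/s")
```

Estimates of this kind vary by orders of magnitude depending on the level of biophysical detail assumed, which is precisely why the choice of what to include matters.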

I should note, though, that Ray Kurzweil has recently been explaining his statements more carefully, as evidenced, for example, by his presentation at the 2010 Singularity Summit, where he was surrounded by neuroscientists (e.g., Terry Sejnowski) presenting at the same event.

Building substrate-independent minds

Seth Weisberg is entirely correct to note the many details that need to be taken into account and updated if a computer is to carry out a careful emulation of the processing activity in a brain. Those details are precisely the sort of details that are considered necessary for the conservative implementation of a substrate-independent mind (SIM), which we call whole brain emulation (WBE). It is possible that one may in future be able to upload to alternative implementations, but a whole brain emulation is presently the implementation that allows us to most confidently carry out research and development toward uploading and SIM.

I think it is important to read Seth Weisberg’s conclusions very carefully. For example, Seth states that we must understand which biophysical processes are “correlated with, lead to or actually are cognition, memory and consciousness” in order to make uploading to a SIM possible. I agree that we need to know which ones are involved. But I do not think that this equates to needing to have a full understanding of how those processes achieve our phenomenological experience, any more than it was necessary to fully understand aerodynamics to build the first airplanes.

If we are conservative in our decisions, and we err on the side of caution in the inclusion of biophysical processes, then it is possible to emulate something with only partial understanding. There are probabilistic (Bayesian) methods by which one can construct successive models and calculate whether the addition of a specific detail continues to improve the model’s overall ability to reproduce recorded activity and behavior.

Those methods are concrete examples of ways in which emulation to a specified precision may be accomplished without necessitating a full abstract understanding of all the actions and interactions modeled.
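As a toy illustration of that kind of model selection, the sketch below compares a model with and without one added term, using the Bayesian Information Criterion as a stand-in for a full evidence calculation. The data and the models are invented for the example and do not come from Weisberg’s article or from any experiment.

```python
# Toy Bayesian-style model selection: is an added "detail" (here, a nonlinear
# term standing in for some biophysical mechanism) justified by the data?
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)                                        # stand-in for a recorded input variable
y = 2.0 * x + 0.5 * np.sin(6 * x) + rng.normal(0, 0.1, x.size)    # synthetic "recorded" response

def fit_and_bic(design, y):
    """Least-squares fit plus BIC = n*log(RSS/n) + k*log(n), assuming Gaussian noise."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ coef) ** 2)
    n, k = y.size, design.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

simple   = np.column_stack([np.ones_like(x), x])                  # model without the added detail
detailed = np.column_stack([np.ones_like(x), x, np.sin(6 * x)])   # model with the added detail

bic_simple, bic_detailed = fit_and_bic(simple, y), fit_and_bic(detailed, y)
print(f"BIC simple: {bic_simple:.1f}, BIC detailed: {bic_detailed:.1f}")
print("added detail justified" if bic_detailed < bic_simple else "added detail not justified")
```

In a whole brain emulation setting, the added detail would be a candidate biophysical mechanism and the data would be recorded neural activity or behavior; the logic of the comparison stays the same.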