ARE WE SPIRITUAL MACHINES? | Chapter 8: Dembski’s Outdated Understanding: Response to William Dembski

June 7, 2001
Author:
Ray Kurzweil
Publisher:
Discovery Institute (2001)

Intelligence versus Consciousness

I cannot resist starting my response with an amusing misquotation. Dembski writes: “Those humans who refuse to upload themselves will be left in the dust, becoming ‘pets,’ as Kurzweil puts it, of the newly evolved computer intelligences.” This is indeed a quotation from my book. But the view Dembski attributes to me actually belongs to Ted Kaczynski (the “Unabomber”), hardly someone whose views I share. I’m sure it’s an honest mistake, but it is a good example of how inattentive reading often results in people seeing only what they expect to see. That said, Dembski’s misquotations are not nearly on the massive scale of Searle’s.

Dembski is correct that with regard to human performance, indeed with regard to any of our objectively observed abilities and reactions, it is my view that what Dembski calls the materialist approach is valid. One might call this “capability materialism.” Capability materialism is based on the observation that biological neurons and their interconnections are made up of matter and energy, and that their methods can be described, understood, and modeled with either replicas or functionally equivalent recreations. As I pointed out at length earlier, we are already building functionally equivalent recreations of substantial neuron clusters, and there are no fundamental barriers to extending this process to the several hundred neural regions we call the human brain. I use the word “capability” because this includes all of the rich, subtle, and diverse ways in which humans interact with the world, not just those narrower skills that one might label as intellectual. Indeed, our ability to understand and respond to emotions is more complex and diverse than our ability to process intellectual issues.

Searle, for example, acknowledges that human neurons are biological machines. Few serious observers have postulated capabilities or reactions of human neurons that require Dembski’s “extra-material factors.” In my view, relying on the patterns of matter and energy in the human body and brain to explain its behavior and proficiencies need not diminish our wonderment at its remarkable qualities. Dembski has an outdated understanding of the concept of “machine,” as I will detail below.

However, with regard to the issue of consciousness, I would have to say that Dembski and I are in agreement, although Dembski apparently does not realize this. He writes:

The great mistake in trying to understand the mind-body problem is to suppose that it is a scientific problem. It is not. It is a problem of ontology (i.e., that branch of metaphysics concerned with what exists).

If by the “mind-body problem,” Dembski means the issue of consciousness, then I agree with Dembski’s statement. As I explained in my first chapter in this book and in my response to Searle, there is no objective (i.e., scientific) method that can definitively measure or determine the subjective experience (i.e., the consciousness) of another entity. We can measure correlates of subjective experience (e.g., outward or inward behavior, i.e., patterns of neuron activity), and we can use these correlates to make arguments about the potential consciousness of another entity (such as an animal or a machine), but these arguments remain just that. Such observations do not constitute objective proof of another entity’s subjective experiences, i.e., of its consciousness. It comes down to the essential difference between the concepts of “objective” and “subjective.”

As I pointed out, however, with multiple quotations of John Searle (e.g., “human brains cause consciousness by a series of specific neurobiological processes in the brain”), Searle apparently does believe that the essential philosophical issue of consciousness is determined by what Dembski calls “tender-minded materialism.”

The arguments of scientist-philosophers such as Roger Penrose that consciousness in the human brain is somehow linked to quantum computing do not change the equation, because quantum effects are properly part of the material world. Moreover, there is nothing that prevents us from utilizing quantum effects in our machines. Indeed, we are already doing this: the conventional transistor relies on the quantum effect of electron tunneling.

So the line-up on these issues is not as straightforward as might at first appear.

Dembski’s Limited Understanding of Machines and Emergent Patterns

Dembski writes:

[P]redictability is materialism’s main virtue… We long for freedom, immortality, and the beatific vision… The problem for the materialist, however, is that these aspirations cannot be redeemed in the coin of matter.

Unlike brains, computers are neat and precise . . . computers operate deterministically.

These and other statements of Dembski’s reveal a view of machines, or entities made up of patterns of matter and energy (i.e., “material” entities), that is limited to the literally simple-minded machines of nineteenth-century automata. Those machines, with their hundreds, maybe thousands, of parts, were quite predictable and certainly not capable of longings for freedom and other such endearing qualities of the human entity. The same observations largely hold true for today’s machines with their billions of parts. But the same cannot necessarily be said for machines with millions of billions of interacting “parts,” entities with the complexity of the human brain and body.

First of all, it is incorrect to say that the material world is predictable. Even today’s computer programs routinely use simulated randomness, and if one needs truly random events in a process, there are devices that can provide those as well. Fundamentally, everything we perceive in the material world is the result of many trillions of quantum events, each of which displays the profound and irreducible randomness that lies at the core of physical reality. The material world, at both the macro and micro levels, is anything but predictable.
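The distinction between simulated and true randomness is easy to make concrete in code. Here is a minimal Python sketch using only the standard library: a seeded pseudorandom generator is a deterministic algorithm, so it reproduces its sequence exactly, whereas `os.urandom` draws on operating-system entropy sources, which on many systems mix in physical noise and are not reproducible.

```python
import os
import random

# Seeded pseudorandomness is fully deterministic: the same seed
# always yields the same sequence.
rng_a = random.Random(42)
rng_b = random.Random(42)
seq_a = [rng_a.randint(0, 9) for _ in range(5)]
seq_b = [rng_b.randint(0, 9) for _ in range(5)]
assert seq_a == seq_b  # identical, because the generator is an algorithm

# os.urandom draws on OS entropy sources (often including hardware
# noise); its output is not reproducible from any seed.
print(os.urandom(8).hex())
```

Both kinds of randomness are routinely available to software, which is the point at issue: nothing about being "material" or computational entails predictability.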

Although many computer programs do operate the way Dembski describes, the predominant methods in my own field of pattern recognition use biologically inspired methods called “chaotic computing,” in which the unpredictable interaction of millions of processes, many of which contain random and unpredictable elements, provides unexpected yet appropriate answers to subtle questions of recognition. It is also important to point out that the bulk of human intelligence consists of just these sorts of pattern-recognition processes.
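The general principle, that processes which are individually random can aggregate to a reliable, appropriate answer, can be illustrated with a far simpler example than the recognition methods described above (this is a standard Monte Carlo estimate, not a sketch of those methods):

```python
import random

random.seed(1)

# Each individual trial is random and unpredictable; the aggregate of
# a large number of them converges on a definite, appropriate answer,
# here an estimate of pi from the fraction of random points in the
# unit square that land inside the quarter circle.
trials = 200_000
hits = sum(1 for _ in range(trials)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_estimate = 4 * hits / trials
assert abs(pi_estimate - 3.14159) < 0.05
```

No single trial is predictable, yet the overall behavior of the system is both stable and useful, which is the relationship between randomness and reliability at issue here.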

As for our responses to emotions and our highest aspirations, these are properly regarded as emergent properties, profound ones to be sure, but nonetheless emergent patterns that result from the interaction of the human brain with its complex environment. The complexity and capacity of nonbiological entities is increasing exponentially and will match biological systems including the human brain (along with the rest of the nervous system and the endocrine system) within three decades. Indeed many of the designs of future machines will be biologically inspired, that is to say derivative of biological designs (this is already true of many contemporary systems). It is my thesis that by sharing the complexity as well as the actual patterns of human brains, these future nonbiological entities will display the intelligence and emotionally rich reactions of humans. They will have aspirations because they will share these complex emergent patterns.

Will such nonbiological entities be conscious? Searle claims that we can (at least in theory) readily resolve this question by ascertaining whether an entity has the correct “specific neurobiological processes.” It is my view that many humans, ultimately the vast majority of humans, will come to believe that such human-derived but nonetheless nonbiological intelligent entities are conscious, but that’s a political prediction, not a scientific or philosophical judgment. Bottom line, I agree with Dembski that this is not a scientific question. Some observers go on to say that if it’s not a scientific question, then it’s not an important or even a real question. My view (and I’m sure Dembski agrees) is that precisely because the question is not scientific, it is a philosophical one, indeed the fundamental philosophical question.

Transcendence, Spirituality and God

Dembski writes:

We need to transcend ourselves to find ourselves. Now the motions and modifications of matter offer no opportunity for transcending ourselves. . . . Freud . . . Marx . . . Nietzsche . . . each regarded the hope for transcendence as a delusion.

Dembski’s view of transcendence as an ultimate goal is reasonably put. But I disagree that the material world offers no “opportunity for transcending.” The material world inherently evolves, and evolution represents transcendence. As I wrote in the first chapter in this book, “Evolution moves towards greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, greater love. And God has been called all these things, only without any limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity, and infinite love. Evolution does not achieve an infinite level, but as it explodes exponentially, it certainly moves in that direction. So evolution moves inexorably towards our conception of God, albeit never reaching this ideal.”

Dembski writes:

[A] machine is fully determined by the constitution, dynamics, and interrelationships of its physical parts. . . . “[M]achines” stresses the strict absence of extra-material factors. . . . The replacement principle is relevant to this discussion because it implies that machines have no substantive history. . . . But a machine, properly speaking, has no history. Its history is a superfluous rider—an addendum that could easily have been different without altering the machine. . . . For a machine, all that is, is what it is at this moment. . . . Machines access or fail to access items in storage. . . Mutatis mutandis, items that represent counterfactual occurrences (i.e., things that never happened) but which are accessible can be, as far as the machine is concerned, just as though they did happen.

It is important to point out that the whole point of my book and the first chapter of this book is that many of our dearly held assumptions about the nature of machines and indeed of our own human nature will be called into question in the next several decades. Dembski’s conception of “history” is just another aspect of our humanity that necessarily derives from the richness, depth and complexity of being human. Conversely, not having a history in the Dembski sense is just another attribute of the simplicity of the machines that we have known up to this time. It is precisely my thesis that machines of the mid to late twenty-first century will be of such great complexity and richness of organization that their behavior will evidence emotional reactions, aspirations, and, yes, history. So Dembski is merely describing today’s limited machines and just assuming that these limitations are inherent. This line of argument is entirely equivalent to stating that “today’s machines are not as capable as humans, therefore machines will never reach this level of performance.” Dembski is just assuming his conclusion.

Dembski’s view of the ability of machines to understand their own history is limited to “accessing” items in storage. But future machines will possess not only a record of their own history, but an ability to understand that history and to reflect insightfully upon it. As for “items that represent counterfactual occurrences,” surely the same can be said for our human memories.

Dembski’s lengthy discussion of spirituality is summed up by the closing paragraph of his “Humans as Spiritual Machines” section:

But how can a machine be aware of God’s presence? Recall that machines are entirely defined by the constitution, dynamics, and interrelationships among their physical parts. It follows that God cannot make his presence known to a machine by acting upon it and thereby changing its state. Indeed, the moment God acts upon a machine to change its state, it no longer properly is a machine, for an aspect of the machine now transcends its physical constituents. It follows that awareness of God’s presence by a machine must be independent of any action by God to change the state of the machine. How then does the machine come to awareness of God’s presence? The awareness must be self-induced. Machine spirituality is the spirituality of self-realization, not the spirituality of an active God who freely gives himself in self-revelation and thereby transforms the beings with which he is in communion. For Kurzweil to modify “machine” with the adjective “spiritual” therefore entails an impoverished view of spirituality.

Dembski states that an entity (e.g., a person) cannot be aware of God’s presence without God acting upon her, yet God cannot act upon a machine; therefore a machine cannot be aware of God’s presence. This reasoning is entirely tautological and human-centric: God communes only with humans, and only biological ones at that. I have no problem with Dembski holding this as a personal belief, but he fails to make the “strong case” that he promises that “humans are not machines—period.” As with Searle, Dembski just assumes his conclusion.

Where Can I Get Some of Dembski’s “Extra-Material” Thinking Stuff?

Like Searle, Dembski cannot seem to grasp the concept of the emergent properties of complex distributed patterns. He writes:

Anger presumably is correlated with certain localized brain excitations. But localized brain excitations hardly explain anger any better than overt behaviors associated with anger, like shouting obscenities. Localized brain excitations may be reliably correlated with anger, but what accounts for one person interpreting a comment as an insult and experiencing anger, and another person interpreting that same comment as a joke and experiencing laughter? A full materialist account of mind needs to understand localized brain excitations in terms of other localized brain excitations. Instead we find localized brain excitations (representing, say, anger) having to be explained in terms of semantic contents (representing, say, insults). But this mixture of brain excitations and semantic contents hardly constitutes a materialist account of mind or intelligent agency.

Dembski assumes that anger is correlated with a “localized brain excitation,” but anger is almost certainly the reflection of complex distributed patterns of activity in the brain. Even if there is a localized neural correlate associated with anger, it nonetheless results from multifaceted and interacting patterns. Dembski’s question as to why different people react differently to similar situations hardly requires us to resort to his extra-material factors for an explanation. The brains and experiences of different people are clearly not the same, and these differences are well explained by differences in our physical brains.

It is useful to consider the analogy of the brain’s organization to a hologram (a piece of film containing an interference pattern created by the interaction between a three-dimensional image and laser light). When one looks through a hologram, one sees the original three-dimensional image, but none of the features of the image can be seen directly in the apparently random patterns of dots that are visible if one looks directly at the piece of film. So where are the features of the projected image? The answer is that each visual feature of the projected image is distributed throughout the entire pattern of dots that the hologram contains. Indeed, if you tear a hologram in half (or even into a large number of pieces), each piece will contain the entire image (albeit at reduced resolution). The visible image is an emergent property of the hologram’s distributed pattern, and none of the image’s features can be found through a localized analysis of the information in the hologram. So it is with the brain.
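The “torn hologram” idea can be made concrete in code. The sketch below is only an analogy (a toy spread-spectrum encoding, not a model of optical holography or of the brain): three bits are spread across all eight positions of a pattern using mutually orthogonal Walsh codes, so no single position “contains” any one bit, yet all three bits can be recovered from half of the pattern. The particular code rows chosen here (1, 2, and 4) are an assumption of the example; they happen to remain orthogonal on the first half of the pattern, which is what makes the exact recovery work.

```python
# Walsh (Hadamard) codes via the Sylvester construction:
# row i, position j has sign (-1)**popcount(i & j).
def walsh_row(i, n=8):
    return [(-1) ** bin(i & j).count("1") for j in range(n)]

codes = [walsh_row(i) for i in (1, 2, 4)]  # three mutually orthogonal codes

# "Expose the film": spread three bits across all eight positions,
# so every position carries a superposition of all three bits.
bits = [+1, -1, +1]
pattern = [sum(b * c[j] for b, c in zip(bits, codes)) for j in range(8)]

# "Tear the hologram in half": keep only the first four positions.
half = pattern[:4]

# Correlating the fragment against each code still recovers every bit,
# because each bit's information is distributed over the whole pattern.
recovered = [1 if sum(h * c[j] for j, h in enumerate(half)) > 0 else -1
             for c in codes]
print(recovered)  # → [1, -1, 1], the original bits, from half the pattern
```

A localized analysis of any single position tells you nothing about any individual bit; the information exists only in the distributed pattern, which is the property the hologram analogy is meant to capture.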

It is also the case that the human brain has a great deal of redundancy and contains far more neural circuitry than is minimally needed to perform its functions. It is well known that the left and right halves of the brain, while not identical, are each sufficient to provide a more-or-less normal level of human functioning, which explains Louis Pasteur’s intellectual accomplishments after his cerebral accident. Half a brain is enough.

I find it remarkable that Dembski cites the case of John Lorber’s reportedly brainless patient as evidence that human intellectual functioning is the result of “extra-material factors.” First of all, we need to take this strange report with a grain of salt. Many commentators have pointed out that Lorber’s conclusion that his patient’s brain was only 1 millimeter thick was flawed. As just one of many such critics, neurosurgeon Kenneth Till commented on the case of Lorber’s patient: “Interpreting brain scans can be very tricky. There can be a great deal more brain tissue in the cranium than is immediately apparent.”

It may be true that this patient’s brain was smaller than normal, but that would not necessarily be reflected in obviously degraded capabilities. In commenting on the Lorber case, Indiana University Professor Paul Pietsch writes, “How could this [the Lorber case] possibly be? If the way the brain functions is similar to the way a hologram functions, that [diminished brain size] might suffice. Certain holograms can be smashed to bits, and each remaining piece can reproduce the whole message. A tiny fragment of this page, in contrast, tells little about the whole story.”

Even Lorber himself does not resort to “extra-material factors” to explain his observations. Lorber concludes that “there must be a tremendous amount of redundancy or spare capacity in the brain, just as there is with kidney and liver.” Few commentators on this case resort to Dembski’s “extra-material factors” to explain it.

Dembski’s resolution of the ontology problem is to say that the ultimate basis of what exists is the “real world of things,” things irreducible to material stuff. Dembski does not list what “things” we might consider as fundamental, but presumably human minds would be on the list, and perhaps other “things” such as money and chairs. There may be a small congruence of our views in this regard. I regard Dembski’s things as patterns. Money, for example, is a vast and persisting pattern of agreements, understandings, and expectations. “Ray Kurzweil” is perhaps not so vast a pattern, but thus far is also persisting. Dembski apparently regards patterns as ephemeral and not substantial, but as a pattern recognition scientist, I have a profound respect for the power and endurance of patterns. It is not unreasonable to regard patterns as a fundamental ontological reality. We are unable to really “touch” matter and energy directly, but we do directly experience the patterns underlying “things.”

Fundamental to my thesis is that as we apply our intelligence and the extension of our intelligence called technology to understanding the powerful patterns in our world (e.g., human intelligence), we can recreate—and extend!—these patterns in other substrates (i.e., with other materials). The patterns are more important than the materials that embody them.

Finally, if Dembski’s intelligence-enhancing extra-material stuff really exists, then I’d like to know where I can get some.

Copyright © 2002 by the Discovery Institute. Used with permission.