ARE WE SPIRITUAL MACHINES? | Chapter 7: Applying Organic Design Principles to Machines is Not an Analogy But a Sound Strategy: Response to Michael Denton
June 7, 2001
- Ray Kurzweil
- Discovery Institute (2001)
The Bridge is Already Under Construction
Like Dembski, Denton points out the apparent differences between the design principles of biological entities (e.g., people) and those of the machines he has known. Denton eloquently describes organisms as “self-organizing, . . . self-referential, . . . self-replicating, . . . reciprocal, . . . self-formative, and . . . holistic.” He then makes the unsupported leap, a leap of faith one might say, that such organic forms can only be created through biological processes, and that such forms are “immutable, . . . impenetrable, and . . . fundamental” realities of existence.
I do share Denton’s “awestruck” sense of “wonderment” at the beauty, intricacy, strangeness, and inter-relatedness of organic systems, ranging from the “eerie other-worldly impression” of asymmetric protein shapes to the extraordinary complexity of higher-order organs such as the human brain. Further, I agree with Denton that biological design represents a profound set of principles. However, it is precisely my thesis, which neither Denton nor the other critics represented in this book acknowledge or respond to, that machines (i.e., entities derivative of human-directed design) can use—and already are using—these same principles. This has been the thrust of my own work, and in my view represents the wave of the future. Emulating the ideas of nature is the most effective way to harness the enormous powers that future technology will make available.
The concept of holistic design is not an either-or category, but rather a continuum. Biological systems are not completely holistic, nor are contemporary machines completely modular. We can identify units of functionality in natural systems even at the molecular level, and discernible mechanisms of action are even more evident at the higher level of organs and brain regions. As I pointed out in my response to Searle, the process of understanding the functionality and information transformations performed in specific brain regions is well under way. It is misleading to suggest that every aspect of the human brain interacts with every other aspect, and that it is thereby impossible to understand its methods. Lloyd Watts, for example, has identified and modeled the transformations of auditory information in more than two dozen small regions of the human brain. Conversely, there are many examples of contemporary machines in which many of the design aspects are deeply interconnected and in which “bottom-up” design is impossible. As one example of many, General Electric uses “genetic algorithms” to evolve the design of its jet engines, because it has found it impossible to optimize the hundreds of deeply interacting variables in any other way.
Denton writes:

Today almost all professional biologists have adopted the mechanistic/reductionist approach and assume that the basic parts of an organism (like the cogs of a watch) are the primary essential things, that a living organism (like a watch) is no more than the sum of its parts, and that it is the parts that determine the properties of the whole and that (like a watch) a complete description of all the properties of an organism may be had by characterizing its parts in isolation.
What Denton is ignoring here is the ability of complex processes to exhibit emergent properties which go beyond “its parts in isolation.” Denton appears to recognize this potential in nature when he writes: “In a very real sense organic forms . . . represent genuinely emergent realities.” However, it is hardly necessary to resort to Denton’s “vitalistic model” to explain emergent realities. Emergent properties derive from the power of patterns, and there is nothing that restricts patterns and their emergent properties to natural systems.
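A minimal illustration of emergence in a non-biological pattern system (my example, not one from the text) is Conway’s Game of Life: two trivially simple local rules, applied to a grid of cells, produce a “glider,” a coherent structure that travels diagonally across the grid even though nothing in the rules mentions motion or structure. The emergent property lives in the pattern, not in any biological substrate.

```python
from collections import Counter

def step(cells):
    """Advance one Game of Life generation; cells is a set of live (x, y) coordinates."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has 3 live neighbors,
    # or 2 live neighbors and is already alive. That is the entire rule set.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# The "glider": five cells whose collective behavior, steady diagonal motion,
# is stated nowhere in the rules. It is an emergent property of the pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = step(pattern)

# After four generations the glider reappears intact, shifted one cell diagonally.
assert pattern == {(x + 1, y + 1) for (x, y) in glider}
```

The rules say nothing about gliders; the glider is an emergent reality of the pattern, exactly the kind of whole-exceeding-its-parts behavior that requires no vitalistic explanation.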
Denton appears to acknowledge the feasibility of emulating the ways of nature, when he writes:
Success in engineering new organic forms from proteins up to organisms will therefore require a completely novel approach, a sort of designing from ‘the top down.’ Because the parts of organic wholes only exist in the whole, organic wholes cannot be specified bit by bit and built up from a set of relatively independent modules; consequently the entire undivided unity must be specified together in toto.
Here Denton provides sound advice and describes an approach to engineering that I and other researchers use routinely in the areas of pattern recognition, complexity (also called chaos) theory, and self-organizing systems. Denton appears to be unaware of these methodologies and, after describing examples of bottom-up, component-driven engineering and their limitations, concludes without justification that there is an unbridgeable chasm between the two design philosophies. The bridge is already under construction.
How to Create Your Own “Eerie Other-Worldly” But Effective Designs: Applied Evolution
In my book I describe how to apply the principles of evolution to creating intelligent designs. It is an effective methodology for problems that contain too many intricately interacting aspects to design using the conventional modular approach. We can, for example, create (in the computer) millions of competing designs, each with its own “genetic” code. The genetic code for each of these design “organisms” describes a potential solution to the problem. Applying the genetic method, these software-based organisms are set up to compete with each other, and the most successful are allowed to survive and to procreate. “Offspring” software entities are created, each of which inherits the genetic code (i.e., the design parameters) of two parents. Mutations and other “environmental challenges” are also introduced. After thousands of generations of such simulated evolution, these genetic algorithms often produce complex original designs. In my own experience with this approach, the results produced by genetic algorithms are well described by Denton’s description of organic molecules in the “apparent illogic of the design and the lack of any obvious modularity or regularity, . . . the sheer chaos of the arrangement, . . . [and the] almost eerie other-worldly non-mechanical impression.”
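The methodology described above can be sketched in a few lines. The following is a toy genetic algorithm of my own construction, not code from any actual Kurzweil system; the bit-string target, population size, and mutation rate are all invented for illustration. Each “organism” is a genome of design parameters, the fitter half survives to procreate, and each offspring inherits its code from two parents, with occasional mutation.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # an arbitrary "design spec"

def fitness(genome):
    # How many design parameters match the specification.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=40, generations=60, mutation_rate=0.02):
    # Each "organism" is a genome: a string of binary design parameters.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives and procreates.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        offspring = []
        while len(survivors) + len(offspring) < pop_size:
            mom, dad = random.sample(survivors, 2)
            cut = random.randrange(1, len(TARGET))   # sexual crossover:
            child = mom[:cut] + dad[cut:]            # inherit code from two parents
            # Mutation: each bit occasionally flips.
            child = [g ^ (random.random() < mutation_rate) for g in child]
            offspring.append(child)
        pop = survivors + offspring
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", len(TARGET), "design parameters correct")
```

Nothing in the loop designs any subsystem directly; a good solution simply emerges from selection, inheritance, and mutation acting on the population as a whole.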
Genetic algorithms and other top-down self-organizing design methodologies (e.g., neural nets, Markov models) incorporate an unpredictable element, so that the results of such systems are actually different every time the process is run. Despite the common wisdom that machines are deterministic and therefore predictable, there are numerous sources of randomness readily available to machines. Contemporary theories of quantum mechanics postulate profound quantum randomness at the core of existence. According to quantum theory, what appears to be the deterministic behavior of systems at a macro level is simply the result of overwhelming statistical preponderances based on enormous numbers of fundamentally unpredictable events. Moreover, the work of Stephen Wolfram and others has demonstrated that even a system that is in theory fully deterministic can nonetheless produce effectively random results.
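Wolfram’s point can be demonstrated with his Rule 30 cellular automaton. The update rule (new cell = left XOR (center OR right)) is the standard formulation of Rule 30; the sketch below is mine. The system is perfectly deterministic, yet the center column of cells it generates is effectively random, and has in fact been used as a pseudorandom sequence.

```python
def rule30_center(steps):
    """Return the center-column bits of Rule 30 grown from a single live cell."""
    cells = {0}          # positions of "on" cells along an unbounded line
    bits = []
    for _ in range(steps):
        bits.append(1 if 0 in cells else 0)
        lo, hi = min(cells) - 1, max(cells) + 1
        # Rule 30: new cell = left XOR (center OR right)
        cells = {x for x in range(lo, hi + 1)
                 if ((x - 1) in cells) ^ ((x in cells) or ((x + 1) in cells))}
    return bits

bits = rule30_center(64)
assert bits == rule30_center(64)   # fully deterministic: identical on every run
assert bits[:4] == [1, 1, 0, 1]    # the sequence begins 1, 1, 0, 1, ...
print("center column:", "".join(map(str, bits)))
```

The sequence is the same every time, yet it passes statistical tests for randomness: determinism at the level of the rule, effective unpredictability at the level of the output.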
Genetic algorithms and similar “self-organizing” approaches create designs which could not have been designed through a modular, component-driven approach. The “strangeness, . . . chaos, . . . the dynamic interaction” of parts to the whole that Denton attributes only to organic structures describes very well the qualities of the results of these human-initiated chaotic processes.
In my own work with genetic algorithms, I have examined the process in which a genetic algorithm gradually improves a design. It accomplishes this precisely through an incremental “all at once” approach, making many small, distributed changes throughout the design which progressively improve the overall fit or “power” of the solution. A genetic algorithm does not accomplish its design achievements by designing individual subsystems one at a time. The entire solution emerges gradually and unfolds from simplicity to complexity. The solutions it produces are often asymmetric and ungainly, but effective, just as in nature. Often, the solutions appear elegant and even beautiful.
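The “many small, distributed changes” dynamic appears even in the simplest evolutionary search. In this toy sketch of mine (the eight-parameter design and its “power” function are invented for illustration), every parameter of the design is perturbed slightly at the same time, and a candidate replaces the current design only if the design as a whole improves.

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

def power(design):
    # Toy "overall power" of a design: higher is better, and the (invented)
    # optimum places every parameter at 3.0.
    return -sum((x - 3.0) ** 2 for x in design)

design = [random.uniform(-10, 10) for _ in range(8)]  # a random initial design
for _ in range(2000):
    # Perturb *every* parameter a little, all at once. Improvement is
    # distributed across the whole design, never confined to one subsystem.
    candidate = [x + random.gauss(0, 0.1) for x in design]
    if power(candidate) > power(design):
        design = candidate

# The whole design converges toward the optimum together.
assert power(design) > -1.0
```

No individual subsystem is ever designed in isolation; the solution improves holistically, through accumulated small changes everywhere at once.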
Denton is certainly correct that most contemporary machines are designed using the modular approach. It is important to note that there are certain significant engineering advantages to this traditional approach to creating technology. For example, computers have far more prodigious and accurate memories than humans, and can perform certain types of transformations far more effectively than unaided human intelligence. Most importantly, computers can share their memories and patterns instantly. The chaotic non-modular approach also has clear advantages, which Denton well articulates, as evidenced by the deep, prodigious powers of human pattern recognition. But it is a wholly unjustified leap to conclude from the current (and diminishing!) limitations of human-directed technology that biological systems are inherently, even ontologically, a world apart. The exquisite designs of nature have benefited from a profound evolutionary process. Our most complex genetic algorithms today incorporate genetic codes of thousands of bits, whereas biological entities such as humans are characterized by genetic codes of billions of bits (although it appears that, as a result of massive redundancies and other inefficiencies, only a few percent of our genome is actually utilized). However, as is the case with all information-based technology, the complexity of human-directed evolutionary engineering is increasing exponentially. If we examine the rate at which the complexity of genetic algorithms and other nature-inspired methodologies is increasing, we find that they will match the complexity of human intelligence within a few decades.
To Fold a Protein
Denton points out that we have not yet succeeded in folding proteins in three dimensions, “even one consisting of only 100 components.” It should be pointed out, however, that it is only in the past few years that we have had the tools even to visualize these three-dimensional patterns. Moreover, modeling the interatomic forces will require on the order of a million billion calculations per second, which is beyond the capacity of even the largest supercomputers available today. But computers with this capacity are expected soon. IBM’s “Blue Gene” computer, scheduled for operation in 2005, will have precisely this capacity, and as the name of the project suggests, is targeted at the protein-folding task.
We have already succeeded in cutting, splicing, and rearranging genetic codes, and harnessing nature’s own biochemical factories to produce enzymes and other complex biological substances. It is true that most contemporary work of this type is done in two dimensions, but the requisite computational resources to visualize and model the far more complex three-dimensional patterns found in nature are not far from realization.
When I discussed the prospects for solving the protein-folding problem with Denton himself, he acknowledged that the problem would eventually be solved, estimating that it was perhaps a decade away. The fact that a certain technical feat has not yet been accomplished is not a strong argument that it never will be.
Denton writes:

From knowledge of the genes of an organism it is impossible to predict the encoded organic forms. Neither the properties nor structure of individual proteins nor those of any higher order forms—such as ribosomes and whole cells—can be inferred even from the most exhaustive analysis of the genes and their primary products, linear sequences of amino acids.
Although Denton’s observation above is essentially correct, it only points out that the genome is but one part of the overall system. The DNA code is not the whole story; the rest of the molecular support system is needed for the system to work, and for it to be understood.
I should also point out that my thesis on recreating the massively parallel, digitally controlled analog, hologram-like, self-organizing and chaotic processes of the human brain does not require us to fold proteins. There are dozens of contemporary projects which have succeeded in creating detailed recreations of neurological systems, including neural implants which successfully function inside people’s brains, without folding any proteins. However, I understand Denton’s argument about proteins to be an essay on the holistic ways of nature. But as I have pointed out, there are no essential barriers to our emulating these ways in our technology, and we are already well down this path.
Contemporary Analogues to Self-Replication
Denton writes:

To begin with, every living system replicates itself, yet no machine possesses this capacity even to the slightest degree. . . . Living things possess the ability to change themselves from one form into another. . . . The ability of living things to replicate themselves and change their form and structure are truly remarkable abilities. To grasp just how fantastic they are and just how far they transcend anything in the realm of the mechanical, imagine our artifacts endowed with the ability to copy themselves and—to borrow a term from science fiction—“morph” themselves into different forms.
First of all, we do have a new form of self-replicating entity that is human-made, and which did not exist a short while ago, namely the computer (or software) virus. Just as biological self-replicating entities require a medium in which to reproduce, viruses require the medium of computers and the network of networks known as the Internet. As far as changing form is concerned, some of the more advanced and recent software viruses demonstrate this characteristic. Moreover, morphing form is precisely what happens in the case of the reproducing designs created by genetic algorithms. Whereas most software viruses reproduce asexually, the form of self-replication harnessed in most genetic algorithms is sexual (i.e., utilizing two “parents” such that each offspring inherits a portion of its genetic code from each parent). If the conditions are right, these evolving software artifacts do morph themselves into different forms, indeed into increasingly complex forms that provide increasingly greater power in solving nontrivial problems. And lest anyone think that there is an inherent difference between these evolving software entities and actual physical entities, it should be pointed out that software entities created through genetic algorithms often do represent the designs of physical entities such as engines or even of robots, as recently demonstrated by scientists at Tufts. Conversely, biological physical entities such as humans are also characterized by the data contained in their genetic codes.
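The simplest human-made self-replicating software entity is not a virus at all but a “quine”: a classic construction, a program whose entire output is an exact copy of its own source code. The sketch below (illustrative only; unlike a virus, it has no propagation mechanism) runs such a program and verifies that its output is a character-for-character copy of itself.

```python
import io
import contextlib

# A classic Python quine: a two-line program whose sole output is its own source.
quine = "s = 's = %r\\nprint(s %% s)'\nprint(s % s)\n"

# Run the program, capturing what it prints.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(quine)

# The program's output is a character-for-character copy of the program itself.
assert buf.getvalue() == quine
```

The trick is that the program contains a description of itself (the format string) plus the instruction to print that description applied to itself, which is precisely the replication strategy of DNA: a code that both describes the organism and directs its own copying.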
Nanobots, which I described in the first chapter of this book, will also provide the ability to create morphing structures in the physical world. J. Storrs Hall, for example, has provided designs for special nanobots he calls “foglets,” which will eventually be capable of organizing and reorganizing themselves into any type of physical structure, thereby bringing the morphing qualities of virtual reality into real reality.
On Consciousness and the Thinking Ability of Humans
Denton writes:

Finally I think it would be acknowledged by even ardent advocates of strong AI like Kurzweil, Dennett and Hofstadter that no machine has been built to date which exhibits consciousness and can equal the thinking ability of humans. Kurzweil himself concedes this much in his book. As he confesses: “Machines today are still a million times simpler than the human brain.” . . . Of course Kurzweil believes, along with the other advocates of strong AI, that sometime in the next century computers capable of carrying out 20 million billion calculations per second (the capacity of the human brain) will be achieved and indeed surpassed. And in keeping with the mechanistic assumption that organic systems are essentially the same as machines, then of course such machines will equal or surpass the intelligence of man. . . . Although the mechanistic faith in the possibility of strong AI still runs strong among researchers in this field, Kurzweil being no exception, there is no doubt that no one has manufactured anything that exhibits intelligence remotely resembling that of man.
First of all, my positions are neither concessions nor confessions. Our technology today is essentially where I had expected it to be by this time when I wrote The Age of Intelligent Machines in the 1980s, a book describing the law of accelerating returns. Once again, Denton’s accurate observation about the limitations of today’s machines is not a compelling argument for inherent restrictions that can never be overcome. Denton himself acknowledges the quickening pace of technology, which is moving “at an ever-accelerating rate one technological advance [following] another.”
Denton is also oversimplifying my argument in the same way that Searle does. It is not my position that once we have computers with a computing capacity comparable to that of the human brain, “of course such machines will equal or surpass the intelligence of man.” I state explicitly in the first chapter of this book, and in many different ways in my book The Age of Spiritual Machines, that “this level of processing power is a necessary but not sufficient condition for achieving human-level intelligence in a machine.” The bulk of my thesis addresses the issue of how the combined power of exponentially increasing computation, communication, miniaturization, brain scanning, and other accelerating technology capabilities will enable us to reverse engineer, that is to understand, and then to recreate in other forms, the methods underlying human intelligence.
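For reference, the “20 million billion calculations per second” figure that Denton cites is Kurzweil’s own back-of-envelope estimate from The Age of Spiritual Machines, and the arithmetic is easy to reproduce (the three factor values below are Kurzweil’s published assumptions, not independent measurements):

```python
# Kurzweil's back-of-envelope estimate of raw human-brain capacity
# (The Age of Spiritual Machines):
neurons = 100 * 10**9           # ~100 billion neurons
connections_per_neuron = 1_000  # ~1,000 connections per neuron
calcs_per_connection = 200      # ~200 calculations per second per connection

capacity = neurons * connections_per_neuron * calcs_per_connection
assert capacity == 20 * 10**15  # 20 million billion calculations per second
```

This is, as the text stresses, an estimate of raw capacity only; matching it is necessary but not sufficient for human-level intelligence.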
Finally, Denton appears to be equating the issue of “exhibit[ing] consciousness” with the issue of “equal[ing] the thinking ability of humans.” Without repeating the arguments I presented in both the first chapter of this book and in my response to Searle, I will say that these issues are quite distinct. The latter issue represents a salient goal of objective capability, whereas the former issue represents the essence of subjective experience.
In summary, Denton is far too quick to conclude that complex systems of matter and energy in the physical world are incapable of exhibiting the “emergent . . . vital characteristics of organisms such as self-replication, ‘morphing,’ self-regeneration, self-assembly and the holistic order of biological design,” and that, therefore, “organisms and machines belong to different categories of being.” Dembski and Denton share the same limited view of machines as entities that can only be designed and constructed in a modular way. We can build (and already are building) “machines” that have powers far greater than the sum of their parts by combining the chaotic self-organizing design principles of the natural world with the accelerating powers of our human-initiated technology. The ultimate result will be a formidable combination indeed.
Copyright © 2002 by the Discovery Institute. Used with permission.